
Conversation

@shenchuxiaofugui (Contributor) commented Nov 17, 2025

What this PR does / why we need it?

Redundant experts bugfix
The calculation logic for redundant experts has been fixed so that the correct number of redundant experts is derived from the expert map itself. As a result, the redundant-expert parameter no longer needs to be set when an expert map is passed in.
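As a rough illustration of this idea (a minimal sketch, not the PR's actual code; count_redundant_experts is a hypothetical name), the redundant-expert count can be read directly off a loaded expert map, which is why a separate parameter is no longer needed:

import torch

def count_redundant_experts(expert_map: torch.Tensor) -> int:
    # Hypothetical helper: expert_map is assumed to hold the logical expert id
    # placed in every physical expert slot across all EP ranks, with -1 (if the
    # format uses it) marking an empty slot.
    valid = expert_map[expert_map >= 0]
    num_physical_slots = int(valid.numel())
    num_logical_experts = int(valid.unique().numel())
    # Every physical slot beyond a logical expert's first copy is redundant.
    return num_physical_slots - num_logical_experts

For example, a map that places 64 logical experts into 66 physical slots yields 2 redundant experts without any extra configuration.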

Does this PR introduce any user-facing change?

After configuring the path for experts_map, users no longer need to configure init_redundancy_expert.

How was this patch tested?

The accuracy of EPLB was tested with and without the use of redundant experts.

@github-actions

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by fulfilling the PR description to help reviewers and future developers understand.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request refactors the expert load balancing (eplb) logic by removing the redundant global_redundant_expert_num parameter from determine_default_log2phy_map and its call sites. This change simplifies the code and centralizes the calculation of redundant experts. The modifications touch utility functions, core MoE layer initialization, and corresponding tests. While the refactoring is generally sound, there is a critical issue in one of the updated function calls that will cause a runtime error.

Comment on lines 208 to 210
self.log2phy = determine_default_log2phy_map(
    self.global_num_experts, self.ep_size, self.ep_rank,
    self.global_redundant_expert_num).npu()

critical

This call to determine_default_log2phy_map includes self.global_redundant_expert_num as a fourth argument. However, the function signature for determine_default_log2phy_map in vllm_ascend/eplb/core/eplb_utils.py was changed in this PR to only accept three arguments. This will cause a TypeError at runtime. Please remove the extra argument to match the updated function definition.

                self.log2phy = determine_default_log2phy_map(
                    self.global_num_experts, self.ep_size, self.ep_rank).npu()
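For reference, here is a minimal sketch of what a three-argument determine_default_log2phy_map could look like, assuming the redundancy is derived internally from the padding needed to split experts evenly across EP ranks; this is an illustration only, not the repository's actual implementation:

import torch

def determine_default_log2phy_map_sketch(global_num_experts: int,
                                          ep_size: int,
                                          ep_rank: int) -> torch.Tensor:
    # Assumption: every EP rank hosts the same number of physical slots, so the
    # redundant-expert count is just the padding needed to reach a multiple of
    # ep_size, and no caller-supplied value is required.
    experts_per_rank = (global_num_experts + ep_size - 1) // ep_size
    num_physical = experts_per_rank * ep_size
    # Physical slot i holds logical expert (i % global_num_experts); the
    # trailing padding slots therefore duplicate the lowest-numbered experts.
    phy2log = torch.arange(num_physical) % global_num_experts
    # Invert to a logical-to-physical map, then prefer copies hosted on this
    # rank so local redundant copies are used where available.
    log2phy = torch.empty(global_num_experts, dtype=torch.long)
    for slot, expert in enumerate(phy2log.tolist()):
        log2phy[expert] = slot
    local_slots = range(ep_rank * experts_per_rank,
                        (ep_rank + 1) * experts_per_rank)
    for slot in local_slots:
        log2phy[phy2log[slot]] = slot
    return log2phy

Under that assumption, call sites only need the three arguments shown in the suggestion above.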

@shenchuxiaofugui changed the title from "fix eplb redundant" to "【EPLB】Eplb Redundant Experts Bugfix" on Nov 19, 2025
@shenchuxiaofugui force-pushed the eplb_fix_dev branch 2 times, most recently from 22922dc to df87bab on November 21, 2025 01:50
@github-actions bot added the documentation and module:core labels Nov 26, 2025
@github-actions bot removed the documentation and module:core labels Nov 26, 2025
@MengqingCao (Collaborator) left a comment

Please update your PR message and clearly describe the issue it fixes and how this PR addresses it.

@weijinqian0 added the ready and ready-for-test labels Dec 1, 2025
Signed-off-by: shenchuxiaofugui <[email protected]>
@wangxiyuan wangxiyuan merged commit 593a960 into vllm-project:v0.11.0-dev Dec 3, 2025
16 checks passed