
Conversation

@shjwudp
Contributor

@shjwudp shjwudp commented Nov 19, 2025

What does this PR do ?

⚠️ For major changes (either in lines of code or in their impact), please make sure to first share and discuss a design-doc with the team.

Contribution process

flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]

Pre-checks

  • I want this PR in a versioned release and have added the appropriate Milestone (e.g., Core 0.8)
  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see the Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

The following process is enforced via the CODEOWNERS file for changes into megatron/core. For changes outside of megatron/core, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch

(Step 1): Add PR label Expert Review

(Step 2): Collect the expert reviewers' reviews

  1. Attach the Expert Review label when your PR is ready for review.
  2. GitHub auto-assigns expert reviewers based on your changes. They will get notified and pick up your PR soon.

⚠️ Only proceed to the next step once all reviewers have approved, merge conflicts are resolved, and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

(Step 3): Final Review

  1. Add Final Review label
  2. GitHub auto-assigns final reviewers based on your changes. They will get notified and pick up your PR soon.

(Optional Step 4): Cherry-pick into release branch

If this PR also needs to be merged into core_r* release branches, after this PR has been merged, select Cherry-pick to open a new PR into the release branch.

For MRs into `dev` branch

The proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either [email protected] or [email protected].

Merging your PR

Any member of core-adlr and core-nemo will be able to merge your PR.

1. Upgrade DeviceMesh initialization for M-Core to support heterogeneous parallelism.
2. Fix an issue where parameters remain as dist-params during forward execution in specific cases.
3. Hide the pipeline schedule's deallocate_output_tensor activation reference check for Megatron-FSDP compatibility.
Deallocation is usually harmless for activations with views.
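
For item 1, a minimal sketch of what DeviceMesh-based initialization with named parallel dimensions can look like, using PyTorch's `init_device_mesh`. The dimension names, sizes, and group lookups below are illustrative assumptions, not the layout this PR actually builds:

```python
# Sketch only: assumes a torchrun launch with world_size = 4 * 2 * 2 = 16 ranks.
from torch.distributed.device_mesh import init_device_mesh

# Hypothetical 3-D layout (dp, cp, tp). A heterogeneous setup would vary the
# names/sizes per sub-mesh rather than using one uniform grid like this.
mesh = init_device_mesh("cuda", mesh_shape=(4, 2, 2), mesh_dim_names=("dp", "cp", "tp"))

dp_group = mesh.get_group("dp")  # data-parallel process group for this rank
tp_group = mesh.get_group("tp")  # tensor-parallel process group for this rank
```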
@shjwudp shjwudp requested review from a team as code owners November 19, 2025 15:16
@copy-pr-bot

copy-pr-bot bot commented Nov 19, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

else:
    dp_size = dist.get_world_size(dp_cp_group)
    dp_cp_tp_ranks = [None for _ in range(dp_size)]
    dist.all_gather_object(dp_cp_tp_ranks, tp_ranks, group=dp_cp_group)
Contributor

Hmm, because we use all_gather_object, we cannot make these two calls async... :(

Contributor Author

Do you mean it can be optimized into an all_gather operation?
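
For context on the trade-off being discussed, a rough sketch (reusing `dp_size`, `tp_ranks`, and `dp_cp_group` from the diff above, and assuming `tp_ranks` is a flat list of ints of equal length on every rank, with an NCCL backend): `all_gather_object` pickles Python objects and has no `async_op` option, while gathering the same data as a fixed-size tensor with `all_gather_into_tensor` returns a work handle that can overlap with other work.

```python
import torch
import torch.distributed as dist

# Current form: object collective, always blocking (no async_op parameter).
dp_cp_tp_ranks = [None for _ in range(dp_size)]
dist.all_gather_object(dp_cp_tp_ranks, tp_ranks, group=dp_cp_group)

# Possible tensor form: same data as an int tensor, can be made async.
tp_ranks_t = torch.tensor(tp_ranks, dtype=torch.int64, device="cuda")
gathered = torch.empty(dp_size * tp_ranks_t.numel(), dtype=torch.int64, device="cuda")
work = dist.all_gather_into_tensor(gathered, tp_ranks_t, group=dp_cp_group, async_op=True)
# ... other setup work could overlap here ...
work.wait()
dp_cp_tp_ranks = gathered.view(dp_size, -1).tolist()
```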

                   help='If set, enable full sharding in megatron-fsdp Hybrid Sharded Data Parallel (HSDP) mode.')
group.add_argument('--num-distributed-optimizer-instances', type=int, default=1,
                   help='Number of Distributed Optimizer copies across Data Parallel domain.')
group.add_argument('--no-mfsdp-comm', action='store_true',
Contributor

Consider using argparse.BooleanOptionalAction so a user can explicitly opt in too

Contributor Author

argparse.BooleanOptionalAction requires Python 3.9, so I think it's best to use it with caution.

Contributor

@shjwudp What's the minimum supported Python version for Megatron as of now? 3.9 is way past EOL?
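
For reference, a small self-contained sketch of the BooleanOptionalAction pattern being suggested (the flag name mirrors the diff; the help text and default are assumptions):

```python
import argparse

parser = argparse.ArgumentParser()
# Python 3.9+: one declaration generates both --mfsdp-comm and --no-mfsdp-comm,
# so users can opt in or out explicitly via a single dest (args.mfsdp_comm).
parser.add_argument('--mfsdp-comm', action=argparse.BooleanOptionalAction, default=True,
                    help='Enable/disable Megatron-FSDP communication (help text assumed).')

args = parser.parse_args(['--no-mfsdp-comm'])
print(args.mfsdp_comm)  # False
```

On interpreters older than 3.9, the explicit `--no-mfsdp-comm` flag with `action='store_true'` (as in the current diff) remains the fallback.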

@Skylion007
Contributor

Will this support VPP and help make FSDP more viable for training MoE models?

… TP-duplicated mesh

2. Minor code polish
3. Code formatting
@shjwudp
Contributor Author

shjwudp commented Nov 21, 2025

Will this support VPP and help make FSDP more viable for training MoE models?

@Skylion007 There are performance concerns with combining VPP and FSDP (VPP makes FSDP prefetching difficult), so I am not sure this will be beneficial for MoE training. However, I will try to make this PR support VPP as well so that we have more options.

@Skylion007
Contributor

performance concerns with combining VPP and FSDP (VPP makes FSDP prefetching difficult), so I am not sure this will be beneficial for MoE training

Ah what I really want is to support A2A overlap with FSDP which requires VPP.

@shjwudp shjwudp marked this pull request as draft November 25, 2025 07:20