
Conversation

@Xiao-Chenguang commented Nov 27, 2025

What does this PR do?

model.generate raises a `got mixed torch.Tensor and DTensor` error when FSDP2 and LoRA are applied; this PR fixes that.

The Trainer class is the base class for TRL trainers such as PPOTrainer and GRPOTrainer, in which model.generate is called during the training loop.
When FSDP2 is used, the parameter all-gather and reshard are done automatically around the forward method, but not around generate.
This leads to the error above.

To fix it, torch provides a method, register_fsdp_forward_method, which lets FSDP2 manage the DTensors for us.
By registering generate, we get rid of the error.

In short, generate is not properly supported under FSDP2 acceleration, and this PR fixes that.
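As a minimal standalone sketch of the idea (not this PR's actual diff), assuming a recent PyTorch (2.6+, where fully_shard and register_fsdp_forward_method are exported from torch.distributed.fsdp), a process group already initialized (e.g. via torchrun), and an illustrative model choice; device placement is omitted:

import torch
from torch.distributed.fsdp import fully_shard, register_fsdp_forward_method
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")  # any model with .generate()
fully_shard(model)  # FSDP2: parameters become sharded DTensors

# FSDP2 hooks forward() so parameters are all-gathered before the call and
# resharded after it. generate() has no such hooks, so it reads the sharded
# DTensor weights directly (e.g. in aten.embedding) and raises
# "got mixed torch.Tensor and DTensor".
register_fsdp_forward_method(model, "generate")  # hook generate() too

out = model.generate(torch.tensor([[50256]]), max_new_tokens=8)  # now runs under FSDP2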

Fixes #42417 (issue)

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@Rocketknight1 (Member)

cc @3outeille maybe?

@3outeille (Member)

taking a look today

@3outeille (Member) left a comment


lgtm but modify to this

if self.is_fsdp_enabled:
    self.model = self.model_wrapped = model
    # Fix `got mixed torch.Tensor and DTensor` error in model.generate() for FSDP2 with LoRA
    dist.fsdp.register_fsdp_forward_method(self.model, "generate")
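(Here dist is torch.distributed, which trainer.py already imports. register_fsdp_forward_method makes FSDP2 run its pre- and post-forward logic, i.e. the parameter all-gather and reshard, around the registered method as well as around forward.)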

@Xiao-Chenguang (Author)

Updated.



Development

Successfully merging this pull request may close these issues.

FSDP2 + LoRA model.generate raise aten.embedding.default: got mixed torch.Tensor and DTensor error
