Conversation

@zcnrex zcnrex commented Dec 5, 2025

Motivation

An AttributeError is raised when --enable-torch-compile true is set:

sglang serve --model-path Wan-AI/Wan2.1-T2V-1.3B-Diffusers --port 3000 --enable-torch-compile true

Traceback (most recent call last):
  File "/home/jobuser/sglang/python/sglang/multimodal_gen/runtime/pipelines_core/executors/parallel_executor.py", line 90, in execute
    batch = stage(batch, server_args)
  File "/home/jobuser/sglang/python/sglang/multimodal_gen/runtime/pipelines_core/stages/base.py", line 192, in __call__
    result = self.forward(batch, server_args)
  File "/home/jobuser/sglang/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context
    return func(*args, **kwargs)
  File "/home/jobuser/sglang/python/sglang/multimodal_gen/runtime/pipelines_core/stages/denoising.py", line 784, in forward
    prepared_vars = self._prepare_denoising_loop(batch, server_args)
  File "/home/jobuser/sglang/python/sglang/multimodal_gen/runtime/pipelines_core/stages/denoising.py", line 421, in _prepare_denoising_loop
    self.transformer.forward,
AttributeError: 'function' object has no attribute 'forward'
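For context, this failure mode can be reproduced outside SGLang with a toy module (hypothetical names): torch.compile on a bound method returns a plain function, and a plain function has no .forward attribute.

import torch
import torch.nn as nn

class TinyBlock(nn.Module):
    def forward(self, x):
        return x * 2

block = TinyBlock()

# Compiling a bound method yields a plain callable, not an nn.Module.
compiled = torch.compile(block.forward)

# Overwriting a module attribute with that callable reproduces the crash:
transformer = compiled
try:
    transformer.forward  # same failure as in the traceback above
except AttributeError as e:
    print(e)  # 'function' object has no attribute 'forward'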

Modifications

Updated denoising.py; the pipeline now runs without error when --enable-torch-compile true is set.
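A minimal sketch of the approach (an assumed shape, not the exact SGLang code; the attribute name follows the review discussion below): keep the original module intact and store the compiled callable in a dedicated attribute, so .forward remains accessible on the module itself.

import torch
import torch.nn as nn

class DenoisingStageSketch:
    def __init__(self, transformer: nn.Module, enable_torch_compile: bool):
        # Keep the original module so self.transformer.forward stays valid.
        self.transformer = transformer
        # Store the compiled callable separately instead of overwriting
        # self.transformer, which is what triggered the AttributeError.
        self.transformer_compiled_func = (
            torch.compile(transformer.forward) if enable_torch_compile else None
        )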

Accuracy Tests

Checked that the output video is unchanged with torch compile enabled.
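One way such a check could be done (a sketch with hypothetical file names; torch.compile can introduce small numerical drift, so a tolerance-based frame comparison is safer than a byte-level diff):

from torchvision.io import read_video  # requires the PyAV backend

frames_a, _, _ = read_video("outputs/baseline.mp4", output_format="TCHW")
frames_b, _, _ = read_video("outputs/compiled.mp4", output_format="TCHW")

assert frames_a.shape == frames_b.shape
max_diff = (frames_a.float() - frames_b.float()).abs().max().item()
print(f"max per-pixel difference: {max_diff}")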

Benchmarking and Profiling

sglang serve --model-path Wan-AI/Wan2.1-T2V-1.3B-Diffusers --port 3000

Tested on an H100 GPU:

| | disable torch compile | enable torch compile |
| --- | --- | --- |
| denoising time per step | 1.43s | 1.28s |
| end-to-end time | 78s | 71s |

With torch compile enabled:

[12-05 17:59:51] Running pipeline stages: ['input_validation_stage', 'prompt_encoding_stage', 'conditioning_stage', 'timestep_preparation_stage', 'latent_preparation_stage', 'denoising_stage', 'decoding_stage']
[12-05 17:59:51] [InputValidationStage] started...
[12-05 17:59:51] [InputValidationStage] finished in 0.0001 seconds
[12-05 17:59:51] [TextEncodingStage] started...
[12-05 17:59:54] [TextEncodingStage] finished in 2.2956 seconds
[12-05 17:59:54] [ConditioningStage] started...
[12-05 17:59:54] [ConditioningStage] finished in 0.0001 seconds
[12-05 17:59:54] [TimestepPreparationStage] started...
[12-05 17:59:54] [TimestepPreparationStage] finished in 0.0019 seconds
[12-05 17:59:54] [LatentPreparationStage] started...
[12-05 17:59:54] [LatentPreparationStage] finished in 0.0012 seconds
[12-05 17:59:54] [DenoisingStage] started...
/home/jobuser/sglang/.venv/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py:1692: UserWarning: Dynamo detected a call to a `functools.lru_cache`-wrapped function. Dynamo ignores the cache wrapper and directly traces the wrapped function. Silent incorrectness is only a *potential* risk, not something we have observed. Enable TORCH_LOGS="+dynamo" for a DEBUG stack trace.
  torch._dynamo.utils.warn_once(msg)
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [01:04<00:00,  1.28s/it]
[12-05 18:00:58] [DenoisingStage] average time per step: 1.2820 seconds
[12-05 18:00:58] [DenoisingStage] finished in 64.1045 seconds
[12-05 18:00:58] [DecodingStage] started...
[12-05 18:01:01] [DecodingStage] finished in 2.9407 seconds
[12-05 18:01:02] Saved output to outputs/aa0220f6-4ced-4bfc-a151-c22d9606443f.mp4
[12-05 18:01:02] Pixel data generated successfully in 71.27 seconds

With torch compile disabled:

[12-05 18:04:52] Running pipeline stages: ['input_validation_stage', 'prompt_encoding_stage', 'conditioning_stage', 'timestep_preparation_stage', 'latent_preparation_stage', 'denoising_stage', 'decoding_stage']
[12-05 18:04:52] [InputValidationStage] started...
[12-05 18:04:52] [InputValidationStage] finished in 0.0001 seconds
[12-05 18:04:52] [TextEncodingStage] started...
[12-05 18:04:55] [TextEncodingStage] finished in 2.2805 seconds
[12-05 18:04:55] [ConditioningStage] started...
[12-05 18:04:55] [ConditioningStage] finished in 0.0001 seconds
[12-05 18:04:55] [TimestepPreparationStage] started...
[12-05 18:04:55] [TimestepPreparationStage] finished in 0.0019 seconds
[12-05 18:04:55] [LatentPreparationStage] started...
[12-05 18:04:55] [LatentPreparationStage] finished in 0.0012 seconds
[12-05 18:04:55] [DenoisingStage] started...
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [01:11<00:00,  1.43s/it]
[12-05 18:06:06] [DenoisingStage] average time per step: 1.4309 seconds
[12-05 18:06:06] [DenoisingStage] finished in 71.5495 seconds
[12-05 18:06:06] [DecodingStage] started...
[12-05 18:06:09] [DecodingStage] finished in 2.9442 seconds
[12-05 18:06:11] Saved output to outputs/acd7ef1d-9127-4a52-921a-b8cdd2dd9610.mp4
[12-05 18:06:11] Pixel data generated successfully in 78.67 seconds
[12-05 18:06:11] Completed batch processing. Generated 1 outputs in 78.67 seconds.

@github-actions github-actions bot added the diffusion SGLang Diffusion label Dec 5, 2025
@gemini-code-assist

Summary of Changes

Hello @zcnrex, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical bug in the diffusion pipeline that previously prevented the successful utilization of torch_compile due to an AttributeError. By refactoring how compiled transformer functions are managed and invoked, the changes not only resolve the underlying error but also unlock significant performance gains in the denoising stage, leading to faster overall inference times without impacting output accuracy.

Highlights

  • Error Fix: Resolves an AttributeError that occurred when torch_compile was enabled in the diffusion pipeline's denoising stage, specifically when attempting to call .forward on a compiled function object.
  • Performance Improvement: Enabling torch_compile with these changes leads to a performance boost, reducing denoising time per step from 1.43s to 1.28s and the overall end-to-end processing time from 78s to 71s on an H100 GPU.
  • Code Structure Enhancement: Introduces dedicated attributes (self.transformer_compiled_func and self.transformer_2_compiled_func) to store the compiled versions of the transformer models, preventing the original model references from being overwritten and ensuring proper access to both original and compiled functions.
  • Conditional Execution Logic: Updates the _prepare_denoising_loop method to conditionally use the newly stored compiled transformer functions when torch_compile is active, ensuring the correct optimized path is taken during execution.

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request addresses a crash that occurs when torch.compile is enabled by separating the compiled function from the original model module. While this correctly resolves the AttributeError, the implementation has a couple of significant issues.

First, in the case of lazy model loading within _prepare_denoising_loop, the code still overwrites self.transformer with the compiled function, which reintroduces the original bug under that specific code path. I've added a specific comment with a suggested fix for this.

Second, and more critically, the compiled functions (self.transformer_compiled_func and self.transformer_2_compiled_func) are never actually used for the main model forward pass. The code continues to call the original, un-compiled modules. This means that while the crash is avoided, the performance benefits of torch.compile (as shown in the PR description) are not realized with the current changes. To fix this, the forward method in DenoisingStage needs to be updated to select and call the appropriate compiled function when torch.compile is enabled. Since this is outside the current diff, I couldn't leave a direct comment, but it's a crucial change to make this PR effective.
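To make the reviewer's point concrete, here is a runnable toy sketch (hypothetical names, not the SGLang code) of the dispatch the forward pass would need:

import torch
import torch.nn as nn

class TinyTransformer(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * 2

class TinyDenoisingStage:
    def __init__(self, enable_torch_compile: bool):
        self.transformer = TinyTransformer()
        self.transformer_compiled_func = (
            torch.compile(self.transformer.forward) if enable_torch_compile else None
        )

    def forward(self, latents: torch.Tensor) -> torch.Tensor:
        # The step the review says is missing: route the forward pass
        # through the compiled function when one exists, instead of
        # always calling the eager module.
        if self.transformer_compiled_func is not None:
            return self.transformer_compiled_func(latents)
        return self.transformer(latents)

stage = TinyDenoisingStage(enable_torch_compile=True)
print(stage.forward(torch.ones(2, 2)))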

@zcnrex zcnrex closed this Dec 9, 2025