[MM][Model] Remove Qwen3-VL modeling files #4577
Conversation
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run linting and testing checks locally according to the Contributing and Testing guides.
Code Review
This pull request refactors the Qwen3-VL model integration by removing the dedicated model files and using a patch-based approach instead. This is a good simplification that aligns with how other models are handled in this repository. The changes are generally correct and improve the code structure. I have one suggestion in the new patch file to replace numpy-based tensor operations with more efficient and idiomatic torch operations, which will improve performance and maintainability.
Signed-off-by: shen-shanshan <[email protected]>
Signed-off-by: Shanshan Shen <[email protected]>
What this PR does / why we need it?
Following #4349, remove Qwen3-VL modeling files.
Does this PR introduce any user-facing change?
How was this patch tested?