Conversation

Contributor

@shaopeng-666 shaopeng-666 commented Nov 28, 2025

What this PR does / why we need it?

Follows the interface change from the upstream vLLM repository:
[Core][MM] Add mechanism to configure multimodal fields which should stay on CPU (vllm-project/vllm#28168)

Does this PR introduce any user-facing change?

How was this patch tested?

Signed-off-by: 李少鹏 <[email protected]>
@github-actions

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling in the PR description, to help reviewers and future developers understand the change.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request updates the qwen3vl model to support torch.Tensor as an input type for grid_thw, which was previously only a list. While the change is functionally correct, I've identified a performance issue in the implementation. The conversion from a torch.Tensor to a NumPy array and back to a tensor is inefficient and can be optimized. I've provided a suggestion to refactor this logic for better performance.

Comment on lines +152 to 164
        if isinstance(grid_thw, list):
            grid_thw_list = grid_thw
            grid_thw = np.array(grid_thw, dtype=np.int32)
        else:
            grid_thw_list = grid_thw.tolist()
            grid_thw = grid_thw.numpy()

        pos_embeds = self.fast_pos_embed_interpolate(grid_thw_list)
        hidden_states = hidden_states + pos_embeds
        rotary_pos_emb = self.rot_pos_emb(grid_thw)
        rotary_pos_emb = self.rot_pos_emb(grid_thw_list)
        grid_thw_tensor = torch.tensor(grid_thw,
                                       device=self.device,
                                       dtype=torch.int32)
Contributor


Severity: high

The current implementation for handling grid_thw when it is a torch.Tensor is inefficient. It converts the tensor to a NumPy array using .numpy() (which forces the data onto the CPU) and then converts it back to a tensor using torch.tensor(). This can be optimized by handling the list and torch.Tensor cases separately to avoid the unnecessary round trip.

        if isinstance(grid_thw, list):
            grid_thw_list = grid_thw
            grid_thw_tensor = torch.tensor(grid_thw,
                                           device=self.device,
                                           dtype=torch.int32)
        else:
            grid_thw_list = grid_thw.tolist()
            grid_thw_tensor = grid_thw.to(device=self.device, dtype=torch.int32)

        pos_embeds = self.fast_pos_embed_interpolate(grid_thw_list)
        hidden_states = hidden_states + pos_embeds
        rotary_pos_emb = self.rot_pos_emb(grid_thw_list)
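Outside the model class, the normalization pattern in the suggestion can be sketched as a standalone helper. The function name and signature here are illustrative, not part of the vLLM Ascend code; the point is that the tensor branch uses `.to()` instead of a tensor → NumPy → tensor round trip:

```python
import torch


def normalize_grid_thw(grid_thw, device="cpu"):
    """Return (list, tensor) views of grid_thw without a NumPy round trip.

    Accepts either a list of [t, h, w] triples or a torch.Tensor,
    mirroring the two-branch handling suggested in the review.
    """
    if isinstance(grid_thw, list):
        grid_thw_list = grid_thw
        grid_thw_tensor = torch.tensor(grid_thw,
                                       device=device,
                                       dtype=torch.int32)
    else:
        grid_thw_list = grid_thw.tolist()
        # .to() converts device/dtype in one step and is effectively free
        # when the tensor already matches, unlike .numpy() + torch.tensor().
        grid_thw_tensor = grid_thw.to(device=device, dtype=torch.int32)
    return grid_thw_list, grid_thw_tensor
```

Both input types then yield the same pair of views, so downstream code such as `fast_pos_embed_interpolate` (list input) and device-side indexing (tensor input) can each take the form they need.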

@wangxiyuan
Collaborator

@shen-shanshan

@MengqingCao MengqingCao added the ready (read for review) and ready-for-test (start test by label for PR) labels Nov 28, 2025
@shen-shanshan
Collaborator

LGTM.
This PR just syncs the changes from vllm-project/vllm#28168 to keep the interface compatible.
That said, the whole Qwen3-VL ViT may be removed entirely later to avoid the maintenance burden.

Signed-off-by: 李少鹏 <[email protected]>
@github-actions

github-actions bot commented Dec 1, 2025

This pull request has conflicts, please resolve those before we can evaluate the pull request.


Labels

merge-conflicts, module:tests, ready (read for review), ready-for-test (start test by label for PR)
