
Commit 66dbb3e

wujinyuan1 authored and wjy9595 committed
[Bugfix] Fix the hang issue of multimodal model when running with DP>1 (#4393)
### What this PR does / why we need it?

When cudagraph_mode is set to FULL_DECODE_ONLY and dp > 1, the dummy-run path is triggered. When calling the update_attn_params function, the num_tokens parameter must be passed, and this value was obtained via positions.shape[0]. However, multimodal models use mRoPE (multimodal rotary position embedding), which makes positions a 2-D tensor, so positions.shape[0] no longer returns the token count. We fix this by passing num_tokens directly instead of positions.shape[0].

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

vLLM version: v0.11.0rc3
vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

---------

Signed-off-by: wujinyuan1 <[email protected]>
Co-authored-by: wujinyuan1 <[email protected]>
Signed-off-by: 刘哲续 <[email protected]>
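For context, a minimal sketch (not vllm-ascend code) of the shape mismatch described above: under mRoPE, positions is laid out as a 2-D tensor of shape (3, num_tokens), one row per rotary section, so positions.shape[0] returns 3 rather than the token count. The tensor names and values below are illustrative only.

```python
import torch

num_tokens = 8

# Text-only models: positions is 1-D, one entry per token,
# so shape[0] happens to equal the token count.
positions = torch.arange(num_tokens)
assert positions.shape[0] == num_tokens

# Multimodal models with mRoPE: positions is 2-D, with one row per
# rotary dimension (e.g. temporal/height/width) and one column per token.
mrope_positions = torch.arange(num_tokens).repeat(3, 1)  # shape: (3, 8)
assert mrope_positions.shape[0] == 3  # not the token count

# Hence the fix: pass the already-known num_tokens explicitly
# instead of deriving it from positions.shape[0].
```

Passing num_tokens directly makes the update path independent of the positions layout, so it works for both 1-D and mRoPE-style 2-D position tensors.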
Parent: 2b6d7b8 · Commit: 66dbb3e

File tree

1 file changed (+2, −3)


vllm_ascend/worker/model_runner_v1.py

Lines changed: 2 additions & 3 deletions

```diff
@@ -2322,11 +2322,10 @@ def _generate_dummy_run_hidden_states(self, with_prefill,
         if self.vllm_config.model_config.use_mla:
             # FIXME: Try using `auto_dispatch_capture=True`
             update_mla_attn_params(self.update_stream, forward_context,
-                                   positions.shape[0],
-                                   self.speculative_config)
+                                   num_tokens, self.speculative_config)
         else:
             update_attn_params(self.update_stream, forward_context,
-                               positions.shape[0])
+                               num_tokens)

         if self.drafter and self.drafter.name == SpecDcodeType.EAGLE3:
             hidden_states, _ = hidden_states
```
