Commit 0d8c0f1

Authored by jiangweixiang and Mengqing Cao
[Bugfix] Fix out-of-bounds access to token_id due to uninitialized logprobs (#4248)

### What this PR does / why we need it?

The `logprobs_tensors` object was not initialized before its `token_id` member was accessed, leading to a crash when `tokenizer.decode()` was called with a negative token_id.

### How was this patch tested?

Constructed an inference request with two prompts and set `SamplingParams(prompt_logprobs=<non-None value>)` (e.g., `prompt_logprobs=1`). After applying the fix (registering `logprobs_tensors` only when the final chunk is non-empty), the same request completed successfully without errors, and the returned logprobs matched the expected values.

- vLLM version: v0.12.0
- vLLM main: vllm-project/vllm@ad32e3e

Signed-off-by: jiangweixiang <[email protected]>
Co-authored-by: jiangweixiang <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
Parent: bd8be2e · Commit: 0d8c0f1

File tree: 1 file changed (+3 −2 lines)


vllm_ascend/worker/model_runner_v1.py

Lines changed: 3 additions & 2 deletions

```diff
@@ -4276,8 +4276,9 @@ def _get_prompt_logprobs_dict(
         else:
             # This is the last chunk of prompt tokens to return.
             num_logits = num_remaining_tokens
-            completed_prefill_reqs.append(req_id)
-            prompt_logprobs_dict[req_id] = logprobs_tensors
+            if num_logits > 0:
+                completed_prefill_reqs.append(req_id)
+                prompt_logprobs_dict[req_id] = logprobs_tensors

         if num_logits <= 0:
             # This can happen for the final chunk if we prefilled exactly
```
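To illustrate the guard this commit adds, here is a minimal, self-contained sketch (not vLLM's actual code) of the fixed logic. The function name `finalize_last_chunk` and the plain list/dict bookkeeping are hypothetical simplifications; only the identifiers `req_id`, `num_logits`, `num_remaining_tokens`, `completed_prefill_reqs`, `prompt_logprobs_dict`, and `logprobs_tensors` come from the diff. The point is that when the final prompt chunk is empty (`num_logits == 0`), nothing is registered, so an uninitialized logprobs tensor with garbage (possibly negative) token ids never reaches `tokenizer.decode()`.

```python
def finalize_last_chunk(req_id, num_remaining_tokens, logprobs_tensors,
                        completed_prefill_reqs, prompt_logprobs_dict):
    """Register prompt logprobs for the final chunk only if it is non-empty.

    Mirrors the guard added in this commit: the pre-fix code appended
    `req_id` and stored `logprobs_tensors` unconditionally, even when the
    final chunk contained zero tokens and the tensors were uninitialized.
    """
    num_logits = num_remaining_tokens
    # The fix: skip registration entirely for an empty final chunk.
    if num_logits > 0:
        completed_prefill_reqs.append(req_id)
        prompt_logprobs_dict[req_id] = logprobs_tensors
    return num_logits


# Empty final chunk: nothing is registered, matching the post-fix behavior.
completed, logprobs = [], {}
finalize_last_chunk("req-0", 0, None, completed, logprobs)
print(completed, logprobs)  # [] {}

# Non-empty final chunk: registered as before the fix.
finalize_last_chunk("req-1", 3, "tensors", completed, logprobs)
print(completed, logprobs)  # ['req-1'] {'req-1': 'tensors'}
```

The `if num_logits <= 0:` branch that follows in the real code then handles the empty-chunk case separately, which is why the registration above must not run first.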
