
Conversation

@jesse996 (Contributor) commented Sep 19, 2025

What this PR does / why we need it?

This PR is based on vllm-project/vllm#19746 and adds support for Prompt Embeddings in the v1 engine on NPU.

Does this PR introduce any user-facing change?

How was this patch tested?

python examples/prompt_embed_inference.py
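
For reference, the upstream example drives the prompt-embeds path roughly as follows (a minimal sketch assuming vLLM's `enable_prompt_embeds` engine flag and `prompt_embeds` prompt input from vllm-project/vllm#19746; the actual script may differ):

```python
# Minimal prompt-embeds inference sketch. The model name and exact API calls
# are assumptions based on the upstream vLLM example, not this repo's script.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from vllm import LLM, SamplingParams

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # assumption: any decoder-only model

# Build prompt embeddings offline with the HF model's own embedding table.
tokenizer = AutoTokenizer.from_pretrained(model_name)
hf_model = AutoModelForCausalLM.from_pretrained(model_name)
token_ids = tokenizer("Hello, my name is", return_tensors="pt").input_ids
with torch.no_grad():
    prompt_embeds = hf_model.get_input_embeddings()(token_ids).squeeze(0)

# enable_prompt_embeds makes the engine accept embeddings in place of token IDs.
llm = LLM(model=model_name, enable_prompt_embeds=True)
outputs = llm.generate({"prompt_embeds": prompt_embeds},
                       SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```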

@github-actions commented

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling out the PR description to help reviewers and future developers understand.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

@gemini-code-assist (bot) left a comment


Code Review

This pull request adds support for prompt embeddings in the v1 engine. The changes span the model runner and input batch management to handle requests that carry embeddings instead of token IDs. A new boolean tensor is_token_ids is introduced to differentiate between the two types of inputs, and the logic for preparing inputs, managing cached states, and handling special cases like dummy runs and prompt logprobs has been updated accordingly.
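
To illustrate the role such a mask plays (a hypothetical sketch with invented names, not the PR's actual code): rows flagged as token IDs are run through the embedding table, while the remaining rows keep the caller-supplied embeddings.

```python
# Hypothetical illustration of an is_token_ids mask; every name here is
# invented for the sketch and is not the PR's actual implementation.
import torch

hidden_size = 8
embed_table = torch.nn.Embedding(100, hidden_size)

token_ids = torch.tensor([5, 42, 0, 0])                  # 0 is a placeholder
is_token_ids = torch.tensor([True, True, False, False])  # False: embeds supplied
supplied_embeds = torch.randn(4, hidden_size)            # valid where mask is False

# Embed the token-ID rows; keep the user-supplied embedding rows as-is.
inputs_embeds = torch.where(is_token_ids.unsqueeze(-1),
                            embed_table(token_ids),
                            supplied_embeds)
print(inputs_embeds.shape)  # torch.Size([4, 8])
```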

The review identifies a critical issue: the inputs_embeds buffer is not initialized when prompt embeddings are enabled for non-multimodal models, which would lead to a runtime error. A fix is suggested.

Comment on lines 288 to 294

```diff
         if self.is_multimodal_model:
-            self.inputs_embeds = torch.zeros(
-                (self.max_num_tokens, self.model_config.get_hidden_size()),
-                dtype=self.dtype,
-                device=self.device)
+            self.inputs_embeds = self._make_buffer(self.max_num_tokens,
+                                                   self.model_config.get_hidden_size(),
+                                                   dtype=self.dtype,
+                                                   numpy=False)
```

critical

The inputs_embeds buffer is only initialized for multimodal models. However, it is also required when enable_prompt_embeds is true for non-multimodal models. Without this initialization, an AttributeError will be raised when self.inputs_embeds is accessed later in methods like _dummy_run or _prepare_input_ids. The condition should be updated to include self.enable_prompt_embeds.

Suggested change

```diff
-        if self.is_multimodal_model:
+        if self.is_multimodal_model or self.enable_prompt_embeds:
             self.inputs_embeds = self._make_buffer(self.max_num_tokens,
                                                    self.model_config.get_hidden_size(),
                                                    dtype=self.dtype,
                                                    numpy=False)
```

@wangxiyuan (Collaborator) commented

Wow, you are so quick! Let's make vllm-ascend work with the vLLM main branch first (#2907). Then you can rebase onto main and add an e2e test for this feature.

Also note that vLLM Ascend should work with both vLLM main and the latest vLLM release (v0.10.2 at the moment), so you may need to consider backward compatibility.
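
A common way to satisfy both targets in vllm-ascend is to gate on the installed vLLM version; a sketch, assuming `vllm_version_is` is the project's existing helper in vllm_ascend.utils:

```python
# Sketch of a version-gated code path; vllm_version_is is assumed to be the
# existing helper in vllm_ascend.utils, and both branches are placeholders.
from vllm_ascend.utils import vllm_version_is

if vllm_version_is("0.10.2"):
    # Installed vLLM is the latest release: stick to the v0.10.2 API surface.
    use_release_code_path = True
else:
    # Installed vLLM tracks main: the newer API surface is available.
    use_release_code_path = False
```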

@github-actions commented

This pull request has conflicts, please resolve those before we can evaluate the pull request.

Signed-off-by: jesse <[email protected]>

@wangxiyuan (Collaborator) commented

please fix the merge conflict

Signed-off-by: jesse <[email protected]>

@Potabk (Collaborator) commented Oct 25, 2025

maybe we should add the test to

pytest -sv tests/e2e/singlecard/test_aclgraph.py

Signed-off-by: jesse <[email protected]>
@jesse996 (Contributor, Author) commented

> maybe we should add the test to
>
> pytest -sv tests/e2e/singlecard/test_aclgraph.py

added

Signed-off-by: jesse <[email protected]>

Signed-off-by: jesse <[email protected]>
@wangxiyuan merged commit 216fc0e into vllm-project:main Oct 30, 2025
24 checks passed
luolun pushed a commit to luolun/vllm-ascend that referenced this pull request Nov 19, 2025
### What this PR does / why we need it?
This PR is based on [19746](vllm-project/vllm#19746) and adds support for Prompt Embeddings for the v1 engine on NPU.

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

```python
python examples/prompt_embed_inference.py
```


- vLLM version: v0.11.0
- vLLM main: vllm-project/vllm@releases/v0.11.1

---------

Signed-off-by: jesse <[email protected]>
Signed-off-by: luolun <[email protected]>
hwhaokun pushed a commit to hwhaokun/vllm-ascend that referenced this pull request Nov 19, 2025
NSDie pushed a commit to NSDie/vllm-ascend that referenced this pull request Nov 24, 2025
Clorist33 pushed a commit to Clorist33/vllm-ascend that referenced this pull request Dec 10, 2025

Labels

module:tests, ready (ready for review), ready-for-test (start test by label for PR)
