Conversation

@weijinqian0 weijinqian0 (Collaborator) commented on Nov 28, 2025

[Refactor] Remove redundant attention operator branches.

Reason:

We replace the other attention ops with fused_infer_attention_score, except for the decode_only state.
Clean up the code and remove 310P support.

#4455
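For readers who have not used the op this PR consolidates on, the sketch below shows roughly what a unified prefill call to `torch_npu.npu_fused_infer_attention_score` looks like. It is a minimal illustration, not the vllm-ascend implementation: the helper name and tensor layout are assumptions, and the keyword arguments follow the public torch_npu documentation and should be verified against the installed CANN/torch_npu release.

```python
import torch
import torch_npu  # Ascend NPU extension for PyTorch


def fused_prefill_attention(query: torch.Tensor,
                            key: torch.Tensor,
                            value: torch.Tensor,
                            attn_mask: torch.Tensor,
                            seq_lens: list[int],
                            num_heads: int,
                            scale: float) -> torch.Tensor:
    # Hypothetical helper: one fused call stands in for the separate prefill
    # branches this PR removes. Keyword names follow the torch_npu docs for
    # npu_fused_infer_attention_score; check them against your CANN release.
    output, _softmax_lse = torch_npu.npu_fused_infer_attention_score(
        query, key, value,
        atten_mask=attn_mask,          # causal mask for prefill
        actual_seq_lengths=seq_lens,   # per-request sequence lengths
        num_heads=num_heads,
        scale=scale,
        input_layout="TND",            # packed variable-length layout
    )
    return output
```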

@github-actions

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write a clear commit message and fill in the PR description to help reviewers and future developers understand the change.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.

@gemini-code-assist gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request refactors the attention mechanism by removing redundant operator branches and support for the 310P device. The changes unify different prefill attention paths into a single _forward_prefill method using npu_fused_infer_attention_score, which simplifies the codebase significantly. The _forward_v1_style method has been removed, and a new _forward_encode method has been introduced to handle encoder-only attention, making the main forward method cleaner and more readable. The logic for creating attention masks has also been simplified. Overall, the changes are well-executed, improve code maintainability, and appear to be correct. I have no major concerns.
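To make the review summary above concrete, here is a rough outline (not the actual vllm-ascend code) of the control flow it describes. The enum members and metadata fields are placeholders; only the `_forward_prefill` and `_forward_encode` names come from the review, and `_forward_decode` is assumed for the decode path the PR leaves untouched.

```python
from enum import Enum, auto


class AttnState(Enum):
    # Illustrative states only; the real enum in vllm-ascend has more members
    # (e.g. several prefill variants) and may use different names.
    PREFILL = auto()
    DECODE_ONLY = auto()
    ENCODER_ONLY = auto()


class AttentionDispatchSketch:
    """Shape of the forward dispatch described in the review above."""

    def forward(self, query, key, value, metadata):
        if metadata.attn_state == AttnState.DECODE_ONLY:
            # Decode keeps its dedicated paged-attention path.
            return self._forward_decode(query, key, value, metadata)
        if metadata.attn_state == AttnState.ENCODER_ONLY:
            # Encoder-only (bidirectional) attention is handled separately.
            return self._forward_encode(query, key, value, metadata)
        # All remaining prefill variants funnel into one fused-op path.
        return self._forward_prefill(query, key, value, metadata)

    # Stubs so the sketch imports cleanly; the real bodies live in vllm-ascend.
    def _forward_prefill(self, *args):
        raise NotImplementedError

    def _forward_decode(self, *args):
        raise NotImplementedError

    def _forward_encode(self, *args):
        raise NotImplementedError
```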

@github-actions

This pull request has conflicts; please resolve them before we can evaluate it.

weijinqian_v1 added 2 commits December 1, 2025 14:58
Signed-off-by: weijinqian_v1 <[email protected]>
Signed-off-by: weijinqian_v1 <[email protected]>
@wangxiyuan wangxiyuan added the ready (read for review) and ready-for-test (start test by label for PR) labels on Dec 1, 2025
Inline review comment on:

return None
return None

def _make_fia_attention_mask(self) -> torch.Tensor:

Collaborator:

fia_mask can also be deleted.

Collaborator Author:

It's used in the pcp branch. It will be removed in the next PR.
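For context on the helper under discussion, a causal mask for the fused op is typically just a strict upper-triangular boolean matrix. The sketch below is a hypothetical stand-in for `_make_fia_attention_mask`, not the vllm-ascend implementation, and the masked-value convention should be confirmed against the op's documentation.

```python
import torch


def make_fia_attention_mask(max_seq_len: int,
                            device: str = "cpu") -> torch.Tensor:
    # Hypothetical stand-in for the _make_fia_attention_mask helper discussed
    # above: a causal mask where True marks positions that must not be
    # attended to. Whether npu_fused_infer_attention_score treats True or
    # False as "masked" should be checked against the torch_npu docs.
    ones = torch.ones(max_seq_len, max_seq_len, dtype=torch.bool, device=device)
    return torch.triu(ones, diagonal=1)  # strict upper triangle is masked
```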

weijinqian_v1 added 5 commits December 1, 2025 18:15
Signed-off-by: weijinqian_v1 <[email protected]>
Signed-off-by: weijinqian_v1 <[email protected]>
Signed-off-by: weijinqian_v1 <[email protected]>
Signed-off-by: weijinqian_v1 <[email protected]>
Signed-off-by: weijinqian_v1 <[email protected]>
@weijinqian0 weijinqian0 removed the ready (read for review) and ready-for-test (start test by label for PR) labels on Dec 1, 2025
@weijinqian0 weijinqian0 added the ready (read for review) and ready-for-test (start test by label for PR) labels on Dec 1, 2025
Signed-off-by: weijinqian_v1 <[email protected]>
@weijinqian0 weijinqian0 merged commit b4bf01e into vllm-project:main Dec 2, 2025
23 of 33 checks passed
ChenCangtao pushed a commit to ChenCangtao/vllm-ascend that referenced this pull request Dec 3, 2025
…t#4531)

[Refactor] Remove redundant attention operator branches.

Reason:

We replace the other attention ops with fused_infer_attention_score, except for the decode_only state.
Clean up the code and remove 310P support.

vllm-project#4455


- vLLM version: v0.11.2
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.2

---------

Signed-off-by: weijinqian_v1 <[email protected]>
Co-authored-by: weijinqian_v1 <[email protected]>
Mercykid-bash pushed a commit to Mercykid-bash/vllm-ascend that referenced this pull request Dec 4, 2025
…t#4531)

[Refactor] Remove redundant attention operator branches.

Reason:

We replace the other attention ops with fused_infer_attention_score, except for the decode_only state.
Clean up the code and remove 310P support.

vllm-project#4455

- vLLM version: v0.11.2
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.2

---------

Signed-off-by: weijinqian_v1 <[email protected]>
Co-authored-by: weijinqian_v1 <[email protected]>
Signed-off-by: Che Ruan <[email protected]>
@zhangxinyuehfad zhangxinyuehfad (Contributor) commented on Dec 4, 2025

@weijinqian0 #4713: the gemma-2-9b-it and gemma-3-4b-it accuracy tests failed after this PR.


Labels

module:tests, ready (read for review), ready-for-test (start test by label for PR)

Projects

None yet

4 participants