
Conversation

Contributor

@zhoux77899 zhoux77899 commented Nov 28, 2025

What this PR does / why we need it?

Adds a W4A16 quantization method for the Kimi-K2-Thinking model and updates the relevant modules to support the new quantization method.

  • Implements the complete W4A16 quantization method, including weight packing/unpacking, per-group quantization parameter generation, post-processing logic, and MoE method application.
  • Adds the use_int4_w4a16, w1_offset, and w2_offset parameters and adjusts the with_quant conditional logic to support W4A16 matrix multiplication.
  • Adds a packed_modules_model_mapping entry for the Kimi-K2-Thinking model and processing logic for the weight_packed field (see the sketch after this list).
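
As a rough illustration of the packing/unpacking and per-group parameter generation mentioned above, here is a minimal NumPy sketch. The group size, the asymmetric int4 range, and the function names are assumptions for illustration only, not the actual vllm-ascend implementation.

```python
import numpy as np

GROUP_SIZE = 128  # assumed per-group granularity; the real config may differ


def quantize_w4_per_group(w: np.ndarray):
    """Quantize a [out, in] float weight to asymmetric int4 with per-group
    scale/offset (in must be divisible by GROUP_SIZE)."""
    out_dim, in_dim = w.shape
    g = w.reshape(out_dim, in_dim // GROUP_SIZE, GROUP_SIZE)
    w_min = g.min(axis=-1, keepdims=True)
    w_max = g.max(axis=-1, keepdims=True)
    scale = np.maximum((w_max - w_min) / 15.0, 1e-8)   # map each group to [0, 15]
    offset = w_min
    q = np.clip(np.round((g - offset) / scale), 0, 15).astype(np.uint8)
    return q.reshape(out_dim, in_dim), scale.squeeze(-1), offset.squeeze(-1)


def pack_int4(q: np.ndarray) -> np.ndarray:
    """Pack two int4 values into one uint8 along the last axis."""
    return (q[..., 0::2] | (q[..., 1::2] << 4)).astype(np.uint8)


def unpack_int4(packed: np.ndarray) -> np.ndarray:
    """Inverse of pack_int4."""
    out = np.empty(packed.shape[:-1] + (packed.shape[-1] * 2,), dtype=np.uint8)
    out[..., 0::2] = packed & 0x0F
    out[..., 1::2] = (packed >> 4) & 0x0F
    return out
```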

Does this PR introduce any user-facing change?

None.

How was this patch tested?

k2-kimi-thinking

…mi-K2-Thinking quantized experts weights

Signed-off-by: zhoux77899 <[email protected]>
@github-actions

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling in the PR description to help reviewers and future developers understand.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds support for W4A16 quantization for MoE layers, specifically for Kimi-K2 models. The changes include a new quantization method AscendW4A16FusedMoEMethod, modifications to the MoE MLP logic to handle the new format, and updates to configuration files. Additionally, a bug fix in the rotary embedding implementation is included, which prevents a potential crash. The implementation for W4A16 seems consistent with existing quantization methods for Ascend NPUs. The bug fix is a welcome improvement to robustness.
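
For readers unfamiliar with the format: W4A16 keeps activations in 16-bit precision and stores weights as packed int4 with per-group scale/offset, so each weight group is conceptually dequantized before (or fused into) the matmul. The sketch below is a hypothetical PyTorch reference under those assumptions, not the fused NPU kernel used by AscendW4A16FusedMoEMethod.

```python
import torch


def w4a16_group_matmul(x: torch.Tensor, packed_w: torch.Tensor,
                       scale: torch.Tensor, offset: torch.Tensor,
                       group_size: int = 128) -> torch.Tensor:
    """Reference W4A16 GEMM: x is [batch, in] in bf16/fp16, packed_w is
    [out, in // 2] uint8 with two int4 values per byte, and scale/offset are
    [out, in // group_size]."""
    lo = packed_w & 0x0F
    hi = (packed_w >> 4) & 0x0F
    q = torch.stack((lo, hi), dim=-1).reshape(packed_w.shape[0], -1).float()
    q = q.reshape(q.shape[0], -1, group_size)
    w = q * scale.unsqueeze(-1) + offset.unsqueeze(-1)   # per-group dequant
    return x @ w.reshape(w.shape[0], -1).to(x.dtype).t()
```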

Comment on lines +69 to +70
if hasattr(self, "cos") and hasattr(self, "sin") and \
self.cos is not None and self.sin is not None:

Severity: high

This change correctly prevents a potential AttributeError. In the previous implementation, if _rope_forward_oot was called from AscendRotaryEmbedding.forward_oot with is_first_layer set to False on its first execution, self.cos and self.sin would not yet have been initialized, leading to a crash. The added hasattr checks ensure the attributes exist before they are accessed, making the code more robust.
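
For context, the pattern this guard enables looks roughly like the following. This is a hypothetical, simplified demo of the check, not the actual AscendRotaryEmbedding code; _compute_cos_sin and rope_forward are invented stand-ins.

```python
import torch


class RopeGuardDemo:
    """Minimal demo: the cos/sin caches may not exist on the first call,
    so check both attribute existence and non-None before reusing them."""

    def _compute_cos_sin(self, positions: torch.Tensor):
        # Invented fallback; the real module builds these from rotary tables.
        angles = positions.float().unsqueeze(-1)
        return torch.cos(angles), torch.sin(angles)

    def rope_forward(self, positions: torch.Tensor):
        if hasattr(self, "cos") and hasattr(self, "sin") and \
                self.cos is not None and self.sin is not None:
            cos, sin = self.cos, self.sin                  # cached path
        else:
            cos, sin = self._compute_cos_sin(positions)    # first-call path
            self.cos, self.sin = cos, sin                  # populate the cache
        return cos, sin


demo = RopeGuardDemo()
demo.rope_forward(torch.arange(4))  # no AttributeError on the first call
```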

@github-actions github-actions bot added the documentation (Improvements or additions to documentation) and module:tests labels Nov 29, 2025
@github-actions

This pull request has conflicts; please resolve them before we can evaluate it.

@MengqingCao MengqingCao added the ready (ready for review) and ready-for-test (start test by label for PR) labels Dec 2, 2025

Labels

documentation, module:ops, module:quantization, module:tests, ready, ready-for-test
