[Feat] Support Kimi-K2-Thinking native W4A16 quantized experts weights #4516
base: main
Conversation
…mi-K2-Thinking quantized experts weights Signed-off-by: zhoux77899 <[email protected]>
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.
Code Review
This pull request adds support for W4A16 quantization for MoE layers, specifically for Kimi-K2 models. The changes include a new quantization method AscendW4A16FusedMoEMethod, modifications to the MoE MLP logic to handle the new format, and updates to configuration files. Additionally, a bug fix in the rotary embedding implementation is included, which prevents a potential crash. The implementation for W4A16 seems consistent with existing quantization methods for Ascend NPUs. The bug fix is a welcome improvement to robustness.
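For intuition, here is a minimal, hypothetical sketch of the W4A16 idea (int4 weights, fp16 activations): two signed int4 values are packed per int8 byte, then dequantized per group with a scale and offset before an fp16 matmul. Names such as `unpack_int4`, `group_size`, and the packing axis are illustrative assumptions, not the actual `AscendW4A16FusedMoEMethod`, which would dispatch to fused NPU kernels instead.

```python
import torch

def unpack_int4(packed: torch.Tensor) -> torch.Tensor:
    """Unpack two signed int4 values per int8 byte (low nibble first).

    Hypothetical layout; the real kernels may pack along a different axis.
    """
    u = packed.view(torch.uint8).to(torch.int16)            # reinterpret bytes, widen for safe math
    low, high = u & 0x0F, (u >> 4) & 0x0F                   # nibbles as 0..15
    low = torch.where(low >= 8, low - 16, low)              # map back to signed -8..7
    high = torch.where(high >= 8, high - 16, high)
    return torch.stack((low, high), dim=-1).flatten(-2)     # [N, K//2, 2] -> [N, K]

def dequant_w4a16(packed_weight: torch.Tensor,
                  scale: torch.Tensor,
                  offset: torch.Tensor,
                  group_size: int = 128) -> torch.Tensor:
    """Per-group dequantization to fp16: w = (q - offset) * scale."""
    q = unpack_int4(packed_weight).to(torch.float16)        # [N, K]
    n, k = q.shape
    q = q.view(n, k // group_size, group_size)
    w = (q - offset.unsqueeze(-1)) * scale.unsqueeze(-1)    # broadcast per group
    return w.view(n, k)

# Toy usage for one expert projection; activations stay in fp16 (the "A16" part).
n_out, k_in, g = 8, 256, 128
packed = torch.randint(-128, 127, (n_out, k_in // 2), dtype=torch.int8)
scale = torch.rand(n_out, k_in // g, dtype=torch.float16) * 0.01
offset = torch.zeros(n_out, k_in // g, dtype=torch.float16)
x = torch.randn(4, k_in, dtype=torch.float16)
w = dequant_w4a16(packed, scale, offset, g)
y = (x.float() @ w.float().t()).to(torch.float16)           # fp32 matmul only for CPU portability
```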
if hasattr(self, "cos") and hasattr(self, "sin") and \
        self.cos is not None and self.sin is not None:
This change correctly prevents a potential AttributeError. In the previous implementation, if _rope_forward_oot was called when is_first_layer was False in the calling AscendRotaryEmbedding.forward_oot on its first execution, self.cos and self.sin would not have been initialized, leading to a crash. The addition of hasattr checks ensures the attributes exist before they are accessed, making the code more robust.
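For illustration only, a simplified stand-in (not the actual AscendRotaryEmbedding) showing why the guard matters when the first call arrives outside the branch that populates the cache:

```python
import torch

class RotaryEmbeddingStub:
    """Toy stand-in: cos/sin are only created lazily on the first-layer path."""

    def forward_oot(self, positions: torch.Tensor, is_first_layer: bool) -> torch.Tensor:
        if is_first_layer:
            # Lazily build and cache the tables, mirroring the real lazy init.
            angles = positions.float().unsqueeze(-1)
            self.cos, self.sin = torch.cos(angles), torch.sin(angles)
        return self._rope_forward_oot(positions)

    def _rope_forward_oot(self, positions: torch.Tensor) -> torch.Tensor:
        # The guarded branch: without the hasattr checks, reaching this point
        # before the first-layer path has run would raise AttributeError.
        if hasattr(self, "cos") and hasattr(self, "sin") and \
                self.cos is not None and self.sin is not None:
            return positions.float().unsqueeze(-1) * self.cos   # use the cached tables
        return positions.float().unsqueeze(-1)                  # fallback: no cache yet

emb = RotaryEmbeddingStub()
# First call with is_first_layer=False: previously a crash, now a safe fallback.
emb.forward_oot(torch.arange(4), is_first_layer=False)
emb.forward_oot(torch.arange(4), is_first_layer=True)
```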
Signed-off-by: zhoux77899 <[email protected]>
Signed-off-by: zhoux77899 <[email protected]>
Signed-off-by: zhoux77899 <[email protected]>
Signed-off-by: zhoux77899 <[email protected]>
This pull request has conflicts, please resolve those before we can evaluate the pull request.
Signed-off-by: Ruri <[email protected]>
Signed-off-by: zhoux77899 <[email protected]>
Signed-off-by: zhoux77899 <[email protected]>
…ze` attr Signed-off-by: zhoux77899 <[email protected]>
Signed-off-by: zhoux77899 <[email protected]>
Signed-off-by: zhoux77899 <[email protected]>
What this PR does / why we need it?
Adds W4A16 quantization method for the Kimi-K2-Thinking model and updates relevant modules to support the new quantization method.
- Adds `use_int4_w4a16`, `w1_offset` and `w2_offset`, and adjusts the `with_quant` conditional logic to support W4A16 matrix multiplication (see the sketch below).
- Adds `packed_modules_model_mapping` for the Kimi-K2-Thinking model and processing logic for the `weight_packed` field.
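To make the branching concrete, here is a hypothetical sketch of how such a dispatch could look; the parameter names come from the description above, but the shapes, the unpacked int4 storage, and the per-channel dequantization are placeholder assumptions rather than the actual vLLM Ascend kernels:

```python
import torch
import torch.nn.functional as F

def expert_mlp(hidden: torch.Tensor,
               w1: torch.Tensor, w2: torch.Tensor,
               with_quant: bool = False,
               use_int4_w4a16: bool = False,
               w1_scale=None, w1_offset=None,
               w2_scale=None, w2_offset=None) -> torch.Tensor:
    """Placeholder single-expert MLP showing only the dispatch, not real kernels."""
    if with_quant and use_int4_w4a16:
        # W4A16 path: weights are int4 (stored unpacked as int8 here for brevity);
        # activations stay in the floating-point dtype, so no input quantization.
        w1 = (w1.to(hidden.dtype) - w1_offset) * w1_scale
        w2 = (w2.to(hidden.dtype) - w2_offset) * w2_scale
    elif with_quant:
        # Other quantized paths (e.g. W8A8) would also quantize activations here.
        raise NotImplementedError("only the W4A16 branch is sketched")
    return F.silu(hidden @ w1.t()) @ w2.t()

# Toy usage with per-output-channel scale/offset (float32 stands in for fp16 on the NPU).
x = torch.randn(2, 64)
w1_q = torch.randint(-8, 8, (128, 64), dtype=torch.int8)
w2_q = torch.randint(-8, 8, (64, 128), dtype=torch.int8)
out = expert_mlp(x, w1_q, w2_q, with_quant=True, use_int4_w4a16=True,
                 w1_scale=torch.full((128, 1), 0.01), w1_offset=torch.zeros(128, 1),
                 w2_scale=torch.full((64, 1), 0.01), w2_offset=torch.zeros(64, 1))
```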
Does this PR introduce any user-facing change?
None.
How was this patch tested?