Refactor test_mla_v1.py to reduce redundant @patch decorators #4473
Conversation
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run linting and testing checks locally according to Contributing and Testing.
Code Review
This pull request effectively refactors the tests in test_mla_v1.py by introducing a pytest fixture to consolidate numerous repeated @patch decorators. This significantly reduces code duplication and improves the maintainability of the test suite. The implementation is solid, but I've identified a few instances where the new fixture is requested redundantly in test methods, which I've commented on. Addressing these will make the refactoring even cleaner.
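The consolidation described above can be sketched roughly as follows. This is a minimal, hypothetical example: `get_tp_world_size` is a stand-in helper, and the fixture/class names are illustrative, not the actual patch targets used in `test_mla_v1.py`.

```python
from unittest.mock import patch

import pytest


# Hypothetical stand-in for the distributed helpers that the real tests
# patch; the actual targets live in vLLM's distributed modules.
def get_tp_world_size():
    raise RuntimeError("distributed group not initialized")


@pytest.fixture
def mock_distributed():
    # One fixture bundles the patches that were previously stacked as
    # @patch decorators on every individual test method.
    with patch(f"{__name__}.get_tp_world_size", return_value=2):
        yield


class TestAscendMLAImpl:
    @pytest.fixture(autouse=True)
    def setUp(self, mock_distributed):
        # Because this autouse fixture requests mock_distributed, its
        # patches stay active for the duration of every test in the class.
        self.tp_size = get_tp_world_size()

    def test_tp_size(self):
        # No @patch decorators and no fixture argument needed here.
        assert self.tp_size == 2
```

With this shape, adding a new test method requires no decorator bookkeeping; the patches are applied once, in one place.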
```python
logits_soft_cap=None,
attn_type=None,
kv_sharing_target_layer_name=None,
**kwargs)
```
The mock_distributed fixture is unused in this test method and can be removed. The setUp method already requests this fixture, which ensures that the necessary patches are active for the duration of the test. Including it here is redundant and can be misleading.
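The redundancy flagged here can be illustrated with a small sketch; the fixture and class names are assumed for illustration and the fixture body is a placeholder.

```python
import inspect

import pytest


@pytest.fixture
def mock_distributed():
    # Placeholder body; the real fixture patches distributed helpers.
    yield


class TestAscendMLAImpl:
    @pytest.fixture(autouse=True)
    def setUp(self, mock_distributed):
        # The fixture is requested once here; since setUp is autouse,
        # its patches already cover every test method in the class.
        self.impl_ready = True

    # Redundant form the review flags:
    #     def test_init(self, mock_distributed): ...
    # Cleaner form: rely on setUp and drop the argument.
    def test_init(self):
        assert self.impl_ready


# The cleaned-up test method no longer lists the fixture as a parameter.
assert "mock_distributed" not in inspect.signature(
    TestAscendMLAImpl.test_init).parameters
```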
Suggested change:
```python
**kwargs)

def test_init(self):
```
tests/ut/attention/test_mla_v1.py (Outdated)
```python
self.assertIsNotNone(self.impl.kv_a_proj_with_mqa)
self.assertIsNotNone(self.impl.kv_a_layernorm)
self.assertEqual(self.impl.num_queries_per_kv, 32)
self.assertEqual(self.impl.tp_size, 2)
```
The mock_distributed fixture is unused in this test method and can be removed. The setUp method already requests this fixture, which ensures that the necessary patches are active for the duration of the test. Including it here is redundant.
Suggested change:
```python
self.assertEqual(self.impl.tp_size, 2)

def test_q_proj_and_k_up_proj(self):
```
tests/ut/attention/test_mla_v1.py (Outdated)
```python
self.assertEqual(self.impl.W_UV.shape[0], self.impl.num_heads)
self.assertEqual(self.impl.W_UV.shape[1], self.impl.kv_lora_rank)
self.assertEqual(self.impl.W_UV.shape[2], self.impl.v_head_dim)
```
The mock_distributed fixture is unused in this test method and can be removed. The setUp method already requests this fixture, which ensures that the necessary patches are active for the duration of the test. Including it here is redundant.
Suggested change:
```python
self.assertEqual(self.impl.W_UV.shape[2], self.impl.v_head_dim)

def test_compute_prefill_context_none(self):
```
This pull request has conflicts; please resolve them before we can evaluate it.
Signed-off-by: zzhx1 <[email protected]>
Force-pushed 96de0be to e9e9577
What this PR does / why we need it?
This PR refactors `test_mla_v1.py` to eliminate redundant `@patch` decorators across multiple test classes.