
Conversation

@zzhx1 (Contributor) commented Nov 26, 2025

What this PR does / why we need it?

This PR refactors test_mla_v1.py to eliminate the redundant @patch decorators repeated across multiple test classes, consolidating them into a shared pytest fixture.
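For context, here is a minimal sketch of the pattern, assuming hypothetical names: the `mock_distributed` fixture, the patch targets, and the test class below are illustrative placeholders, not the actual contents of test_mla_v1.py (the real fixture would patch vLLM Ascend internals).

```python
import os
import unittest
from unittest.mock import patch

import pytest


@pytest.fixture
def mock_distributed():
    # One shared fixture replaces the stack of @patch decorators that every
    # test class previously repeated. The targets here are placeholders.
    with patch("os.cpu_count", return_value=2), \
         patch("os.getcwd", return_value="/fake"):
        yield


class TestExample(unittest.TestCase):

    @pytest.fixture(autouse=True)
    def _setup(self, mock_distributed):
        # An autouse fixture is how pytest fixtures reach unittest.TestCase
        # subclasses; the patches opened above stay active for every test
        # method in this class.
        self.tp_size = os.cpu_count()

    def test_tp_size(self):
        self.assertEqual(self.tp_size, 2)  # sees the patched value
```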

@github-actions

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling in the PR description to help reviewers and future developers understand.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.

@gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request effectively refactors the tests in test_mla_v1.py by introducing a pytest fixture to consolidate numerous repeated @patch decorators. This significantly reduces code duplication and improves the maintainability of the test suite. The implementation is solid, but I've identified a few instances where the new fixture is requested redundantly in test methods, which I've commented on. Addressing these will make the refactoring even cleaner.

logits_soft_cap=None,
attn_type=None,
kv_sharing_target_layer_name=None,
**kwargs)
gemini-code-assist bot (Contributor), severity: high

The mock_distributed fixture is unused in this test method and can be removed. The setUp method already requests this fixture, which ensures that the necessary patches are active for the duration of the test. Including it here is redundant and can be misleading.

Suggested change:
      **kwargs)
-     def test_init(self, mock_distributed):
+     def test_init(self):
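To see why the extra parameter is a no-op, here is a small sketch; the names and patch target are hypothetical, and a pytest-style class is used for brevity rather than the actual test base class.

```python
import os
from unittest.mock import patch

import pytest


@pytest.fixture
def mock_distributed():
    # Placeholder patch target standing in for the real fixture.
    with patch("os.cpu_count", return_value=2):
        yield


class TestRedundancy:

    @pytest.fixture(autouse=True)
    def _setup(self, mock_distributed):
        # pytest caches each fixture once per test, so the patch opened here
        # is already active for the entire test, including teardown.
        self.tp_size = os.cpu_count()

    def test_redundant(self, mock_distributed):
        # This parameter resolves to the same cached fixture instance the
        # autouse setup already requested; it activates nothing new.
        assert self.tp_size == 2

    def test_clean(self):
        # Identical behavior without the redundant parameter.
        assert self.tp_size == 2
```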

self.assertIsNotNone(self.impl.kv_a_proj_with_mqa)
self.assertIsNotNone(self.impl.kv_a_layernorm)
self.assertEqual(self.impl.num_queries_per_kv, 32)
self.assertEqual(self.impl.tp_size, 2)
gemini-code-assist bot (Contributor), severity: high

The mock_distributed fixture is unused in this test method and can be removed. The setUp method already requests this fixture, which ensures that the necessary patches are active for the duration of the test. Including it here is redundant.

Suggested change:
      self.assertEqual(self.impl.tp_size, 2)
-     def test_q_proj_and_k_up_proj(self, mock_distributed):
+     def test_q_proj_and_k_up_proj(self):


self.assertEqual(self.impl.W_UV.shape[0], self.impl.num_heads)
self.assertEqual(self.impl.W_UV.shape[1], self.impl.kv_lora_rank)
self.assertEqual(self.impl.W_UV.shape[2], self.impl.v_head_dim)
gemini-code-assist bot (Contributor), severity: high

The mock_distributed fixture is unused in this test method and can be removed. The setUp method already requests this fixture, which ensures that the necessary patches are active for the duration of the test. Including it here is redundant.

Suggested change:
      self.assertEqual(self.impl.W_UV.shape[2], self.impl.v_head_dim)
-     def test_compute_prefill_context_none(self, mock_distributed):
+     def test_compute_prefill_context_none(self):

@github-actions

This pull request has conflicts; please resolve them before we can evaluate it.

Signed-off-by: zzhx1 <[email protected]>
@zzhx1 closed this Nov 26, 2025