
Conversation

@wangx700 (Contributor) commented Dec 3, 2025

What this PR does / why we need it?

Fix the incorrect use of numpy's sum function on PyTorch tensors.
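For reference, a minimal sketch of the kind of change described, assuming a 1-D token-count tensor; the name `num_scheduled_tokens` is illustrative, not taken from the patched code:

```python
import torch

num_scheduled_tokens = torch.tensor([3, 5, 2])

# Before: Python's built-in sum() iterates over the tensor element by
# element and returns a 0-dim tensor, not a plain Python int.
total = sum(num_scheduled_tokens)               # tensor(10)

# After: reduce once with torch.sum() and extract a Python scalar.
total = torch.sum(num_scheduled_tokens).item()  # 10, a plain int
```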

Does this PR introduce any user-facing change?

How was this patch tested?


@gemini-code-assist bot left a comment


Code Review

This pull request correctly fixes an issue where Python's built-in sum() was used on a PyTorch tensor, replacing it with torch.sum().item(). However, the refactoring also introduces a critical bug by removing the definition of the max_num_scheduled_tokens variable, which is used later in the function. This will lead to a NameError at runtime. I've added a comment to address this.
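To make the reviewer's point concrete, here is a hedged sketch of the failure mode; the function signature and everything except the two variable names are hypothetical, not the actual vLLM Ascend source:

```python
import torch

def count_tokens(num_scheduled_tokens: torch.Tensor) -> tuple[int, int]:
    # The fixed reduction: a single torch.sum() plus .item() for a scalar.
    total_num_scheduled_tokens = torch.sum(num_scheduled_tokens).item()

    # If a refactor deletes this assignment while the name is still read
    # below, the function raises NameError at runtime, which is the bug
    # the review flags.
    max_num_scheduled_tokens = torch.max(num_scheduled_tokens).item()

    # Later in the function, both values are still needed.
    return total_num_scheduled_tokens, max_num_scheduled_tokens
```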

@wangx700 force-pushed the fix_sum branch 3 times, most recently from 016fc2a to f018b14, on December 3, 2025 at 02:12
@github-actions bot commented Dec 3, 2025

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write a commit message that fulfills the PR description, so reviewers and future developers can understand the change.

If CI fails, you can run the linting and testing checks locally according to the Contributing and Testing guides.

