@finbarrtimbers (Contributor) commented Dec 1, 2025

Purpose

Removes outdated documentation indicating that interleaved sliding windows are not supported in KV cache block allocation. This was fixed in February (#13296).

Test Plan

I will inspect the KV cache that vLLM allocates for a model with interleaved attention layers at several context lengths and compare the per-request sizes.
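A minimal sketch of the reproduction, assuming a placeholder model id (the exact Hugging Face id is not given in this PR): instantiating `vllm.LLM` with a given `max_model_len` makes vLLM print the KV cache figures quoted under Test Result below.

```python
# Sketch only: "allenai/Olmo-3-7B" is a placeholder model id, not taken from this PR.
from vllm import LLM

# Engine startup prints the lines quoted in the Test Result, e.g.
#   [gpu_worker.py] Available KV cache memory: ... GiB
#   [kv_cache_utils.py] Maximum concurrency for 6,144 tokens per request: ...x
llm = LLM(model="allenai/Olmo-3-7B", max_model_len=6_144)
```

Repeating this with a larger `max_model_len` (e.g. 34,048) gives the second data point.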

Test Result

I have verified this manually by inspecting the KV cache generated for Olmo 3. For Olmo 3 7B, which has 3 SWA layers followed by a global attention layer, the KV cache for generating 6,144 tokens is 2.58 GiB per request [1], while for 34,048 tokens it's 5.14 GiB per request [2]. If SWA were not supported, every layer would cache the full context, so I would expect the KV cache to grow ~5.5x (34,048 / 6,144 ≈ 5.5); instead it grows only ~2x, because the SWA layers' caches are capped at the window size.

[1] [gpu_worker.py:298] Available KV cache memory: 57.18 GiB; [kv_cache_utils.py:1091] Maximum concurrency for 6,144 tokens per request: 23.73x
[2] [gpu_worker.py:298] Available KV cache memory: 57.18 GiB; [kv_cache_utils.py:1091] Maximum concurrency for 34,048 tokens per request: 11.12x
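
To see why ~2x rather than ~5.5x is the expected growth, here is a back-of-the-envelope sketch: only the global-attention layers scale with context length, while the SWA layers' caches cap at the window size. The layer count, interleave ratio, and 4,096-token window below are illustrative assumptions, not values read from the Olmo 3 config.

```python
# Rough model of per-request KV cache growth for an interleaved-attention model.
# All parameters below are assumptions for illustration.

def kv_tokens_per_request(ctx_len: int, n_layers: int, global_every: int,
                          window: int, swa_supported: bool) -> int:
    """Count the KV cache slots one request needs, summed over all layers."""
    total = 0
    for layer in range(n_layers):
        is_global = (layer + 1) % global_every == 0  # e.g. every 4th layer is global
        if is_global or not swa_supported:
            total += ctx_len               # full-attention layers cache every token
        else:
            total += min(ctx_len, window)  # SWA layers cap at the window size
    return total

N_LAYERS, GLOBAL_EVERY, WINDOW = 32, 4, 4_096  # assumed, not from the Olmo 3 config

for supported in (True, False):
    small = kv_tokens_per_request(6_144, N_LAYERS, GLOBAL_EVERY, WINDOW, supported)
    large = kv_tokens_per_request(34_048, N_LAYERS, GLOBAL_EVERY, WINDOW, supported)
    print(f"SWA supported={supported}: growth {large / small:.2f}x")
# -> roughly 2.5x with SWA supported, ~5.5x without
```

With these assumed numbers the sketch predicts ~2.5x growth when SWA is honored versus ~5.5x when it is not, in line with the ~2x observed in the logs (block-size rounding and the true window size plausibly account for the difference).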



Clarify handling of interleaving sliding windows in models.

Signed-off-by: Finbarr Timbers <[email protected]>
@chatgpt-codex-connector commented

Codex usage limits have been reached for code reviews. Please check with the admins of this repo to increase the limits by adding credits.


mergify bot commented Dec 1, 2025

Documentation preview: https://vllm--29796.org.readthedocs.build/en/29796/

@mergify mergify bot added the documentation Improvements or additions to documentation label Dec 1, 2025
@gemini-code-assist bot left a comment


Code Review

This pull request updates the documentation in docs/contributing/model/basic.md by removing outdated information about interleaved sliding window support, consistent with the PR's note that this functionality was fixed previously. The change improves the accuracy of the documentation. No specific review comments: the changes are minor documentation updates and introduce no issues of high or critical severity.


github-actions bot commented Dec 1, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small, essential subset of tests to catch errors quickly.

You can ask your reviewers to trigger select CI tests on top of fastcheck CI.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

If you have any questions, please reach out to us on Slack at https://slack.vllm.ai.

🚀


@heheda12345 (Collaborator) left a comment


LGTM!

@heheda12345 heheda12345 enabled auto-merge (squash) December 1, 2025 19:14
@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Dec 1, 2025
@heheda12345 heheda12345 merged commit 38caf7f into vllm-project:main Dec 1, 2025
8 checks passed
amd-hhashemi pushed a commit to amd-hhashemi/vllm that referenced this pull request Dec 2, 2025
xbfs pushed a commit to xbfs/vllm that referenced this pull request Dec 5, 2025
charlotte12l pushed a commit to charlotte12l/vllm that referenced this pull request Dec 5, 2025
