@TheCodeWrangler (Owner)
Purpose

The existing startup warmup (GPUModelRunner.profile_run) often skips initializing attention kernels (e.g., FlashAttention, Triton) for non-FlashInfer backends. This leads to JIT compilation and autotuning occurring on the first real request, causing a noticeable latency spike.

This PR adds a comprehensive attention-inclusive warmup to ensure these kernels are pre-compiled during startup, eliminating first-request latency spikes for all attention backends.
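
Conceptually, the warmup runs a small real batch through the attention path at startup so compilation happens before traffic arrives. The sketch below is illustrative only: the helper name `warmup_attention_kernels` is hypothetical, and the dummy-forward call is assumed to behave like GPUModelRunner's existing dummy-run helper rather than matching this PR's exact code.

```python
import torch

def warmup_attention_kernels(model_runner, num_tokens: int = 16) -> None:
    """Illustrative sketch: run a tiny real forward pass so the attention
    backend (FlashAttention, Triton, ...) JIT-compiles and autotunes during
    startup instead of on the first user request."""
    with torch.inference_mode():
        # Assumed helper: a dummy forward similar to vLLM's
        # GPUModelRunner._dummy_run; the real integration point may differ.
        model_runner._dummy_run(num_tokens)
    # Block until any async compilation/autotuning has completed.
    torch.cuda.synchronize()
```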

Test Plan

  1. Run vLLM with a model using a non-FlashInfer attention backend (e.g., VLLM_ATTENTION_BACKEND=FLASH_ATTN).
  2. Send a single, short prompt immediately after startup.
  3. Measure the latency of the first request.
  4. Compare the first-request latency with and without this PR (an example timing script follows this list).
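
For example, after starting the server with `VLLM_ATTENTION_BACKEND=FLASH_ATTN vllm serve <model>`, the first-request latency can be timed with a small script like the one below (host, port, and model name are placeholders):

```python
import time
import requests

MODEL = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder model name

t0 = time.perf_counter()
resp = requests.post(
    "http://localhost:8000/v1/completions",  # default `vllm serve` endpoint
    json={"model": MODEL, "prompt": "Hello", "max_tokens": 8},
    timeout=120,
)
resp.raise_for_status()
print(f"first-request latency: {time.perf_counter() - t0:.2f}s")
```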

Test Result

  • Before: The first request often exhibits higher latency due to JIT compilation and autotuning of attention kernels.
  • After: The first-request latency is significantly reduced and consistent with that of subsequent requests, indicating successful pre-compilation during warmup.






