[Bugfix] Flashinfer block size for hybrid ssm models #27843
base: main
Changes from 2 commits
```diff
@@ -170,10 +170,10 @@ def get_supported_head_sizes(cls) -> list[int]:

     @staticmethod
     def get_supported_kernel_block_size() -> list[int | MultipleOf]:
-        # Note: Not sure for all platforms,
-        # but on Blackwell, only support a page size of
-        # 16, 32, 64
-        return [16, 32, 64]
+        # Note(Chen): FlashInfer backend supports other block_sizes, but as
+        # the backend doesn't know which block_size was selected, we hardcode
+        # it to only support 32 for now.
+        return [32]

     @classmethod
     def validate_head_size(cls, head_size: int) -> None:
```
```diff
@@ -291,6 +291,7 @@ def __init__(
         self._workspace_buffer = None
         self._prefill_wrapper = None  # Wrapper for prefill/append
         self._decode_wrapper = None  # Wrapper for decode (general shape)
+        block_size = 32  # Note(Chen): Hardcode the block_size as 16 temporarily.
```
Suggested change:

```diff
-        block_size = 32  # Note(Chen): Hardcode the block_size as 16 temporarily.
+        block_size = 32  # Note(Chen): Hardcode the block_size as 32 temporarily.
```
The backend now reports only `32` as a supported block size, but CUDA platforms still initialize `cache_config.block_size` to `16` by default. When a user runs any non-hybrid model with `VLLM_ATTENTION_BACKEND=FLASHINFER`, `_find_compatible_block_sizes` in the GPU model runner queries the backend and fails because 16 is not divisible by 32, raising `ValueError("No compatible block size for 16")` before the model starts. This regression removes support for the common 16-token block size that previously worked. Either the backend needs to continue advertising 16 (and 64), or the default cache block size must be bumped to 32 when FlashInfer is selected.
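For context, here is a minimal sketch of the divisibility check this comment describes, assuming the runner accepts a kernel block size only if it evenly divides the configured cache block size; the function name and error message are illustrative stand-ins, not copied from the vLLM source:

```python
# Illustrative sketch of the compatibility check described above; the name
# and error message are assumptions, not the actual vLLM implementation.
def find_compatible_block_sizes(cache_block_size: int,
                                supported: list[int]) -> list[int]:
    """Return supported kernel block sizes that evenly divide the cache block size."""
    compatible = [k for k in supported if cache_block_size % k == 0]
    if not compatible:
        raise ValueError(f"No compatible block size for {cache_block_size}")
    return compatible


# Old behaviour: the default cache block size of 16 was accepted.
print(find_compatible_block_sizes(16, [16, 32, 64]))  # -> [16]

# After this PR the backend advertises only 32, so 16 % 32 != 0 and the
# check raises before the model starts.
try:
    find_compatible_block_sizes(16, [32])
except ValueError as exc:
    print(exc)  # No compatible block size for 16
```

Under that rule, bumping the default cache block size to 32 (or any multiple of 32) when FlashInfer is selected would make the check pass again, which is the second option proposed above.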