Commit 4071c76
[V1] [Hybrid] Move MiniMaxLinearAttention into layers/mamba (#23831)
Signed-off-by: Thomas Parnell <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Parent: f1bddbd
File tree: 2 files changed (+448, -410 lines)
- vllm/model_executor/layers/mamba
- vllm/model_executor/models