Actions: vllm-project/vllm

Add label on auto-merge enabled

4,305 workflow run results
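For context, a workflow with this name is typically driven by the `pull_request_target` event's `auto_merge_enabled` activity type, which fires when someone enables auto-merge on a pull request. A minimal sketch of such a workflow; the file path, label name, and script body are assumptions for illustration, not taken from the vLLM repository:

```yaml
# .github/workflows/add-label-on-auto-merge.yml (hypothetical path)
name: Add label on auto-merge enabled

on:
  pull_request_target:
    types: [auto_merge_enabled]   # fires when auto-merge is turned on for a PR

permissions:
  pull-requests: write            # required to add labels to the PR

jobs:
  label:
    runs-on: ubuntu-latest
    steps:
      - name: Add label
        uses: actions/github-script@v7
        with:
          script: |
            // "ready" is a placeholder label name
            await github.rest.issues.addLabels({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.payload.pull_request.number,
              labels: ["ready"],
            });
```

The short run durations in the list below (typically 5-15 seconds) are consistent with a single API-call job of this shape.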

Run #4779: [Fix] [gpt-oss] fix non-tool calling path for chat completion (PR #24324, auto_merge_enabled by aarnphm, 12s)
Run #4778: [Fix] [gpt-oss] fix non-tool calling path for chat completion (PR #24324, auto_merge_enabled by heheda12345, 6s)
Run #4775: [Bugfix] Fix silu_mul+quant fusion test (PR #24341, auto_merge_enabled by ProExpertProg, 9s)
Run #4774: [Spec Decode][Benchmark] Add Spec Bench Dataset for benchmarking (PR #23563, auto_merge_enabled by ywang96, 5s)
Run #4773: [Spec Decode] Fix offline spec_decode.py (PR #24257, auto_merge_enabled by ywang96, 6s)
Run #4772: QWEN3 Thinking Fused MoE kernels Optimization configs (PR #24330, auto_merge_enabled by houseroad, 9s)
Run #4771: [New Model]: google/embeddinggemma-300m (PR #24318, auto_merge_enabled by DarkLight1337, 8s)
Run #4770: [Flashinfer] Support Flashinfer TRTLLM FP8-qkv BF16/FP16-out Attention Kernel (PR #23647, auto_merge_enabled by ProExpertProg, 1m 43s)
Run #4769: [Feature] Support Decode Context Parallel (DCP) for MLA (PR #23734, auto_merge_enabled by youkaichao, 15s)
Run #4768: [Multimodal] Improve max video embedding length estimation in V1 (PR #24312, auto_merge_enabled by Isotr0py, 14m 1s)
Run #4767: [gpt-oss][Bugfix] Fix streamableparser for missing handling of certain token_ids (PR #24306, auto_merge_enabled by DarkLight1337, 25m 31s)
Run #4766: [xpu] upgrade ipex/python3.12 for xpu (PR #23830, auto_merge_enabled by jikunshang, 59m 27s)
Run #4765: Add data_parallel_size to VllmConfig string representation (PR #24298, auto_merge_enabled by houseroad, 11m 17s)
Run #4763: [Doc]: fix typos in Python comments (PR #24294, auto_merge_enabled by DarkLight1337, 45m 30s)
Run #4762: [gpt-oss] Validate gpt-oss python tool during initialization (PR #23856, auto_merge_enabled by heheda12345, 1h 22m 27s)
Run #4761: [Frontend][Responses API] Support reporting tool output tokens and fix reasoning token count (PR #24285, auto_merge_enabled by heheda12345, 1h 11m 29s)
Run #4759: Fix Auto_Round Quantization Loading on SM75 and Lower GPUs (PR #24217, auto_merge_enabled by yewentao256, 7s)
Run #4758: [Frontend] Skip unnecessary detokenization when token_id is requested (PR #24236, auto_merge_enabled by simon-mo, 6s)
Run #4757: [CI/Build] Reduce the number of redundant cases to test for LoRA (PR #24276, auto_merge_enabled by simon-mo, 6s)
Run #4756: [Frontend] User-provided uuids for medias in chat (RFC #22044) (PR #23449, auto_merge_enabled by ywang96, 9s)