Commit 724d043
[model] Support PanguUltraMoE (#4615)
### What this PR does / why we need it?
This PR adds support for the PanguUltraMoE model.
### Test result
#### Start serving with a W8A8 quantized model and ACL graph:
Master node:
```
vllm serve $LOCAL_CKPT_DIR \
--host 0.0.0.0 \
--port 8000 \
--data-parallel-size 2 \
--data-parallel-size-local 1 \
--data-parallel-address $MASTER_NODE_IP \
--data-parallel-rpc-port 13389 \
--tensor-parallel-size 16 \
--seed 1024 \
--enable-expert-parallel \
--served-model-name $NAME \
--max-model-len 4096 \
--max-num-batched-tokens 256 \
--max-num-seqs 18 \
--trust-remote-code \
--gpu-memory-utilization 0.90 \
--quantization ascend \
--additional-config '{"ascend_scheduler_config":{"enabled":false, "enable_chunked_prefill":true, "chunked_prefill_enabled":true},"torchair_graph_config":{"enabled":false}}' \
--speculative_config '{"method": "pangu_ultra_moe_mtp", "num_speculative_tokens": 1}'
```
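Both the master and worker launch commands assume a few environment variables are exported beforehand. A minimal sketch with hypothetical placeholder values (none of these values come from this PR):
```
# Hypothetical placeholder values; adjust for your cluster.
export LOCAL_CKPT_DIR=/path/to/pangu-ultra-moe-w8a8   # local checkpoint directory
export MASTER_NODE_IP=192.168.0.1                     # IP of the rank-0 (master) node
export NAME=pangu-ultra-moe                           # name reported via --served-model-name
```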
Other nodes:
```
vllm serve $LOCAL_CKPT_DIR \
--host 0.0.0.0 \
--port 8000 \
--headless \
--data-parallel-size 2 \
--data-parallel-size-local 1 \
--data-parallel-start-rank 1 \
--data-parallel-address $MASTER_NODE_IP \
--data-parallel-rpc-port 13389 \
--tensor-parallel-size 16 \
--seed 1024 \
--enable-expert-parallel \
--served-model-name $NAME \
--max-model-len 4096 \
--max-num-batched-tokens 256 \
--max-num-seqs 18 \
--trust-remote-code \
--gpu-memory-utilization 0.90 \
--quantization ascend \
--additional-config '{"ascend_scheduler_config":{"enabled":false, "enable_chunked_prefill":true, "chunked_prefill_enabled":true},"torchair_graph_config":{"enabled":false}}' \
--speculative_config '{"method": "pangu_ultra_moe_mtp", "num_speculative_tokens": 1}'
```
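Once all nodes have joined, the deployment can be sanity-checked against the OpenAI-compatible API on the master node, for example by listing the served models (a minimal sketch, assuming the commands above are running with `--port 8000`):
```
# List the models served by the master node; should return $NAME.
curl http://${MASTER_NODE_IP}:8000/v1/models
```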
Request & Response:
- Request
```
curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"messages": [
{"role": "system", "content": ""},
{"role": "user", "content": "你是谁?"}
],
"max_tokens": "64",
"top_p": "0.95",
"top_k": "50",
"temperature": "0.6",
"add_special_tokens" : true
}'
```
- Response
```
[unused16] Okay, the user is asking who I am, and I need to answer according to the earlier setup. First, my role is Pangu, developed by Huawei, a reasoning model. I should emphasize that my main functions are answering questions and providing informational support, especially handling complex tasks through logical reasoning and data analysis. The answer needs to stay concise, be in Chinese, and match the user's
```
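The same endpoint also supports token-by-token streaming via the standard OpenAI-compatible `stream` flag (not specific to this PR); a minimal sketch:
```
curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"messages": [{"role": "user", "content": "你是谁?"}],
"max_tokens": 64,
"stream": true
}'
```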
- vLLM version: v0.12.0
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.12.0
Signed-off-by: lijifu <[email protected]>
Co-authored-by: lijifu <[email protected]>
File tree (2 files changed, +14 −0):
- vllm_ascend/quantization
- vllm_ascend/spec_decode