
Commit b747c95

[Doc] Add single NPU tutorial for Qwen2.5-Omni-7B (#4446)
### What this PR does / why we need it?

Add single NPU tutorial for Qwen2.5-Omni-7B

- vLLM version: v0.11.2
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.2

Signed-off-by: Ting FU <[email protected]>
1 parent 9af3475 commit b747c95

File tree

2 files changed: +207 −0

docs/source/tutorials/Qwen2.5-Omni.md

Lines changed: 206 additions & 0 deletions
@@ -0,0 +1,206 @@
# Qwen2.5-Omni-7B

## Introduction

Qwen2.5-Omni is an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner.

The `Qwen2.5-Omni` model has been supported since `vllm-ascend:v0.11.0rc0`. This document walks through the main verification steps for the model, including supported features, feature configuration, environment preparation, single-NPU and multi-NPU deployment, and accuracy and performance evaluation.

## Supported Features

Refer to [supported features](../user_guide/support_matrix/supported_models.md) for the model's supported feature matrix.

Refer to [feature guide](../user_guide/feature_guide/index.md) for each feature's configuration.

## Environment Preparation

### Model Weight

- `Qwen2.5-Omni-3B` (BF16): [Download model weight](https://huggingface.co/Qwen/Qwen2.5-Omni-3B)
- `Qwen2.5-Omni-7B` (BF16): [Download model weight](https://huggingface.co/Qwen/Qwen2.5-Omni-7B)

The following examples use the 7B version by default.

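If you prefer to pre-download the weight, a minimal sketch using the Hugging Face CLI is shown below. The target directory is an assumption chosen to match the cache mount in the docker example that follows; you can equally let vLLM pull the weight from ModelScope at startup, as the serve commands below do.

```bash
# Optional pre-download sketch; the local directory is an assumption matching the host cache mounted into /root/.cache below.
pip install -U "huggingface_hub[cli]"
huggingface-cli download Qwen/Qwen2.5-Omni-7B --local-dir /mnt/sfs_turbo/.cache/Qwen2.5-Omni-7B
```
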
### Installation

You can use our official docker image to run `Qwen2.5-Omni` directly.

Select an image based on your machine type and start it on your node; refer to [using docker](../installation.md#set-up-using-docker).

```{code-block} bash
:substitutions:
# Update --device according to your device (Atlas A2: /dev/davinci[0-7], Atlas A3: /dev/davinci[0-15]).
# Update the vllm-ascend image according to your environment.
# Note: you should download the weight to /root/.cache in advance.
export IMAGE=m.daocloud.io/quay.io/ascend/vllm-ascend:|vllm_ascend_version|
export NAME=vllm-ascend

# Run the container using the defined variables
# Note: If you are running a bridge network with docker, please expose the ports needed for multi-node communication in advance
docker run --rm \
--name $NAME \
--net=host \
--shm-size=1g \
--device /dev/davinci0 \
--device /dev/davinci1 \
--device /dev/davinci2 \
--device /dev/davinci3 \
--device /dev/davinci4 \
--device /dev/davinci5 \
--device /dev/davinci6 \
--device /dev/davinci7 \
--device /dev/davinci_manager \
--device /dev/devmm_svm \
--device /dev/hisi_hdc \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/Ascend/driver/tools/hccn_tool:/usr/local/Ascend/driver/tools/hccn_tool \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-v /mnt/sfs_turbo/.cache:/root/.cache \
-it $IMAGE bash
```

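Once inside the container, it can help to confirm the NPUs are visible before going further (`npu-smi` is mounted into the container by the command above):

```bash
# Each device passed via --device above should show up in this listing.
npu-smi info
```
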
## Deployment

### Single-node Deployment

#### Single NPU (Qwen2.5-Omni-7B)

```bash
export VLLM_USE_MODELSCOPE=true
export MODEL_PATH=vllm-ascend/Qwen2.5-Omni-7B
export LOCAL_MEDIA_PATH=/local_path/to_media/

vllm serve ${MODEL_PATH} \
--host 0.0.0.0 \
--port 8000 \
--served-model-name Qwen-Omni \
--allowed-local-media-path ${LOCAL_MEDIA_PATH} \
--trust-remote-code \
--compilation-config '{"full_cuda_graph": 1}' \
--no-enable-prefix-caching
```

:::{note}
The vllm-ascend docker image should already include the vllm `[audio]` extra. If you encounter an *audio not supported* issue, re-install vllm with the `[audio]` extra:

```bash
VLLM_TARGET_DEVICE=empty pip install -v ".[audio]"
```

:::

`--allowed-local-media-path` is optional; only set it if you need to run inference with local media files.

Do not set `--gpu-memory-utilization` manually unless you know what this parameter does.

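If you did set `--allowed-local-media-path`, requests can reference files under that directory via `file://` URLs. A minimal sketch, assuming a hypothetical `example.png` placed under `${LOCAL_MEDIA_PATH}` on the server:

```bash
# The file path is hypothetical and must live under the directory passed to --allowed-local-media-path.
curl http://127.0.0.1:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Qwen-Omni",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image."},
          {"type": "image_url", "image_url": {"url": "file:///local_path/to_media/example.png"}}
        ]
      }
    ],
    "max_tokens": 100
  }'
```
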
#### Multiple NPU (Qwen2.5-Omni-7B)

```bash
export VLLM_USE_MODELSCOPE=true
export MODEL_PATH=vllm-ascend/Qwen2.5-Omni-7B
export LOCAL_MEDIA_PATH=/local_path/to_media/
export DP_SIZE=8

vllm serve ${MODEL_PATH} \
--host 0.0.0.0 \
--port 8000 \
--served-model-name Qwen-Omni \
--allowed-local-media-path ${LOCAL_MEDIA_PATH} \
--trust-remote-code \
--compilation-config '{"full_cuda_graph": 1}' \
--data-parallel-size ${DP_SIZE} \
--no-enable-prefix-caching
```

`--tensor-parallel-size` does not need to be set for this 7B model. If you do need tensor parallelism, the TP size can be one of `1`, `2`, or `4`; see the sketch below.

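For reference, a hedged variant of the serve command that shards the model with tensor parallelism instead of data parallelism (TP size `2` here is only an illustration):

```bash
# Same serve setup as above, but sharding the model across 2 NPUs with tensor parallelism.
vllm serve ${MODEL_PATH} \
--host 0.0.0.0 \
--port 8000 \
--served-model-name Qwen-Omni \
--trust-remote-code \
--tensor-parallel-size 2 \
--no-enable-prefix-caching
```
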
### Prefill-Decode Disaggregation

Not supported yet.

## Functional Verification

If the service starts successfully, you will see output like this:

```bash
INFO: Started server process [2736]
INFO: Waiting for application startup.
INFO: Application startup complete.
```

Once the server is up, you can query the model with input prompts:

```bash
curl http://127.0.0.1:8000/v1/chat/completions -H "Content-Type: application/json" -H "Authorization: Bearer EMPTY" -d '{
    "model": "Qwen-Omni",
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "What is the text in the illustration?"
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://modelscope.oss-cn-beijing.aliyuncs.com/resource/qwen.png"
                    }
                }
            ]
        }
    ],
    "max_tokens": 100,
    "temperature": 0.7
}'
```

If the query succeeds, the client receives a response like this:

```bash
{"id":"chatcmpl-a70a719c12f7445c8204390a8d0d8c97","object":"chat.completion","created":1764056861,"model":"Qwen-Omni","choices":[{"index":0,"message":{"role":"assistant","content":"The text in the illustration is \"TONGYI Qwen\".","refusal":null,"annotations":null,"audio":null,"function_call":null,"tool_calls":[],"reasoning_content":null},"logprobs":null,"finish_reason":"stop","stop_reason":null,"token_ids":null}],"service_tier":null,"system_fingerprint":null,"usage":{"prompt_tokens":73,"total_tokens":88,"completion_tokens":15,"prompt_tokens_details":null},"prompt_logprobs":null,"prompt_token_ids":null,"kv_transfer_params":null}
```

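Since Qwen2.5-Omni also accepts audio, you can send an audio query in the same way. A hedged sketch: the `audio_url` content part is vLLM's extension for audio inputs (it relies on the `[audio]` extra mentioned above), and the URL below is only a placeholder to replace with a real audio file:

```bash
# Replace the placeholder URL with an audio file the server can reach.
curl http://127.0.0.1:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Qwen-Omni",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "What is said in this audio?"},
          {"type": "audio_url", "audio_url": {"url": "https://example.com/sample.wav"}}
        ]
      }
    ],
    "max_tokens": 100
  }'
```
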
## Accuracy Evaluation

Qwen2.5-Omni on vllm-ascend has been tested with AISBench.

### Using AISBench

1. Refer to [Using AISBench](../developer_guide/evaluation/using_ais_bench.md) for details.

2. After execution, you can get the results. Here are the results of `Qwen2.5-Omni-7B` with `vllm-ascend:v0.11.0rc0`, for reference only:

| dataset | platform | metric | mode | vllm-api-stream-chat |
| ----- | ----- | ----- | ----- | ----- |
| textVQA | A2 | accuracy | gen_base64 | 83.47 |
| textVQA | A3 | accuracy | gen_base64 | 84.04 |

## Performance Evaluation

### Using AISBench

Refer to [Using AISBench for performance evaluation](../developer_guide/evaluation/using_ais_bench.md#execute-performance-evaluation) for details.

### Using vLLM Benchmark

This section runs a performance evaluation of `Qwen2.5-Omni-7B` as an example.

Refer to [vllm benchmark](https://docs.vllm.ai/en/latest/contributing/benchmarks.html) for more details.

There are three `vllm bench` subcommands:

- `latency`: Benchmark the latency of a single batch of requests.
- `serve`: Benchmark the online serving throughput.
- `throughput`: Benchmark offline inference throughput.

Take `serve` as an example and run the following command:

```shell
vllm bench serve --model vllm-ascend/Qwen2.5-Omni-7B --dataset-name random --random-input 1024 --num-prompt 200 --request-rate 1 --save-result --result-dir ./
```

After several minutes, you will get the performance evaluation result.
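
With `--save-result`, the benchmark writes a JSON summary into `--result-dir`. The exact file name is generated by the tool, so the wildcard below is just a convenience for inspecting the newest one:

```bash
# Pretty-print the most recent benchmark result JSON in the current directory.
ls -t ./*.json | head -n 1 | xargs python3 -m json.tool
```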

docs/source/tutorials/index.md

Lines changed: 1 addition & 0 deletions
@@ -22,4 +22,5 @@ multi_node_kimi
  multi_node_qwen3vl
  multi_node_pd_disaggregation_mooncake
  multi_node_ray
+ Qwen2.5-Omni.md
  :::
