92 changes: 55 additions & 37 deletions docs/source/tutorials/Qwen3_embedding.md
@@ -1,56 +1,46 @@
# Qwen3-Embedding-8B
# Qwen3-Embedding

## Introduction
The Qwen3 Embedding model series is the latest model series of the Qwen family, designed specifically for text embedding and ranking tasks. Building upon the dense foundation models of the Qwen3 series, it provides a comprehensive range of text embedding and reranking models in various sizes (0.6B, 4B, and 8B). This guide describes how to run the models with vLLM Ascend. Note that only vLLM Ascend 0.9.2rc1 and higher versions support these models.

## Run Docker Container
## Supported Features

Using the Qwen3-Embedding-8B model as an example, first run the docker container with the following command:
Refer to [supported features](../user_guide/support_matrix/supported_models.md) to get the model's supported feature matrix.

```{code-block} bash
:substitutions:
# Update the vllm-ascend image
export IMAGE=quay.io/ascend/vllm-ascend:|vllm_ascend_version|
docker run --rm \
--name vllm-ascend \
--shm-size=1g \
--device /dev/davinci0 \
--device /dev/davinci_manager \
--device /dev/devmm_svm \
--device /dev/hisi_hdc \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-v /root/.cache:/root/.cache \
-p 8000:8000 \
-it $IMAGE bash
```
## Environment Preparation

Set up environment variables:
### Model Weight

```bash
# Load model from ModelScope to speed up download
export VLLM_USE_MODELSCOPE=True
- `Qwen3-Embedding-8B` [Download model weight](https://www.modelscope.cn/models/Qwen/Qwen3-Embedding-8B)
- `Qwen3-Embedding-4B` [Download model weight](https://www.modelscope.cn/models/Qwen/Qwen3-Embedding-4B)
- `Qwen3-Embedding-0.6B` [Download model weight](https://www.modelscope.cn/models/Qwen/Qwen3-Embedding-0.6B)

# Set `max_split_size_mb` to reduce memory fragmentation and avoid out of memory
export PYTORCH_NPU_ALLOC_CONF=max_split_size_mb:256
```
It is recommended to download the model weights to a directory shared across your nodes, such as `/root/.cache/`.

### Installation

You can use our official docker image to run the `Qwen3-Embedding` series models.
- Start the docker image on your node. Refer to [using docker](../installation.md#set-up-using-docker).

If you don't want to use the docker image, you can also build everything from source:
- Install `vllm-ascend` from source. Refer to [installation](../installation.md).

## Deployment

The following takes the Qwen3-Embedding-8B model as an example.

### Online Inference

```bash
vllm serve Qwen/Qwen3-Embedding-8B --runner pooling
vllm serve Qwen/Qwen3-Embedding-8B --task embed --host 127.0.0.1 --port 8888
```

Once your server is started, you can query the model with input prompts.

```bash
curl http://localhost:8000/v1/embeddings -H "Content-Type: application/json" -d '{
"model": "Qwen/Qwen3-Embedding-8B",
"messages": [
{"role": "user", "content": "Hello"}
]
curl http://127.0.0.1:8888/v1/embeddings -H "Content-Type: application/json" -d '{
"input": [
"The capital of China is Beijing.",
"Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun."
]
}'
```
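
You can also query the endpoint from Python. The snippet below is a minimal sketch using the OpenAI-compatible API; it assumes the `openai` Python client is installed and the server is running on `127.0.0.1:8888` as started above.

```python
# Minimal sketch of an OpenAI-compatible embeddings client (assumption:
# `pip install openai` and the serve command above is running).
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:8888/v1", api_key="EMPTY")

response = client.embeddings.create(
    model="Qwen/Qwen3-Embedding-8B",
    input=[
        "The capital of China is Beijing.",
        "Gravity is a force that attracts two bodies towards each other.",
    ],
)

# Each item in response.data carries one embedding vector.
for item in response.data:
    print(item.index, len(item.embedding))
```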

@@ -81,7 +71,7 @@ if __name__=="__main__":
input_texts = queries + documents

model = LLM(model="Qwen/Qwen3-Embedding-8B",
runner="pooling",
task="embed",
distributed_executor_backend="mp")

outputs = model.embed(input_texts)
@@ -98,3 +88,31 @@ Processed prompts: 0%|
Processed prompts: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 31.95it/s, est. speed input: 0.00 toks/s, output: 0.00 toks/s]
[[0.7477798461914062, 0.07548339664936066], [0.0886271521449089, 0.6311039924621582]]
```
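
The two rows above are the similarity scores of each query against each document. As a rough sketch (not part of the original script, and assuming each item returned by `model.embed()` exposes its vector as `output.outputs.embedding`), such scores can be reproduced from the embeddings with a cosine-similarity computation:

```python
# Sketch: compute query-document cosine similarities from the embeddings
# returned by model.embed() in the script above (2 queries + 2 documents).
import numpy as np

vectors = np.array([output.outputs.embedding for output in outputs])
vectors = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)

query_vecs, doc_vecs = vectors[:2], vectors[2:]
print((query_vecs @ doc_vecs.T).tolist())
```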

## Performance

This section takes `Qwen3-Embedding-8B` as an example to evaluate serving performance.
Refer to [vllm benchmark](https://docs.vllm.ai/en/latest/contributing/) for more details.

Take the `serve` benchmark as an example and run the following command:

```bash
vllm bench serve --model Qwen3-Embedding-8B --backend openai-embeddings --dataset-name random --host 127.0.0.1 --port 8888 --endpoint /v1/embeddings --tokenizer /root/.cache/Qwen3-Embedding-8B --random-input 200 --save-result --result-dir ./
```

After a few minutes, the benchmark completes and prints the performance evaluation result. With this tutorial, the result is:

```bash
============ Serving Benchmark Result ============
Successful requests: 1000
Failed requests: 0
Benchmark duration (s): 6.78
Total input tokens: 108032
Request throughput (req/s): 31.11
Total Token throughput (tok/s): 15929.35
----------------End-to-end Latency----------------
Mean E2EL (ms): 4422.79
Median E2EL (ms): 4412.58
P99 E2EL (ms): 6294.52
==================================================
```
188 changes: 188 additions & 0 deletions docs/source/tutorials/Qwen3_reranker.md
@@ -0,0 +1,188 @@
# Qwen3-Reranker

## Introduction
The Qwen3 Reranker model series is the latest model series of the Qwen family, designed specifically for text embedding and ranking tasks. Building upon the dense foundation models of the Qwen3 series, it provides a comprehensive range of text embedding and reranking models in various sizes (0.6B, 4B, and 8B). This guide describes how to run the models with vLLM Ascend. Note that only vLLM Ascend 0.9.2rc1 and higher versions support these models.

## Supported Features

Refer to [supported features](../user_guide/support_matrix/supported_models.md) to get the model's supported feature matrix.

## Environment Preparation

### Model Weight

- `Qwen3-Reranker-8B` [Download model weight](https://www.modelscope.cn/models/Qwen/Qwen3-Reranker-8B)
- `Qwen3-Reranker-4B` [Download model weight](https://www.modelscope.cn/models/Qwen/Qwen3-Reranker-4B)
- `Qwen3-Reranker-0.6B` [Download model weight](https://www.modelscope.cn/models/Qwen/Qwen3-Reranker-0.6B)

It is recommended to download the model weights to a directory shared across your nodes, such as `/root/.cache/`.

### Installation

You can use our official docker image to run the `Qwen3-Reranker` series models.
- Start the docker image on your node. Refer to [using docker](../installation.md#set-up-using-docker).

If you don't want to use the docker image, you can also build everything from source:
- Install `vllm-ascend` from source. Refer to [installation](../installation.md).

## Deployment

The following takes the Qwen3-Reranker-8B model as an example.

### Online Inference

```bash
vllm serve Qwen/Qwen3-Reranker-8B --task score --host 127.0.0.1 --port 8888 --hf_overrides '{"architectures": ["Qwen3ForSequenceClassification"],"classifier_from_token": ["no", "yes"],"is_original_qwen3_reranker": true}'
```

Once your server is started, you can send requests using the examples below.

### Request Demo with Formatted Query and Documents

```python
import requests

url = "http://127.0.0.1:8888/v1/rerank"

# Please use the query_template and document_template to format the query and
# document for better reranker results.

prefix = '<|im_start|>system\nJudge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be "yes" or "no".<|im_end|>\n<|im_start|>user\n'
suffix = "<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n"

query_template = "{prefix}<Instruct>: {instruction}\n<Query>: {query}\n"
document_template = "<Document>: {doc}{suffix}"

instruction = (
"Given a web search query, retrieve relevant passages that answer the query"
)

query = "What is the capital of China?"

documents = [
"The capital of China is Beijing.",
"Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun.",
]

documents = [
document_template.format(doc=doc, suffix=suffix) for doc in documents
]

response = requests.post(url,
json={
"query": query_template.format(prefix=prefix, instruction=instruction, query=query),
"documents": documents,
}).json()

print(response)
```

If you run this script successfully, you will see the rerank response printed to the console, similar to this:

```bash
{'id': 'rerank-e856a17c954047a3a40f73d5ec43dbc6', 'model': 'Qwen/Qwen3-Reranker-8B', 'usage': {'total_tokens': 193}, 'results': [{'index': 0, 'document': {'text': '<Document>: The capital of China is Beijing.<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n', 'multi_modal': None}, 'relevance_score': 0.9944348335266113}, {'index': 1, 'document': {'text': '<Document>: Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun.<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n', 'multi_modal': None}, 'relevance_score': 6.700084327349032e-07}]}
```
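
The `results` field contains one entry per document along with its `relevance_score`. As a small sketch (assuming the `response` dict returned by the script above), the documents can be reordered by relevance like this:

```python
# Sketch: sort documents by relevance using the parsed rerank response
# (`response`) from the requests example above.
ranked = sorted(response["results"], key=lambda r: r["relevance_score"], reverse=True)
for r in ranked:
    print(f'{r["relevance_score"]:.4f}  {r["document"]["text"][:60]}')
```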

### Offline Inference

```python
from vllm import LLM

model_name = "Qwen/Qwen3-Reranker-8B"

# What is the difference between the official original version and one
# that has been converted into a sequence classification model?
# Qwen3-Reranker is a language model that performs reranking by using the
# logits of the "no" and "yes" tokens.
# It needs to compute the logits over all 151669 vocabulary tokens, making
# this method extremely inefficient, not to mention incompatible with the
# vllm score API.
# A method for converting the original model into a sequence classification
# model was proposed. See: https://huggingface.co/Qwen/Qwen3-Reranker-0.6B/discussions/3
# Models converted offline with this method are not only more efficient and
# compatible with the vllm score API, but also have more concise init
# parameters, for example:
# model = LLM(model="Qwen/Qwen3-Reranker-8B", task="score")

# If you want to load the official original version, the init parameters are
# as follows.

model = LLM(
model=model_name,
task="score",
hf_overrides={
"architectures": ["Qwen3ForSequenceClassification"],
"classifier_from_token": ["no", "yes"],
"is_original_qwen3_reranker": True,
},
)

# Why hf_overrides is needed for the official original version:
# vllm converts it to Qwen3ForSequenceClassification when loaded for
# better performance.
# - First, `"architectures": ["Qwen3ForSequenceClassification"]` manually
#   routes the model to Qwen3ForSequenceClassification.
# - Then, `"classifier_from_token": ["no", "yes"]` extracts the vectors
#   corresponding to these tokens from lm_head.
# - Finally, the two vectors are converted into a single vector. This
#   conversion logic is enabled by `"is_original_qwen3_reranker": True`.

# Please use the query_template and document_template to format the query and
# document for better reranker results.

prefix = '<|im_start|>system\nJudge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be "yes" or "no".<|im_end|>\n<|im_start|>user\n'
suffix = "<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n"

query_template = "{prefix}<Instruct>: {instruction}\n<Query>: {query}\n"
document_template = "<Document>: {doc}{suffix}"

if __name__ == "__main__":
instruction = (
"Given a web search query, retrieve relevant passages that answer the query"
)

query = "What is the capital of China?"

documents = [
"The capital of China is Beijing.",
"Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun.",
]

documents = [document_template.format(doc=doc, suffix=suffix) for doc in documents]

outputs = model.score(query_template.format(prefix=prefix, instruction=instruction, query=query), documents)

print([output.outputs[0].score for output in outputs])
```

If you run this script successfully, you will see a list of scores printed to the console, similar to this:

```bash
[0.9943699240684509, 6.876250040477316e-07]
```

## Performance

This section takes `Qwen3-Reranker-8B` as an example to evaluate serving performance.
Refer to [vllm benchmark](https://docs.vllm.ai/en/latest/contributing/) for more details.

Take the `serve` benchmark as an example and run the following command:

```bash
vllm bench serve --model Qwen3-Reranker-8B --backend vllm-rerank --dataset-name random-rerank --host 127.0.0.1 --port 8888 --endpoint /v1/rerank --tokenizer /root/.cache/Qwen3-Reranker-8B --random-input 200 --save-result --result-dir ./
```

After a few minutes, the benchmark completes and prints the performance evaluation result. With this tutorial, the result is:

```bash
============ Serving Benchmark Result ============
Successful requests: 1000
Failed requests: 0
Benchmark duration (s): 6.78
Total input tokens: 108032
Request throughput (req/s): 31.11
Total Token throughput (tok/s): 15929.35
----------------End-to-end Latency----------------
Mean E2EL (ms): 4422.79
Median E2EL (ms): 4412.58
P99 E2EL (ms): 6294.52
==================================================
```
1 change: 1 addition & 0 deletions docs/source/tutorials/index.md
@@ -11,6 +11,7 @@ Qwen3-30B-A3B.md
Qwen3-235B-A22B.md
Qwen3-Coder-30B-A3B
Qwen3_embedding
Qwen3_reranker
Qwen3-8B-W4A8
Qwen3-32B-W4A4
Qwen3-Next
1 change: 1 addition & 0 deletions docs/source/user_guide/support_matrix/supported_models.md
@@ -47,6 +47,7 @@ Get the latest info here: https://github.com/vllm-project/vllm-ascend/issues/160
| Model | Support | Note | BF16 | Supported Hardware | W8A8 | Chunked Prefill | Automatic Prefix Cache | LoRA | Speculative Decoding | Async Scheduling | Tensor Parallel | Pipeline Parallel | Expert Parallel | Data Parallel | Prefill-decode Disaggregation | Piecewise AclGraph | Fullgraph AclGraph | max-model-len | MLP Weight Prefetch | Doc |
|-------------------------------|-----------|----------------------------------------------------------------------|------|--------------------|------|-----------------|------------------------|------|----------------------|------------------|-----------------|-------------------|-----------------|---------------|-------------------------------|--------------------|--------------------|---------------|---------------------|-----|
| Qwen3-Embedding || |||||||||||||||||||
| Qwen3-Reranker || |||||||||||||||||||
| Molmo || [1942](https://github.com/vllm-project/vllm-ascend/issues/1942) |||||||||||||||||||
| XLM-RoBERTa-based || |||||||||||||||||||
| Bert || |||||||||||||||||||