
Commit 85204de

Merge branch 'main' into bugfix-fusedmoe
2 parents 77e74e4 + a433f32


47 files changed (+1532 additions, −611 deletions)

.github/workflows/_e2e_test.yaml

Lines changed: 2 additions & 2 deletions
@@ -103,12 +103,12 @@ jobs:
       pytest -sv tests/e2e/singlecard/test_sampler.py
       pytest -sv tests/e2e/singlecard/test_vlm.py
       pytest -sv tests/e2e/singlecard/multi-modal/test_internvl.py
+      pytest -sv tests/e2e/singlecard/test_xlite.py

       # ------------------------------------ v1 spec decode test ------------------------------------ #
       pytest -sv tests/e2e/singlecard/spec_decode_v1/test_v1_mtp_correctness.py
       pytest -sv tests/e2e/singlecard/spec_decode_v1/test_v1_mtp_torchair_correctness.py
-      # Fix me: test_eagle_correctness OOM error
-      #pytest -sv tests/e2e/singlecard/spec_decode_v1/test_v1_spec_decode.py
+      pytest -sv tests/e2e/singlecard/spec_decode_v1/test_v1_spec_decode.py

   e2e-2-cards:
     name: multicard-2

docs/source/tutorials/DeepSeek-V3.1.md

Lines changed: 0 additions & 4 deletions
@@ -430,7 +430,6 @@ vllm serve /weights/DeepSeek-V3.1_w8a8mix_mtp \
     "engine_id": "0",
     "kv_connector_module_path": "vllm_ascend.distributed.mooncake_connector",
     "kv_connector_extra_config": {
-        "use_ascend_direct": true,
         "prefill": {
             "dp_size": 2,
             "tp_size": 8

@@ -510,7 +509,6 @@ vllm serve /weights/DeepSeek-V3.1_w8a8mix_mtp \
     "engine_id": "1",
     "kv_connector_module_path": "vllm_ascend.distributed.mooncake_connector",
     "kv_connector_extra_config": {
-        "use_ascend_direct": true,
         "prefill": {
             "dp_size": 2,
             "tp_size": 8

@@ -590,7 +588,6 @@ vllm serve /weights/DeepSeek-V3.1_w8a8mix_mtp \
     "engine_id": "2",
     "kv_connector_module_path": "vllm_ascend.distributed.mooncake_connector",
     "kv_connector_extra_config": {
-        "use_ascend_direct": true,
         "prefill": {
             "dp_size": 2,
             "tp_size": 8

@@ -670,7 +667,6 @@ vllm serve /weights/DeepSeek-V3.1_w8a8mix_mtp \
     "engine_id": "3",
     "kv_connector_module_path": "vllm_ascend.distributed.mooncake_connector",
     "kv_connector_extra_config": {
-        "use_ascend_direct": true,
         "prefill": {
             "dp_size": 2,
             "tp_size": 8

docs/source/user_guide/configuration/additional_config.md

Lines changed: 7 additions & 0 deletions
@@ -26,6 +26,7 @@ The following table lists additional configuration options available in vLLM Ascend

 | Name | Type | Default | Description |
 |-------------------------------------|------|---------|-------------|
+| `xlite_graph_config` | dict | `{}` | Configuration options for xlite graph mode |
 | `torchair_graph_config` | dict | `{}` | Configuration options for torchair graph mode |
 | `weight_prefetch_config` | dict | `{}` | Configuration options for weight prefetch |
 | `refresh` | bool | `false` | Whether to refresh global Ascend configuration content. This is usually used by rlhf or ut/e2e test case. |

@@ -45,6 +46,12 @@ The following table lists additional configuration options available in vLLM Ascend

 The details of each configuration option are as follows:

+**xlite_graph_config**
+
+| Name | Type | Default | Description |
+| ---- | ---- | ------- | ----------- |
+| `enabled` | bool | `False` | Whether to enable xlite graph mode. Currently only Llama and Qwen dense series models are supported. |
+| `full_mode` | bool | `False` | Whether to enable xlite for both the prefill and decode stages. By default, xlite is enabled only for the decode stage. |
+
 **torchair_graph_config**

 | Name | Type | Default | Description |
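
As a quick orientation, here is a hedged offline sketch of passing the new option through `additional_config` (the model path is a placeholder; the full example lives in the graph mode guide below):

```python
from vllm import LLM

# With only "enabled": True, xlite covers the decode stage;
# add "full_mode": True to cover prefill as well.
llm = LLM(
    model="path/to/Qwen3-32B",  # placeholder path
    tensor_parallel_size=8,
    additional_config={"xlite_graph_config": {"enabled": True}},
)
outputs = llm.generate("Hello, how are you?")
```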

docs/source/user_guide/feature_guide/graph_mode.md

Lines changed: 30 additions & 2 deletions
@@ -10,9 +10,10 @@ This guide provides instructions for using Ascend Graph Mode with vLLM Ascend.

 From v0.9.1rc1 with V1 Engine, vLLM Ascend runs models in graph mode by default to keep the same behavior as vLLM. If you hit any issues, please feel free to open an issue on GitHub and fall back to eager mode temporarily by setting `enforce_eager=True` when initializing the model.

-There are two kinds of graph mode supported by vLLM Ascend:
+There are three kinds of graph mode supported by vLLM Ascend:
 - **ACLGraph**: This is the default graph mode supported by vLLM Ascend. In v0.9.1rc1, Qwen and DeepSeek series models are well tested.
 - **TorchAirGraph**: This is the GE graph mode. In v0.9.1rc1, only DeepSeek series models are supported.
+- **XliteGraph**: This is the openEuler xlite graph mode. In v0.11.0, only Llama and Qwen dense series models are supported.

 ## Using ACLGraph
 ACLGraph is enabled by default. Take Qwen series models as an example: using the V1 Engine is enough.

@@ -57,9 +58,36 @@ vllm serve path/to/DeepSeek-R1-0528 --additional-config='{"torchair_graph_config

 You can find more details about additional configuration [here](../configuration/additional_config.md).

+## Using XliteGraph
+
+If you want to run Llama or Qwen dense series models with xlite graph mode, install xlite and set `xlite_graph_config`.
+
+```bash
+pip install xlite
+```
+
+Offline example:
+
+```python
+from vllm import LLM
+
+# xlite covers only the decode stage by default; full mode is enabled by setting "full_mode": True
+model = LLM(model="path/to/Qwen3-32B", tensor_parallel_size=8, additional_config={"xlite_graph_config": {"enabled": True, "full_mode": True}})
+outputs = model.generate("Hello, how are you?")
+```
+
+Online example:
+
+```shell
+vllm serve path/to/Qwen3-32B --tensor-parallel-size 8 --additional-config='{"xlite_graph_config": {"enabled": true, "full_mode": true}}'
+```
+
+You can find more details about xlite [here](https://gitee.com/openeuler/GVirt/blob/master/xlite/README.md).
+
 ## Fallback to the Eager Mode

-If both `ACLGraph` and `TorchAirGraph` fail to run, you should fall back to eager mode.
+If `ACLGraph`, `TorchAirGraph`, and `XliteGraph` all fail to run, you should fall back to eager mode.

 Offline example:
docs/source/user_guide/feature_guide/kv_pool.md

Lines changed: 2 additions & 5 deletions
@@ -41,7 +41,6 @@ The environment variable **MOONCAKE_CONFIG_PATH** is configured to the full path
     "metadata_server": "P2PHANDSHAKE",
     "protocol": "ascend",
     "device_name": "",
-    "use_ascend_direct": true,
     "alloc_in_same_node": true,
     "master_server_address": "xx.xx.xx.xx:50088",
     "global_segment_size": "1GB" (1024MB/1048576KB/1073741824B/1073741824)

@@ -52,7 +51,6 @@ The environment variable **MOONCAKE_CONFIG_PATH** is configured to the full path
 **metadata_server**: Configured as **P2PHANDSHAKE**.
 **protocol**: Configured for Ascend to use Mooncake's HCCL communication.
 **device_name**: ""
-**use_ascend_direct**: Indicator for using the ADXL engine.
 **alloc_in_same_node**: Indicator for preferring the local buffer allocation strategy.
 **master_server_address**: Configured with the IP and port of the master service.
 **global_segment_size**: Expands the kvcache size registered by the PD node to the master.

@@ -133,7 +131,7 @@ python3 -m vllm.entrypoints.openai.api_server \
         }
     ]
 }
-}' > p.log 2>&1
+}'
 ```

 `decode` Node:

@@ -177,7 +175,6 @@ python3 -m vllm.entrypoints.openai.api_server \
     "kv_role": "kv_consumer",
     "kv_port": "20002",
     "kv_connector_extra_config": {
-        "use_ascend_direct": true,
         "prefill": {
             "dp_size": 1,
             "tp_size": 1

@@ -196,7 +193,7 @@ python3 -m vllm.entrypoints.openai.api_server \
         }
     ]
 }
-}' > d.log 2>&1
+}'
 ```

 #### 2. Start proxy_server.
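
To tie the fields together, a minimal sketch that writes the trimmed Mooncake config and exports **MOONCAKE_CONFIG_PATH** before launching the servers (the master address and output path are the guide's placeholders):

```python
import json
import os

# Mooncake config after this change: "use_ascend_direct" is no longer set.
config = {
    "metadata_server": "P2PHANDSHAKE",
    "protocol": "ascend",
    "device_name": "",
    "alloc_in_same_node": True,
    "master_server_address": "xx.xx.xx.xx:50088",  # placeholder
    "global_segment_size": "1GB",
}
with open("mooncake.json", "w") as f:
    json.dump(config, f, indent=4)
os.environ["MOONCAKE_CONFIG_PATH"] = os.path.abspath("mooncake.json")
```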

examples/external_online_dp/run_dp_template.sh

Lines changed: 1 addition & 1 deletion
@@ -29,4 +29,4 @@ vllm serve model_path \
     --trust-remote-code \
     --gpu-memory-utilization 0.9 \
     --quantization ascend \
-    --speculative-config '{"num_speculative_tokens": 1, "method":"deepseek_mtp"}' \
+    --speculative-config '{"num_speculative_tokens": 1, "method":"mtp"}'
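
The same rename applies offline; a hedged sketch, assuming the `LLM` entry point accepts the same `speculative_config` dict as the serve CLI (the model path and quantization mirror the template):

```python
from vllm import LLM

# "method" is now "mtp"; "deepseek_mtp" is the old name.
llm = LLM(
    model="model_path",  # placeholder, as in the template
    quantization="ascend",
    speculative_config={"num_speculative_tokens": 1, "method": "mtp"},
)
```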

mypy.ini

Lines changed: 3 additions & 0 deletions
@@ -27,3 +27,6 @@ ignore_missing_imports = True
 [mypy-msprobe.*]
 ignore_missing_imports = True
 allow_untyped_imports = True
+
+[mypy-xlite.*]
+ignore_missing_imports = True
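
For context, the override covers the usual optional-import pattern; a hypothetical sketch (the actual import site in vllm_ascend may differ):

```python
# Without the [mypy-xlite.*] override, mypy flags this import on
# machines where the optional xlite wheel is not installed.
try:
    import xlite  # noqa: F401
    HAS_XLITE = True
except ImportError:
    HAS_XLITE = False
```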

requirements-dev.txt

Lines changed: 2 additions & 1 deletion
@@ -20,4 +20,5 @@ soundfile
 pytest_mock
 msserviceprofiler>=1.2.2
 mindstudio-probe>=8.3.0
-arctic-inference==0.1.1
+arctic-inference==0.1.1
+xlite

tests/e2e/nightly/features/test_mtpx_deepseek_r1_0528_w8a8.py

Lines changed: 1 addition & 4 deletions
@@ -74,10 +74,7 @@ async def test_models(model: str, mode: str) -> None:
         "VLLM_EXECUTE_MODEL_TIMEOUT_SECONDS": "3600000"
     }
     additional_config: dict[str, Any] = {}
-    speculative_config = {
-        "num_speculative_tokens": 2,
-        "method": "deepseek_mtp"
-    }
+    speculative_config = {"num_speculative_tokens": 2, "method": "mtp"}
     compilation_config = {
         "cudagraph_capture_sizes": [56],
         "cudagraph_mode": "FULL_DECODE_ONLY"

tests/e2e/nightly/features/test_prefix_cache_deepseek_r1_0528_w8a8.py

Lines changed: 1 addition & 4 deletions
@@ -84,10 +84,7 @@ async def test_models(model: str) -> None:
         "chunked_prefill_for_mla": True,
         "enable_weight_nz_layout": True
     }
-    speculative_config = {
-        "num_speculative_tokens": 1,
-        "method": "deepseek_mtp"
-    }
+    speculative_config = {"num_speculative_tokens": 1, "method": "mtp"}
     server_args = [
         "--quantization", "ascend", "--data-parallel-size", "2",
         "--tensor-parallel-size", "8", "--enable-expert-parallel", "--port",
