
Commit 2be0fe2

lulina and wangxiyuan authored
[Feat] Add Euler xlite graph wrapper support (#4526)
### What this PR does / why we need it?

This patch adds support for the xlite graph wrapper to vllm_ascend. Xlite provides operator implementations of the transformer network on Ascend hardware. For details about xlite, please refer to the following link: https://gitee.com/openeuler/GVirt/blob/master/xlite/README.md

The latest performance comparison data between xlite and the default aclgraph mode is as follows:

## Qwen3 32B TPS 910B3(A2) Online Inference Performance Comparison

- aclgraph: main(c4a71fc)
- xlite-full: main(c4a71fc) + xlite-full
- xlite-decode-only: main(c4a71fc) + xlite-decode-only
- diff1: performance comparison between xlite-full and aclgraph
- diff2: performance comparison between xlite-decode-only and aclgraph

### Does this PR introduce _any_ user-facing change?

Enable the xlite graph mode by setting `xlite_graph_config`:

- `--additional-config='{"xlite_graph_config": {"enabled": true}}'` enables xlite for decode only
- `--additional-config='{"xlite_graph_config": {"enabled": true, "full_mode": true}}'` enables xlite for both prefill and decode

- vLLM version: v0.12.0
- vLLM main: vllm-project/vllm@ad32e3e

---

Signed-off-by: lulina <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
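For an offline-inference view of the same switch, here is a minimal sketch (the model path and `tensor_parallel_size` are illustrative; it mirrors the graph_mode.md example added by this PR):

```python
from vllm import LLM

# Decode-only xlite is the default; add "full_mode": True to also cover prefill.
llm = LLM(
    model="path/to/Qwen3-32B",  # illustrative path
    tensor_parallel_size=8,
    additional_config={"xlite_graph_config": {"enabled": True}},
)
outputs = llm.generate("Hello, how are you?")
```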
1 parent 8fdb689 commit 2be0fe2

File tree: 13 files changed, +553 / -3 lines changed


.github/workflows/_e2e_test.yaml

Lines changed: 1 addition & 0 deletions

@@ -103,6 +103,7 @@ jobs:
 pytest -sv tests/e2e/singlecard/test_sampler.py
 pytest -sv tests/e2e/singlecard/test_vlm.py
 pytest -sv tests/e2e/singlecard/multi-modal/test_internvl.py
+pytest -sv tests/e2e/singlecard/test_xlite.py

 # ------------------------------------ v1 spec decode test ------------------------------------ #
 pytest -sv tests/e2e/singlecard/spec_decode_v1/test_v1_mtp_correctness.py

docs/source/user_guide/configuration/additional_config.md

Lines changed: 7 additions & 0 deletions

@@ -26,6 +26,7 @@ The following table lists additional configuration options available in vLLM Ascend

 | Name | Type | Default | Description |
 |------|------|---------|-------------|
+| `xlite_graph_config` | dict | `{}` | Configuration options for xlite graph mode |
 | `torchair_graph_config` | dict | `{}` | Configuration options for torchair graph mode |
 | `weight_prefetch_config` | dict | `{}` | Configuration options for weight prefetch |
 | `refresh` | bool | `false` | Whether to refresh global Ascend configuration content. This is usually used by rlhf or ut/e2e test case. |

@@ -45,6 +46,12 @@ The following table lists additional configuration options available in vLLM Ascend

 The details of each configuration option are as follows:

+**xlite_graph_config**
+| Name | Type | Default | Description |
+| ---- | ---- | ------- | ----------- |
+| `enabled` | bool | `False` | Whether to enable xlite graph mode. Currently only Llama or Qwen dense series models are supported. |
+| `full_mode` | bool | `False` | Whether to enable xlite for both the prefill and decode stages. By default, xlite is only enabled for the decode stage. |
+
 **torchair_graph_config**

 | Name | Type | Default | Description |
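Combined, the two sub-options map to an `additional_config` dict like the following sketch (non-default values shown):

```python
# Sketch: additional_config enabling xlite for both prefill and decode.
# Omitting "full_mode" (default False) keeps xlite on the decode stage only.
additional_config = {
    "xlite_graph_config": {
        "enabled": True,
        "full_mode": True,
    }
}
```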

docs/source/user_guide/feature_guide/graph_mode.md

Lines changed: 30 additions & 2 deletions

@@ -10,9 +10,10 @@ This guide provides instructions for using Ascend Graph Mode with vLLM Ascend.

 From v0.9.1rc1 with V1 Engine, vLLM Ascend will run models in graph mode by default to keep the same behavior with vLLM. If you hit any issues, please feel free to open an issue on GitHub and fall back to the eager mode temporarily by setting `enforce_eager=True` when initializing the model.

-There are two kinds of graph mode supported by vLLM Ascend:
+There are three kinds of graph mode supported by vLLM Ascend:
 - **ACLGraph**: This is the default graph mode supported by vLLM Ascend. In v0.9.1rc1, Qwen and Deepseek series models are well tested.
 - **TorchAirGraph**: This is the GE graph mode. In v0.9.1rc1, only DeepSeek series models are supported.
+- **XliteGraph**: This is the Euler xlite graph mode. In v0.11.0, only Llama and Qwen dense series models are supported.

 ## Using ACLGraph
 ACLGraph is enabled by default. Take Qwen series models as an example: using the V1 Engine is enough.

@@ -57,9 +58,36 @@ vllm serve path/to/DeepSeek-R1-0528 --additional-config='{"torchair_graph_config

 You can find more details about additional configuration [here](../configuration/additional_config.md).

+## Using XliteGraph
+
+If you want to run Llama or Qwen dense series models with xlite graph mode, install xlite and set `xlite_graph_config`.
+
+```bash
+pip install xlite
+```
+
+Offline example:
+
+```python
+from vllm import LLM
+
+# xlite runs in decode-only mode by default; full mode can be enabled by setting "full_mode": True
+model = LLM(model="path/to/Qwen3-32B", tensor_parallel_size=8, additional_config={"xlite_graph_config": {"enabled": True, "full_mode": True}})
+outputs = model.generate("Hello, how are you?")
+```
+
+Online example:
+
+```shell
+vllm serve path/to/Qwen3-32B --tensor-parallel-size 8 --additional-config='{"xlite_graph_config": {"enabled": true, "full_mode": true}}'
+```
+
+You can find more details about xlite [here](https://gitee.com/openeuler/GVirt/blob/master/xlite/README.md).
+
 ## Fallback to the Eager Mode

-If both `ACLGraph` and `TorchAirGraph` fail to run, you should fall back to the eager mode.
+If `ACLGraph`, `TorchAirGraph` and `XliteGraph` all fail to run, you should fall back to the eager mode.

 Offline example:
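The body of that fallback example lies outside this hunk; as a reminder of the guide's earlier `enforce_eager=True` advice, a minimal eager-mode sketch (model path illustrative):

```python
from vllm import LLM

# Eager mode: no graph capture at all; use when every graph mode fails.
model = LLM(model="path/to/Qwen3-32B", enforce_eager=True)
outputs = model.generate("Hello, how are you?")
```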

mypy.ini

Lines changed: 3 additions & 0 deletions

@@ -27,3 +27,6 @@ ignore_missing_imports = True
 [mypy-msprobe.*]
 ignore_missing_imports = True
 allow_untyped_imports = True
+
+[mypy-xlite.*]
+ignore_missing_imports = True

requirements-dev.txt

Lines changed: 2 additions & 1 deletion

@@ -20,4 +20,5 @@ soundfile
 pytest_mock
 msserviceprofiler>=1.2.2
 mindstudio-probe>=8.3.0
-arctic-inference==0.1.1
+arctic-inference==0.1.1
+xlite

tests/e2e/singlecard/test_xlite.py (new file)

Lines changed: 130 additions & 0 deletions

#
# Copyright (c) 2025 Huawei Technologies Co., Ltd. All Rights Reserved.
# Copyright 2023 The vLLM team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
Compare the outputs of vLLM with and without xlite.

Run `pytest tests/e2e/singlecard/test_xlite.py`.
"""

import pytest
from vllm import SamplingParams

from tests.e2e.conftest import VllmRunner
from tests.e2e.model_utils import check_outputs_equal

MODELS = [
    "Qwen/Qwen3-0.6B",
]


@pytest.mark.parametrize("model", MODELS)
@pytest.mark.parametrize("max_tokens", [32])
def test_models_with_xlite_decode_only(
    model: str,
    max_tokens: int,
) -> None:
    prompts = [
        "Hello, my name is", "The president of the United States is",
        "The capital of France is", "The future of AI is"
    ]

    sampling_params = SamplingParams(max_tokens=max_tokens, temperature=0.0)
    with VllmRunner(
            model,
            block_size=128,
            max_model_len=1024,
            enforce_eager=False,
            additional_config={"xlite_graph_config": {
                "enabled": True
            }},
    ) as runner:
        vllm_xlite_outputs = runner.model.generate(prompts, sampling_params)

    with VllmRunner(
            model,
            block_size=128,
            max_model_len=1024,
            enforce_eager=True,
    ) as runner:
        vllm_eager_outputs = runner.model.generate(prompts, sampling_params)

    vllm_xlite_outputs_list = []
    for output in vllm_xlite_outputs:
        vllm_xlite_outputs_list.append(
            (output.outputs[0].index, output.outputs[0].text))

    vllm_eager_outputs_list = []
    for output in vllm_eager_outputs:
        vllm_eager_outputs_list.append(
            (output.outputs[0].index, output.outputs[0].text))

    check_outputs_equal(
        outputs_0_lst=vllm_eager_outputs_list,
        outputs_1_lst=vllm_xlite_outputs_list,
        name_0="vllm_eager_outputs",
        name_1="vllm_xlite_outputs",
    )


@pytest.mark.parametrize("model", MODELS)
@pytest.mark.parametrize("max_tokens", [32])
def test_models_with_xlite_full_mode(
    model: str,
    max_tokens: int,
) -> None:
    prompts = [
        "Hello, my name is", "The president of the United States is",
        "The capital of France is", "The future of AI is"
    ]

    sampling_params = SamplingParams(max_tokens=max_tokens, temperature=0.0)
    with VllmRunner(
            model,
            block_size=128,
            max_model_len=1024,
            enforce_eager=False,
            additional_config={
                "xlite_graph_config": {
                    "enabled": True,
                    "full_mode": True
                }
            },
    ) as runner:
        vllm_xlite_outputs = runner.model.generate(prompts, sampling_params)

    with VllmRunner(
            model,
            block_size=128,
            max_model_len=1024,
            enforce_eager=True,
    ) as runner:
        vllm_eager_outputs = runner.model.generate(prompts, sampling_params)

    vllm_xlite_outputs_list = []
    for output in vllm_xlite_outputs:
        vllm_xlite_outputs_list.append(
            (output.outputs[0].index, output.outputs[0].text))

    vllm_eager_outputs_list = []
    for output in vllm_eager_outputs:
        vllm_eager_outputs_list.append(
            (output.outputs[0].index, output.outputs[0].text))

    check_outputs_equal(
        outputs_0_lst=vllm_eager_outputs_list,
        outputs_1_lst=vllm_xlite_outputs_list,
        name_0="vllm_eager_outputs",
        name_1="vllm_xlite_outputs",
    )
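Both tests repeat the same (index, text) collection loops; a small helper along these lines (hypothetical, not part of this commit) would remove the duplication:

```python
def _to_index_text_pairs(request_outputs):
    """Collapse vLLM RequestOutput objects to (index, text) pairs."""
    return [(out.outputs[0].index, out.outputs[0].text)
            for out in request_outputs]

# Usage in either test:
#   check_outputs_equal(
#       outputs_0_lst=_to_index_text_pairs(vllm_eager_outputs),
#       outputs_1_lst=_to_index_text_pairs(vllm_xlite_outputs),
#       name_0="vllm_eager_outputs",
#       name_1="vllm_xlite_outputs",
#   )
```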

tests/ut/test_platform.py

Lines changed: 11 additions & 0 deletions

@@ -32,6 +32,7 @@ def mock_vllm_config():
 def mock_vllm_ascend_config():
     mock_ascend_config = MagicMock()
     mock_ascend_config.torchair_graph_config.enabled = False
+    mock_ascend_config.xlite_graph_config.enabled = False
     mock_ascend_config.enable_shared_expert_dp = False
     return mock_ascend_config

@@ -512,6 +513,16 @@ def test_check_and_update_config_v1_worker_class_selection(
             "vllm_ascend.torchair.torchair_worker.NPUTorchairWorker",
         )

+        test_ascend_config = TestNPUPlatform.mock_vllm_ascend_config()
+        test_ascend_config.xlite_graph_config.enabled = True
+        mock_init_ascend.return_value = test_ascend_config
+        vllm_config.parallel_config.worker_cls = "auto"
+        self.platform.check_and_update_config(vllm_config)
+        self.assertEqual(
+            vllm_config.parallel_config.worker_cls,
+            "vllm_ascend.xlite.xlite_worker.XliteWorker",
+        )
+
     @patch("vllm_ascend.ascend_config.check_ascend_config")
     @patch("vllm_ascend.ascend_config.init_ascend_config")
     @patch('vllm_ascend.utils.get_ascend_device_type',

vllm_ascend/ascend_config.py

Lines changed: 27 additions & 0 deletions

@@ -72,6 +72,10 @@ def __init__(self, vllm_config):
         self.torchair_graph_config = TorchairGraphConfig(
             torchair_graph_config, vllm_config, additional_config)

+        xlite_graph_config = additional_config.get("xlite_graph_config", {})
+        self.xlite_graph_config = XliteGraphConfig(xlite_graph_config,
+                                                   vllm_config)
+
         ascend_compilation_config = additional_config.get(
             "ascend_compilation_config", {})
         self.ascend_compilation_config = AscendCompilationConfig(

@@ -291,6 +295,29 @@ def __init__(self, torchair_graph_config, vllm_config, additional_config):
         )


+class XliteGraphConfig:
+    """
+    Configuration object for xlite_graph_config from additional_config.
+    """
+
+    def __init__(self, xlite_graph_config, vllm_config):
+        self.enabled = xlite_graph_config.get("enabled", False)
+        self.full_mode = xlite_graph_config.get("full_mode", False)
+        if self.enabled:
+            if bool(vllm_config.speculative_config):
+                raise RuntimeError(
+                    "Xlite graph mode is not compatible with speculative decoding. Please disable speculative decoding."
+                )
+            if vllm_config.parallel_config.pipeline_parallel_size > 1:
+                raise RuntimeError(
+                    "Xlite graph mode is not compatible with pipeline parallelism. Please set pipeline_parallel_size to 1."
+                )
+            if vllm_config.cache_config.block_size != 128:
+                raise RuntimeError(
+                    "Xlite graph mode is only compatible with block_size of 128. Please set block_size to 128."
+                )
+
+
 class DumpConfig:
     """
     Configuration object for dump/PrecisionDebugger settings.
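A usage sketch of the new validation (the MagicMock stand-in for vllm_config is an assumption for illustration): constructing XliteGraphConfig with an unsupported block size fails fast at construction time.

```python
from unittest.mock import MagicMock

vllm_config = MagicMock()
vllm_config.speculative_config = None                    # no spec decode
vllm_config.parallel_config.pipeline_parallel_size = 1   # no pipeline parallelism
vllm_config.cache_config.block_size = 64                 # unsupported: must be 128

try:
    XliteGraphConfig({"enabled": True}, vllm_config)
except RuntimeError as err:
    print(err)  # the block_size incompatibility message
```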

vllm_ascend/platform.py

Lines changed: 5 additions & 0 deletions

@@ -305,6 +305,11 @@ def check_and_update_config(cls, vllm_config: VllmConfig) -> None:
             parallel_config.all2all_backend = "flashinfer_all2allv"
         if ascend_config.torchair_graph_config.enabled:
             parallel_config.worker_cls = "vllm_ascend.torchair.torchair_worker.NPUTorchairWorker"
+        elif ascend_config.xlite_graph_config.enabled:
+            logger.info(
+                "Euler Xlite enabled. See: https://gitee.com/openeuler/GVirt/tree/master/xlite"
+            )
+            parallel_config.worker_cls = "vllm_ascend.xlite.xlite_worker.XliteWorker"
         else:
             parallel_config.worker_cls = "vllm_ascend.worker.worker_v1.NPUWorker"
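The selection order this introduces is torchair first, then xlite, then the default NPU worker; a condensed standalone sketch of that precedence:

```python
# Sketch of the worker_cls precedence after this patch.
def select_worker_cls(ascend_config) -> str:
    if ascend_config.torchair_graph_config.enabled:
        return "vllm_ascend.torchair.torchair_worker.NPUTorchairWorker"
    if ascend_config.xlite_graph_config.enabled:
        return "vllm_ascend.xlite.xlite_worker.XliteWorker"
    return "vllm_ascend.worker.worker_v1.NPUWorker"
```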

vllm_ascend/xlite/__init__.py

Whitespace-only changes.
