Prefix caching is an important feature in LLM inference that can reduce prefill computation time drastically.
However, the performance gain from prefix caching depends heavily on the cache hit rate, which can be limited when only HBM is used for KV cache storage.
Hence, the KV Cache Pool is proposed: it pools various types of storage, including HBM, DRAM, and SSD, for KV cache storage, and makes request prefixes visible across all nodes, increasing the cache hit rate for all requests.
vLLM Ascend currently supports [MooncakeStore](https://github.com/kvcache-ai/Mooncake), one of the most widely recognized KV cache storage engines.
While Mooncake Store can already be used with the vLLM V1 engine by setting it as a remote backend of LMCache on GPU (see the [Tutorial](https://github.com/LMCache/LMCache/blob/dev/examples/kv_cache_reuse/remote_backends/mooncakestore/README.md)), we find it better to integrate a connector that supports Mooncake Store directly and can tailor the data transfer strategy to Huawei NPU hardware.
Hence, we propose to integrate Mooncake Store through a brand new **MooncakeStoreConnectorV1**, which is largely inspired by **LMCacheConnectorV1** (see the `How is MooncakeStoreConnectorV1 Implemented?` section).
## Usage
vLLM Ascend currently supports Mooncake Store as the backend for the KV Cache Pool. To enable Mooncake Store, configure `kv-transfer-config` and choose `MooncakeStoreConnector` as the KV connector.
For step-by-step deployment and configuration, please refer to the KV Pool user guide at `vllm-ascend/docs/source/user_guide/feature_guide/kv_pool_mooncake.md`.
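
For orientation, here is a minimal sketch of the kind of JSON payload passed via `--kv-transfer-config`. The connector name follows the Usage note above and the extra-config keys follow the KV Pool user guide; the concrete values (ports, flags) are illustrative placeholders, not recommendations:

```python
# Sketch of a --kv-transfer-config payload, expressed as the equivalent Python dict.
# Values are illustrative placeholders; consult the user guide for real deployments.
kv_transfer_config = {
    "kv_connector": "MooncakeStoreConnector",  # connector chosen per the Usage section
    "kv_role": "kv_both",                      # pooling connector acts as both producer and consumer (assumption)
    "kv_connector_extra_config": {
        "mooncake_rpc_port": 50055,            # must be unique per instance (see user guide)
        "load_async": True,                    # enable asynchronous KV loading
    },
}
```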
## How it works
The KV Cache Pool integrates multiple memory tiers (HBM, DRAM, SSD, etc.) through a connector-based architecture.
Each connector implements a unified interface for storing, retrieving, and transferring KV blocks between tiers, depending on access frequency and hardware bandwidth.
When combined with vLLM’s Prefix Caching mechanism, the pool enables efficient caching both locally (in HBM) and globally (via Mooncake), ensuring that frequently used prefixes remain hot while less frequently accessed KV data can spill over to lower-cost memory.
### 1. Combining KV Cache Pool with HBM Prefix Caching
Prefix Caching with HBM is already supported by the vLLM V1 Engine.
By introducing KV Connector V1, users can seamlessly combine HBM-based Prefix Caching with Mooncake-backed KV Pool.
Both features can be enabled simply by keeping Prefix Caching on, which is the default in vLLM V1 unless the `--no-enable-prefix-caching` flag is set, and configuring the KV connector for the KV Pool (e.g. the MooncakeStoreConnector).
**Workflow**:
1. The engine first checks for prefix hits in the HBM cache.
2. After getting the number of hit tokens in HBM, the engine queries the KV Pool via the connector. If there are additional hits in the KV Pool, only the **additional blocks** are fetched from the KV Pool, and the rest of the blocks are read directly from HBM to minimize data transfer latency.
3. After the KV caches from the KV Pool are loaded into HBM, the remaining process is the same as Prefix Caching in HBM.
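
To make step 2 concrete, the sketch below shows the intended block arithmetic with hypothetical names (the block size, helper name, and numbers are illustrative, not the actual connector code):

```python
BLOCK_SIZE = 128  # tokens per KV block (illustrative value)

def extra_tokens_from_pool(num_hbm_hit_tokens: int, pool_hit_tokens: int) -> int:
    """Return how many *additional* prefix tokens should be loaded from the KV Pool.

    num_hbm_hit_tokens: prefix tokens already cached in HBM (local prefix cache).
    pool_hit_tokens:    prefix tokens found by the KV Pool lookup.
    Only the blocks beyond the HBM hit are fetched from the pool; blocks already
    in HBM are reused directly to minimize data transfer latency.
    """
    hbm_blocks = num_hbm_hit_tokens // BLOCK_SIZE
    pool_blocks = pool_hit_tokens // BLOCK_SIZE
    extra_blocks = max(0, pool_blocks - hbm_blocks)
    return extra_blocks * BLOCK_SIZE

# Example: 3 blocks (384 tokens) hit in HBM, 5 blocks (640 tokens) hit in the pool
# -> only the 2 extra blocks (256 tokens) are transferred from the pool.
assert extra_tokens_from_pool(384, 640) == 256
```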
### 2. Combining KV Cache Pool with Mooncake PD Disaggregation
When used together with Mooncake PD (Prefill-Decode) Disaggregation, the KV Cache Pool can further decouple prefill and decode stages across devices or nodes.
Currently, KV Pool put and get operations are performed only on **Prefill Nodes**; Decode Nodes receive their KV caches through the Mooncake P2P KV connector, i.e. MooncakeConnector.
The key benefit is that Prefill Nodes keep the performance gain of reduced computation from Prefix Caching in HBM and the KV Pool, without sacrificing the data transfer efficiency between Prefill and Decode nodes, since the P2P KV connector still transfers KV caches between NPU devices directly.
To enable this feature, set up both the Mooncake Connector and the Mooncake Store Connector under a MultiConnector, a KV connector class provided by vLLM that can call multiple KV connectors in a specific order.
For details, please also refer to the Mooncake Store Connector deployment guide.
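
As a rough sketch, the prefill-side configuration might nest the two connectors as shown below; the registered connector names, roles, and ordering are assumptions here and should be checked against the deployment guide:

```python
# Illustrative MultiConnector payload for --kv-transfer-config on a prefill node.
# Connector names, roles, and ordering are assumptions; consult the deployment guide.
kv_transfer_config = {
    "kv_connector": "MultiConnector",
    "kv_role": "kv_producer",  # prefill side in PD disaggregation
    "kv_connector_extra_config": {
        "connectors": [
            # P2P connector: moves KV caches NPU-to-NPU to the decode nodes.
            {"kv_connector": "MooncakeConnector", "kv_role": "kv_producer"},
            # Store connector: puts/gets prefix KV caches in the KV Pool.
            {"kv_connector": "MooncakeStoreConnector", "kv_role": "kv_both"},
        ],
    },
}
```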
## How is MooncakeStoreConnectorV1 Implemented?
**MooncakeStoreConnectorV1** inherits from the KV Connector V1 base class in vLLM V1: by implementing the required methods defined in that base class, a third-party KV cache transfer/storage backend can be integrated into the vLLM framework.
MooncakeStoreConnectorV1 is also largely inspired by LMCacheConnectorV1 in terms of the `Lookup Engine`/`Lookup Client` design for looking up KV cache keys, the `ChunkedTokenDatabase` class for processing tokens into prefix-aware hashes, and other hashing-related designs. On top of this, we have added our own designs, including a `KVTransferThread` that allows asynchronous `get` and `put` of KV caches with multi-threading, and NPU-related data transfer optimizations such as removing the `LocalBuffer` used in LMCache to eliminate redundant data transfer.
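
The `KVTransferThread` idea can be pictured as a worker thread draining a queue of put/get jobs so that the model execution loop never blocks on the store. The sketch below uses hypothetical names and is a simplification, not the actual vLLM Ascend implementation:

```python
import queue
import threading
from dataclasses import dataclass
from typing import Callable

@dataclass
class TransferJob:
    op: str                    # "put" or "get"
    key: str                   # prefix-aware hash of the token chunk
    run: Callable[[], None]    # closure performing the actual store/load call

class KVTransferThread(threading.Thread):
    """Drains put/get jobs asynchronously so the forward pass is not blocked."""

    def __init__(self) -> None:
        super().__init__(daemon=True)
        self._jobs: "queue.Queue[TransferJob]" = queue.Queue()
        self._done: set[str] = set()   # keys whose transfer has completed
        self._lock = threading.Lock()

    def submit(self, job: TransferJob) -> None:
        self._jobs.put(job)

    def finished_keys(self) -> set[str]:
        # Polled by the connector (e.g. from a get_finished-style method) to report completions.
        with self._lock:
            done, self._done = self._done, set()
        return done

    def run(self) -> None:
        while True:
            job = self._jobs.get()
            job.run()              # e.g. mooncake_store.put(key, blocks)
            with self._lock:
                self._done.add(job.key)
```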
The KV Connector methods that need to be implemented can be categorized into scheduler-side methods, which are called in the V1 scheduler, and worker-side methods, which are called in the V1 worker, namely:
### KV Connector Scheduler-Side Methods:
- `get_num_new_matched_tokens`: Get the number of prefix-cache-hit tokens by looking up the KV Pool.
- `update_states_after_alloc`: Update the KV connector state after the temporary buffer allocation.
- `build_connector_meta`: Attach the connector metadata to the request object.
- `request_finished`: Once a request is finished, determine whether its blocks should be freed now or will be sent asynchronously and freed later.
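
A schematic skeleton of the scheduler-side half is shown below; the signatures are simplified paraphrases of the descriptions above, not the exact base-class signatures in vLLM:

```python
class SchedulerSideSketch:  # schematic only, not the actual connector class
    def get_num_new_matched_tokens(self, request, num_computed_tokens: int) -> int:
        """Look up the KV Pool and return how many additional prefix tokens it can
        serve beyond the tokens already hit in HBM."""
        ...

    def update_states_after_alloc(self, request, num_external_tokens: int) -> None:
        """Record which freshly allocated blocks will receive data from the pool."""
        ...

    def build_connector_meta(self, scheduler_output):
        """Package per-request load/save instructions for the worker side."""
        ...

    def request_finished(self, request, block_ids) -> bool:
        """Return whether blocks must be kept alive for an asynchronous save."""
        ...
```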
67
+
### Connector Worker-Side Methods:
68
+
- `register_kv_caches`: Register the KV cache buffers needed for KV cache transfer.
- `start_load_kv`: Perform the KV cache load operation that transfers KV caches from storage to device.
- `wait_for_layer_load`: Optional; wait for a layer load in the layerwise + async KV load scenario.
- `save_kv_layer`: Optional; perform a layerwise KV cache put into the KV Pool.
- `wait_for_save`: Wait for the KV save to finish when the KV cache save/put is asynchronous.
- `get_finished`: Get requests that have finished KV transfer: done sending if `put` finished, done receiving if `get` finished.
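
The worker-side half, in the same schematic style (again, signatures are paraphrased from the descriptions above rather than copied from the vLLM base class):

```python
class WorkerSideSketch:  # schematic only, not the actual connector class
    def register_kv_caches(self, kv_caches) -> None:
        """Keep references to the device KV cache buffers used during transfers."""
        ...

    def start_load_kv(self, connector_metadata) -> None:
        """Kick off (possibly asynchronous) loads of pool-hit blocks into HBM."""
        ...

    def wait_for_layer_load(self, layer_name: str) -> None:
        """Optional: block until a given layer has landed (layerwise async load)."""
        ...

    def save_kv_layer(self, layer_name: str, kv_layer) -> None:
        """Optional: put a single layer's KV cache into the pool as it is produced."""
        ...

    def wait_for_save(self) -> None:
        """Block until all outstanding asynchronous puts have completed."""
        ...

    def get_finished(self, finished_req_ids):
        """Report requests whose async put (done sending) or get (done receiving) completed."""
        ...
```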
74
+
75
+
## DFX
76
+
1. When looking up a key in the KV Pool, if the key cannot be found, there is no cache hit for that specific block; we return no hit for this block and do not look up further blocks for the current request.
77
+
2. Similarly, when putting a block into the KV Pool fails, we do not put further blocks (subject to change).
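
A sketch of this stop-at-first-miss policy over the chunked keys of one request (hypothetical helper names; the put path follows the same early-exit pattern):

```python
def lookup_prefix_hit_tokens(store, block_keys: list[str], block_size: int) -> int:
    """Return the number of consecutively hit prefix tokens, stopping at the first miss."""
    hit_blocks = 0
    for key in block_keys:
        if not store.exists(key):  # first miss ends the lookup for this request
            break
        hit_blocks += 1
    return hit_blocks * block_size
```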
78
+
79
+
## Limitations
80
+
81
+
1. Currently, Mooncake Store for vLLM-Ascend only supports DRAM as the storage for KV Cache pool.
82
+
83
+
2. For now, if we successfully look up a key and find that it exists, but the call to the KV Pool's get function then fails, we only output a log indicating that the get operation failed and keep going; hence, the accuracy of that specific request may be affected. We will handle this situation by falling back and re-computing the request as if there were no prefix cache hit (or, even better, reverting only the failed block and keeping the prefix caches before it).

The following is the updated content of `docs/source/user_guide/feature_guide/kv_pool_mooncake.md` (excerpt):

* PyTorch == 2.7.1, torch-npu == 2.7.1
* vLLM: main branch
* vLLM-Ascend: main branch
* Mooncake: main branch

Installation and Compilation Guide: https://github.com/kvcache-ai/Mooncake?tab=readme-ov-file#build-and-use-binaries

Make sure to build with `-DUSE_ASCEND_DIRECT` to enable the ADXL engine.

An example command for compiling ADXL:

`rm -rf build && mkdir -p build && cd build && cmake .. -DCMAKE_INSTALL_PREFIX=/opt/transfer-engine/ -DCMAKE_POLICY_VERSION_MINIMUM=3.5 -DUSE_ASCEND_DIRECT=ON -DBUILD_SHARED_LIBS=ON -DBUILD_UNIT_TESTS=OFF && make -j && make install`

Also, set environment variables so the Mooncake libraries can be found, e.g. `export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib64/python3.11/site-packages/mooncake`, or copy the .so files to the `/usr/local/lib64` directory after compilation.

### KV Pooling Parameter Description

**kv_connector_extra_config**: Additional configurable parameters for pooling.

**mooncake_rpc_port**: Port for RPC communication between the pooling scheduler process and worker processes; each instance requires a unique port.

**load_async**: Whether to enable asynchronous loading. The default value is false.

**register_buffer**: Whether to register device memory with the backend. Registration is not required when used with MooncakeConnectorV1; it is required in all other cases. The default value is false.

## run mooncake master

The environment variable **MOONCAKE_CONFIG_PATH** is configured to the full path of the Mooncake configuration file, for example:

```json
{
    "local_hostname": "xx.xx.xx.xx",
    "metadata_server": "P2PHANDSHAKE",
    "protocol": "ascend",
    "device_name": "",
    "use_ascend_direct": true,
    "alloc_in_same_node": true,
    "master_server_address": "xx.xx.xx.xx:50088",
    "global_segment_size": 30000000000
}
```

**local_hostname**: Configured as the IP address of the current master node.

**metadata_server**: Configured as **P2PHANDSHAKE**.

**protocol**: Configured as "ascend" to use Mooncake's HCCL communication.

**device_name**: "".

**use_ascend_direct**: Indicator for using the ADXL engine.

**alloc_in_same_node**: Indicator for preferring the local buffer allocation strategy.

**master_server_address**: Configured with the IP and port of the master service.

**global_segment_size**: Expands the kvcache size registered by the PD node to the master.

`eviction_high_watermark_ratio` determines the watermark at which Mooncake Store performs eviction, and `eviction_ratio` determines the portion of stored objects that will be evicted.

## Pooling and Prefill Decode Disaggregate Scenario

`export ASCEND_BUFFER_POOL=4:8`

ASCEND_BUFFER_POOL is the environment variable for configuring the number and size of the buffers on the NPU device used for aggregation and KV transfer; the value 4:8 means 4 buffers of 8 MB each are allocated.

An example request:

`curl -s http://localhost:8100/v1/completions -H "Content-Type: application/json" -d '{ "model": "/xxxxx/Qwen2.5-7B-Instruct", "prompt": "Given the accelerating impacts of climate change—including rising sea levels, increasing frequency of extreme weather events, loss of biodiversity, and adverse effects on agriculture and human health—there is an urgent need for a robust, globally coordinated response. However, international efforts are complicated by a range of factors: economic disparities between high-income and low-income countries, differing levels of industrialization, varying access to clean energy technologies, and divergent political systems that influence climate policy implementation. In this context, how can global agreements like the Paris Accord be redesigned or strengthened to not only encourage but effectively enforce emission reduction targets? Furthermore, what mechanisms can be introduced to promote fair and transparent technology transfer, provide adequate financial support for climate adaptation in vulnerable regions, and hold nations accountable without exacerbating existing geopolitical tensions or disproportionately burdening those with historically lower emissions?", "max_tokens": 256, "temperature": 0.0 }'`