
Conversation

Contributor

@wangxiaoteng888 wangxiaoteng888 commented Dec 3, 2025

What this PR does / why we need it?

Clean up stale connector history information when the node restarts.

Does this PR introduce any user-facing change?

No

How was this patch tested?

By CI.

Signed-off-by: wangxiaoteng <[email protected]>

github-actions bot commented Dec 3, 2025

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling in the PR description to help reviewers and future developers understand.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request addresses the issue of stale connector history by appending a UUID to the engine_id, ensuring a unique identity for each node restart. This is a solid approach. Additionally, it introduces a SizedDict to replace defaultdict for caching remote engine metadata, which prevents unbounded memory growth.
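
For context, a minimal sketch of the engine-id idea, assuming a hypothetical helper name (make_engine_id is illustrative, not the PR's actual code):

import uuid

def make_engine_id(base_engine_id: str) -> str:
    # Appending a fresh UUID on every process start gives a restarted node a
    # new connector identity, so peers cannot match it against stale cached
    # metadata from a previous run.
    return f"{base_engine_id}_{uuid.uuid4()}"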

My review includes two main points for improvement:

  1. The implementation of SizedDict can be simplified by removing an unnecessary @dataclass decorator.
  2. The default size limit in SizedDict is very small and could cause performance issues in larger deployments. I've suggested increasing it and making it configurable.

These changes will improve the maintainability and performance robustness of the new caching mechanism.

Comment on lines +71 to +72
@dataclass
class SizedDict(OrderedDict):

high

The use of @dataclass on the SizedDict class is unnecessary and potentially misleading. This class defines a custom __init__ method and inherits from OrderedDict, which is not a dataclass. The @dataclass decorator is designed for classes that primarily store data and can auto-generate methods like __init__ and __repr__. In this case, it provides no benefit and could cause confusion or unexpected behavior during future maintenance. It's better to define it as a regular class for clarity and correctness.

Suggested change
-@dataclass
-class SizedDict(OrderedDict):
+class SizedDict(OrderedDict):
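
For illustration, a minimal sketch of a size-bounded OrderedDict written as a regular class; this is an assumption about the general shape, not necessarily the PR's exact implementation:

from collections import OrderedDict

class SizedDict(OrderedDict):
    def __init__(self, max_size: int = 2, *args, **kwargs):
        self.max_size = max_size
        super().__init__(*args, **kwargs)

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        # Evict the oldest entry once the size limit is exceeded.
        if len(self) > self.max_size:
            self.popitem(last=False)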

Comment on lines 697 to +722
 self.remote_kv_caches_base_addr: dict[str, dict[int, list[int]]] = \
-    defaultdict(dict)
+    SizedDict()
 self.remote_te_port: dict[str, dict[int, int]] = \
-    defaultdict(dict)
+    SizedDict()

high

The new SizedDict is initialized with its default max_size of 2. This means it will only cache metadata for the two most recently used remote engines. In a deployment with more than two peer engines, this could lead to cache thrashing, where metadata is frequently evicted and then re-fetched over the network, potentially impacting performance.

It would be more robust to make this cache size configurable. For example, you could add a configuration option and pass it to the SizedDict constructor:

# In MooncakeLayerwiseConnectorWorker.__init__
max_cached_engines = self.vllm_config.kv_transfer_config.get_from_extra_config(
    'max_cached_engines', 128)  # A more reasonable default

# ...

self.remote_kv_caches_base_addr: dict[str, dict[int, list[int]]] = \
    SizedDict(max_size=max_cached_engines)
self.remote_te_port: dict[str, dict[int, int]] = \
    SizedDict(max_size=max_cached_engines)

This would allow operators to tune the cache size based on their specific deployment topology. For now, I'm suggesting a larger default.

Suggested change
 self.remote_kv_caches_base_addr: dict[str, dict[int, list[int]]] = \
-    SizedDict()
+    SizedDict(max_size=128)
 self.remote_te_port: dict[str, dict[int, int]] = \
-    SizedDict()
+    SizedDict(max_size=128)
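
As a usage illustration (assuming the SizedDict sketch above), a small max_size means a third remote engine immediately evicts the first engine's metadata, which then has to be re-fetched over the network:

cache = SizedDict(max_size=2)
cache["engine-a"] = {0: [0x1000]}
cache["engine-b"] = {0: [0x2000]}
cache["engine-c"] = {0: [0x3000]}  # "engine-a" is evicted here
assert "engine-a" not in cache and len(cache) == 2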
