
Commit d895919

Change the position of shared_weight_layer: move it out of the torchair folder (vllm_ascend/torchair/ops/ → vllm_ascend/ops/).
Signed-off-by: zzhx1 <[email protected]>
1 parent 1b08ffe · commit d895919

File tree

2 files changed: +3 -3 lines changed


vllm_ascend/attention/mla_v1.py

Lines changed: 3 additions & 3 deletions
@@ -34,12 +34,12 @@
 from vllm_ascend.compilation.acl_graph import (get_graph_params,
                                                get_mtp_graph_params,
                                                update_graph_params_workspaces)
-from vllm_ascend.ops.weight_prefetch import maybe_npu_prefetch
-from vllm_ascend.quantization.w8a8 import AscendW8A8LinearMethod
-from vllm_ascend.torchair.ops.shared_weight_layer import (
+from vllm_ascend.ops.shared_weight_layer import (
     post_process_after_loading_for_shared_weight_series,
     reach_layer_for_shared_weight_series,
     register_layer_to_shared_weight_series)
+from vllm_ascend.ops.weight_prefetch import maybe_npu_prefetch
+from vllm_ascend.quantization.w8a8 import AscendW8A8LinearMethod
 from vllm_ascend.utils import (ACL_FORMAT_FRACTAL_ND, ACL_FORMAT_FRACTAL_NZ,
                                flashcomm2_o_shared_enabled, is_enable_nz,
                                prefill_context_parallel_enable,
vllm_ascend/{torchair/ops → ops}/shared_weight_layer.py

File renamed without changes.
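Because this commit only relocates the module, downstream code that has to run against releases from before and after the rename can guard the import. The snippet below is a hedged compatibility sketch, not part of this commit; it assumes nothing beyond the two module paths and the three helper names visible in the diff above.

# Compatibility sketch (an assumption, not code from this commit): prefer the
# new vllm_ascend.ops location introduced here and fall back to the old
# torchair path for earlier versions of vllm-ascend.
try:
    from vllm_ascend.ops.shared_weight_layer import (
        post_process_after_loading_for_shared_weight_series,
        reach_layer_for_shared_weight_series,
        register_layer_to_shared_weight_series)
except ImportError:
    # Pre-rename layout kept the module under vllm_ascend/torchair/ops/.
    from vllm_ascend.torchair.ops.shared_weight_layer import (
        post_process_after_loading_for_shared_weight_series,
        reach_layer_for_shared_weight_series,
        register_layer_to_shared_weight_series)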
