Commit 8d6e134

fix bug
Signed-off-by: zzhx1 <[email protected]>
1 parent: 09f8e76

File tree: 1 file changed (+0 −15 lines)


vllm_ascend/distributed/parallel_state.py

Lines changed: 0 additions & 15 deletions
@@ -21,8 +21,6 @@
 _P_TP: Optional[GroupCoordinator] = None
 _FLASHCOMM2_OTP: Optional[GroupCoordinator] = None
 _FLASHCOMM2_ODP: Optional[GroupCoordinator] = None
-_FC3_QUANT_X: Optional[GroupCoordinator] = None
-
 
 
 def get_mc2_group() -> GroupCoordinator:
@@ -325,16 +323,3 @@ def destroy_ascend_model_parallel():
     ).flashcomm2_oproj_tensor_parallel_size != 1:
         _FLASHCOMM2_ODP.destroy()
         _FLASHCOMM2_ODP = None
-<<<<<<< HEAD
-=======
-
-    global _FC3_QUANT_X
-    if _FC3_QUANT_X:
-        _FC3_QUANT_X.destroy()
-        _FC3_QUANT_X = None
-
-    global _EMBED_TP
-    if _EMBED_TP:
-        _EMBED_TP.destroy()
-        _EMBED_TP = None
->>>>>>> 8d2b4e79 (PullRequest: 691 [Feat] Add embedding tensor parallel in decode scenario)
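The bulk of the deletion is an unresolved merge-conflict region left behind by PullRequest 691: the <<<<<<< HEAD, =======, and >>>>>>> 8d2b4e79 markers are not valid Python, so importing vllm_ascend/distributed/parallel_state.py would raise a SyntaxError until they were removed. Resolving to the (empty) HEAD side drops the _FC3_QUANT_X and _EMBED_TP teardown blocks, which is also why the matching _FC3_QUANT_X declaration near the top of the module goes away in the first hunk. The deleted lines follow the module's guard-and-clear teardown idiom; the sketch below restates that idiom in a minimal, self-contained form. The GroupCoordinator stub and the destroy_group helper are illustrative stand-ins for this sketch, not the actual vllm-ascend API.

from typing import Optional


class GroupCoordinator:
    """Illustrative stand-in for vLLM's GroupCoordinator; only the
    destroy() hook matters for this sketch."""

    def destroy(self) -> None:
        # The real coordinator would tear down its process group here.
        pass


_FLASHCOMM2_ODP: Optional[GroupCoordinator] = None


def destroy_group() -> None:
    # Guard-and-clear teardown, mirroring the pattern in the diff above:
    # destroy the group only if it was initialized, then reset the
    # module-level handle so a later re-init starts from a clean state.
    global _FLASHCOMM2_ODP
    if _FLASHCOMM2_ODP:
        _FLASHCOMM2_ODP.destroy()
        _FLASHCOMM2_ODP = None

A side effect of clearing the handle is that calling destroy_group() twice is safe: the second call sees None and is a no-op.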
