Commit 5d81c44

fix(bug): fix missing line after resolving conflicts

Signed-off-by: zhoux77899 <[email protected]>

1 parent 18a5c20 commit 5d81c44

File tree

1 file changed: +1 −0

vllm_ascend/ops/fused_moe/moe_mlp.py

Lines changed: 1 addition & 0 deletions

```diff
@@ -154,6 +154,7 @@ def quant_apply_mlp(hidden_states: torch.Tensor,
             group_list_type=group_list_type,
             group_type=0,
             group_list=group_list,
+            output_dtype=w2_scale[0].dtype)[0]
     elif w1_offset is not None:
         # gmm1: gate_up_proj
         hidden_states = torch_npu.npu_grouped_matmul(
```

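Why the one restored line matters: in quantized grouped matmuls the raw accumulator is typically a wider integer type, so dropping the `output_dtype=w2_scale[0].dtype` keyword during conflict resolution would leave the result in the accumulator dtype rather than the dtype of the weight scales. The sketch below illustrates that failure mode with a hypothetical pure-Python stub (`grouped_matmul_stub` is invented for illustration; it is not the real `torch_npu.npu_grouped_matmul` API and runs without an NPU):

```python
# Hypothetical stub mimicking a quantized grouped matmul whose raw
# accumulator dtype is "int32" unless an output dtype is requested.
def grouped_matmul_stub(groups, output_dtype=None):
    # Each group is a list of (a, b) pairs; accumulate their products.
    raw = [("int32", sum(a * b for a, b in grp)) for grp in groups]
    if output_dtype is None:
        # Bug path: the kwarg was lost, dtype falls through as "int32".
        return raw
    # Fixed path: results are cast to the caller-requested dtype,
    # analogous to passing output_dtype=w2_scale[0].dtype in the commit.
    return [(output_dtype, val) for _, val in raw]

groups = [[(2, 3), (4, 5)], [(1, 7)]]
without_fix = grouped_matmul_stub(groups)
with_fix = grouped_matmul_stub(groups, output_dtype="bfloat16")
```

Here `without_fix` keeps every result tagged `"int32"`, while `with_fix` carries `"bfloat16"` through, which is the behavior the missing line restores.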