
Conversation

@MekkCyber (Contributor)
What does this PR do?

Fixes fbgemm

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@SunMarc (Member) left a comment


Let's go! Just a nit.

Comment on lines 54 to 62
# Sanity checks
if isinstance(module, FbgemmFp8Linear):
    if tensor_name == "weight" and value.dtype == torch.float8_e4m3fn:
        raise ValueError("Expect unquantized weights but got a quantized weight")
    if tensor_name == "weight_scale":
        raise ValueError("Expect unquantized weights but got a weight_scale")
if isinstance(module, FbgemmFp8Llama4TextExperts):
    if tensor_name == "gate_up_proj_scale" or tensor_name == "down_proj_scale":
        raise ValueError("Expect unquantized weights but got a quantized weight_scale")
@SunMarc (Member)

Let's remove those checks; this shouldn't be possible here.

Suggested change

-# Sanity checks
-if isinstance(module, FbgemmFp8Linear):
-    if tensor_name == "weight" and value.dtype == torch.float8_e4m3fn:
-        raise ValueError("Expect unquantized weights but got a quantized weight")
-    if tensor_name == "weight_scale":
-        raise ValueError("Expect unquantized weights but got a weight_scale")
-if isinstance(module, FbgemmFp8Llama4TextExperts):
-    if tensor_name == "gate_up_proj_scale" or tensor_name == "down_proj_scale":
-        raise ValueError("Expect unquantized weights but got a quantized weight_scale")
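For context, a tiny hypothetical sketch of what such a guard looks like if one were ever needed upstream (the helper name and placement are assumptions, not code from this PR):

import torch

def assert_unquantized(tensor_name: str, value: torch.Tensor) -> None:
    # Hypothetical guard: tensors handed to the quantizer should still be
    # high precision, never already-packed FP8 and never a scale tensor.
    if value.dtype == torch.float8_e4m3fn:
        raise ValueError(f"{tensor_name} is already FP8-quantized")
    if tensor_name.endswith("_scale"):
        raise ValueError(f"{tensor_name} is a quantization scale, not a weight")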

    current_key_name=None,
    quantization_config=None,
    pre_quantized=False,
    config=None,
@SunMarc (Member)

Let's use model.config directly.
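A rough sketch of what this suggestion amounts to (signature abbreviated, body elided; not the exact PR diff): drop the extra config kwarg and read the config off the model that is already passed in.

def replace_with_fbgemm_fp8_linear(
    model,
    modules_to_not_convert=None,
    current_key_name=None,
    quantization_config=None,
    pre_quantized=False,
):
    # The model passed in already carries its config, so there is no need
    # to thread a separate `config=None` argument through the call chain.
    config = model.config
    # ... module replacement logic unchanged ...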

Comment on lines 267 to 269
if tp_plan is not None:
    tp_key = re.sub(r"\d+", "*", f"{module_name}.down_proj_scale")
    tp_plan[tp_key] = None
@SunMarc (Member)

Comment this out for now.
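For context on what is being commented out: re.sub(r"\d+", "*", ...) replaces each run of digits (the layer index) with a wildcard, so a single tp_plan key covers the scale tensor in every layer. A standalone illustration (module path invented for the example):

import re

module_name = "model.layers.3.feed_forward.experts"
tp_key = re.sub(r"\d+", "*", f"{module_name}.down_proj_scale")
print(tp_key)  # model.layers.*.feed_forward.experts.down_proj_scale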

@github-actions (bot) commented Dec 3, 2025

[For maintainers] Suggested jobs to run (before merge)

run-slow: fbgemm_fp8

@MekkCyber merged commit 15b79ea into main on Dec 3, 2025 (24 checks passed).
@MekkCyber deleted the fix-fbgemm branch on December 3, 2025 at 09:14.