
Conversation

@ArthurZucker
Collaborator

What does this PR do?

Fixes # (issue)

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline, Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@HandH1998

Hello, thanks for adding support for DeepSeek-V3.2! Do you know when this PR will be ready?

@ArthurZucker
Collaborator Author

Working on it! Hoping by next week 🤗

@ArthurZucker
Collaborator Author

wow this got old!

ArthurZucker marked this pull request as ready for review on November 17, 2025 at 12:19
@nfywsh

nfywsh commented Dec 3, 2025

The submitted code is currently unusable and does not support the official DeepSeek-V3.2 release. Is this PR still being updated?

yunkchen added a commit to yunkchen/transformers that referenced this pull request Dec 3, 2025
@yunkchen

yunkchen commented Dec 3, 2025

> The submitted code is currently unusable and does not support the official DeepSeek-V3.2 release. Is this PR still being updated?

https://github.com/yunkchen/transformers/tree/v4.57.3_add_dpskv32
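For anyone who wants to try that branch, a minimal sketch of installing and loading it; the model id below is a placeholder, not a confirmed Hub checkpoint name:

```python
# Minimal sketch, assuming the fork installs like any transformers branch:
#   pip install git+https://github.com/yunkchen/transformers.git@v4.57.3_add_dpskv32
# "deepseek-ai/DeepSeek-V3.2" is a placeholder model id; substitute the actual
# DeepSeek-V3.2 checkpoint name from the Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-V3.2"  # placeholder, not a verified repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```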

@nfywsh

nfywsh commented Dec 3, 2025

> The submitted code is currently unusable and does not support the official DeepSeek-V3.2 release. Is this PR still being updated?
>
> https://github.com/yunkchen/transformers/tree/v4.57.3_add_dpskv32

There is still a bug in DeepseekV32Attention when using LLMC to quantize the model:

[rank0]: Traceback (most recent call last):
[rank0]:   File "/mnt/hcufs/env_scripts/xyf_test/llmc/llmc/main.py", line 248, in <module>
[rank0]:     main(config)
[rank0]:   File "/mnt/hcufs/env_scripts/xyf_test/llmc/llmc/main.py", line 27, in main
[rank0]:     model = MODEL_REGISTRY[config.model.type](...)
[rank0]:   File "/mnt/hcufs/env_scripts/xyf_test/llmc/llmc/models/deepseekv3.py", line 20, in __init__
[rank0]:     super().__init__(config, device_map, use_cache)
[rank0]:   File "/mnt/hcufs/env_scripts/xyf_test/llmc/llmc/models/base_model.py", line 40, in __init__
[rank0]:     self.build_model()
[rank0]:   File "/mnt/hcufs/env_scripts/xyf_test/llmc/llmc/models/deepseekv3.py", line 40, in build_model
[rank0]:     self.model = AutoModelForCausalLM.from_pretrained(
[rank0]:   File "/opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 604, in from_pretrained
[rank0]:     return model_class.from_pretrained(
[rank0]:   File "/opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py", line 277, in _wrapper
[rank0]:     return func(*args, **kwargs)
[rank0]:   File "/opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4971, in from_pretrained
[rank0]:     model = cls(config, *model_args, **model_kwargs)
[rank0]:   File "/opt/conda/lib/python3.10/site-packages/transformers/models/deepseek_v32/modeling_deepseek_v32.py", line 555, in __init__
[rank0]:     self.model = DeepseekV32Model(config)
[rank0]:   File "/opt/conda/lib/python3.10/site-packages/transformers/models/deepseek_v32/modeling_deepseek_v32.py", line 477, in __init__
[rank0]:     [DeepseekV32DecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
[rank0]:   File "/opt/conda/lib/python3.10/site-packages/transformers/models/deepseek_v32/modeling_deepseek_v32.py", line 477, in <listcomp>
[rank0]:     [DeepseekV32DecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
[rank0]:   File "/opt/conda/lib/python3.10/site-packages/transformers/models/deepseek_v32/modeling_deepseek_v32.py", line 404, in __init__
[rank0]:     self.self_attn = DeepseekV32Attention(config=config, layer_idx=layer_idx)
[rank0]:   File "/opt/conda/lib/python3.10/site-packages/transformers/models/deepseek_v32/modeling_deepseek_v32.py", line 316, in __init__
[rank0]:     self.softmax_scale = self.qk_head_dim**-0.5
[rank0]:   File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1928, in __getattr__
[rank0]:     raise AttributeError(
[rank0]: AttributeError: 'DeepseekV32Attention' object has no attribute 'qk_head_dim'
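The failure points at DeepseekV32Attention.__init__ reading self.qk_head_dim before that attribute is ever assigned, so nn.Module.__getattr__ raises. A minimal sketch of the likely shape of the fix, assuming V3-style config fields (qk_nope_head_dim, qk_rope_head_dim); this is an illustration, not the PR's actual code:

```python
# Hypothetical sketch of DeepseekV32Attention.__init__, assuming V3-style
# config fields (qk_nope_head_dim, qk_rope_head_dim); not the PR's real code.
import torch.nn as nn


class DeepseekV32Attention(nn.Module):
    def __init__(self, config, layer_idx):
        super().__init__()
        self.layer_idx = layer_idx
        # Assign qk_head_dim *before* it is read, so the attribute exists and
        # nn.Module.__getattr__ never raises the AttributeError seen above.
        self.qk_head_dim = config.qk_nope_head_dim + config.qk_rope_head_dim
        self.softmax_scale = self.qk_head_dim**-0.5
```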

@yunkchen

yunkchen commented Dec 3, 2025

@github-actions
Contributor

github-actions bot commented Dec 3, 2025

[For maintainers] Suggested jobs to run (before merge)

run-slow: auto, deepseek_v32
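For reference, a hedged local equivalent of that job, assuming the usual transformers convention that slow tests are skipped unless the RUN_SLOW environment variable is set:

```python
# Sketch: run the suggested slow tests locally, assuming the standard
# transformers convention of gating slow tests behind RUN_SLOW.
import os
import subprocess

env = dict(os.environ, RUN_SLOW="yes")
subprocess.run(
    ["python", "-m", "pytest", "-v", "tests/models/deepseek_v32"],
    env=env,
    check=False,  # surface failures in the output without raising here
)
```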
