Hi there,

When I try to convert a model from Hugging Face with llama.cpp, I get the following error when I start the Python conversion script:

```
kiuser@kisystem:/opt/huggingface/ollama-work/llama.cpp$ /opt/huggingface/ollama-work/bin/python3 convert_hf_to_gguf.py ../colqwen2-hf --outfile colqwen2-v1.0.gguf --outtype q8_0
INFO:hf-to-gguf:Loading model: colqwen2-hf
Traceback (most recent call last):
  File "/opt/huggingface/ollama-work/llama.cpp/convert_hf_to_gguf.py", line 9485, in <module>
    main()
  File "/opt/huggingface/ollama-work/llama.cpp/convert_hf_to_gguf.py", line 9450, in main
    model_architecture = get_model_architecture(hparams, model_type)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/huggingface/ollama-work/llama.cpp/convert_hf_to_gguf.py", line 9380, in get_model_architecture
    raise ValueError("Failed to detect model architecture")
ValueError: Failed to detect model architecture
```

How can I fix this?

Regards ...
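For what it's worth, `convert_hf_to_gguf.py` picks its converter class from the `architectures` entry of the model's `config.json` (that is what `get_model_architecture(hparams, model_type)` in the traceback is inspecting), so a quick first check is whether that file names an architecture the script recognizes. A minimal sketch, assuming the `../colqwen2-hf` directory from the command above:

```python
import json
from pathlib import Path

# Path taken from the command in the question; adjust as needed.
cfg = json.loads(Path("../colqwen2-hf/config.json").read_text())

# The converter matches against these values; if "architectures" is
# missing (common for bare adapter checkpoints) or unrecognized,
# detection fails with the error shown above.
print("architectures:", cfg.get("architectures"))
print("model_type:", cfg.get("model_type"))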
Replies: 1 comment

You might have better luck with the merged variant: if I've understood this correctly, the one you were using was just a LoRA adapter (so maybe convert_lora_to_gguf.py would be more appropriate, if you wanted to do the merge yourself). But I've never used LoRAs myself, so I can't say for sure.
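To illustrate the merge route: if the checkpoint is a PEFT-style LoRA adapter, one option is to fold it into its base model with the `peft` library and then run `convert_hf_to_gguf.py` on the merged checkpoint. A minimal sketch; the base-model id below is a placeholder, and the real one should be listed in the adapter's `adapter_config.json` under `base_model_name_or_path`:

```python
from transformers import AutoModel
from peft import PeftModel

# Placeholder id: read the actual base model from the adapter's
# adapter_config.json ("base_model_name_or_path"). Depending on the
# model, a more specific Auto class may be needed instead of AutoModel.
base = AutoModel.from_pretrained("base-org/base-model")

# Load the LoRA adapter on top of the base weights, then fold the
# low-rank deltas into them, leaving a plain HF checkpoint behind.
model = PeftModel.from_pretrained(base, "../colqwen2-hf")
merged = model.merge_and_unload()
merged.save_pretrained("../colqwen2-merged")
```

From there, the original command could be pointed at `../colqwen2-merged` instead of `../colqwen2-hf`, assuming llama.cpp supports the underlying architecture at all; if it does not, the merged model will fail with the same detection error.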