Executive Summary
CVE-2025-15379 is a maximum-severity (CVSS 10.0) command injection vulnerability in the MLflow open-source machine learning platform. The flaw resides in the _install_model_dependencies_to_env() function, which is invoked when deploying a model with env_manager=LOCAL. MLflow reads dependency specifications from the model artifact's python_env.yaml file and constructs a shell command to install them — without sanitizing the dependency strings before passing them to the OS.
An attacker who can supply a malicious python_env.yaml (e.g., via a compromised artifact, supply chain attack, or direct artifact upload) can achieve full remote code execution on the MLflow server or deployment target.
CVSS Score: 10.0 (Critical — Maximum)
Vulnerability Overview
| Attribute | Value |
|---|---|
| CVE ID | CVE-2025-15379 |
| CVSS Score | 10.0 (Critical — Maximum) |
| Type | Command Injection / Remote Code Execution |
| Attack Vector | Network |
| Privileges Required | None |
| User Interaction | None |
| Scope | Changed |
| Confidentiality Impact | High |
| Integrity Impact | High |
| Availability Impact | High |
| Affected Component | Model serving container initialization |
| Vulnerable Function | _install_model_dependencies_to_env() |
| Trigger Condition | env_manager=LOCAL model deployment |
| Payload Vector | python_env.yaml dependency strings |
Affected Products
| Product | Affected Versions | Remediation |
|---|---|---|
| MLflow model serving (env_manager=LOCAL) | All unpatched versions | Apply vendor patch immediately |
Technical Analysis
Root Cause
When deploying an MLflow model with env_manager=LOCAL, the platform calls _install_model_dependencies_to_env() to set up the runtime environment. This function reads the dependencies or build_dependencies fields from the model's python_env.yaml file and constructs a pip install command using those values.
The dependency strings are not sanitized before shell execution. An attacker can inject shell metacharacters — semicolons, backticks, $() substitution, or pipe characters — directly into the dependency specification to execute arbitrary commands.
Vulnerable Code Pattern (Conceptual)
The vulnerability arises when unsanitized user-controlled values from python_env.yaml are passed into a subprocess call that evaluates shell syntax:
# Pseudocode — the vulnerable pattern:
# dep_string = read from python_env.yaml (attacker-controlled)
# subprocess.run(f"pip install {dep_string}", shell=True)  <-- shell=True + unsanitized string
The safe pattern requires:
- Using shell=False with a list of arguments (never a concatenated string)
- Validating dependency strings against an allowlist regex before passing to subprocess
- Using a sandbox or virtualenv with restricted shell access
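A runnable sketch of the safe invocation pattern (function and variable names are illustrative, not taken from MLflow's code):

```python
import subprocess
import sys

def build_install_command(dep_string: str) -> list[str]:
    # A list of argv elements: the dependency string stays a single
    # argument, so ';', '`', '$()' and pipes are never shell-interpreted.
    return [sys.executable, "-m", "pip", "install", dep_string]

def install_dependency(dep_string: str) -> None:
    # shell=False is the default when subprocess.run receives a list.
    subprocess.run(build_install_command(dep_string), check=True)
```

Even a hostile string like "numpy; rm -rf /" reaches pip as one literal argument and simply fails package-name resolution instead of executing.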
Malicious python_env.yaml Example
A crafted python_env.yaml designed to exploit this vulnerability would embed shell metacharacters into a dependency specification:
python_version: "3.10"
build_dependencies:
- pip
dependencies:
- "numpy; <injected-shell-command>"
  - "requests<shell-substitution-payload>"

When MLflow processes these dependency strings with shell=True, the injected content executes as a shell command on the host.
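The following sketch mimics that composition conceptually, using echo in place of pip and a harmless injected payload (this is an illustration, not MLflow's actual code path):

```python
import subprocess

# Attacker-controlled dependency string with a harmless injected command.
dep = "numpy; echo INJECTED"

# Vulnerable composition: the whole string is handed to a shell.
result = subprocess.run(
    f"echo installing {dep}",
    shell=True,
    capture_output=True,
    text=True,
)

# The shell splits on ';': "echo installing numpy" runs first,
# then the injected "echo INJECTED" runs as a second command.
print(result.stdout)
```

Replace echo with pip install and the second command is arbitrary attacker code running as the MLflow process user.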
Attack Scenarios
Scenario 1: Compromised Model Registry
1. Attacker gains write access to MLflow artifact store (misconfigured S3, NFS, etc.)
2. Attacker modifies python_env.yaml in a widely-used model artifact
3. Any deployment with env_manager=LOCAL triggers command injection
4. Full RCE on the deployment server — credentials, environment variables, data exfiltrated

Scenario 2: Supply Chain via Model Hub
1. Attacker publishes a poisoned model to a public or shared model registry
2. Organization downloads and deploys the model with env_manager=LOCAL
3. _install_model_dependencies_to_env() executes attacker's injected commands
4. Complete compromise of the inference or training server

Scenario 3: Insider Threat in Multi-Tenant MLflow
1. Malicious data scientist uploads a model with a crafted python_env.yaml
2. MLOps engineer deploys the model using standard MLflow deployment commands
3. Command injection fires during environment setup — before model validation
4. MLOps server compromised, lateral movement into CI/CD or cloud environment

Scenario 4: CI/CD Pipeline Injection
1. MLflow is part of an automated training and deployment pipeline
2. A compromised training dataset or upstream dependency poisons python_env.yaml
3. Automated deployment triggers command injection without human review
4. Pipeline runner compromise cascades to production infrastructure

Why CVSS 10.0?
The maximum CVSS score reflects every factor being at maximum severity:
- Attack Vector: Network — exploitable over the network
- Privileges Required: None — no authentication needed if artifact store is accessible
- User Interaction: None — no victim action required beyond normal model deployment
- Scope: Changed — impact extends beyond MLflow to the host OS
- All CIA impacts: High — full confidentiality, integrity, and availability compromise
Impact Assessment
| Impact Area | Description |
|---|---|
| Full Remote Code Execution | Arbitrary commands execute as the MLflow process user |
| Credential Theft | Access to environment variables, cloud credentials, API keys |
| Data Exfiltration | Model artifacts, training data, and IP accessible to attacker |
| Infrastructure Pivot | Compromise of deployment server enables lateral movement |
| Supply Chain Risk | Poisoned models in public registries can compromise all downstream deployments |
| Persistent Backdoors | Write access via RCE enables persistent attacker presence |
| Ransomware Staging | Model serving servers often have access to datasets and storage |
Remediation
Priority Action: Patch or Restrict Immediately
# Check your MLflow version
python -c "import mlflow; print(mlflow.__version__)"
# Upgrade to the patched version
pip install --upgrade mlflow
# Verify upgrade
python -c "import mlflow; print(mlflow.__version__)"

Avoid env_manager=LOCAL for Untrusted Artifacts
# Avoid env_manager=LOCAL when deploying untrusted or externally sourced models
# Instead of the local environment manager (vulnerable):
#   mlflow models serve -m model_uri --env-manager local
# Use virtualenv or conda isolation:
mlflow models serve -m model_uri --env-manager virtualenv
# Or deploy via Docker container for full isolation:
mlflow models build-docker -m model_uri -n my-model-image

Validate python_env.yaml Before Deployment
import yaml
import re
SAFE_PACKAGE_PATTERN = re.compile(r'^[a-zA-Z0-9_\-\.\[\]<>=!,\s]+$')
def validate_python_env(python_env_path):
    with open(python_env_path) as f:
        config = yaml.safe_load(f)
    for dep in config.get("dependencies", []):
        if not SAFE_PACKAGE_PATTERN.match(str(dep)):
            raise ValueError(f"Suspicious dependency string: {dep}")
    for dep in config.get("build_dependencies", []):
        if not SAFE_PACKAGE_PATTERN.match(str(dep)):
            raise ValueError(f"Suspicious build dependency: {dep}")
    return True

Container-Based Isolation
# Deploy MLflow models in isolated Docker containers
FROM python:3.10-slim
# Non-root user for container execution
RUN useradd -m -u 1000 mlflow
USER mlflow
# Limited network access — no outbound during inference
# Network policies enforced at orchestrator level
WORKDIR /app
COPY model/ ./model/
RUN pip install mlflow

Detection Indicators
| Indicator | Description |
|---|---|
| Unexpected outbound network connections during model deployment | Potential exploitation (shell callback) |
| Unusual processes spawned by MLflow service account | Command injection execution |
| New files in /tmp, cron directories, or home directories | Post-exploitation activity |
| python_env.yaml files containing shell metacharacters (;, `, $()) | Malicious artifact indicator |
| pip install commands with semicolons or command substitution in package names | Attack attempt |
| MLflow deployment failures with unusual error messages during env setup | Attempted injection blocked |
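The artifact-side indicators above can be checked in bulk. A minimal sketch that walks an artifact store directory and flags python_env.yaml files containing shell metacharacters (the directory layout and the metacharacter set are assumptions to adapt to your environment):

```python
import os
import re

# Shell metacharacters that have no place in a pip requirement specifier.
SHELL_METACHARS = re.compile(r"[;`|&]|\$\(")

def scan_artifact_store(root):
    """Yield (path, line) pairs for suspicious lines in python_env.yaml files."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name != "python_env.yaml":
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="replace") as f:
                for line in f:
                    if SHELL_METACHARS.search(line):
                        yield path, line.strip()
```

Running this over the artifact root before deployments gives a cheap pre-filter; it complements, rather than replaces, allowlist validation at deploy time.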
Post-Remediation Checklist
- Patch all MLflow installations to the version containing the fix
- Audit python_env.yaml files in all registered model artifacts — check for injected commands
- Review deployment logs — look for unexpected command execution during recent model deployments
- Assume breach if env_manager=LOCAL was used with externally sourced models — conduct forensic review
- Rotate all credentials accessible to the MLflow deployment environment
- Implement artifact signing — cryptographically verify model artifact integrity before deployment
- Switch to container deployment — use mlflow models build-docker for isolation
- Restrict artifact store write access — limit who can upload or modify model artifacts
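The patching step in this checklist can be gated in CI. A minimal sketch, with the caveat that this advisory does not state the exact patched version number, so MINIMUM_SAFE_VERSION below is a placeholder to fill in from the vendor fix announcement:

```python
def parse_version(v):
    # Simplified parser for release versions like "2.19.0" (no pre-release
    # handling; use packaging.version for anything more complex).
    return tuple(int(part) for part in v.split("."))

def is_patched(installed, minimum_safe):
    # Tuple comparison gives correct ordering for plain release versions.
    return parse_version(installed) >= parse_version(minimum_safe)

# Placeholder -- take the real value from the vendor advisory for CVE-2025-15379.
MINIMUM_SAFE_VERSION = "0.0.0"
```

In a CI step this reduces to: assert is_patched(mlflow.__version__, MINIMUM_SAFE_VERSION), failing the pipeline on any unpatched install.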