Executive Summary
A critical sandbox escape vulnerability (CVE-2026-39888, CVSS 9.9) has been discovered in PraisonAI, a popular multi-agent AI framework. The flaw exists in the `execute_code()` function within `praisonaiagents.tools.python_tools` and allows attackers to bypass the sandbox protection mechanisms designed to safely execute user-supplied Python code.
The vulnerability defeats both the restricted `__builtins__` dictionary and the AST-based blocklist that serve as the primary defense layers in `sandbox_mode="sandbox"`. An attacker who can supply code to the PraisonAI execution environment can achieve full remote code execution on the host system. The flaw is fixed in version 1.5.115.
Vulnerability Overview
| Attribute | Value |
|---|---|
| CVE ID | CVE-2026-39888 |
| CVSS Score | 9.9 (Critical) |
| CWE | CWE-693 — Protection Mechanism Failure |
| Type | Sandbox Escape / Remote Code Execution |
| Attack Vector | Network / AI Agent Input |
| Privileges Required | None (depending on deployment) |
| User Interaction | None |
| Affected Component | praisonaiagents.tools.python_tools.execute_code() |
| Fixed Version | 1.5.115 |
Affected Versions
| Package | Affected Versions | Fixed Version |
|---|---|---|
| praisonaiagents | < 1.5.115 | 1.5.115 |
| PraisonAI (full) | All versions bundling praisonaiagents < 1.5.115 | Update praisonaiagents |
Technical Analysis
The Intended Sandbox
The `execute_code()` function in `praisonaiagents.tools.python_tools` is designed to safely execute Python code provided by AI agents or users. When called with `sandbox_mode="sandbox"` (the default), it implements two defense layers:
- AST-based blocklist — analyzes the abstract syntax tree of the submitted code before execution and rejects code containing dangerous constructs (e.g., imports of `os`, `subprocess`, or `sys`, and specific built-in calls)
- Restricted `__builtins__` — executes the code in a subprocess with a reduced set of Python built-in functions, preventing access to `__import__`, `open`, `exec`, `eval`, and other dangerous primitives
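The two layers can be sketched as follows. This is an illustrative simplification, not PraisonAI's actual implementation; the function names and the exact contents of the blocklists are assumptions:

```python
import ast

# Hypothetical blocklists, for illustration only
BLOCKED_IMPORTS = {"os", "subprocess", "sys"}
BLOCKED_CALLS = {"__import__", "open", "exec", "eval", "compile"}

def check_ast(source: str) -> None:
    """Layer 1: reject code whose AST contains blocklisted constructs."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            names = [alias.name for alias in node.names]
            if isinstance(node, ast.ImportFrom):
                names.append(node.module or "")
            if any(n.split(".")[0] in BLOCKED_IMPORTS for n in names):
                raise ValueError(f"blocked import: {names}")
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in BLOCKED_CALLS:
                raise ValueError(f"blocked call: {node.func.id}")

def run_sandboxed(source: str) -> None:
    """Layer 2: execute with a reduced __builtins__ dictionary."""
    check_ast(source)
    safe_builtins = {"print": print, "len": len, "range": range}
    exec(source, {"__builtins__": safe_builtins})

run_sandboxed("print(len(range(3)))")  # benign code passes both layers
```

A design like this rejects `import os` or a direct `eval(...)` call, which is exactly why the bypass described next avoids both: it uses only attribute access, which neither layer inspects.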
The Bypass
The vulnerability arises because the AST blocklist itself is embedded inside the subprocess rather than evaluated before the subprocess is spawned. This architectural decision creates a bypass opportunity:
An attacker can craft code that:
- Passes the pre-execution AST check (if one exists at the outer level)
- Manipulates the Python runtime environment within the subprocess before the AST check runs
- Executes arbitrary code through Python's introspection capabilities (`__class__.__mro__`, `__subclasses__()`, object method resolution)
This class of attack — known as a Python sandbox escape via object introspection — exploits the fact that Python's object model is fundamentally accessible from within any execution context, regardless of how `__builtins__` is restricted.
Exploit Primitive
The general pattern for this category of bypass:
```python
# Example of introspection-based sandbox escape (conceptual)
# Does not require __import__ or any blocked built-ins

# Access Python's class hierarchy via object introspection
result = ().__class__.__base__.__subclasses__()

# Find a class with access to sys or os modules
for cls in result:
    if 'warning' in cls.__name__.lower():
        # Use the class's module references to access os
        import_func = cls.__init__.__globals__.get('__builtins__', {}).get('__import__')
        if import_func:
            os_module = import_func('os')
            os_module.system('id')  # Arbitrary command execution
```
The key insight is that Python's object model cannot be fully sandboxed without interpreter-level restrictions — pure Python sandbox implementations are fundamentally insufficient against a determined attacker.
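That claim can be verified directly: even code executed with a completely empty `__builtins__` dictionary can reach live module references through the class hierarchy. A minimal, runnable demonstration (harmless by design — it only reads the current working directory; reaching `catch_warnings` relies on CPython's standard startup imports):

```python
# The payload uses no built-in functions at all: only attribute access,
# subscription, and a for loop, none of which require __builtins__.
payload = """
for cls in ().__class__.__base__.__subclasses__():
    if cls.__name__ == 'catch_warnings':
        # warnings.py does `import sys`, so sys sits in the class's globals
        sys_mod = cls.__init__.__globals__['sys']
        result.append(sys_mod.modules['os'].getcwd())
        break
"""

result = []
exec(payload, {"__builtins__": {}, "result": result})
print(result)  # the real working directory, despite the "empty" builtins
```

Swapping `getcwd()` for `system(...)` is all an attacker needs, which is why the restricted-builtins layer on its own is not a security boundary.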
Impact Assessment
| Impact Area | Description |
|---|---|
| Remote Code Execution | Full arbitrary code execution on the host running PraisonAI |
| Host Compromise | Access to the filesystem, environment variables, secrets, and credentials |
| Container Escape | If PraisonAI runs in a container, pivot to host may be possible |
| AI Agent Hijacking | Inject malicious behavior into agent workflows |
| Data Exfiltration | Read API keys, database credentials, model weights, and user data |
| Lateral Movement | Use compromised host as a pivot into internal infrastructure |
The CVSS 9.9 score reflects the near-maximum severity — the only factor preventing a perfect 10.0 is the scope constraint in some deployment configurations.
Immediate Remediation
Step 1: Update praisonaiagents
```shell
# Update to the patched version (quote the spec so the shell
# does not interpret ">=" as a redirect)
pip install --upgrade 'praisonaiagents>=1.5.115'

# Verify installed version
pip show praisonaiagents | grep Version
# Expected: Version: 1.5.115 or higher

# If using a requirements.txt or pyproject.toml, update the pin:
# praisonaiagents>=1.5.115
```
Step 2: Audit for Code Execution Paths
```shell
# Search your codebase for execute_code() usage
grep -r "execute_code" . --include="*.py"
grep -r "python_tools" . --include="*.py"

# Review any agent workflows that accept user-supplied code
grep -r "sandbox_mode" . --include="*.py"
```
Step 3: Implement Defense-in-Depth While Patching
If immediate patching is not possible, apply these mitigations:
```shell
# Option 1: Disable execute_code() entirely in your agent configurations
# Do not use python_tools in untrusted environments

# Option 2: Run PraisonAI in an isolated container with restricted syscalls
# Use seccomp profiles or gVisor to limit what the subprocess can do

# Option 3: Sandbox at the OS level using nsjail, bubblewrap, or similar
```
Step 4: Audit for Signs of Exploitation
```shell
# Check for unexpected processes spawned by your PraisonAI process
ps aux | grep -i praisonai

# Review system logs for unusual activity from the PraisonAI user
journalctl -u your-praisonai-service --since "30 days ago" | grep -i "error\|exec\|bash\|sh"

# Check for unexpected network connections from Python processes
ss -tulnp | grep python
netstat -tulnp | grep python
```
Detection Indicators
| Indicator | Description |
|---|---|
| Python processes spawning shell commands (bash, sh) | Active exploitation |
| Unexpected outbound connections from PraisonAI process | Data exfiltration |
| File creation or modification by PraisonAI service user | Post-exploitation activity |
| `__subclasses__()` or `__mro__` in submitted code | Sandbox escape attempt |
| Subprocess spawning from within code execution sandbox | Escape in progress |
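The introspection indicators in the table can be checked automatically before submitted code ever reaches a sandbox. A hedged sketch of such a pre-filter (the attribute set below is a starting point, not an exhaustive blocklist, and string matching on attribute names can be evaded):

```python
import ast

# Dunder attributes commonly used in introspection-based escapes
SUSPICIOUS_ATTRS = {
    "__subclasses__", "__mro__", "__bases__",
    "__globals__", "__builtins__",
}

def flag_introspection(source: str) -> list:
    """Return suspicious dunder attribute accesses found in submitted code."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        # Unparseable input should be rejected, not executed
        return ["<unparseable>"]
    hits = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Attribute) and node.attr in SUSPICIOUS_ATTRS:
            hits.append(node.attr)
    return hits

print(flag_introspection("().__class__.__base__.__subclasses__()"))  # ['__subclasses__']
print(flag_introspection("print('hello')"))  # []
```

Treat any non-empty result as a reason to reject the submission outright; as the rest of this advisory argues, such filters are a detection aid, not a substitute for OS-level isolation.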
Broader Context: AI Framework Security
CVE-2026-39888 is part of a growing class of vulnerabilities in AI agent frameworks that expose Python code execution capabilities. As AI agents gain the ability to write and execute code, the security of the underlying execution sandbox becomes critical infrastructure.
Similar vulnerabilities have affected other frameworks in recent months, highlighting that pure Python sandboxes are not a sufficient security boundary for untrusted code execution. The correct approach requires OS-level isolation (containers, VMs, or seccomp) in addition to application-level restrictions.
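As one concrete shape that OS-level layering can take, a containerized PraisonAI deployment can be hardened at launch. The image name, seccomp profile path, and mount choices below are placeholders to adapt, not a tested configuration:

```shell
# Hypothetical hardened container invocation; image and profile
# names are placeholders for your own deployment.
docker run --rm \
  --security-opt seccomp=./praisonai-seccomp.json \
  --security-opt no-new-privileges \
  --cap-drop ALL \
  --network none \
  --read-only --tmpfs /tmp \
  your-praisonai-image
# seccomp=...        restricts which syscalls sandboxed code can make
# no-new-privileges  blocks privilege escalation via setuid binaries
# --cap-drop ALL     drops all Linux capabilities
# --network none     removes egress, blunting data exfiltration
# --read-only        immutable root filesystem, with tmpfs scratch space
```

Even if introspection-based code escapes the Python layer inside such a container, it lands in an environment with no network, no capabilities, and a restricted syscall surface.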
Post-Remediation Checklist
- Update praisonaiagents to version 1.5.115 or later immediately
- Audit all agent workflows that use `execute_code()` or `python_tools`
- Apply OS-level sandboxing (containers, seccomp, gVisor) as defense in depth
- Review network egress rules from PraisonAI services to limit exfiltration paths
- Rotate any secrets accessible to the PraisonAI process environment
- Monitor for exploitation indicators as described above
- Subscribe to PraisonAI security advisories for future notifications