Executive Summary
A critical code injection vulnerability (CVE-2026-39890, CVSS 9.8) has been discovered in the PraisonAI multi-agent AI framework. The flaw exists in the AgentService.loadAgentFromFile method, which uses the js-yaml library to parse YAML configuration files without disabling dangerous tags.
By crafting a malicious YAML file containing tags such as !!js/function or !!js/undefined, an attacker can cause arbitrary JavaScript code to execute when the file is parsed — achieving full remote code execution on the host running PraisonAI. The vulnerability is fixed in version 4.5.115.
Vulnerability Overview
| Attribute | Value |
|---|---|
| CVE ID | CVE-2026-39890 |
| CVSS Score | 9.8 (Critical) |
| CWE | CWE-94 — Improper Control of Code Generation (Code Injection) |
| Type | YAML Injection / Remote Code Execution |
| Attack Vector | File / Network (malicious YAML input) |
| Privileges Required | None (in applicable attack scenarios) |
| User Interaction | None (when processing attacker-supplied files) |
| Affected Component | AgentService.loadAgentFromFile |
| Fixed Version | 4.5.115 |
Affected Versions
| Package | Affected Versions | Fixed Version |
|---|---|---|
| PraisonAI (AgentService) | < 4.5.115 | 4.5.115 |
Technical Analysis
Root Cause
The AgentService.loadAgentFromFile method loads agent configuration from YAML files to initialize PraisonAI agents. The method passes the YAML file content directly to js-yaml's load() function without specifying a safe schema.
The js-yaml library supports multiple schema types:
| Schema | Safety | Supports |
|---|---|---|
DEFAULT_SAFE_SCHEMA | Safe | Standard YAML types only |
DEFAULT_FULL_SCHEMA | Unsafe | Including !!js/function, !!js/undefined, !!js/regexp |
When load() is called without explicitly specifying DEFAULT_SAFE_SCHEMA (or using safeLoad() in older js-yaml versions), it defaults to DEFAULT_FULL_SCHEMA — which allows dangerous JavaScript-specific YAML tags that execute code during deserialization.
The Dangerous Tags
| Tag | Description | Impact |
|---|---|---|
!!js/function | Embeds a JavaScript function as a YAML value | Code executed on parse |
!!js/undefined | Represents a JavaScript undefined value | Runtime manipulation |
!!js/regexp | Embeds a JavaScript regular expression | ReDoS potential |
How the Attack Works
A malicious YAML file uses the !!js/function tag to embed a JavaScript function body in what appears to be a standard configuration field. When js-yaml parses the file under DEFAULT_FULL_SCHEMA, it instantiates and invokes the embedded function as part of the deserialization process — before any application-level validation can run.
The attack requires no special privileges or complex setup: the attacker simply provides a YAML file where a configuration field value is replaced with a !!js/function block containing their payload. The code runs with the full permissions of the PraisonAI process.
Attack Scenarios
Scenario 1: Malicious Configuration File An attacker uploads or provides a crafted agent YAML configuration file. When PraisonAI loads the file (e.g., from a web interface, API endpoint, or shared storage), the malicious function executes.
Scenario 2: Supply Chain / Repository Compromise A compromised repository or template library contains a malicious agent configuration YAML. Any PraisonAI installation that loads this template executes the attacker's code.
Scenario 3: API Endpoint Accepting YAML
If PraisonAI exposes an endpoint accepting agent definitions in YAML format and that input is parsed with loadAgentFromFile, remote exploitation is possible with no local access required.
Impact Assessment
| Impact Area | Description |
|---|---|
| Arbitrary Code Execution | Execute any Node.js/system code as the PraisonAI process user |
| Credential Theft | Access API keys, model credentials, database passwords from environment |
| Data Exfiltration | Exfiltrate agent configurations, user data, model outputs |
| Lateral Movement | Use compromised host to pivot into internal infrastructure |
| Persistence | Install backdoors or modify agent configurations for ongoing access |
| Agent Hijacking | Modify agent behavior to produce malicious outputs or leak data |
Immediate Remediation
Step 1: Update PraisonAI to 4.5.115
# Update via pip
pip install --upgrade "praisonai>=4.5.115"
# Verify version
pip show praisonai | grep Version
# For Node.js / TypeScript installs
npm install praisonai@latest
npm show praisonai versionStep 2: Audit YAML Loading Code
If you have custom code that loads YAML with js-yaml, ensure you use the safe loading method:
// VULNERABLE - uses DEFAULT_FULL_SCHEMA
const yaml = require('js-yaml');
const config = yaml.load(fileContent);
// SAFE - uses DEFAULT_SAFE_SCHEMA
const config = yaml.load(fileContent, { schema: yaml.DEFAULT_SAFE_SCHEMA });
// ALSO SAFE (older js-yaml API)
const config = yaml.safeLoad(fileContent);# In Python, use PyYAML's safe_load instead of load
import yaml
# VULNERABLE
config = yaml.load(file_content)
# SAFE
config = yaml.safe_load(file_content)Step 3: Validate YAML Sources
# Implement schema validation before parsing
# Only accept agent YAML files from trusted, verified sources
# Restrict file upload endpoints to authenticated users only
# Validate YAML content against a strict schema before any parsingStep 4: Audit for Compromise
# Check for suspicious processes spawned by PraisonAI
ps auxf | grep -A5 praisonai
# Review outbound network connections
ss -tulnp | grep -i node
ss -tulnp | grep -i python
# Search for recently modified files in PraisonAI directories
find /path/to/praisonai -newer /path/to/praisonai/package.json -type f 2>/dev/null
# Check for unauthorized agent configuration files with dangerous YAML tags
grep -r "js/function\|js/undefined\|js/regexp" --include="*.yaml" --include="*.yml" .Detection Indicators
| Indicator | Description |
|---|---|
YAML files containing !!js/function | Exploit payload present |
YAML files containing !!js/undefined or !!js/regexp | Suspicious YAML tags |
| Node.js process spawning unexpected subprocesses | Active exploitation |
| Unexpected outbound connections from PraisonAI process | Exfiltration in progress |
| New or modified agent configuration files | Possible persistence mechanism |
Why YAML Deserialization Attacks Are High-Risk
YAML deserialization attacks are one of the most dangerous vulnerability classes because:
- Execution occurs at parse time — before any application logic validates the input
- No user interaction required — simply loading a file triggers the payload
- Wide attack surface — any application that processes user-supplied YAML files is potentially vulnerable
- Easy to exploit — exploit payloads are simple, well-documented, and widely known
This vulnerability class has affected many major frameworks and libraries over the years, including PyYAML, SnakeYAML (Java), and Ruby's Psych — making it a recurring and well-understood threat.
Post-Remediation Checklist
- Update PraisonAI to version 4.5.115 or later immediately
- Audit all YAML loading code to ensure
DEFAULT_SAFE_SCHEMAorsafeLoad()is used - Validate YAML inputs against strict schemas before any deserialization
- Restrict who can provide agent configuration files — implement authentication and authorization
- Scan existing YAML files for
!!js/function,!!js/undefined, and!!js/regexptags - Review process and network logs for evidence of prior exploitation
- Rotate credentials accessible to the PraisonAI process
- Subscribe to PraisonAI security advisories for future vulnerability notifications