Overview
A critical SQL injection vulnerability (CVE-2026-42208) has been disclosed in LiteLLM, a widely deployed open-source proxy server and AI gateway used to route requests to OpenAI, Anthropic, Azure, and other LLM providers. The vulnerability exists in the API key validation code path and carries a CVSS score of 9.8 (Critical).
The flaw allows an unauthenticated attacker to inject arbitrary SQL into the database query used to check API key validity. Because this check occurs before any authentication is established, there is no credential barrier to exploitation.
Affected Versions
| Product | Affected Versions | Fixed Version |
|---|---|---|
| LiteLLM proxy server | 1.81.16 – 1.83.6 | 1.83.7 |
Technical Details
LiteLLM's proxy API key check assembles a database query by embedding the caller-supplied key value directly into the query string rather than using parameterized queries or prepared statements:
```python
# Vulnerable pattern (simplified): caller-supplied input interpolated into SQL
query = f"SELECT * FROM api_keys WHERE key_hash = '{supplied_key}'"
result = db.execute(query)
```

An attacker can supply a crafted API key value such as:

```
' OR '1'='1'--
```
This terminates the intended string literal and appends a condition that always evaluates to true, bypassing the key check. More sophisticated payloads can leverage UNION-based injection to extract arbitrary table contents, or time-based blind injection to enumerate data when direct output is suppressed.
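The bypass can be reproduced end to end with a minimal sketch. The schema and key value below are illustrative, not LiteLLM's actual database layout; the point is the contrast between string interpolation and a parameterized query:

```python
import sqlite3

# In-memory database standing in for the proxy's key store (schema is illustrative)
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE api_keys (key_hash TEXT, owner TEXT)")
db.execute("INSERT INTO api_keys VALUES ('abc123hash', 'team-a')")

supplied_key = "' OR '1'='1'--"

# Vulnerable: the payload closes the string literal, appends a tautology,
# and comments out the trailing quote, so every row matches.
vulnerable = f"SELECT * FROM api_keys WHERE key_hash = '{supplied_key}'"
rows = db.execute(vulnerable).fetchall()
print(len(rows))  # 1 -- the key check is bypassed

# Safe: a parameterized query treats the entire payload as an opaque value.
safe_rows = db.execute(
    "SELECT * FROM api_keys WHERE key_hash = ?", (supplied_key,)
).fetchall()
print(len(safe_rows))  # 0 -- no row has that literal key_hash
```

The same query shape with a `UNION SELECT` payload is what turns this from an authentication bypass into arbitrary table reads.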
What Can Be Extracted
Because LiteLLM stores operational configuration in its database, a successful injection could expose:
- LLM provider API keys (OpenAI, Anthropic, Azure, Cohere, etc.)
- Virtual key metadata and spending limits
- Model routing configuration
- Team and user records
- Audit logs of prior requests
In multi-tenant or enterprise LiteLLM deployments, this represents a significant blast radius — a single injection can yield credentials for all upstream AI provider accounts.
No Authentication Required
The injection point is the API key check itself, meaning the attacker does not need a valid key to reach the vulnerable code. Any HTTP client that can reach the LiteLLM proxy endpoint is a potential attacker.
Impact
- Full database read access via SQL injection
- Theft of all upstream LLM provider credentials stored in LiteLLM
- Potential lateral movement to AI provider accounts (OpenAI, Anthropic, Azure AI)
- Unauthorized LLM usage billed to the victim organization
Organizations running LiteLLM as a centralized AI gateway — a common pattern in enterprise deployments — face elevated risk since a single compromised instance can expose all downstream provider credentials.
Remediation
Upgrade LiteLLM to version 1.83.7 or later immediately.
```shell
# Update LiteLLM
pip install --upgrade litellm

# If running via Docker, pull the latest image
docker pull ghcr.io/berriai/litellm:main-latest

# Verify the installed version
litellm --version
```

Additional Mitigations
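For fleet-wide checks, a small version comparison against the fixed release can flag vulnerable hosts. This is a sketch: it assumes simple dotted release strings (no pre-release suffixes) and uses the fixed version from this advisory:

```python
from importlib.metadata import PackageNotFoundError, version

FIXED = (1, 83, 7)  # first patched release per the advisory

def parse_version(v: str) -> tuple:
    """Parse a dotted release string like '1.83.7' into comparable ints."""
    return tuple(int(part) for part in v.split(".")[:3])

def is_patched(installed: str) -> bool:
    """True if the installed version is at or above the fixed release."""
    return parse_version(installed) >= FIXED

print(is_patched("1.83.6"))  # False -- within the affected range
print(is_patched("1.83.7"))  # True

# Against a live environment (requires litellm to be installed):
try:
    print(is_patched(version("litellm")))
except PackageNotFoundError:
    print("litellm is not installed in this environment")
```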
- Network isolation: Restrict LiteLLM proxy access to trusted internal networks only; do not expose it directly to the internet
- Rotate credentials: If you are running an affected version and the proxy was network-accessible, rotate all LLM provider API keys stored in LiteLLM immediately
- Enable WAF rules: If a web application firewall is in front of the proxy, enable SQL injection detection rules as a temporary compensating control
- Audit access logs: Review LiteLLM and database access logs for unusual patterns — repeated API key validation failures or unexpected query volumes may indicate exploitation
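As a starting point for the log review, injection attempts tend to leave recognizable signatures in the key field. The log format below is hypothetical, not LiteLLM's actual format; adapt the parsing to whatever your deployment emits:

```python
import re

# Common SQL-injection markers: a quote followed by a SQL keyword, or a
# trailing comment sequence. Tune for your environment; expect some noise.
INJECTION_PATTERN = re.compile(
    r"('|%27)\s*(OR|UNION|SELECT|SLEEP)\b|--", re.IGNORECASE
)

def suspicious_lines(log_lines):
    """Return log lines matching typical injection signatures."""
    return [line for line in log_lines if INJECTION_PATTERN.search(line)]

# Hypothetical access-log lines for illustration:
logs = [
    "2026-05-09T10:01:02 key=sk-valid123 status=200",
    "2026-05-09T10:01:05 key=' OR '1'='1'-- status=200",
]
print(suspicious_lines(logs))  # flags only the second line
```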
Timeline
| Date | Event |
|---|---|
| 2026-05-08 | CVE published to NVD |
| 2026-05-08 | LiteLLM 1.83.7 released with fix |