Hackers Exploit CVE-2026-42208 in LiteLLM LLM Gateway
Threat actors are actively targeting LiteLLM, a popular open-source proxy and gateway for managing large language model (LLM) API calls, by exploiting a critical pre-authentication SQL injection vulnerability tracked as CVE-2026-42208. Security researchers have confirmed active exploitation in the wild, placing organizations that route AI workloads through LiteLLM at immediate risk of credential theft and data exposure.
LiteLLM serves as a centralized gateway used by developers and enterprises to route requests across multiple LLM providers — including OpenAI, Anthropic, Azure OpenAI, Google Gemini, and others — using a single unified API. Because LiteLLM instances store API keys, routing configurations, and usage logs for these provider integrations, a successful SQL injection attack against the gateway can expose an organization's entire AI infrastructure.
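To make the trust model concrete, the sketch below shows the gateway pattern from a client's point of view: the client authenticates to LiteLLM's OpenAI-compatible endpoint with a LiteLLM virtual key, while the real provider keys never leave the gateway. The host, port, and key value here are placeholders, not defaults from any real deployment.

```python
# Sketch of the LLM-gateway pattern (hypothetical host and key values).
# The client holds only a LiteLLM "virtual key"; the upstream provider
# keys (OpenAI, Anthropic, ...) are stored server-side in the gateway --
# which is exactly why a SQL injection against the gateway is so damaging.
import json
import urllib.request

GATEWAY_URL = "http://litellm.internal:4000/v1/chat/completions"  # placeholder host
VIRTUAL_KEY = "sk-litellm-virtual-key"  # placeholder, not a real key

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat request addressed to the gateway."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        GATEWAY_URL,
        data=json.dumps(payload).encode(),
        headers={
            # Only the virtual key crosses the wire; provider keys stay in LiteLLM.
            "Authorization": f"Bearer {VIRTUAL_KEY}",
            "Content-Type": "application/json",
        },
    )

req = build_request("gpt-4o", "hello")
```

Because every team's traffic funnels through this one endpoint, compromising the gateway's database yields the provider credentials behind all of it at once.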
Vulnerability Details
| Attribute | Value |
|---|---|
| CVE ID | CVE-2026-42208 |
| Severity | Critical |
| Attack Type | Pre-Authentication SQL Injection |
| Affected Product | LiteLLM (open-source LLM gateway) |
| Authentication Required | None — pre-auth exploitation |
| Active Exploitation | Confirmed |
| Patch Available | Yes — update to the latest release |
The vulnerability is classified as pre-authentication, meaning attackers do not need a valid account or credentials on the LiteLLM instance to trigger the injection. This significantly raises the risk profile, as any internet-exposed LiteLLM deployment is a potential target without any prior account compromise.
What Is at Risk
LiteLLM deployments store a variety of sensitive data that becomes accessible upon successful SQL injection:
Provider API Keys
| Provider | Risk |
|---|---|
| OpenAI | GPT-4/GPT-4o API keys — billing abuse, data access |
| Anthropic | Claude API keys — quota exhaustion, prompt injection |
| Azure OpenAI | Azure subscription credentials |
| Google Gemini | GCP service account tokens |
| AWS Bedrock | AWS access key pairs |
Configuration and Operational Data
- Routing rules — which models handle which request types
- Budget and rate limit settings — spending caps and quotas per team or user
- Usage logs — records of all prompts routed through the gateway
- Team and user records — internal user directories and access controls
- Callback and webhook configurations — downstream system integrations
How the Attack Works
The SQL injection vulnerability exists in a publicly accessible endpoint of LiteLLM's management API or user-facing interface. Because the endpoint does not require authentication, attackers can send crafted SQL payloads directly:
```
GET /[vulnerable-endpoint]?param=1' UNION SELECT api_key,model,NULL FROM litellm_virtualkeys-- HTTP/1.1
Host: <litellm-instance>
```
A successful UNION-based injection against LiteLLM's key management tables can return virtual API keys, provider credentials, and associated metadata in the HTTP response.
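This class of flaw can be reproduced in miniature. The sketch below uses a toy SQLite schema with hypothetical table and column names (not LiteLLM's actual code or layout) to show why a string-interpolated query leaks unrelated tables to a UNION payload, while a parameterized query treats the same payload as inert data:

```python
# Illustrative only: a generic UNION-based SQL injection against a toy
# schema. Table names, columns, and the key value are all made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER, name TEXT)")
conn.execute("CREATE TABLE secret_keys (api_key TEXT)")
conn.execute("INSERT INTO items VALUES (1, 'widget')")
conn.execute("INSERT INTO secret_keys VALUES ('sk-hypothetical-123')")

def lookup_vulnerable(param: str):
    # String interpolation lets attacker-controlled input rewrite the query.
    return conn.execute(
        f"SELECT name FROM items WHERE id = '{param}'"
    ).fetchall()

def lookup_safe(param: str):
    # Parameterized query: the driver binds the value as data, never as SQL.
    return conn.execute(
        "SELECT name FROM items WHERE id = ?", (param,)
    ).fetchall()

payload = "1' UNION SELECT api_key FROM secret_keys--"
# lookup_vulnerable(payload) leaks the stored key alongside normal rows;
# lookup_safe(payload) returns no rows, since the payload is just a string.
```

The fix on the application side is always the parameterized form; the mitigation on the operator side, until patched, is keeping the endpoint off the internet.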
Attackers are reportedly using the extracted credentials to:
- Drain API quotas — making large numbers of LLM API calls billed to the victim's account
- Exfiltrate prompt data — accessing historical prompt and completion logs
- Pivot to upstream providers — using stolen API keys to access OpenAI, Anthropic, or Azure directly
- Access internal tooling — exploiting LiteLLM's agent and tool routing configurations
Exploitation Context: AI Gateway Attack Surface
LiteLLM is widely deployed in enterprise AI pipelines, development environments, and AI-as-a-service platforms. The centralized nature of an LLM gateway makes it a high-value target:
- A single compromised LiteLLM instance can expose credentials for dozens of AI provider accounts
- Organizations using LiteLLM for internal AI tools may expose employee query history and system prompts
- Enterprises with per-team API routing may expose the credentials and usage data of multiple business units simultaneously
The attack surface is further amplified by the fact that many LiteLLM deployments are exposed to the internet for developer convenience, particularly in cloud-hosted and container-based environments.
Affected Versions and Remediation
Organizations running LiteLLM should take immediate action:
Immediate Steps
- Apply the latest LiteLLM update — patch CVE-2026-42208 by updating to the most recent release
- Rotate all stored API keys — assume any credentials stored in the compromised instance are exposed
- Audit API key usage — review OpenAI, Anthropic, Azure, and other provider dashboards for anomalous usage spikes
- Restrict network access — place LiteLLM behind a VPN or internal network if internet exposure is not required
- Review access logs — check LiteLLM logs for unusual query patterns targeting the vulnerable endpoint
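For the log-review step, a first pass can be automated with a simple indicator scan. This is a hedged sketch: the log format and regex are illustrative starting points, not a complete SQLi signature set, and should be tuned to your deployment before being relied on.

```python
# Hedged sketch: flag gateway access-log lines containing common
# SQL-injection indicators. Patterns are illustrative, not exhaustive.
import re

SQLI_PATTERNS = re.compile(
    r"('|%27)\s*(union|or|and)\b"   # quote followed by a SQL keyword
    r"|union\s+select"              # classic UNION-based extraction
    r"|--"                          # inline comment to truncate the query
    r"|sleep\(",                    # time-based blind probing
    re.IGNORECASE,
)

def suspicious_lines(log_lines):
    """Return the subset of log lines matching a SQLi indicator."""
    return [line for line in log_lines if SQLI_PATTERNS.search(line)]

sample = [
    "GET /key/info?param=42 200",
    "GET /key/info?param=1' UNION SELECT api_key,NULL-- 200",
]
flagged = suspicious_lines(sample)
```

Any hits against the vulnerable endpoint from before the patch date should be treated as evidence of compromise, triggering the key-rotation steps above.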
Longer-Term Hardening
```yaml
# LiteLLM deployment hardening checklist
network:
  - expose only to internal networks or VPN
  - use reverse proxy with IP allowlisting for admin endpoints
  - enable authentication on all management API endpoints
secrets:
  - store provider API keys in a secrets manager, not in LiteLLM's database
  - use short-lived, scoped API keys where possible
monitoring:
  - alert on unusual LLM API usage spikes
  - log all key access events
  - monitor for unexpected database query patterns
```

Broader Trend: AI Infrastructure as an Attack Target
This exploitation follows a pattern of increased threat actor interest in AI-adjacent infrastructure. Notable incidents include:
- Mercor/LiteLLM supply chain attack (April 2026) — attackers used a compromised LiteLLM dependency to steal credentials from developer machines
- Vercel AI tool breach (April 2026) — employee access to an AI coding tool led to credential theft and downstream customer data exposure
- Anthropic MCP design vulnerability (April 2026) — structural issues in the Model Context Protocol enabled RCE via malicious MCP servers
As organizations increasingly centralize AI API management through gateway tools, these systems become high-value targets offering lateral movement opportunities across cloud environments.
Key Takeaways
- CVE-2026-42208 is a critical pre-authentication SQL injection in LiteLLM enabling credential theft without any prior account access
- Successful exploitation exposes AI provider API keys for OpenAI, Anthropic, Azure, and other services stored in the gateway
- Attackers are confirmed to be actively exploiting this flaw — patch immediately and rotate all stored credentials
- LiteLLM instances exposed to the internet are at highest risk — restrict network access as a priority mitigation
- This incident is part of a broader trend of AI infrastructure emerging as a primary attack surface in enterprise environments