A sophisticated supply chain attack targeted LiteLLM, one of the most widely deployed AI abstraction packages in the Python ecosystem, pushing malicious versions to the Python Package Index (PyPI) that went live for approximately two hours on March 24, 2026. The compromised releases — versions 1.82.7 and 1.82.8 — were designed to silently exfiltrate sensitive credentials from affected systems before the packages were pulled from the registry.
The incident underscores a growing and largely unresolved tension in the open-source software supply chain: packages maintained by small teams that power a significant fraction of enterprise AI infrastructure represent high-value, low-friction targets for sophisticated threat actors.
What Is LiteLLM?
LiteLLM is an open-source Python library that provides a unified interface for interacting with over 100 large language model APIs, including those from OpenAI, Anthropic, Google, and Amazon. It abstracts provider-specific authentication, request formatting, and response parsing into a single consistent SDK — making it an essential building block for teams deploying AI-powered applications at scale.
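The unified-interface idea can be illustrated with a small routing sketch. This is not LiteLLM's internals; the handlers and routing table below are hypothetical stand-ins. It only shows the pattern: one `completion()` entry point dispatching `"provider/model"` strings to provider-specific code. (The real library exposes a `litellm.completion` function with a similar model/messages call shape.)

```python
# Illustration of the unified-interface pattern an abstraction layer
# like LiteLLM provides. The handlers here are hypothetical stand-ins
# that format a string instead of calling a real provider API.

def _call_openai(model: str, messages: list[dict]) -> str:
    return f"[openai:{model}] handled {len(messages)} message(s)"

def _call_anthropic(model: str, messages: list[dict]) -> str:
    return f"[anthropic:{model}] handled {len(messages)} message(s)"

# Route "provider/model" strings to the matching provider handler.
_HANDLERS = {"openai": _call_openai, "anthropic": _call_anthropic}

def completion(model: str, messages: list[dict]) -> str:
    """Single entry point regardless of which backend serves the request."""
    provider, _, model_name = model.partition("/")
    handler = _HANDLERS.get(provider)
    if handler is None:
        raise ValueError(f"unknown provider: {provider}")
    return handler(model_name, messages)

print(completion("openai/gpt-4o", [{"role": "user", "content": "hi"}]))
# -> [openai:gpt-4o] handled 1 message(s)
```

Because one package sits on this chokepoint for every provider call, compromising it yields access to whatever credentials those calls require.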
The package reports approximately 3 million daily downloads from PyPI and is estimated to be present in roughly 36% of all cloud environments, according to post-incident analysis. That density of deployment means even a brief window of malicious availability can translate into thousands of compromised enterprise systems.
How the Attack Unfolded
The malicious versions were uploaded using valid publishing credentials, suggesting the attackers obtained access to a maintainer's account — either through credential theft, phishing, or by compromising the maintainer's development environment. The specific method of account takeover has not been confirmed by the LiteLLM project team as of publication.
Once uploaded, the trojanised packages appeared as routine patch releases in the project's versioning cadence, making them difficult to distinguish from legitimate updates without source code review. Automated dependency managers and CI/CD pipelines running pip install litellm --upgrade would have silently pulled the malicious build.
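Silent upgrades of this kind can be blocked with pip's hash-checking mode: when a requirements file carries artifact hashes, pip refuses any downloaded file whose digest does not match. A sketch of such a pin follows; the version and hash shown are placeholders, not verified values for LiteLLM.

```
# requirements.txt sketch: pin the exact version AND its artifact hash.
# Version and hash below are illustrative placeholders; generate real
# values with a tool such as pip-compile --generate-hashes.
litellm==1.82.6 \
    --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000
```

Installing with `pip install --require-hashes -r requirements.txt` then fails closed on any unexpected artifact. Note that once any requirement carries a hash, pip demands hashes for every requirement in the file, so this is an all-or-nothing posture per manifest.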
The packages remained available on PyPI for approximately two hours before being identified and removed. That response time is fast relative to similar incidents, but a two-hour window is more than enough for automated build systems around the world to have ingested the compromised release.
Malware Capabilities
The malicious code embedded in versions 1.82.7 and 1.82.8 was engineered to perform several actions on the compromised host:
- Credential harvesting: Extraction of cloud provider credentials (AWS, GCP, Azure), environment variables containing API keys, and authentication tokens stored in common configuration paths
- Cryptocurrency wallet theft: Scanning and exfiltration of local cryptocurrency wallet files and seed phrases
- Persistent backdoor installation: Deployment of a downloader component designed to pull additional payloads for follow-on intrusions
- Delayed C2 check-in: The malware contacts its command-and-control server only every 50 minutes, an unusual interval likely chosen to evade automated sandbox detection systems that typically time out after 5–10 minutes of inactivity
The 50-minute C2 polling interval is a noteworthy operational security choice. Because the first check-in occurs long after automated sandboxes have concluded their analysis, the malicious network activity would be invisible to any dynamic analysis that does not run well beyond standard sandbox timeouts.
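A long, regular beacon interval is still detectable in network logs after the fact. The sketch below (log format and thresholds are assumptions, not indicators from this incident) flags connection series to a single destination that recur at a near-constant interval:

```python
from statistics import mean, pstdev

def beacon_score(timestamps: list[float]) -> tuple[float, float]:
    """Given a sorted series of connection timestamps (seconds) to one
    destination, return (mean_interval, jitter). Long mean interval plus
    low jitter suggests periodic beaconing. Assumes >= 2 timestamps."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    m = mean(gaps)
    # Coefficient of variation: 0.0 means perfectly periodic traffic.
    jitter = pstdev(gaps) / m if m else float("inf")
    return m, jitter

# Example: check-ins roughly every 50 minutes (~3000 s) with small jitter.
ts = [0, 3010, 6005, 9020, 12000]
interval, jitter = beacon_score(ts)
if interval > 1800 and jitter < 0.1:  # thresholds are illustrative
    print(f"possible beacon: interval ~{interval:.0f}s, jitter {jitter:.3f}")
```

Retrospective hunting over flow logs with this kind of periodicity test is one of the few ways to surface beacons deliberately tuned to outlast sandbox runtimes.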
Attribution: TeamPCP
A threat group calling itself TeamPCP claimed responsibility for the attack via a public Telegram channel. The group had previously been largely unknown to the threat intelligence community, suggesting either a new actor or an established group operating under a fresh identity. TeamPCP's use of a public Telegram channel for claiming credit is consistent with tactics observed among groups seeking notoriety alongside operational gain.
Further attribution analysis is ongoing, and no nation-state connection has been publicly established.
Scope and Impact Assessment
The combination of LiteLLM's deployment breadth and the nature of the stolen data makes this incident particularly severe:
| Factor | Detail |
|---|---|
| Daily downloads | ~3 million |
| Cloud environment presence | ~36% |
| Exposure window | ~2 hours |
| Primary targets | Cloud credentials, API keys, crypto wallets |
| Persistence mechanism | Downloader for follow-on payloads |
Organisations running AI-powered applications built on LiteLLM, particularly those that update dependencies automatically, should assume that any system that pulled a LiteLLM update during the March 24 exposure window is potentially compromised.
Recommended Response
Immediate actions:
- Audit dependency lock files: Review `requirements.txt`, `pyproject.toml`, and `poetry.lock` files for any reference to LiteLLM versions 1.82.7 or 1.82.8
- Rotate all cloud credentials: Treat any API keys, IAM credentials, or tokens present on systems that may have pulled the malicious version as compromised
- Review environment variable configurations: Check for exposed secrets in `.env` files, CI/CD pipeline variables, and container runtime environments
- Search for the malicious downloader: Conduct endpoint investigation for persistence mechanisms installed by the malware's second-stage component
- Pin dependency versions: Until the investigation concludes, lock LiteLLM to a verified-clean version in all deployment manifests
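The dependency audit in the first step can be scripted. Below is a sketch that scans common manifest and lock files for the two malicious versions; the file names and the coarse version regex are assumptions to adapt to your repository layout, and the pattern may over-match in unusual files.

```python
import re
from pathlib import Path

# The two trojanised releases identified in the incident.
BAD_VERSIONS = {"1.82.7", "1.82.8"}
# Coarse heuristic: a version number within 20 non-digit characters of
# "litellm" (covers pins like litellm==1.82.7 and poetry.lock entries
# where the version sits on the next line).
PATTERN = re.compile(r"litellm\D{0,20}(\d+\.\d+\.\d+)")

def audit(root: str) -> list[tuple[str, str]]:
    """Return (file, version) pairs where a malicious LiteLLM pin appears."""
    hits = []
    for name in ("requirements.txt", "pyproject.toml", "poetry.lock"):
        for path in Path(root).rglob(name):
            for match in PATTERN.finditer(path.read_text(errors="ignore")):
                if match.group(1) in BAD_VERSIONS:
                    hits.append((str(path), match.group(1)))
    return hits

if __name__ == "__main__":
    for path, version in audit("."):
        print(f"AFFECTED: {path} pins litellm=={version}")
```

A clean scan of manifests does not clear a host on its own: systems that upgraded via an unpinned `pip install --upgrade` leave no trace in lock files, which is why the credential-rotation step applies regardless.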
The LiteLLM project team has released clean versions following the incident. All organisations should upgrade to the current clean release and verify the integrity of the package against the official repository checksums.
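Integrity verification can be done locally: download the artifact (for example with `pip download litellm==<clean-version> --no-deps`) and compare its SHA-256 digest against the value published for that file on PyPI. A minimal helper, with the expected digest left as a placeholder:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 so large wheels aren't read into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Usage sketch (file name and digest are placeholders):
#   expected = "<sha256 published on PyPI for that exact file>"
#   assert sha256_of("litellm-<clean-version>-py3-none-any.whl") == expected
```

Matching digests confirm the downloaded artifact is byte-identical to what the registry serves; it does not by itself prove the release was built from audited source, which is why reviewing the project's post-incident advisory remains necessary.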
The Broader Supply Chain Threat
The LiteLLM compromise is the latest in a series of supply chain attacks targeting AI and machine learning infrastructure. The open-source AI tooling ecosystem has grown rapidly with minimal security scrutiny — many widely-deployed packages are maintained by small teams with limited security resources, using PyPI publishing workflows that do not enforce hardware security keys or multi-factor authentication for maintainer accounts.
As AI packages become foundational to enterprise cloud infrastructure, their security posture must be treated with the same rigour applied to operating system packages, network libraries, and authentication frameworks.