Mercor, the AI-powered hiring and talent assessment platform, has confirmed a security incident directly tied to the LiteLLM PyPI supply chain attack attributed to threat group TeamPCP. In a separate development, the notorious hacking gang Lapsus$ claimed on its website to have obtained hundreds of gigabytes of data from Mercor — a claim that, if accurate, would represent a significant escalation beyond what the supply chain attack alone would explain.
What Is Mercor?
Mercor is an AI-driven hiring platform used by companies to automate candidate evaluation, technical screening, and talent pipeline management. Its platform integrates large language models (LLMs) to assess candidates and generate hiring recommendations, making it a particularly sensitive target given the volume of personal data, resumes, assessment results, and employer confidential hiring criteria it processes.
| Attribute | Details |
|---|---|
| Victim | Mercor (AI hiring platform) |
| Attack Vector | LiteLLM PyPI supply chain compromise |
| Attacker | TeamPCP (supply chain); Lapsus$ (data claim) |
| Claimed Data | Hundreds of gigabytes |
| Source | The Record |
The LiteLLM Supply Chain Attack
LiteLLM is a popular open-source Python package that provides a unified interface for calling over 100 LLM APIs, including OpenAI, Anthropic, Azure, and others. It is widely used by AI application developers — including companies like Mercor — because it abstracts away provider-specific API differences.
TeamPCP compromised the LiteLLM package published to PyPI (the Python Package Index), injecting malicious code into what appeared to be a legitimate release. Applications that installed the backdoored version would execute the attacker's payload alongside the legitimate LiteLLM functionality, typically without any visible indication to developers or users.
Supply Chain Attack Flow:
1. TeamPCP gains access to LiteLLM's PyPI publishing account
(likely through compromised developer credentials or
a CI/CD pipeline breach)
2. Malicious code is injected into a new LiteLLM release
and published to PyPI
3. Organizations using LiteLLM install the update via
standard dependency management (pip install --upgrade)
4. The malicious payload activates on affected systems:
- Exfiltrates environment variables (API keys, secrets)
- Establishes persistence or reverse shell capability
- Collects application data accessible to the LiteLLM process
5. Mercor's infrastructure, running the backdoored LiteLLM
version, is compromised

This attack pattern mirrors the xz-utils backdoor (2024) and the Codecov breach (2021) — incidents where injecting malicious code into a widely used developer dependency provided attackers with broad access to organizations that trusted the package.
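Step 4's environment-variable theft is simple to implement, which is part of why it is so common in Python supply chain attacks. A minimal illustrative sketch of what import-time harvesting code might look like (this is not the actual TeamPCP payload; the name-matching heuristic is an assumption for illustration):

```python
import os

# Name fragments that commonly mark credentials in environment variables.
SUSPICIOUS_KEYS = ("KEY", "TOKEN", "SECRET", "PASSWORD")

def harvest_env():
    """Collect environment variables whose names suggest credentials."""
    return {
        name: value
        for name, value in os.environ.items()
        if any(marker in name.upper() for marker in SUSPICIOUS_KEYS)
    }

# In a real attack, code like this runs at package import time and sends
# the result to an attacker-controlled endpoint; only the collection
# step is shown here.
stolen = harvest_env()
```

Because the harvesting runs inside the legitimate application process, it inherits every secret that process can see — which is exactly why post-incident guidance centers on rotating all credentials the affected systems could access.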
Lapsus$ Involvement
Complicating the attribution picture, Lapsus$ independently claimed responsibility for obtaining "hundreds of gigabytes" of Mercor data and posted this claim on its website. Lapsus$ — known for high-profile breaches of Okta, Microsoft, Samsung, and Nvidia — typically operates via social engineering, SIM swapping, and insider recruitment rather than technical supply chain exploitation.
The dual-attribution scenario raises several questions:
- Did Lapsus$ gain separate access through its own methods, leveraging credentials or access obtained by TeamPCP?
- Is Lapsus$ claiming credit for data actually exfiltrated by TeamPCP in order to boost its public profile?
- Did Lapsus$ exploit the LiteLLM window independently once the compromise became known within underground communities?
Security researchers have noted increasing collaboration and information-sharing between distinct threat actor groups, making clean attribution increasingly difficult. It is plausible that multiple groups exploited the same initial access vector.
What Data May Have Been Exposed
Mercor's platform processes highly sensitive data across its hiring workflow:
CANDIDATE DATA:
- Full resumes, work history, and contact information
- Video interview recordings and AI-generated assessments
- Technical assessment results and scoring rubrics
- Compensation expectations and career data
EMPLOYER DATA:
- Confidential job descriptions and hiring criteria
- Internal team structure and headcount planning
- Evaluation notes and hiring decision rationale
- Integration credentials for ATS systems (Greenhouse, Lever, etc.)
INFRASTRUCTURE DATA (most at risk via supply chain attack):
- Cloud provider API keys in environment variables
- LLM provider API keys (OpenAI, Anthropic, etc.)
- Database connection strings
- Internal service authentication tokens

If TeamPCP's payload targeted environment variables (a common technique in supply chain attacks against Python applications), API keys and infrastructure credentials would be the highest-priority stolen assets — potentially enabling further lateral movement into Mercor's cloud environment.
Mercor's Response
Mercor confirmed the security incident in a statement to The Record, acknowledging the connection to the LiteLLM supply chain attack. The company indicated it was investigating the scope of the incident and taking steps to remediate the affected systems.
Specific details about the nature of the data accessed, the duration of the compromise, or remediation steps taken were not disclosed at the time of reporting. Mercor did not directly address Lapsus$'s data theft claim.
Broader Implications for AI Application Security
This incident underscores a rapidly growing attack surface: AI application stacks built on open-source LLM tooling. The explosion of applications built on packages like LiteLLM, LangChain, LlamaIndex, and similar frameworks has created a new class of supply chain risk:
- These packages are updated frequently — applications often auto-update dependencies, reducing the window to detect malicious releases
- They run with broad permissions — LLM orchestration frameworks typically have access to production API keys, databases, and external services
- They are installed in production environments — unlike development tools that may have limited blast radius, LLM frameworks are increasingly core application dependencies
The LiteLLM attack demonstrates that threat actors are actively studying this ecosystem and willing to invest in supply chain compromise to access the organizations building on top of it.
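A concrete first step toward managing this risk is simply knowing which LLM-framework versions are deployed. A minimal inventory sketch using only the standard library (the watchlist names are illustrative examples from the frameworks mentioned above; extend it for your own stack):

```python
from importlib import metadata

# Example package names only — adjust to match your dependency tree.
WATCHLIST = ("litellm", "langchain", "llama-index")

def llm_stack_inventory():
    """Map each installed watchlist package to its installed version."""
    found = {}
    for dist in metadata.distributions():
        name = (dist.metadata["Name"] or "").lower()
        if name in WATCHLIST:
            found[name] = dist.version
    return found

print(llm_stack_inventory())
```

Run across production hosts, output like this can be compared against security advisories to identify machines that installed a release from an affected window.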
Recommended Actions
For Organizations Using LiteLLM
IMMEDIATE:
- Pin LiteLLM to a known-safe version and verify the package
hash against official release signatures
- Audit your dependency tree for any LiteLLM version installed
during the affected window
- Rotate ALL API keys, tokens, and secrets accessible to
systems where LiteLLM was running
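The pin-and-verify step can be automated. A minimal hash-verification sketch — the expected digest below is a placeholder; substitute the SHA-256 hash published on PyPI for the pinned, known-safe release:

```python
import hashlib

# Placeholder digest — replace with the hash published on PyPI for the
# pinned LiteLLM release you have vetted.
EXPECTED_SHA256 = "0" * 64

def sha256_of(path):
    """Stream a file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_wheel(path, expected=EXPECTED_SHA256):
    """Return True only if the downloaded artifact matches the digest."""
    return sha256_of(path) == expected
```

In CI pipelines, pip's built-in hash-checking mode (`pip install --require-hashes -r requirements.txt`) performs the same check natively and refuses to install any package whose hash is missing or mismatched.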
SHORT-TERM:
- Implement dependency pinning and hash verification in all
Python application CI/CD pipelines
- Subscribe to PyPI security advisories and LiteLLM's GitHub
security advisories
- Conduct a secrets scan across all repositories and
infrastructure using tools like TruffleHog or Gitleaks

For AI Startup Leadership
- Treat open-source AI tooling as a supply chain risk — apply the same scrutiny to LLM framework dependencies as to cloud infrastructure
- Implement secrets management — API keys should never be stored in environment variables accessible to application processes; use vaults (HashiCorp Vault, AWS Secrets Manager)
- Establish a software bill of materials (SBOM) — know what packages are in production and receive alerts when those packages publish new versions
- Include AI framework vendors in your threat model — assume that popular packages will be targeted and plan accordingly
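The secrets-management point above can be sketched concretely. `VaultClient` here is a hypothetical in-memory stand-in for a real vault SDK (hvac for HashiCorp Vault, boto3 for AWS Secrets Manager); the point is the pattern — secrets are fetched just-in-time and never exported into `os.environ`, so an env-dumping payload in a compromised dependency finds nothing useful:

```python
class VaultClient:
    """Hypothetical stand-in for a real vault SDK; holds secrets in memory."""

    def __init__(self, store):
        self._store = store  # real clients talk to a vault server instead

    def get_secret(self, name):
        return self._store[name]

def call_llm(vault: VaultClient, prompt: str) -> str:
    # The key is fetched at call time and stays out of the process
    # environment, shrinking the window for env-var exfiltration.
    api_key = vault.get_secret("openai-api-key")
    return f"would call provider with key ending ...{api_key[-4:]}"

vault = VaultClient({"openai-api-key": "sk-demo-1234"})
print(call_llm(vault, "hello"))
```

The trade-off is that the vault itself becomes a high-value target, so this pattern belongs alongside, not instead of, credential rotation and least-privilege scoping for each key.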
Source: The Record — April 1, 2026