OpenAI has issued an advisory urging macOS users to update their software in response to an expanding npm supply chain attack that compromised the widely used TanStack open-source library ecosystem. The campaign also swept up additional npm and PyPI packages directly tied to several major AI companies, widening the scope of the incident well beyond TanStack itself.
What Happened
The attack targeted TanStack — a collection of popular open-source JavaScript/TypeScript libraries including TanStack Query, TanStack Router, and TanStack Table, which are used by hundreds of thousands of developers and deployed across millions of applications.
Threat actors compromised packages within the TanStack ecosystem and injected malicious code designed to steal credentials and developer secrets. The campaign then expanded to additional packages in both npm (the Node.js package registry) and PyPI (the Python Package Index), specifically targeting packages tied to AI companies — including tooling associated with OpenAI's development ecosystem.
The malicious packages were crafted to blend in with legitimate software, making them difficult to detect without explicit security scanning.
Why macOS Users Specifically
OpenAI's advisory specifically flagged macOS users because the malicious payload in the compromised packages included code paths that targeted macOS credential storage and keychain access. On macOS, developer tools often store API keys, authentication tokens, and cloud credentials in accessible locations that the malicious package code attempted to harvest.
The concern is particularly acute for developers who:
- Install npm or PyPI packages in global environments
- Use AI development tools (OpenAI SDK, LangChain, and similar) alongside TanStack
- Store API keys and cloud credentials in their development environment
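A quick way to see what a credential harvester would find is to enumerate the usual storage locations yourself. The sketch below builds a fabricated demo directory so the paths are purely illustrative; on a real machine you would point the `find` at your home directory.

```shell
# Fabricated tree standing in for a developer home directory.
DEMO=/tmp/home-demo
mkdir -p "$DEMO/.ssh" "$DEMO/.aws"
touch "$DEMO/.ssh/id_ed25519" "$DEMO/.aws/credentials" "$DEMO/.npmrc"

# Files like these are exactly what harvesting payloads target;
# knowing where they live tells you which credentials to rotate.
find "$DEMO" -name 'id_*' -o -name 'credentials' -o -name '.npmrc'
```

Anything this turns up on a machine that ran a compromised package should be treated as exposed.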
Scope of the Campaign
The supply chain attack is notable for its breadth across the open-source AI tooling ecosystem:
| Component | Status |
|---|---|
| TanStack npm packages | Compromised — malicious versions published |
| PyPI packages (AI-linked) | Additional malicious packages identified |
| OpenAI SDK-adjacent packages | Under investigation |
| Other AI company packages | Multiple affected — investigation ongoing |
The campaign appears to be related to the Mini Shai Hulud supply chain worm that targeted TanStack, Mistral AI Guardrails, and other packages in prior weeks — suggesting an ongoing, coordinated operation rather than a one-time attack.
What the Malicious Code Did
Based on researcher analysis, the compromised packages attempted to:
- Exfiltrate developer credentials — API keys, tokens, and environment variables
- Harvest SSH keys and configuration files from developer machines
- Access macOS Keychain entries where credentials may be stored
- Transmit stolen data to attacker-controlled infrastructure
- Persist on affected systems if run with sufficient privileges
The malicious payload was embedded so that it executed during normal package installation or usage, without requiring any explicit action from the developer.
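On npm, the usual vector for this kind of install-time execution is a lifecycle script (`preinstall`/`install`/`postinstall`) declared in a dependency's manifest. The sketch below greps a fabricated manifest for such hooks; the file and the `collect.js` script name are stand-ins, not artifacts from this campaign.

```shell
# Fabricated manifest standing in for node_modules/<pkg>/package.json;
# compromised packages declare hooks like this to run code at install time.
cat > /tmp/suspect-package.json <<'EOF'
{
  "name": "some-dependency",
  "version": "1.2.3",
  "scripts": { "postinstall": "node ./collect.js" }
}
EOF

# Flag any install-time lifecycle hooks in the manifest
grep -E '"(pre)?install"|"postinstall"' /tmp/suspect-package.json
```

Running `npm config set ignore-scripts true` (or installing with `npm ci --ignore-scripts`) disables these hooks entirely, at the cost of breaking the minority of packages that legitimately rely on them.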
How to Check If You Are Affected
For npm Projects
```shell
# Audit your npm dependencies for known malicious packages
npm audit

# Check for TanStack packages (including transitive installs) and their versions
npm list --all | grep -i '@tanstack'

# Review your npm install logs for unexpected TanStack installs
grep -i tanstack ~/.npm/_logs/*.log 2>/dev/null

# Use a dedicated supply chain scanner
npx better-npm-audit audit
```

For PyPI / Python Projects
```shell
# List installed packages and cross-check against the published list
# of compromised PyPI package names in the advisory
pip list

# Use pip-audit for supply chain scanning
pip install pip-audit
pip-audit

# Review requirements files for unexpected additions
cat requirements.txt requirements-dev.txt
```

For macOS Credential Exposure
```shell
# Review recent Keychain activity: open the Keychain Access app
# (Applications > Utilities > Keychain Access) and inspect item access

# Review environment variables that may have been exposed
env | grep -iE 'key|token|secret|password|api'

# Check for unexpected outbound connections or data transmission
sudo lsof -i -n -P | grep ESTABLISHED
```

Immediate Actions
If you have TanStack packages installed:
- Identify all installed TanStack package versions across your projects
- Cross-reference against published malicious version ranges (check npm advisory database)
- Update to clean versions that have been verified as uncompromised
- Rotate any credentials that could have been exposed — API keys, tokens, SSH keys
- Review your developer machine for signs of credential harvesting activity
- Audit your npm and PyPI lock files for unexpected dependency additions
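To make the first steps concrete, here is a minimal sketch that pulls TanStack entries with their pinned versions out of a `package-lock.json`. The lockfile fragment below is fabricated for illustration; point the `grep` at your real lockfiles and cross-reference the versions against the npm advisory database.

```shell
# Fabricated lockfile fragment standing in for a project's package-lock.json.
cat > /tmp/sample-package-lock.json <<'EOF'
{
  "packages": {
    "node_modules/@tanstack/react-query": { "version": "5.59.0" },
    "node_modules/@tanstack/react-table": { "version": "8.20.5" }
  }
}
EOF

# Print each TanStack entry with its pinned version for cross-referencing
grep '@tanstack/' /tmp/sample-package-lock.json
```

Running this across every project checkout gives you the full list of versions to verify before rotating credentials.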
For organizations:
- Scan CI/CD pipelines for exposure to the compromised packages
- Review dependency caching — poisoned packages may be cached in CI artifact stores
- Alert developers to rotate credentials for any system they access from affected machines
- Implement software composition analysis (SCA) tools in the build pipeline
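For the dependency-caching point, a sketch of a cache sweep is below. The cache directory and tarball names are fabricated stand-ins; substitute your CI runner's actual artifact or package cache path.

```shell
# Fabricated cache directory standing in for a CI runner's artifact store.
CACHE_DIR=/tmp/ci-cache-demo
mkdir -p "$CACHE_DIR"
touch "$CACHE_DIR/tanstack-react-query-5.59.0.tgz" "$CACHE_DIR/lodash-4.17.21.tgz"

# Any TanStack hit means the cache must be purged and rebuilt from
# versions verified as clean, or poisoned tarballs will keep reinstalling.
find "$CACHE_DIR" -name '*tanstack*'
```

The same sweep applies to local npm caches (`~/.npm/_cacache`) and pip caches on developer machines.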
The Broader Supply Chain Picture
This incident is the latest in a sustained wave of supply chain attacks targeting the open-source AI and developer tooling ecosystem in 2026. The pattern is consistent: attackers target widely-used libraries that AI developers depend on, maximizing the blast radius of a single compromise.
| Campaign | Target | Method |
|---|---|---|
| Mini Shai Hulud | TanStack, Mistral AI, Guardrails AI | npm worm self-replication |
| TanStack expansion | npm + PyPI AI packages | Package poisoning |
| SAP npm compromise | SAP-related npm packages | Credential theft payload |
| Axios npm attack (April) | Axios maintainer account | Social engineering |
| Checkmarx Jenkins plugin | DevSecOps CI/CD tooling | Supply chain injection |
The common thread: developers and AI tooling are high-value targets because compromised developer machines provide access to production credentials, source code, and downstream customers.
OpenAI's Guidance
OpenAI's update advisory recommends:
- Update the OpenAI macOS application to the latest version immediately
- Rotate any OpenAI API keys that have been used on potentially affected machines
- Review API key usage in the OpenAI platform dashboard for unexpected activity
- Report unusual API usage to OpenAI security at security@openai.com