This week's threat landscape is a mix of high-urgency vulnerabilities, expanding supply chain compromises, and a growing class of AI-targeting attacks that continue to demonstrate how quickly offensive tooling is evolving. From an actively exploited Palo Alto Networks flaw to AI systems discovering bugs in core libraries — here is the week in security.
PAN-OS Remote Code Execution Under Active Exploitation
Palo Alto Networks' PAN-OS, the operating system powering the company's enterprise firewalls and network security appliances, is facing active exploitation of a remote code execution vulnerability. Attackers are abusing the flaw to gain root-level access to affected devices, making this one of the most serious threats of the week.
Organizations running Palo Alto firewalls in perimeter or internal segmentation roles should:
- Verify current PAN-OS versions and apply vendor-issued patches immediately
- Enable Threat Prevention signatures that detect exploitation attempts
- Review firewall management interface access — restrict to known management IPs only
- Monitor for unusual command execution or configuration changes on affected appliances
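As a first triage step, the version check in the first bullet can be sketched as a small script. Note that the fixed-release numbers below are placeholders, not the actual patched PAN-OS versions — substitute the values from Palo Alto Networks' advisory before using this.

```python
# Sketch: flag PAN-OS versions below a patched baseline.
# The FIXED version numbers are placeholders (assumption), not the
# actual fixed releases -- check the vendor advisory for real values.

def parse_version(v: str) -> tuple[int, ...]:
    """Turn a PAN-OS version string like '11.1.4-h7' into a sortable tuple."""
    base, _, hotfix = v.partition("-h")
    parts = [int(p) for p in base.split(".")]
    parts.append(int(hotfix) if hotfix else 0)
    return tuple(parts)

# Placeholder minimum fixed versions per release train (assumption).
FIXED = {
    (10, 2): "10.2.99",
    (11, 1): "11.1.99",
}

def is_patched(installed: str) -> bool:
    """True if the installed version meets the (placeholder) fixed baseline."""
    iv = parse_version(installed)
    fixed = FIXED.get(iv[:2])
    if fixed is None:
        return False  # unknown train: treat as unpatched until verified
    return iv >= parse_version(fixed)
```

Fed from an inventory system, a check like this gives a quick list of appliances that still need the vendor patch.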
Palo Alto Networks has acknowledged exploitation and released patches. CISA has flagged related Palo Alto vulnerabilities in its KEV catalog this week, reinforcing the urgency.
Mythos AI Discovers cURL Memory Safety Bug
Anthropic's Mythos — its AI-powered vulnerability research system — identified a memory safety bug in cURL, the ubiquitous command-line tool and library used in millions of applications and operating systems worldwide. This represents another milestone in AI-assisted vulnerability discovery, following Mythos's earlier findings of vulnerabilities in Vim and Emacs.
The cURL bug involves incorrect memory handling that could be triggered by a specially crafted server response. While exploitation complexity varies by context, cURL's near-universal deployment makes even moderate-severity findings significant from a supply chain risk perspective.
Key context:
- cURL is embedded in everything from mobile operating systems to cloud APIs to enterprise middleware
- Memory safety bugs in widely used libraries can be chained with other vulnerabilities
- The cURL maintainer team has been coordinating with Anthropic on responsible disclosure
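Because cURL is embedded so widely, the first practical task for most teams is simply knowing which versions are deployed. A minimal sketch of that inventory step, parsing the first line of `curl --version` output, is below — the fixed-version threshold is a placeholder (assumption), since no patched release is identified here.

```python
# Sketch: inventory curl versions from `curl --version` banners
# collected across hosts. FIXED_PLACEHOLDER is an assumption, not the
# real fixed release -- substitute it once the advisory is published.
import re

FIXED_PLACEHOLDER = (8, 99, 0)  # placeholder threshold, not a real version

def curl_version(banner: str) -> tuple[int, ...]:
    """Pull the version triple out of a banner like 'curl 8.5.0 (...) ...'."""
    m = re.match(r"curl (\d+)\.(\d+)\.(\d+)", banner)
    if not m:
        raise ValueError(f"unrecognized banner: {banner!r}")
    return tuple(int(g) for g in m.groups())

def needs_update(banner: str) -> bool:
    """True if the reported version is below the placeholder fixed release."""
    return curl_version(banner) < FIXED_PLACEHOLDER
```

Remember that many systems bundle libcurl independently of the command-line binary, so a banner-based inventory covers only part of the footprint.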
The discovery continues a pattern of AI systems proving capable of finding bugs that human researchers missed despite years of code review.
AI Tokenizer Attacks: A New Class of Prompt Injection
Researchers this week detailed how AI tokenizer weaknesses can be systematically exploited to craft inputs that bypass safety filters and inject malicious instructions at scale. Unlike traditional prompt injection attacks, tokenizer-based attacks exploit the way language models split input text into tokens — creating edge cases where safety classifiers see safe content while the model processes something different.
Practical implications for organizations deploying AI:
| Attack Surface | Risk |
|---|---|
| Public-facing AI chatbots | User-supplied inputs crafted to bypass content filters |
| AI-powered email security | Malicious emails crafted to evade AI-based detection |
| Code review AI tools | Injected instructions causing the AI to approve malicious code |
| RAG systems | Poisoned documents in knowledge bases that manipulate AI responses |
Mitigations include multi-layer input validation, output filtering independent of the model, and treating AI systems as untrusted components in security-sensitive pipelines.
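The "multi-layer input validation" mitigation can be illustrated with a character-level analogue of these attacks: a naive keyword filter is defeated by a single zero-width space, while a normalization layer applied before filtering catches the same input. The blocklist and evasion string are toy examples for illustration, not the tokenizer-level techniques the researchers described.

```python
# Sketch of one validation layer: normalize input before filtering, so
# zero-width characters and Unicode confusables cannot split a blocked
# phrase into something the filter no longer recognizes.
# The blocklist is a toy example, not a production filter.
import unicodedata

ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\ufeff"}
BLOCKLIST = ["ignore previous instructions"]  # toy example

def normalize(text: str) -> str:
    # NFKC folds many visually identical variants (e.g. fullwidth forms)
    # to a canonical form; stripping zero-width characters rejoins
    # deliberately split words.
    text = unicodedata.normalize("NFKC", text)
    return "".join(ch for ch in text if ch not in ZERO_WIDTH).lower()

def naive_filter(text: str) -> bool:
    """Raw substring match: defeated by a single zero-width space."""
    return any(phrase in text.lower() for phrase in BLOCKLIST)

def layered_filter(text: str) -> bool:
    """The same match, applied after normalization."""
    return any(phrase in normalize(text) for phrase in BLOCKLIST)

# 'ignore' split by a zero-width space (U+200B)
evasion = "ig\u200bnore previous instructions"
```

The broader point stands regardless of the filter's sophistication: any check that operates on a different view of the input than the model itself sees can be bypassed, which is why output filtering independent of the model is recommended as a second layer.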
Supply Chain: TanStack npm Compromise Expands
The TanStack npm supply chain attack — which prompted OpenAI to warn macOS users to update — continued to expand this week as researchers identified additional compromised packages, across both npm and PyPI, tied to AI companies. The campaign is notable for its breadth across the open-source AI tooling ecosystem.
See our dedicated coverage: OpenAI Asks macOS Users to Update After TanStack npm Supply Chain Attack.
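One way to start the dependency scan is to cross-check a project lockfile against published indicators. The sketch below walks an npm `package-lock.json` (v2/v3 format, which maps `node_modules/...` paths to versions under a `packages` key); the `COMPROMISED` set is an illustrative placeholder, not real IOC data — populate it from the published advisories.

```python
# Sketch: flag lockfile entries that match known-compromised
# (name, version) pairs. COMPROMISED holds illustrative placeholders
# (assumption), not real indicators.
import json

COMPROMISED = {
    ("some-tanstack-plugin", "1.2.3"),  # placeholder, not a real IOC
}

def flag_packages(lockfile_json: str) -> list[tuple[str, str]]:
    """Return (name, version) pairs from a package-lock v2/v3 that
    appear on the indicator list."""
    lock = json.loads(lockfile_json)
    hits = []
    for path, meta in lock.get("packages", {}).items():
        if not path:
            continue  # the empty key is the root project itself
        # 'node_modules/a/node_modules/b' resolves to package name 'b'
        name = path.split("node_modules/")[-1]
        pair = (name, meta.get("version", ""))
        if pair in COMPROMISED:
            hits.append(pair)
    return hits
```

The same pattern applies to PyPI by walking a `requirements.txt` lock or `poetry.lock` instead; exact-version matching matters, since compromised releases are typically specific versions of otherwise legitimate packages.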
Additional Stories This Week
Windows Zero-Days — BitLocker Bypass and CTFMon Privilege Escalation: A researcher dropped proof-of-concept exploits for two Windows vulnerabilities — one enabling a BitLocker bypass in physical-access scenarios and another abusing the CTFMon service for local privilege escalation.
18-Year-Old nginx Rewrite Module Flaw: A critical vulnerability in nginx's rewrite module, lurking undetected for nearly two decades, was disclosed this week. The flaw enables unauthenticated remote code execution on affected nginx deployments — a significant risk given nginx's massive share of the web server market.
KongTuke Hackers Pivot to Microsoft Teams: The KongTuke threat actor, previously known for email-based corporate breaches, has shifted tactics to Microsoft Teams as an initial access vector — impersonating IT support to deliver malware via Teams messages.
PraisonAI CVE-2026-44338 Auth Bypass: An authentication bypass vulnerability in PraisonAI, a popular multi-agent AI framework, was targeted within hours of its public disclosure. The rapid weaponization highlights the shrinking window between vulnerability disclosure and exploitation.
FrostyNeighbor APT Targets Poland and Ukraine: A Belarusian nation-state threat group, dubbed FrostyNeighbor, was observed fingerprinting government targets in Poland and Ukraine before delivering spear-phishing payloads in an ongoing espionage campaign.
Cisco Catalyst SD-WAN Auth Bypass Added to CISA KEV: CVE-2026-20182, a critical authentication bypass in Cisco Catalyst SD-WAN Controller and Manager, was added to CISA's Known Exploited Vulnerabilities catalog.
What Security Teams Should Prioritize This Week
| Priority | Action |
|---|---|
| Critical | Patch PAN-OS — root RCE is actively exploited |
| Critical | Review Cisco SD-WAN deployments for CVE-2026-20182 |
| High | Audit nginx versions for rewrite module vulnerability |
| High | Scan npm and PyPI dependencies for TanStack-linked compromises |
| High | Update cURL across all systems once a patch is available |
| Medium | Brief staff on KongTuke's Microsoft Teams social engineering lures |
| Medium | Review AI deployments for tokenizer attack exposure |