Microsoft and Salesforce have both patched critical prompt injection vulnerabilities in their AI agent platforms that allowed external, unauthenticated attackers to exfiltrate sensitive customer data — without ever gaining authenticated access to the underlying systems.
The flaws highlight a growing and poorly understood attack surface created by AI agents that process external inputs alongside privileged business data.
The Salesforce Agentforce Flaw — "PipeLeak"
The Salesforce vulnerability, internally codenamed PipeLeak by researchers, targeted Agentforce — Salesforce's agentic AI platform that automates customer service and CRM workflows.
Attack flow:
- An attacker submits a crafted message via a public Salesforce lead capture form — no account required
- The message contains hidden prompt injection instructions telling the AI agent to treat the request as a trusted administrator command
- The Agentforce agent processes the form submission, interprets the injected instructions, and executes them as part of its normal workflow
- The agent returns CRM data — including contact records, deal information, and internal notes — via email to an attacker-controlled address
The root cause was Agentforce's failure to distinguish between untrusted user input (form submissions from the public internet) and trusted administrative instructions. The AI agent's design assumed certain inputs were safe to act on autonomously.
The Microsoft Copilot Flaw — CVE-2026-21520
Microsoft's Copilot vulnerability (CVE-2026-21520, CVSS 7.5) affected its enterprise AI assistant when integrated with SharePoint.
Attack flow:
- An attacker crafts a malicious entry in a SharePoint form input accessible to the Copilot integration
- The injected content instructs Copilot to execute connected actions — specifically, to forward data using its connected capabilities
- Copilot, treating the SharePoint input as a trusted source, executes the action and sends customer or employee data to an attacker-controlled endpoint
Unlike traditional injection attacks, no code execution or privileged access was required. The attacker simply needed the ability to submit a form that Copilot's data pipeline would eventually process.
Why Traditional Security Controls Fail Here
| Traditional Security | Why It Fails Against Prompt Injection |
|---|---|
| Authentication & authorization | Attacker doesn't need credentials — they manipulate the AI |
| Input validation (WAF, sanitization) | AI agents require natural language input — semantic filtering is hard |
| Network segmentation | The AI agent bridges internal data and external inputs by design |
| Audit logging | Agent "decisions" may not be logged with enough fidelity to detect manipulation |
These vulnerabilities bypass traditional perimeter defenses because the attack vector is the AI's reasoning process, not the application's code.
Patches and Mitigations
Both companies have shipped fixes:
- Salesforce patched Agentforce to implement input trust boundaries — form submissions from unauthenticated sources are now processed in a restricted context that prevents privileged actions
- Microsoft patched CVE-2026-21520 in Copilot and updated its SharePoint integration to sandbox input sources before allowing connected actions
For organizations deploying AI agents:
1. Apply vendor patches immediately — both are now available
2. Audit AI agent permissions — apply least privilege to connected actions
3. Separate untrusted input pipelines from trusted instruction channels
4. Enable enhanced logging for AI agent actions (especially outbound data operations)
5. Implement human-in-the-loop review for high-impact agent actionsThe Broader Threat Landscape
These two patches are part of a growing wave of AI agent security disclosures in 2026. Security researchers have identified prompt injection as one of the most prevalent and difficult-to-fix vulnerability classes in agentic AI systems.
The underlying problem is architectural: AI agents are designed to follow instructions in natural language, and distinguishing between legitimate instructions from a trusted operator and malicious instructions injected via untrusted data is an unsolved problem at the model level.
Key risk factors for organizations deploying AI agents:
- Public-facing inputs processed by the agent (web forms, email, chat)
- Broad permissions granted to agent-connected actions (email, calendar, CRM write access)
- Insufficient logging of agent decisions and actions
- Assumption that AI agents are just another internal service — they are not
Recommendations for Security Teams
| Control | Effectiveness Against Prompt Injection |
|---|---|
| Apply vendor patches | Essential — blocks the specific known vectors |
| Enforce agent least-privilege | Limits blast radius if injection succeeds |
| Separate input trust zones | Prevents public input from reaching privileged agent contexts |
| Deploy semantic input analysis | Emerging — specialized models for detecting injection attempts |
| Human review for sensitive agent actions | High effectiveness — adds a break in the automated chain |
| Incident response playbook for AI agents | Essential for detecting and containing active injection attacks |
The patching of these two flaws closes the immediate risk, but organizations should treat prompt injection as a persistent threat class that requires ongoing attention as AI agent deployments expand.
Source: Dark Reading