Skip to main content
COSMICBYTEZLABS
NewsSecurityHOWTOsToolsStudyTraining
ProjectsChecklistsAI RankingsNewsletterStatusTagsAbout
Subscribe

Press Enter to search or Esc to close

News
Security
HOWTOs
Tools
Study
Training
Projects
Checklists
AI Rankings
Newsletter
Status
Tags
About
RSS Feed
Reading List
Subscribe

Stay in the Loop

Get the latest security alerts, tutorials, and tech insights delivered to your inbox.

Subscribe NowFree forever. No spam.
COSMICBYTEZLABS

Your trusted source for IT intelligence, cybersecurity insights, and hands-on technical guides.

698+ Articles
118+ Guides

CONTENT

  • Latest News
  • Security Alerts
  • HOWTOs
  • Projects
  • Exam Prep

RESOURCES

  • Search
  • Browse Tags
  • Newsletter Archive
  • Reading List
  • RSS Feed

COMPANY

  • About Us
  • Contact
  • Privacy Policy
  • Terms of Service

© 2026 CosmicBytez Labs. All rights reserved.

System Status: Operational
  1. Home
  2. News
  3. Microsoft, Salesforce Patch AI Agent Data Leak Flaws
Microsoft, Salesforce Patch AI Agent Data Leak Flaws
NEWS

Microsoft, Salesforce Patch AI Agent Data Leak Flaws

Prompt injection vulnerabilities in Salesforce Agentforce and Microsoft Copilot would have allowed unauthenticated attackers to exfiltrate sensitive CRM and customer data by hijacking AI agent logic via public-facing inputs.

Dylan H.

News Desk

April 19, 2026
5 min read

Microsoft and Salesforce have both patched critical prompt injection vulnerabilities in their AI agent platforms that allowed external, unauthenticated attackers to exfiltrate sensitive customer data — without ever gaining authenticated access to the underlying systems.

The flaws highlight a growing and poorly understood attack surface created by AI agents that process external inputs alongside privileged business data.

The Salesforce Agentforce Flaw — "PipeLeak"

The Salesforce vulnerability, internally codenamed PipeLeak by researchers, targeted Agentforce — Salesforce's agentic AI platform that automates customer service and CRM workflows.

Attack flow:

  1. An attacker submits a crafted message via a public Salesforce lead capture form — no account required
  2. The message contains hidden prompt injection instructions telling the AI agent to treat the request as a trusted administrator command
  3. The Agentforce agent processes the form submission, interprets the injected instructions, and executes them as part of its normal workflow
  4. The agent returns CRM data — including contact records, deal information, and internal notes — via email to an attacker-controlled address

The root cause was Agentforce's failure to distinguish between untrusted user input (form submissions from the public internet) and trusted administrative instructions. The AI agent's design assumed certain inputs were safe to act on autonomously.

The Microsoft Copilot Flaw — CVE-2026-21520

Microsoft's Copilot vulnerability (CVE-2026-21520, CVSS 7.5) affected its enterprise AI assistant when integrated with SharePoint.

Attack flow:

  1. An attacker crafts a malicious entry in a SharePoint form input accessible to the Copilot integration
  2. The injected content instructs Copilot to execute connected actions — specifically, to forward data using its connected capabilities
  3. Copilot, treating the SharePoint input as a trusted source, executes the action and sends customer or employee data to an attacker-controlled endpoint

Unlike traditional injection attacks, no code execution or privileged access was required. The attacker simply needed the ability to submit a form that Copilot's data pipeline would eventually process.

Why Traditional Security Controls Fail Here

Traditional SecurityWhy It Fails Against Prompt Injection
Authentication & authorizationAttacker doesn't need credentials — they manipulate the AI
Input validation (WAF, sanitization)AI agents require natural language input — semantic filtering is hard
Network segmentationThe AI agent bridges internal data and external inputs by design
Audit loggingAgent "decisions" may not be logged with enough fidelity to detect manipulation

These vulnerabilities bypass traditional perimeter defenses because the attack vector is the AI's reasoning process, not the application's code.

Patches and Mitigations

Both companies have shipped fixes:

  • Salesforce patched Agentforce to implement input trust boundaries — form submissions from unauthenticated sources are now processed in a restricted context that prevents privileged actions
  • Microsoft patched CVE-2026-21520 in Copilot and updated its SharePoint integration to sandbox input sources before allowing connected actions

For organizations deploying AI agents:

1. Apply vendor patches immediately — both are now available
2. Audit AI agent permissions — apply least privilege to connected actions
3. Separate untrusted input pipelines from trusted instruction channels
4. Enable enhanced logging for AI agent actions (especially outbound data operations)
5. Implement human-in-the-loop review for high-impact agent actions

The Broader Threat Landscape

These two patches are part of a growing wave of AI agent security disclosures in 2026. Security researchers have identified prompt injection as one of the most prevalent and difficult-to-fix vulnerability classes in agentic AI systems.

The underlying problem is architectural: AI agents are designed to follow instructions in natural language, and distinguishing between legitimate instructions from a trusted operator and malicious instructions injected via untrusted data is an unsolved problem at the model level.

Key risk factors for organizations deploying AI agents:

  • Public-facing inputs processed by the agent (web forms, email, chat)
  • Broad permissions granted to agent-connected actions (email, calendar, CRM write access)
  • Insufficient logging of agent decisions and actions
  • Assumption that AI agents are just another internal service — they are not

Recommendations for Security Teams

ControlEffectiveness Against Prompt Injection
Apply vendor patchesEssential — blocks the specific known vectors
Enforce agent least-privilegeLimits blast radius if injection succeeds
Separate input trust zonesPrevents public input from reaching privileged agent contexts
Deploy semantic input analysisEmerging — specialized models for detecting injection attempts
Human review for sensitive agent actionsHigh effectiveness — adds a break in the automated chain
Incident response playbook for AI agentsEssential for detecting and containing active injection attacks

The patching of these two flaws closes the immediate risk, but organizations should treat prompt injection as a persistent threat class that requires ongoing attention as AI agent deployments expand.


Source: Dark Reading

#AI Security#Prompt Injection#Microsoft#Salesforce#Data Breach

Related Articles

Microsoft Discovers 'AI Recommendation Poisoning' via

Microsoft's Defender team tracked over 50 unique prompt injection payloads from 31 companies using 'Summarize with AI' buttons to manipulate chatbot...

3 min read

Medusa Ransomware Exploits Zero-Days to Deploy Ransomware Within 24 Hours

Microsoft has raised the alarm over Medusa ransomware's unprecedented operational speed, with the group now exploiting zero-day vulnerabilities before...

4 min read

Medusa Ransomware Group Exploits Zero-Days to Strike Within 24 Hours

Microsoft warns that Medusa ransomware operators are exploiting zero-day vulnerabilities approximately one week before public disclosure, enabling the...

4 min read
Back to all News