Skip to main content
COSMICBYTEZLABS
NewsSecurityHOWTOsToolsStudyTraining
ProjectsChecklistsAI RankingsNewsletterStatusTagsAbout
Subscribe

Press Enter to search or Esc to close

News
Security
HOWTOs
Tools
Study
Training
Projects
Checklists
AI Rankings
Newsletter
Status
Tags
About
RSS Feed
Reading List
Subscribe

Stay in the Loop

Get the latest security alerts, tutorials, and tech insights delivered to your inbox.

Subscribe NowFree forever. No spam.
COSMICBYTEZLABS

Your trusted source for IT intelligence, cybersecurity insights, and hands-on technical guides.

735+ Articles
120+ Guides

CONTENT

  • Latest News
  • Security Alerts
  • HOWTOs
  • Projects
  • Exam Prep

RESOURCES

  • Search
  • Browse Tags
  • Newsletter Archive
  • Reading List
  • RSS Feed

COMPANY

  • About Us
  • Contact
  • Privacy Policy
  • Terms of Service

© 2026 CosmicBytez Labs. All rights reserved.

System Status: Operational
  1. Home
  2. News
  3. Microsoft and Salesforce Patch AI Agent Prompt Injection Flaws Enabling Data Leaks
Microsoft and Salesforce Patch AI Agent Prompt Injection Flaws Enabling Data Leaks
NEWS

Microsoft and Salesforce Patch AI Agent Prompt Injection Flaws Enabling Data Leaks

Security researchers disclosed prompt injection vulnerabilities in both Microsoft Copilot and Salesforce Agentforce that would have allowed unauthenticated attackers to exfiltrate sensitive data through manipulated AI agent responses. Both vendors have issued patches.

Dylan H.

News Desk

April 15, 2026
5 min read

Two of the enterprise software industry's largest AI agent platforms — Microsoft Copilot and Salesforce Agentforce — have each patched prompt injection vulnerabilities that could have allowed attackers to steal sensitive data from business users without any authentication. The disclosures, reported by Dark Reading, highlight a growing class of security risks that emerge when AI agents are given access to sensitive corporate data and external-facing surfaces.

What Are Prompt Injection Flaws?

Prompt injection attacks exploit the fact that large language model (LLM)-based AI agents process both instructions and data through the same input channel — natural language. When an attacker can insert malicious instructions into content that the AI agent will read or process, those instructions can override the agent's legitimate directives, redirect its behavior, or cause it to exfiltrate data to an unintended destination.

In agentic AI contexts — where the AI is not merely answering questions but taking actions, retrieving documents, sending emails, or querying databases on behalf of users — prompt injection vulnerabilities can have significant downstream consequences beyond simple information disclosure.

Microsoft Copilot: The Vulnerability

The Microsoft Copilot flaw involved an indirect prompt injection attack path. Researchers found that Copilot, when operating in enterprise deployments with access to SharePoint documents, emails, and Teams messages, could be manipulated by maliciously crafted content embedded in documents it was asked to summarize or analyze.

If an attacker could place a document into a location where a Copilot-enabled user would later ask the AI to summarize it, the embedded payload could instruct Copilot to:

  • Exfiltrate the contents of other documents accessible to the user
  • Summarize and transmit sensitive email content
  • Redirect Copilot's response to include attacker-controlled external URLs as output

The attack required no access to the victim's credentials. An attacker only needed the ability to inject crafted text into a Copilot-accessible data source — something achievable through shared document libraries, public SharePoint sites, or email-based social engineering.

Microsoft addressed the flaw through server-side changes to how Copilot processes and scopes external content instructions, with no client-side patches required. The company confirmed the fix in its April 2026 security advisory.

Salesforce Agentforce: The Vulnerability

The Salesforce Agentforce vulnerability was similarly an indirect prompt injection, exploiting the platform's ability to retrieve and act on CRM data, emails, and Salesforce Flow automation triggers on behalf of users.

Researchers demonstrated a scenario in which malicious instructions embedded in a CRM record — such as a lead description or a case comment added by an external party — could hijack Agentforce's behavior when a sales or service agent asked the AI to analyze or summarize that record. The injected instructions could cause Agentforce to leak other records accessible to the agent user, or include crafted hyperlinks in AI-generated responses intended to harvest user credentials.

Salesforce patched the vulnerability by implementing stricter separation between instruction context and data context within the Agentforce execution pipeline. The fix was deployed server-side to all Salesforce tenants.

Why AI Agents Introduce New Attack Surface

Traditional application security is based on a clear separation: code executes instructions, and data is passively processed. AI agents collapse this distinction — they interpret natural language instructions and natural language data through the same model, making it structurally difficult to enforce boundaries between trusted instructions and untrusted content.

Key risk factors include:

  • Agentic tool access: AI agents increasingly have access to email, calendars, documents, and databases — making successful injection attacks much more impactful than in traditional chatbot contexts
  • Trust escalation: Enterprise deployments often grant AI agents elevated permissions to act on users' behalf, amplifying the blast radius of any injection
  • Limited observability: Prompt injection attacks may not generate logs that traditional SIEM tools would flag, since the agent is executing a seemingly normal query
  • Multi-step reasoning: Modern AI agents can chain multiple tool calls, meaning a single injected instruction can trigger a sequence of actions across multiple systems

Implications for Enterprise Deployments

Security teams deploying AI assistants like Microsoft Copilot or Salesforce Agentforce should treat externally-reachable content sources as untrusted inputs:

  1. Restrict agent data access to the minimum required — do not grant AI agents access to sensitive data repositories unless there is a clear business need
  2. Review system prompts to include explicit instructions about not following directives found in document content or CRM records
  3. Monitor agent activity logs for unusual output patterns, such as AI responses containing links to external domains or unexpectedly referencing unrelated records
  4. Apply the principle of least privilege to Copilot licenses — users who don't need access to broad SharePoint or email corpora should not have Copilot graph permissions over them
  5. Test your deployment with known prompt injection payloads against your AI-integrated data sources before granting broad enterprise rollout

Vendor Response

Both Microsoft and Salesforce responded quickly to the disclosures and deployed fixes without requiring end-user action. Neither company disclosed a CVSS score or CVE identifier for the underlying issues, consistent with how both vendors have historically handled LLM-specific security disclosures.

The researchers who identified the flaws were acknowledged by both vendors. Neither company disclosed whether the vulnerabilities were observed being exploited in the wild prior to patching.

References

  • Microsoft, Salesforce Patch AI Agent Data Leak Flaws — Dark Reading
  • What Is Prompt Injection? — OWASP LLM Top 10
  • Microsoft Copilot Security Overview — Microsoft Learn
#AI Security#Prompt Injection#Microsoft#Salesforce#Data Breach#Vulnerability

Related Articles

Microsoft and Salesforce Patch Prompt Injection Flaws in AI Agents

Both Microsoft Copilot and Salesforce Agentforce contained prompt injection vulnerabilities that allowed external attackers to leak sensitive data through AI agent interactions. Both flaws have now been patched.

5 min read

Microsoft Discovers 'AI Recommendation Poisoning' via

Microsoft's Defender team tracked over 50 unique prompt injection payloads from 31 companies using 'Summarize with AI' buttons to manipulate chatbot...

3 min read

Data Breach at EdTech Giant McGraw Hill Affects 13.5 Million Accounts

ShinyHunters has leaked over 100GB of data from 13.5 million McGraw Hill user accounts after exploiting a Salesforce misconfiguration. Names, addresses, phone numbers, and emails were exposed in the extortion campaign.

5 min read
Back to all News