COSMICBYTEZLABS
AI Flaws in Amazon Bedrock, LangSmith, and SGLang Enable Data Exfiltration and RCE
NEWS


Dylan H. · News Desk · March 17, 2026 · 8 min read

Critical Flaws Expose AI Execution Platforms to Data Exfiltration and Remote Code Execution

Security researchers have disclosed a trio of high-severity vulnerabilities affecting Amazon Bedrock AgentCore, LangSmith, and SGLang — three widely deployed platforms that power AI agent execution and LLM orchestration. Taken together, the disclosures highlight an emerging class of security risk: AI infrastructure that fails to enforce the isolation guarantees it advertises.

The most notable finding, published by BeyondTrust on March 17, 2026, reveals that Amazon Bedrock AgentCore Code Interpreter's "no network access" sandbox mode permits outbound DNS queries — a gap that researchers weaponized to establish covert command-and-control channels and exfiltrate sensitive data without triggering standard network security controls. Separately, Miggo Security disclosed CVE-2026-25750 in LangSmith (CVSS 8.5), enabling token theft and account takeover, while an unnamed researcher uncovered CVE-2026-3989 in SGLang (CVSS 7.8), allowing unauthenticated remote code execution via insecure deserialization.


Incident Overview

  • Platforms Affected: Amazon Bedrock AgentCore, LangSmith, SGLang
  • Vulnerability Types: Sandbox bypass, token theft, insecure deserialization
  • CVEs: CVE-2026-25750 (LangSmith), CVE-2026-3989 (SGLang)
  • CVSS Scores: 8.5 (LangSmith), 7.8 (SGLang), 7.5 (Bedrock, no CVE assigned)
  • Researchers: BeyondTrust, Miggo Security
  • Published: March 17, 2026
  • Patch Status: Bedrock: mitigated (use VPC mode); LangSmith: patched; SGLang: patch required

Amazon Bedrock AgentCore: DNS Escape from an "Isolated" Sandbox

The Advertising vs. Reality Gap

Amazon Bedrock AgentCore Code Interpreter, launched in August 2025, is a managed service that enables AI agents to execute code in isolated sandbox environments. Amazon markets the service's Sandbox mode as providing code execution without network access — a critical security property for agents that process sensitive data.

BeyondTrust researchers found that this guarantee does not hold. Despite the "no network access" configuration, the sandbox permits outbound DNS queries. This is not a theoretical risk: the researchers demonstrated a complete DNS data exfiltration attack chain built on this oversight.

How the DNS Escape Works

1. Attacker-controlled AI agent code runs inside Bedrock AgentCore Sandbox
2. Agent crafts sensitive data into DNS subdomain lookups
   (e.g., base64-encoded-data.attacker-domain.com)
3. DNS queries egress the sandbox — DNS is permitted despite "no network" policy
4. Attacker's DNS server receives the queries and reconstructs exfiltrated data
5. Agent establishes interactive shell via DNS tunneling protocol
6. Full covert C2 channel operational — bypassing all network isolation controls

The attack carries a CVSS score of 7.5 and does not require any prior compromise of the AWS environment — only the ability to submit code for execution within an AgentCore instance.

Real-World Threat Scenario

  • Initial Access: Prompt injection or a malicious tool call triggers the agent to execute attacker code
  • Sandbox Execution: Code runs inside the AgentCore Sandbox; the operator believes it is isolated
  • Data Discovery: Agent reads environment variables, secrets, or internal data available to the execution context
  • Exfiltration: Sensitive data is encoded and transmitted via DNS queries to an attacker-controlled nameserver
  • C2 Establishment: DNS tunnel provides an interactive shell for further exploitation

LangSmith: CVE-2026-25750 — Token Theft and Account Takeover

Miggo Security disclosed CVE-2026-25750 (CVSS 8.5) in LangSmith, the observability and monitoring platform for LLM applications built on LangChain.

The vulnerability exposes authenticated users to token theft and account takeover. While full technical details have not been published pending broader patch adoption, the flaw allows an attacker who can influence a victim's LangSmith session (e.g., via a crafted link, shared workspace, or prompt injection) to steal authentication tokens and assume the victim's identity — gaining access to all LLM traces, datasets, evaluation results, and API keys stored in the victim's workspace.

  • CVE: CVE-2026-25750
  • CVSS: 8.5 (High)
  • Component: LangSmith session handling
  • Impact: Token theft, full account takeover
  • Researcher: Miggo Security
  • Patch Status: Patched; update LangSmith to the latest version

SGLang: CVE-2026-3989 — Unauthenticated RCE via Insecure Deserialization

CVE-2026-3989 (CVSS 7.8) affects SGLang, the high-throughput LLM serving framework used to deploy large language models at scale. The vulnerability exists in replay_request_dump.py, which deserializes request dumps with Python's native object serialization (pickle) without any input validation or sanitization.

The Deserialization Risk

Python's native serialization format (pickle) is well-documented in security research as a format that executes arbitrary code during deserialization — making any unsanitized deserialization of untrusted data functionally equivalent to remote code execution. An attacker who can supply a malicious serialized payload to any SGLang deployment that exposes its multimodal generation or disaggregation features to the network achieves unauthenticated RCE with the privileges of the SGLang server process.

The root cause is the use of CWE-502: Deserialization of Untrusted Data — one of the most reliably exploitable vulnerability classes in Python ML infrastructure.
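The danger of this pattern is easy to demonstrate. The sketch below is a generic, self-contained proof of concept of CWE-502 in Python (the class name and payload are illustrative, not SGLang's code): merely loading a crafted pickle blob executes attacker-chosen code, before any application logic inspects the object.

```python
import os
import pickle

class MaliciousDump:
    """Stand-in for an attacker-supplied serialized request dump."""
    def __reduce__(self):
        # pickle calls __reduce__ to learn how to rebuild the object;
        # an attacker can point it at ANY callable, run at load time
        return (exec, ("import os; os.environ['PWNED'] = '1'",))

blob = pickle.dumps(MaliciousDump())

# The vulnerable pattern: deserializing untrusted bytes.
# Loading the blob is enough to run the embedded code.
pickle.loads(blob)

assert os.environ["PWNED"] == "1"  # the payload executed during load
```

In a real attack the callable would spawn a shell or pull a second-stage payload, giving the attacker the privileges of the SGLang server process.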

  • CVE: CVE-2026-3989
  • CVSS: 7.8 (High)
  • CWE: CWE-502: Deserialization of Untrusted Data
  • Vector: Network; any exposed multimodal or disaggregation endpoint
  • Privileges Required: None
  • Component: replay_request_dump.py deserialization logic
  • Impact: Unauthenticated RCE, full server compromise

Impact Assessment

  • AI Agent Data Exposure: AgentCore sandbox bypass exposes any data accessible to executing agents, including secrets and environment variables
  • LLM Trace Leakage: LangSmith account takeover exposes all LLM traces, which may contain PII, proprietary prompts, or sensitive outputs
  • Inference Infrastructure RCE: SGLang RCE gives attackers code execution on GPU-backed inference servers, which are expensive and often sensitive infrastructure
  • Regulatory Risk: Exfiltration of AI-processed data may trigger GDPR, CCPA, or HIPAA breach notification requirements
  • Supply Chain Exposure: Compromised LangSmith credentials can expose downstream systems integrated via API keys stored in the workspace

Recommendations

For Amazon Bedrock AgentCore Users

  • Migrate from Sandbox mode to VPC mode for all AgentCore Code Interpreter instances handling sensitive data
  • VPC mode provides proper network isolation with configurable egress controls
  • Audit all existing AgentCore deployments to identify instances running in Sandbox mode
  • Review agent code for prompt injection risks that could trigger unauthorized code execution

For LangSmith Users

  • Update LangSmith to the latest patched version immediately
  • Rotate all API keys stored in LangSmith workspaces as a precaution
  • Review workspace access logs for unexpected token usage or login events
  • Enable multi-factor authentication on all LangSmith accounts

For SGLang Deployers

  • Audit all SGLang deployments for network exposure of multimodal generation or disaggregation endpoints
  • Apply firewall rules to restrict access to SGLang ports to trusted sources only
  • Monitor the SGLang GitHub repository for a patched release and apply immediately when available
  • Audit all custom SGLang integrations for unsafe deserialization patterns and replace with safer alternatives (e.g., JSON schema validation)
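As a sketch of the last recommendation, the loader below replaces pickle with JSON plus explicit field validation. The field names and types are hypothetical, chosen only to illustrate the pattern; a real replacement would mirror the actual request-dump schema:

```python
import json

# Hypothetical request-dump schema; field names are illustrative,
# not SGLang's actual format.
ALLOWED_FIELDS = {"request_id": str, "prompt": str, "max_tokens": int}

def load_request_dump(raw: bytes) -> dict:
    """Parse a request dump as data-only JSON; never executes code."""
    obj = json.loads(raw)
    if not isinstance(obj, dict):
        raise ValueError("expected a JSON object")
    validated = {}
    for field, typ in ALLOWED_FIELDS.items():
        if field not in obj or not isinstance(obj[field], typ):
            raise ValueError(f"missing or mistyped field: {field}")
        validated[field] = obj[field]
    return validated

dump = json.dumps(
    {"request_id": "r1", "prompt": "hi", "max_tokens": 16}
).encode()
```

Unlike pickle, json.loads can only produce plain data structures, so a malicious payload fails validation instead of gaining code execution.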

Key Takeaways

  1. AI platform "isolation" cannot be taken at face value — DNS exfiltration bypassing Bedrock's "no network" mode demonstrates that sandbox guarantees require rigorous independent verification.
  2. DNS is a frequently overlooked egress channel — many security controls block TCP/UDP while leaving DNS unrestricted; AI sandboxes are not exempt from this oversight.
  3. LLM observability platforms are high-value targets — LangSmith workspaces aggregate traces, prompts, and API keys, making them attractive for account takeover attacks.
  4. Insecure deserialization remains a persistent threat in Python ML infrastructure — SGLang's RCE is a textbook example of a known-dangerous pattern; all ML serving frameworks should audit deserialization of any externally influenced input.
  5. The attack surface of AI infrastructure is rapidly expanding — as more organizations deploy AI agents with broad data access, security flaws in execution environments translate directly into data breach risk.
  6. Defense in depth is essential — VPC mode, network segmentation, regular patching, and monitoring for DNS anomalies collectively reduce the exposure created by flaws like these.
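Monitoring for DNS anomalies (takeaway 6) often starts with simple heuristics. The sketch below flags query names whose leftmost label is unusually long or high-entropy, two common signatures of DNS tunneling; the thresholds are illustrative starting points, not tuned values:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character; encoded data scores high."""
    counts = Counter(s)
    total = len(s)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def looks_like_tunnel(qname: str,
                      entropy_threshold: float = 3.5,
                      label_len: int = 30) -> bool:
    # Exfiltration tools pack data into the leftmost label, producing
    # long, random-looking subdomains unlike normal hostnames
    label = qname.split(".")[0]
    return len(label) >= label_len or shannon_entropy(label) > entropy_threshold
```

In practice such heuristics feed a DNS-log pipeline and are combined with per-domain query volume, since tunnels also generate many unique subdomains under one parent domain.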

Sources

  • AI Flaws in Amazon Bedrock, LangSmith, and SGLang Enable Data Exfiltration and RCE — The Hacker News
  • Bypassing AWS Bedrock AgentCore Sandbox via DNS — BeyondTrust
  • AWS Bedrock tool vulnerability allows data exfiltration via DNS leaks — SC Media
  • AWS Bedrock's 'isolated' sandbox comes with a DNS escape hatch — CSO Online
  • AWS Bedrock AgentCore Sandbox Bypass — Cryptika Cybersecurity
Tags: AWS, Amazon Bedrock, LangSmith, SGLang, AI Security, Cloud Security, RCE, Data Exfiltration
