NEWS

Cloudflare BGP Routing Error Cascades Across AWS, X, and


Dylan H.

News Desk

February 16, 2026
4 min read

When the Cloud Goes Dark

At approximately 1:15 PM ET on February 16, 2026, a routine configuration update at Cloudflare's Ashburn, Virginia data center introduced a Border Gateway Protocol (BGP) routing error that cascaded across the internet. The incident knocked out AWS US-East-1, X (formerly Twitter), and thousands of websites for several hours — one of the most significant internet outages of 2026.


Impact Summary

Service                 Impact                                   Duration
Cloudflare              Origin of BGP misconfiguration           ~4 hours
AWS US-East-1           Intermittent connectivity degradation    ~3 hours
X (Twitter)             Complete outage for most users           ~2.5 hours
Thousands of websites   Unreachable via Cloudflare CDN           ~4 hours

Cascading Failure

The incident demonstrates the fragility of internet infrastructure when a single provider's configuration error propagates across the global routing table:

1:15 PM ET   — BGP misconfiguration deployed at Cloudflare Ashburn
1:20 PM ET   — Corrupted routes begin propagating to upstream providers
1:30 PM ET   — AWS US-East-1 reports "intermittent connectivity degradation"
1:45 PM ET   — X goes down for majority of users
1:55 PM ET   — Cloudflare engineering identifies root cause
2:10 PM ET   — Rollback initiated, but cached corrupted routes complicate recovery
3:30 PM ET   — AWS begins recovering as clean routes propagate
4:00 PM ET   — X services gradually restored
5:15 PM ET   — Full recovery confirmed across all affected services

Root Cause: BGP Routing Error

BGP (Border Gateway Protocol) is the routing protocol that holds the internet together, directing traffic between autonomous systems (ISPs, CDNs, cloud providers). A misconfiguration in BGP can cause:

  • Route leaks — Propagating routes beyond their intended scope, in violation of routing policy
  • Route hijacking — Announcing prefixes you don't own, redirecting traffic through unintended paths
  • Black holes — Attracting traffic for a destination and then dropping it, making the destination unreachable

In this case, Cloudflare's configuration update announced incorrect routes, and upstream providers cached them. Even after Cloudflare rolled back the change, those stale routes lingered in routing tables across the internet, extending the outage far beyond the initial misconfiguration window.
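
A practical defense against this failure mode is to continuously compare the origin AS that the global routing table reports for your prefixes against the AS you expect. The following Python sketch queries RIPE's public RIPEstat API; the endpoint name and response shape follow RIPEstat's documented format, and the example prefix/ASN pairing (a Cloudflare prefix normally originated by AS13335) is included only for illustration.

import requests

# RIPEstat's public "prefix-overview" endpoint reports which autonomous
# systems are currently observed originating a given prefix.
RIPESTAT_URL = "https://stat.ripe.net/data/prefix-overview/data.json"

def check_origin(prefix: str, expected_asn: int) -> bool:
    """Return True if the prefix's observed origin matches the expected AS."""
    resp = requests.get(RIPESTAT_URL, params={"resource": prefix}, timeout=10)
    resp.raise_for_status()
    observed = {entry["asn"] for entry in resp.json()["data"]["asns"]}
    if expected_asn not in observed:
        print(f"ALERT: {prefix} originated by {observed}, expected AS{expected_asn}")
        return False
    return True

# Example: 1.1.1.0/24 is a Cloudflare prefix normally originated by AS13335.
check_origin("1.1.1.0/24", 13335)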


Why Recovery Was Slow

Cloudflare's engineering team identified the root cause within 40 minutes — fast by any standard. However, rollback took hours because:

  1. BGP propagation is slow — Route updates take time to propagate globally
  2. Caching by upstream providers — ISPs and transit providers cache routes for efficiency
  3. No global "undo" button — Each ISP must independently process the corrected routes
  4. Convergence time — The global routing table needs time to stabilize

Internet Concentration Risk

This incident reignites the debate about internet concentration risk. A significant portion of the internet's traffic flows through a small number of providers:

Provider          Estimated Market Share
Cloudflare        ~20% of all websites
AWS               ~32% of cloud infrastructure
Google Cloud      ~11% of cloud infrastructure
Microsoft Azure   ~22% of cloud infrastructure

When one of these providers experiences an issue, the blast radius is enormous. The February 16 outage affected millions of users and caused measurable economic losses.


Lessons for Organizations

Immediate

  1. Multi-CDN strategy — Don't rely on a single CDN provider for all traffic
  2. DNS failover — Configure DNS health checks that can route around CDN failures (see the sketch after this list)
  3. Status page monitoring — Subscribe to status alerts from all infrastructure providers
  4. Incident communication plans — Have pre-drafted status updates for provider outages
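
For the DNS failover item above, here is a minimal sketch using boto3 and Route 53 failover routing: a health check probes the primary CDN endpoint, and the SECONDARY record takes over when the check fails. The hostnames, health-check path, and hosted zone ID are placeholders, not values from the incident.

import uuid
import boto3  # assumes AWS credentials are configured in the environment

route53 = boto3.client("route53")

# 1. Health check that probes the primary CDN endpoint over HTTPS.
hc = route53.create_health_check(
    CallerReference=str(uuid.uuid4()),
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "www.example.com",  # hypothetical host
        "ResourcePath": "/healthz",                     # hypothetical path
        "Port": 443,
        "RequestInterval": 30,   # seconds between probes
        "FailureThreshold": 3,   # failed probes before failover triggers
    },
)

# 2. PRIMARY record points at CDN A; SECONDARY takes over if the check fails.
for identifier, role, target, hc_id in [
    ("cdn-a", "PRIMARY", "site.cdn-a.example.net", hc["HealthCheck"]["Id"]),
    ("cdn-b", "SECONDARY", "site.cdn-b.example.net", None),
]:
    record = {
        "Name": "www.example.com.",
        "Type": "CNAME",
        "TTL": 60,
        "SetIdentifier": identifier,
        "Failover": role,
        "ResourceRecords": [{"Value": target}],
    }
    if hc_id:
        record["HealthCheckId"] = hc_id
    route53.change_resource_record_sets(
        HostedZoneId="Z123EXAMPLE",  # hypothetical hosted zone ID
        ChangeBatch={"Changes": [{"Action": "UPSERT", "ResourceRecordSet": record}]},
    )

Keeping the TTL low (60 seconds here) bounds how long resolvers keep serving the failed endpoint after failover kicks in.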

Strategic

  1. Multi-cloud architecture — Distribute workloads across cloud providers
  2. Edge redundancy — Use multiple CDN providers with traffic splitting
  3. BGP monitoring — Tools like BGPStream and RIPE RIS can alert on anomalous route changes (see the sketch after this list)
  4. Chaos engineering — Test your systems' resilience to provider outages
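
The BGP monitoring item can be prototyped with pybgpstream, the Python bindings for the BGPStream toolkit mentioned above. The sketch below is illustrative only: the time window, collector names, and the Cloudflare prefix range and origin AS (104.16.0.0/13, AS13335) are assumptions chosen for the example.

import pybgpstream

# Replay BGP updates from two public route collectors and flag any
# announcement of a Cloudflare more-specific whose origin AS is unexpected.
stream = pybgpstream.BGPStream(
    from_time="2026-02-16 18:00:00",
    until_time="2026-02-16 22:00:00",
    collectors=["rrc00", "route-views2"],
    record_type="updates",
    filter="prefix more 104.16.0.0/13",
)

EXPECTED_ORIGIN = "13335"  # Cloudflare

for elem in stream:
    if elem.type != "A":  # announcements only, skip withdrawals
        continue
    as_path = elem.fields.get("as-path", "").split()
    if as_path and as_path[-1] != EXPECTED_ORIGIN:
        print(f"{elem.time}: {elem.fields['prefix']} via unexpected "
              f"origin AS{as_path[-1]} (path: {' '.join(as_path)})")

The same loop pointed at a live stream instead of a historical window is the basis of a real-time alerting pipeline.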

February 2026: An Unusually Turbulent Month for Cloud

This is the fourth major cloud outage in February 2026 alone:

Date      Provider           Impact
Feb 2-3   Azure VM/VMSS      Configuration change restricting storage access
Feb 7-8   Azure West US      Power interruption affecting multiple services
Feb 10    AWS CloudFront     DNS failure cascading across 8 AWS services
Feb 16    Cloudflare/AWS/X   BGP routing error — global cascading outage

Organizations should review their cloud resilience strategies in light of this unusually turbulent month of outages.


Sources

  • Dataconomy — AWS Is Down: February 16 Outage Explained
  • WebProNews — When the Cloud Goes Dark: Inside the Cascading Infrastructure Failure
  • Tom's Guide — X Was Down
  • DevOps.com — Three Key Lessons from the Recent AWS and Cloudflare Outages
Tags: Cloudflare, AWS, BGP, Outage, Internet Infrastructure, X
