Cloudflare BGP Routing Error: Cascading Outage Takes Down AWS US-East-1, X, and Thousands of Websites

Status: Resolved | Severity: Critical
Date: February 16, 2026
Affected: Cloudflare CDN, AWS US-East-1, X (Twitter), Cloudflare-proxied websites

Incident Summary

On February 16, 2026, between approximately 18:15 and 22:15 UTC, a BGP routing error originating from Cloudflare's Ashburn, Virginia data center cascaded across the internet, causing widespread service disruptions affecting AWS US-East-1, X (formerly Twitter), and thousands of Cloudflare-proxied websites.

Field         | Details
Duration      | ~4 hours
Root Cause    | BGP misconfiguration during routine update
Origin        | Cloudflare Ashburn, VA data center
Impact        | Global — millions of users affected
Services Down | Cloudflare CDN, AWS US-East-1, X, thousands of websites

Timeline

Time (UTC) | Event
18:15      | Routine configuration update deployed at Cloudflare Ashburn
18:20      | BGP routing error begins propagating to upstream providers
18:30      | AWS US-East-1 reports "intermittent connectivity degradation"
18:45      | X goes down for majority of users
18:55      | Cloudflare engineering team identifies BGP misconfiguration as root cause
19:10      | Rollback initiated — corrupted routes cached by upstream ISPs complicate recovery
20:30      | AWS US-East-1 begins recovering as clean routes propagate
21:00      | X services gradually restored
22:15      | Full recovery confirmed across all affected services

Root Cause

A routine configuration update at Cloudflare's Ashburn data center introduced incorrect BGP route announcements. These corrupted routes were cached by upstream transit providers and ISPs, creating a cascading effect:

  1. Incorrect routes propagated from Cloudflare to upstream providers
  2. Traffic was misrouted or black-holed for affected IP ranges
  3. AWS US-East-1 experienced connectivity degradation due to disrupted peering
  4. Cloudflare-proxied sites became unreachable as CDN edge nodes lost connectivity
  5. X's infrastructure was disrupted due to Cloudflare dependency
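
One way to observe this kind of black-holing from the outside is to check how many route collectors still see a given prefix. A minimal sketch, assuming the public RIPEstat Data API's routing-status endpoint and its visibility fields (verify against the current API docs); the prefixes are illustrative Cloudflare ranges:

```python
# Minimal sketch: check what fraction of RIPE RIS peers currently see a
# prefix in the global routing table. Endpoint and field names are taken
# from the public RIPEstat Data API ("routing-status"); verify before use.
import json
import urllib.request

RIPESTAT = "https://stat.ripe.net/data/routing-status/data.json"

def prefix_visibility(prefix: str) -> float:
    """Fraction of RIS peers that see `prefix` (1.0 = fully visible)."""
    with urllib.request.urlopen(f"{RIPESTAT}?resource={prefix}", timeout=10) as resp:
        vis = json.load(resp)["data"]["visibility"]["v4"]
    return vis["ris_peers_seeing"] / max(vis["total_ris_peers"], 1)

# During an event like this, visibility of affected ranges drops sharply as
# corrupted announcements displace or withdraw the legitimate routes.
for prefix in ("1.1.1.0/24", "104.16.0.0/13"):  # illustrative Cloudflare ranges
    print(prefix, f"{prefix_visibility(prefix):.0%} of RIS peers")
```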

Why Recovery Was Slow

Cloudflare identified the root cause within 40 minutes, but full recovery took approximately 4 hours because:

  • BGP route updates propagate slowly across the global routing table
  • Upstream providers cached the corrupted routes
  • Each ISP independently processes corrected route announcements
  • Global convergence requires time for all routing tables to stabilize
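
A toy model makes the last two points concrete: even after a corrected announcement goes out, the network is only "recovered" once the slowest AS has processed it. All numbers below are invented for illustration; real BGP convergence depends on topology, MRAI timers, and route-flap damping:

```python
# Toy model of post-rollback BGP convergence: every AS applies the corrected
# route independently, so global recovery waits on the slowest participant.
# All timings are invented for illustration; this is not a BGP simulator.
import random

random.seed(16)

def minutes_to_converge(num_ases: int = 1000, mean_delay: float = 12.0) -> float:
    """Minutes until the last AS installs the corrected route."""
    worst = 0.0
    for _ in range(num_ases):
        delay = random.expovariate(1.0 / mean_delay)
        # A small minority hold the stale route much longer, standing in
        # for cached corrupted routes and route-flap damping penalties.
        if random.random() < 0.05:
            delay += random.uniform(60, 150)
        worst = max(worst, delay)
    return worst

print(f"Most ASes recover quickly; global convergence takes ~{minutes_to_converge():.0f} min")
```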

Services Impacted

Cloudflare CDN

  • All Cloudflare-proxied websites experienced intermittent or total unavailability
  • DNS resolution through Cloudflare's 1.1.1.1 was intermittently affected
  • DDoS protection and WAF services disrupted
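
Client-side, the resolver failure mode is survivable with fallback. A minimal sketch using the third-party dnspython package (the resolver list and 2-second timeout are illustrative choices, not recommendations from the providers):

```python
# Minimal sketch of client-side resolver fallback using the third-party
# dnspython package: try 1.1.1.1 first, fall back if it fails or times out.
import dns.exception
import dns.resolver

RESOLVERS = ("1.1.1.1", "8.8.8.8", "9.9.9.9")  # Cloudflare, Google, Quad9

def resolve_with_fallback(name: str) -> list[str]:
    for server in RESOLVERS:
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [server]
        try:
            answer = resolver.resolve(name, "A", lifetime=2.0)
            return [rr.to_text() for rr in answer]
        except dns.exception.DNSException:
            continue  # resolver unreachable or timed out; try the next one
    raise RuntimeError(f"all resolvers failed for {name}")

print(resolve_with_fallback("example.com"))
```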

AWS US-East-1

  • Intermittent connectivity degradation across the region
  • Services dependent on internet-facing endpoints affected
  • Internal AWS service-to-service communication remained functional

X (formerly Twitter)

  • Complete outage for the majority of users globally
  • Both web and mobile applications affected
  • API endpoints returned errors

Downstream Impact

  • Thousands of SaaS applications using Cloudflare or AWS US-East-1
  • E-commerce platforms during peak weekend traffic
  • Content delivery for media and news organizations

Impact Assessment

Global Impact: Users worldwide experienced service disruptions; the outage affected millions of end users across consumer and enterprise services.

Business Impact:

  • E-commerce transactions disrupted during peak weekend hours
  • SaaS applications dependent on Cloudflare or AWS US-East-1 were unavailable
  • Real-time communication platforms experienced message delivery failures
  • Content delivery and streaming services degraded

Lessons Learned

For Organizations

  1. Multi-CDN Strategy — Don't rely on a single CDN provider for all web traffic
  2. DNS Failover — Configure automated DNS health checks to route around CDN failures
  3. BGP Monitoring — Use tools like BGPStream and RIPE RIS to detect routing anomalies (see the sketch after this list)
  4. Incident Communication — Have pre-drafted status pages for upstream provider failures
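
For lesson 3, a minimal monitoring sketch using pybgpstream, the Python bindings for BGPStream (requires the libBGPStream native library). The time window, collector, prefix, and expected origin below are illustrative; it replays RIPE RIS updates for a Cloudflare prefix and flags announcements from an unexpected origin AS, one anomaly signal among several worth watching:

```python
# Minimal sketch of BGP anomaly monitoring with the third-party pybgpstream
# package (Python bindings for BGPStream; needs libBGPStream installed).
# Window, collector, prefix, and origin ASN below are illustrative.
import pybgpstream

PREFIX = "1.1.1.0/24"
EXPECTED_ORIGIN = "13335"  # AS13335 is Cloudflare

stream = pybgpstream.BGPStream(
    from_time="2026-02-16 18:00:00",
    until_time="2026-02-16 23:00:00",
    collectors=["rrc00"],          # a RIPE RIS collector
    record_type="updates",
    filter=f"prefix more {PREFIX}",
)

for elem in stream:
    if elem.type == "A":           # route announcement
        origin = elem.fields["as-path"].split()[-1]
        if origin != EXPECTED_ORIGIN:
            print(f"{elem.time}: unexpected origin AS{origin} for "
                  f"{elem.fields['prefix']} via peer AS{elem.peer_asn}")
    elif elem.type == "W":         # route withdrawal
        print(f"{elem.time}: {elem.fields['prefix']} withdrawn by peer AS{elem.peer_asn}")
```

A production monitor would also watch for withdrawal spikes and AS-path churn, and would alert rather than print; this incident was a misconfiguration rather than a hijack, so origin checks alone would not have caught every symptom.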

For the Industry

This incident highlights the concentration risk of today's internet: when a handful of providers carry a disproportionate share of global traffic, a single BGP misconfiguration at one of them can cascade across the entire internet.


February 2026 Cloud Outage Trend

Date    | Provider         | Duration   | Root Cause
Feb 2-3 | Azure            | 10.3 hours | Configuration change restricting storage access
Feb 7-8 | Azure West US    | ~12 hours  | Power interruption
Feb 10  | AWS CloudFront   | ~4 hours   | DNS failure cascading across 8 services
Feb 16  | Cloudflare/AWS/X | ~4 hours   | BGP routing misconfiguration

Current Status

Resolved: All services fully recovered as of 22:15 UTC on February 16, 2026.

Cloudflare is conducting a full post-incident review and is expected to publish a detailed Root Cause Analysis. AWS has updated its Service Health Dashboard with incident details. Organizations that experienced SLA violations should file service credit requests through their respective provider portals.


References

  • Dataconomy — AWS Is Down: February 16 Outage Explained
  • Cloudflare — Incident Update: BGP Routing Issue
  • WebProNews — When the Cloud Goes Dark
  • Tom's Guide — X Was Down
  • DevOps.com — Three Key Lessons from the Recent AWS and Cloudflare Outages