When the Cloud Goes Dark
At approximately 1:15 PM ET on February 16, 2026, a routine configuration update at Cloudflare's Ashburn, Virginia data center introduced a Border Gateway Protocol (BGP) routing error that cascaded across the internet. The incident knocked out AWS US-East-1, X (formerly Twitter), and thousands of websites for several hours — one of the most significant internet outages of 2026.
Impact Summary
| Service | Impact | Duration |
|---|---|---|
| Cloudflare | Origin of BGP misconfiguration | ~4 hours |
| AWS US-East-1 | Intermittent connectivity degradation | ~3 hours |
| X (Twitter) | Complete outage for most users | ~2.5 hours |
| Thousands of websites | Unreachable via Cloudflare CDN | ~4 hours |
Cascading Failure
The incident demonstrates the fragility of internet infrastructure when a single provider's configuration error propagates across the global routing table:
- 1:15 PM ET — BGP misconfiguration deployed at Cloudflare Ashburn
- 1:20 PM ET — Corrupted routes begin propagating to upstream providers
- 1:30 PM ET — AWS US-East-1 reports "intermittent connectivity degradation"
- 1:45 PM ET — X goes down for the majority of users
- 1:55 PM ET — Cloudflare engineering identifies the root cause
- 2:10 PM ET — Rollback initiated, but cached corrupted routes complicate recovery
- 3:30 PM ET — AWS begins recovering as clean routes propagate
- 4:00 PM ET — X services gradually restored
- 5:15 PM ET — Full recovery confirmed across all affected services
Root Cause: BGP Routing Error
BGP (Border Gateway Protocol) is the routing protocol that holds the internet together, directing traffic between autonomous systems (ISPs, CDNs, cloud providers). A misconfiguration in BGP can cause:
- Route leaks — Announcing routes you don't own
- Route hijacking — Redirecting traffic through unintended paths
- Black holes — Making destinations unreachable
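The distinction between a valid announcement and a leak or hijack can be sketched as a route origin check, loosely modeled on RPKI route origin authorizations: an announcement is suspect when its prefix falls inside authorized space but claims the wrong origin AS. All prefixes and AS numbers below are illustrative (RFC 5737 documentation space and private ASNs), not Cloudflare's actual configuration:

```python
import ipaddress

# Hypothetical authorization table, loosely modeled on RPKI ROAs:
# each entry maps an IP prefix to the AS number allowed to originate it.
AUTHORIZED_ORIGINS = {
    ipaddress.ip_network("198.51.100.0/24"): 64500,  # illustrative values
    ipaddress.ip_network("203.0.113.0/24"): 64501,
}

def classify_announcement(prefix: str, origin_as: int) -> str:
    """Classify a BGP announcement against the authorization table."""
    net = ipaddress.ip_network(prefix)
    for authorized_net, authorized_as in AUTHORIZED_ORIGINS.items():
        # The announced prefix must fall inside an authorized prefix.
        if net.subnet_of(authorized_net):
            return "valid" if origin_as == authorized_as else "leak/hijack"
    return "unknown"  # no covering authorization found
```

Real validators (and the routers that enforce them) work from signed ROA data rather than a hard-coded table, but the core test is the same: covering prefix plus matching origin AS.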
In this case, Cloudflare's configuration update announced incorrect routes that were cached by upstream providers. Even after Cloudflare rolled back the change, the corrupted routing tables had already been cached, extending the outage far beyond the initial misconfiguration window.
Why Recovery Was Slow
Cloudflare's engineering team identified the root cause within 40 minutes — fast by any standard. However, rollback took hours because:
- BGP propagation is slow — Route updates take time to propagate globally
- Caching by upstream providers — ISPs and transit providers cache routes for efficiency
- No global "undo" button — Each ISP must independently process the corrected routes
- Convergence time — The global routing table needs time to stabilize
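The convergence problem can be illustrated with a toy simulation: even when each provider applies the corrected routes quickly on average, the network as a whole has not recovered until the slowest one finishes. The model and its numbers are illustrative assumptions, not measurements from the incident:

```python
import random

def simulate_convergence(num_providers: int, mean_delay_min: float,
                         seed: int = 42) -> float:
    """Toy model of global route convergence: each provider independently
    processes the corrected routes after a random delay (exponentially
    distributed, in minutes), and the network 'converges' only once the
    slowest provider has updated. Returns that worst-case delay."""
    rng = random.Random(seed)
    delays = [rng.expovariate(1.0 / mean_delay_min) for _ in range(num_providers)]
    return max(delays)
```

Because recovery time is the maximum of many independent delays, it grows with the number of providers involved, which is why a global rollback is slow even when each individual network updates quickly.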
Internet Concentration Risk
This incident reignites the debate about internet concentration risk. A significant portion of the internet's traffic flows through a small number of providers:
| Provider | Estimated Share |
|---|---|
| Cloudflare | ~20% of all websites |
| AWS | ~32% of cloud infrastructure |
| Google Cloud | ~11% of cloud infrastructure |
| Microsoft Azure | ~22% of cloud infrastructure |
When one of these providers experiences an issue, the blast radius is enormous. The February 16 outage affected millions of users and caused measurable economic losses.
Lessons for Organizations
Immediate
- Multi-CDN strategy — Don't rely on a single CDN provider for all traffic
- DNS failover — Configure DNS health checks that can route around CDN failures
- Status page monitoring — Subscribe to status alerts from all infrastructure providers
- Incident communication plans — Have pre-drafted status updates for provider outages
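The DNS failover item above amounts to a simple policy: return the first endpoint in priority order that passes a health check, falling through to the next when the primary CDN is dark. Here is a minimal sketch with hypothetical hostnames; a real deployment would use the health-check and failover-routing features of a managed DNS provider rather than application code:

```python
from typing import Callable, Optional

# Hypothetical endpoints for the same site, in failover priority order:
# primary CDN, secondary CDN, then direct-to-origin.
CDN_ENDPOINTS = ["cdn-a.example.net", "cdn-b.example.net", "origin.example.net"]

def pick_endpoint(endpoints: list,
                  is_healthy: Callable[[str], bool]) -> Optional[str]:
    """Return the first endpoint that passes its health check, mimicking
    a DNS failover policy. Returns None if every endpoint is down."""
    for host in endpoints:
        if is_healthy(host):
            return host
    return None  # total failure: nothing answers health checks
```

During a CDN-wide outage like this one, the primary check fails and traffic routes to the secondary CDN or origin automatically, without waiting for a human to update DNS records.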
Strategic
- Multi-cloud architecture — Distribute workloads across cloud providers
- Edge redundancy — Use multiple CDN providers with traffic splitting
- BGP monitoring — Tools like BGPStream and RIPE RIS can alert on anomalous route changes
- Chaos engineering — Test your systems' resilience to provider outages
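The edge-redundancy item can be sketched as weighted random selection across CDN providers, with weights renormalizing automatically when a provider is marked down. Provider names and weights below are hypothetical:

```python
import random

def split_traffic(weights: dict, down: set, rng: random.Random) -> str:
    """Pick a CDN for one request according to static traffic weights,
    excluding providers currently marked down. The remaining weights
    renormalize implicitly, so surviving providers absorb the traffic."""
    live = {cdn: w for cdn, w in weights.items() if cdn not in down}
    if not live:
        raise RuntimeError("all CDN providers are down")
    return rng.choices(list(live), weights=list(live.values()), k=1)[0]
```

Pairing a splitter like this with per-provider health checks gives graceful degradation: losing one CDN shifts its share to the others instead of taking the site offline.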
February 2026: An Unusually Turbulent Month for Cloud
This is the fourth major cloud outage in February 2026 alone:
| Date | Provider | Impact |
|---|---|---|
| Feb 2-3 | Azure VM/VMSS | Configuration change restricting storage access |
| Feb 7-8 | Azure West US | Power interruption affecting multiple services |
| Feb 10 | AWS CloudFront | DNS failure cascading across 8 AWS services |
| Feb 16 | Cloudflare/AWS/X | BGP routing error — global cascading outage |
Organizations should review their cloud resilience strategies given this unprecedented month of outages.