Incident Summary
On February 16, 2026, between approximately 18:15 and 22:15 UTC, a BGP routing error originating from Cloudflare's Ashburn, Virginia data center cascaded across the internet, disrupting AWS US-East-1, X (formerly Twitter), and thousands of Cloudflare-proxied websites.
| Field | Details |
|---|---|
| Duration | ~4 hours |
| Root Cause | BGP misconfiguration during routine update |
| Origin | Cloudflare Ashburn, VA data center |
| Impact | Global — millions of users affected |
| Services Affected | Cloudflare CDN, AWS US-East-1, X, thousands of websites |
Timeline
| Time (UTC) | Event |
|---|---|
| 18:15 | Routine configuration update deployed at Cloudflare Ashburn |
| 18:20 | BGP routing error begins propagating to upstream providers |
| 18:30 | AWS US-East-1 reports "intermittent connectivity degradation" |
| 18:45 | X goes down for majority of users |
| 18:55 | Cloudflare engineering team identifies BGP misconfiguration as root cause |
| 19:10 | Rollback initiated — corrupted routes cached by upstream ISPs complicate recovery |
| 20:30 | AWS US-East-1 begins recovering as clean routes propagate |
| 21:00 | X services gradually restored |
| 22:15 | Full recovery confirmed across all affected services |
Root Cause
A routine configuration update at Cloudflare's Ashburn data center introduced incorrect BGP route announcements. These corrupted routes were cached by upstream transit providers and ISPs, creating a cascading effect (a validation sketch follows this list):
- Incorrect routes propagated from Cloudflare to upstream providers
- Traffic was misrouted or black-holed for affected IP ranges
- AWS US-East-1 experienced connectivity degradation due to disrupted peering
- Cloudflare-proxied sites became unreachable as CDN edge nodes lost connectivity
- X's infrastructure was disrupted due to Cloudflare dependency
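To make the failure mode concrete, here is a minimal sketch of the kind of pre-deployment sanity check that can reject an announcement for a prefix a site is not authorized to originate. Everything here is an illustrative assumption rather than Cloudflare's actual tooling: the prefixes are documentation ranges, and AS13335 (Cloudflare's ASN) is used only as an example origin.

```python
# Minimal sketch (illustrative, not Cloudflare's tooling): sanity-check a
# BGP announcement against an allow-list of prefixes this site may originate.
import ipaddress

# Hypothetical allow-list. The prefixes are documentation ranges;
# AS13335 is Cloudflare's ASN, used here purely as an example origin.
EXPECTED_ORIGINS = {
    ipaddress.ip_network("198.51.100.0/24"): 13335,
    ipaddress.ip_network("203.0.113.0/24"): 13335,
}

def validate_announcement(prefix: str, origin_asn: int) -> bool:
    """Accept the announcement only if (prefix, origin) matches the allow-list."""
    expected = EXPECTED_ORIGINS.get(ipaddress.ip_network(prefix))
    return expected == origin_asn

# A misconfigured update announcing a prefix outside the allow-list is
# rejected before it can leave the data center.
assert validate_announcement("198.51.100.0/24", 13335)
assert not validate_announcement("192.0.2.0/24", 13335)  # not authorized
```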
Why Recovery Was Slow
Cloudflare identified the root cause within 40 minutes, but full recovery took approximately 4 hours because (see the toy model after this list):
- BGP route updates propagate slowly across the global routing table
- Upstream providers cached the corrupted routes
- Each ISP independently processes corrected route announcements
- Global convergence requires time for all routing tables to stabilize
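The last two points are the long pole. A toy model, with purely illustrative delay numbers, shows why global recovery is gated on the slowest path through the routing mesh rather than on the rollback itself:

```python
# Toy model of post-rollback convergence. Each ISP sits some number of AS
# hops from the fix, every hop independently takes time to process and
# re-advertise the corrected route, and global recovery waits for the
# slowest path. All numbers are illustrative, not measured.
import random

random.seed(42)
HOPS_TO_ISP = {f"isp-{i}": random.randint(1, 5) for i in range(200)}
PER_HOP_DELAY_MINUTES = (1, 20)  # processing + re-advertisement time per hop

def convergence_minutes(hops: int) -> float:
    """Minutes until this ISP installs the corrected route."""
    return sum(random.uniform(*PER_HOP_DELAY_MINUTES) for _ in range(hops))

times = sorted(convergence_minutes(h) for h in HOPS_TO_ISP.values())
print(f"median ISP converges after ~{times[len(times) // 2]:.0f} minutes")
print(f"slowest ISP converges after ~{times[-1]:.0f} minutes")  # gates recovery
```

Because recovery waits on the maximum rather than the median, a fix that most networks pick up quickly can still leave a long tail of unreachable paths.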
Services Impacted
Cloudflare CDN
- All Cloudflare-proxied websites experienced intermittent or total unavailability
- DNS resolution through Cloudflare's 1.1.1.1 was intermittently affected
- DDoS protection and WAF services disrupted
AWS US-East-1
- Intermittent connectivity degradation across the region
- Services dependent on internet-facing endpoints affected
- Internal AWS service-to-service communication remained functional
X (formerly Twitter)
- Complete outage for the majority of users globally
- Both web and mobile applications affected
- API endpoints returned errors
Downstream Impact
- Thousands of SaaS applications using Cloudflare or AWS US-East-1
- E-commerce platforms during peak traffic hours
- Content delivery for media and news organizations
Impact Assessment
Global Impact: Users worldwide experienced service disruptions. The outage is estimated to have affected millions of end users across consumer and enterprise services.
Business Impact:
- E-commerce transactions disrupted during peak hours
- SaaS applications dependent on Cloudflare or AWS US-East-1 were unavailable
- Real-time communication platforms experienced message delivery failures
- Content delivery and streaming services degraded
Lessons Learned
For Organizations
- Multi-CDN Strategy — Don't rely on a single CDN provider for all web traffic
- DNS Failover — Configure automated DNS health checks to route around CDN failures (a minimal sketch follows this list)
- BGP Monitoring — Use tools like BGPStream and RIPE RIS to detect routing anomalies (see the second sketch below)
- Incident Communication — Have pre-drafted status pages for upstream provider failures
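For the DNS failover item, a minimal sketch assuming a hypothetical health endpoint and a placeholder `update_dns_record()` helper standing in for a real DNS provider API:

```python
# Hedged sketch of automated DNS failover: probe the primary (CDN-proxied)
# endpoint and repoint DNS to a secondary origin when it fails. The endpoints
# and update_dns_record() are hypothetical placeholders.
import urllib.request

PRIMARY = "https://www.example.com/healthz"    # served via the CDN (hypothetical)
FALLBACK_TARGET = "origin-direct.example.com"  # bypasses the CDN (hypothetical)

def healthy(url: str, timeout: float = 3.0) -> bool:
    """Consider the endpoint healthy only on a fast HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # connection failure, timeout, or HTTP error
        return False

def update_dns_record(name: str, target: str) -> None:
    """Placeholder: call your DNS provider's API here."""
    print(f"would repoint {name} -> {target}")

if not healthy(PRIMARY):
    update_dns_record("www.example.com", FALLBACK_TARGET)
```

For the failover to take effect quickly, the record being repointed should carry a low TTL so resolvers pick up the change promptly.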
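For the BGP monitoring item, a sketch built on pybgpstream, the Python bindings for the BGPStream framework mentioned above, which can replay updates from RIPE RIS and RouteViews collectors. The monitored prefix and expected origin ASN are placeholders:

```python
# Hedged sketch using pybgpstream (requires `pip install pybgpstream`).
# The monitored prefix and expected origin ASN are placeholders; substitute
# your own address space.
import pybgpstream

MONITORED_PREFIX = "198.51.100.0/24"  # placeholder prefix
EXPECTED_ORIGIN = "64500"             # placeholder origin ASN (documentation range)

stream = pybgpstream.BGPStream(
    from_time="2026-02-16 18:00:00 UTC",
    until_time="2026-02-16 19:00:00 UTC",
    collectors=["rrc00", "route-views2"],  # RIPE RIS and RouteViews collectors
    record_type="updates",
    filter=f"prefix more {MONITORED_PREFIX}",
)

for elem in stream:
    if elem.type == "A":  # announcement: check who claims to originate the prefix
        origin = elem.fields["as-path"].split()[-1]
        if origin != EXPECTED_ORIGIN:
            print(f"{elem.time}: {elem.fields['prefix']} originated by AS{origin}, "
                  f"expected AS{EXPECTED_ORIGIN} (seen via peer AS{elem.peer_asn})")
    elif elem.type == "W":  # sudden mass withdrawals are also a red flag
        print(f"{elem.time}: withdrawal of {elem.fields['prefix']} "
              f"seen via peer AS{elem.peer_asn}")
```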
For the Industry
This incident highlights the concentration risk of an internet in which a handful of providers carry a disproportionate share of global traffic: a single BGP misconfiguration at one provider can cascade across the entire internet.
February 2026 Cloud Outage Trend
| Date | Provider | Duration | Root Cause |
|---|---|---|---|
| Feb 2-3 | Azure | 10.3 hours | Configuration change restricting storage access |
| Feb 7-8 | Azure West US | ~12 hours | Power interruption |
| Feb 10 | AWS CloudFront | ~4 hours | DNS failure cascading across 8 services |
| Feb 16 | Cloudflare/AWS/X | ~4 hours | BGP routing misconfiguration |
Current Status
Resolved: All services fully recovered as of 22:15 UTC on February 16, 2026.
Cloudflare is conducting a full post-incident review and is expected to publish a detailed Root Cause Analysis. AWS has updated its Service Health Dashboard with incident details. Organizations that experienced SLA violations should file service credit requests through their respective provider portals.