A major 2026 application security report from OX Security has surfaced a striking disconnect between raw alert volume and actual risk growth: while the number of security findings grew by 52% year-over-year, the proportion classified as critical risk roughly quadrupled (~4x). The findings, drawn from an analysis of 216 million security findings across 250 organizations over a 90-day period, point to a structural "velocity gap" between how fast development teams ship software and how fast security teams can respond.
Key Findings at a Glance
| Metric | Value |
|---|---|
| Security findings analyzed | 216 million |
| Organizations studied | 250 |
| Analysis period | 90 days |
| Year-over-year alert volume growth | +52% |
| Year-over-year critical risk growth | ~4x (roughly quadrupled) |
| Primary driver | AI-assisted development velocity |
The Velocity Gap: What It Means
The report's central concept is the velocity gap — the widening distance between how quickly modern development practices (particularly AI-assisted coding) introduce code and dependencies into production, and how quickly security tooling can evaluate, triage, and remediate findings.
The gap is structural, not a failure of any single tool. AI coding assistants generate functional code at a pace that outstrips traditional security review cycles. The result:
- More code ships faster — expanding the attack surface per unit of time
- More third-party dependencies are introduced with minimal vetting
- Security teams receive more findings but with less developer context
- Critical findings get buried in high-volume alert queues
The roughly 4x growth in critical risk versus 52% growth in alert volume suggests that the severity composition of findings is deteriorating: modern software stacks are accumulating more severe vulnerabilities per unit of code than previous-generation stacks did.
AI-Assisted Development: A Double-Edged Sword
AI coding tools — GitHub Copilot, Cursor, Claude, and similar platforms — have dramatically accelerated software development. However, the report identifies several risk patterns associated with AI-generated code:
Insecure Code Patterns at Scale
AI coding assistants trained on large corpora of existing code inherit the insecure patterns present in that training data. Common issues surfaced in the OX Security dataset:
- Hardcoded credentials introduced by AI suggestions accepted without review
- Insecure cryptographic algorithms recommended when safer alternatives exist
- Missing input validation in AI-generated API handlers
- Vulnerable dependency versions selected by AI tools using outdated training data
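Two of the issue classes above (hardcoded credentials and weak cryptographic primitives) can be caught with even a naive pattern scan. The sketch below is illustrative only, not the report's tooling; the regexes and the sample snippet are invented for the example.

```python
import re

# Minimal pattern scanner for two of the issue classes listed above.
# These regexes are deliberately simple illustrations, not production rules.
PATTERNS = {
    "hardcoded-credential": re.compile(
        r"""(?i)\b(password|passwd|api[_-]?key|secret)\s*=\s*["'][^"']+["']"""
    ),
    "weak-crypto": re.compile(r"\b(md5|sha1)\b", re.IGNORECASE),
}

def scan(source: str) -> list[tuple[int, str]]:
    """Return (line_number, issue) pairs for lines matching any pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for issue, pattern in PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, issue))
    return findings

# Hypothetical AI-suggested snippet containing both problems:
snippet = '''
import hashlib
API_KEY = "sk-live-abc123"              # credential committed to source
digest = hashlib.md5(data).hexdigest()  # weak hash chosen by the assistant
'''
```

Real secret scanners add entropy checks and provider-specific token formats on top of this, but the structure (per-line pattern match, finding list) is the same.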
Dependency Sprawl
AI tools frequently suggest importing libraries to solve specific problems, leading to unchecked dependency growth. Each new dependency is a potential supply chain attack vector, and the report found that AI-assisted projects had measurably more transitive dependencies than human-authored equivalents.
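Why one accepted import suggestion matters more than it looks: each direct dependency pulls in its own dependencies transitively. A minimal sketch, using an invented package graph, of how the transitive set is computed:

```python
from collections import deque

# Hypothetical dependency graph (package names invented for illustration):
# each package maps to the packages it directly depends on.
DEPS = {
    "webapp": ["http-client", "orm"],
    "http-client": ["tls-lib", "url-parse"],
    "orm": ["sql-driver"],
    "tls-lib": [],
    "url-parse": [],
    "sql-driver": ["url-parse"],
}

def transitive_deps(root: str) -> set[str]:
    """BFS over the graph to collect every package `root` depends on."""
    seen, queue = set(), deque(DEPS.get(root, []))
    while queue:
        pkg = queue.popleft()
        if pkg not in seen:
            seen.add(pkg)
            queue.extend(DEPS.get(pkg, []))
    return seen
```

Here two direct dependencies expand to five packages in the attack surface; real lockfiles routinely expand an order of magnitude further.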
The Review Gap
Pull request review cycles have not accelerated proportionally with AI-generated code volume. When a single developer can produce 5x more code with AI assistance, the code review burden per reviewer increases dramatically — and security-relevant findings are more likely to slip through.
Critical Risk: What Changed
The 4x increase in critical-severity findings is driven by several converging factors identified in the report:
1. Supply Chain Exposure: The proliferation of open-source dependencies means that a single upstream compromise can cascade across thousands of dependent applications simultaneously. The report documents an increase in critical findings stemming from third-party package vulnerabilities versus first-party code issues.
2. Secrets and Credential Sprawl: AI tools that generate configuration, infrastructure-as-code, and CI/CD pipelines frequently produce artifacts that contain or reference secrets. The volume of exposed credentials in code repositories has grown faster than developer awareness programs can offset.
3. Container and IaC Misconfigurations: AI-generated Kubernetes manifests, Terraform configurations, and Dockerfiles frequently contain overly permissive settings that compound at scale in cloud-native environments.
4. API Security Debt: AI-assisted API development has accelerated endpoint proliferation without corresponding security review, leaving authentication gaps, missing rate limiting, and broken object-level authorization vulnerabilities across large API surfaces.
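The broken object-level authorization (BOLA) pattern in item 4 is easy to see in miniature. The handler and data store below are invented for illustration; the point is the single missing ownership check:

```python
# Toy order store keyed by ID (invented data for illustration).
ORDERS = {1: {"owner": "alice", "total": 40}, 2: {"owner": "bob", "total": 99}}

def get_order_vulnerable(order_id: int, user: str) -> dict:
    # BOLA: any authenticated user can fetch any order by guessing IDs.
    return ORDERS[order_id]

def get_order_fixed(order_id: int, user: str) -> dict:
    order = ORDERS[order_id]
    if order["owner"] != user:      # object-level check: caller must own it
        raise PermissionError("not your order")
    return order
```

AI-generated handlers tend to look like the first function: syntactically correct, authenticated at the route level, but missing the per-object authorization decision.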
What Security Teams Can Do
The report offers a set of practical recommendations for bridging the velocity gap:
Shift Left, But Intelligently
Not all static analysis findings are created equal. Teams should:
- Triage by reachability — findings in code that is actually reachable from external entry points are higher priority than theoretical issues in dead code
- Prioritize secrets findings — hardcoded credentials have near-zero false positive rates and near-100% exploitability
- Focus on dependency critical/high CVEs — apply EPSS and CISA KEV status as filters to identify which vulnerable dependencies are actually exploitable in your context
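The three triage rules above can be combined into a simple priority function. This is a sketch under stated assumptions: the field names, thresholds (e.g. EPSS >= 0.1), and priority tiers are invented for illustration, not taken from the report.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    id: str
    severity: str     # e.g. "critical", "high", "medium"
    reachable: bool   # reachable from an external entry point?
    epss: float       # EPSS exploitation probability, 0..1
    in_kev: bool      # listed in the CISA KEV catalog?
    is_secret: bool   # hardcoded credential / exposed secret?

def triage_priority(f: Finding) -> int:
    """Lower number = fix first. Tiers and thresholds are illustrative."""
    if f.is_secret:
        return 0      # near-zero false positives, near-100% exploitability
    if f.in_kev and f.reachable:
        return 1      # known-exploited and actually reachable
    if f.severity in ("critical", "high") and f.epss >= 0.1 and f.reachable:
        return 2
    if f.reachable:
        return 3
    return 4          # theoretical issues in unreachable code: deprioritize

findings = [
    Finding("F1", "critical", reachable=False, epss=0.9, in_kev=True, is_secret=False),
    Finding("F2", "medium", reachable=True, epss=0.0, in_kev=False, is_secret=True),
    Finding("F3", "high", reachable=True, epss=0.4, in_kev=False, is_secret=False),
]
ordered = sorted(findings, key=triage_priority)
```

Note that F1, despite being a KEV-listed critical, sorts last here because it is unreachable; the secret (F2) sorts first.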
Integrate Security Into AI Coding Workflows
Security tooling needs to integrate at the point where AI code is generated, not after the fact:
- IDE-level SAST provides immediate feedback on AI suggestions before they are accepted
- Pre-commit hooks catch secrets and obvious issues before they enter version control
- Dependency pinning policies prevent AI tools from pulling in the latest (potentially compromised) package versions
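The dependency-pinning point can be enforced mechanically in a pre-commit check. A minimal sketch, assuming a Python-style requirements file and a policy of exact `==` pins; the file contents are invented for illustration:

```python
import re

# A requirement counts as pinned only if it specifies an exact version.
PIN = re.compile(r"^[A-Za-z0-9_.\-\[\]]+==\d")

def unpinned(requirements_text: str) -> list[str]:
    """Return requirement lines that are not pinned with `==`."""
    bad = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if not PIN.match(line):
            bad.append(line)
    return bad

# Hypothetical requirements file an AI assistant might produce:
reqs = """\
requests==2.32.3
# dev tooling
flask>=2.0
pyyaml
"""
```

A pre-commit hook would run this over staged requirement files and reject the commit if the returned list is non-empty, forcing the developer (or the AI tool) to pin an audited version.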
Reduce Mean Time to Remediation
The report found that the median time-to-remediate for critical findings was far too long in most organizations studied. High-performing organizations (top quartile) achieved:
- Critical findings remediated in < 48 hours for actively exploited issues
- High-severity findings remediated in < 7 days
- Automated remediation for well-understood vulnerability classes (dependency upgrades, simple misconfigurations)
Application Security Posture Management
Organizations with the lowest critical finding density were those using Application Security Posture Management (ASPM) platforms that correlate findings across tools, deduplicate noise, and surface only actionable, contextualized risk — rather than flooding developers with raw scanner output.
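The core ASPM correlation step (collapsing the same weakness reported by multiple scanners into one actionable finding) can be sketched as a dedup keyed on location and weakness class. The tool names, fields, and raw findings below are invented for illustration:

```python
# Hypothetical raw output from three scanners hitting the same repo:
raw = [
    {"tool": "sast-a", "file": "api/auth.py", "cwe": "CWE-798", "severity": "critical"},
    {"tool": "sast-b", "file": "api/auth.py", "cwe": "CWE-798", "severity": "high"},
    {"tool": "sca",    "file": "requirements.txt", "cwe": "CWE-1104", "severity": "high"},
]

def correlate(findings: list[dict]) -> list[dict]:
    """Deduplicate on (file, CWE); keep the highest severity reported."""
    rank = {"critical": 0, "high": 1, "medium": 2, "low": 3}
    merged: dict[tuple[str, str], dict] = {}
    for f in findings:
        key = (f["file"], f["cwe"])
        if key not in merged or rank[f["severity"]] < rank[merged[key]["severity"]]:
            merged[key] = f
    return list(merged.values())
```

Three raw alerts become two deduplicated findings, and the hardcoded-credential issue keeps its critical rating; real ASPM platforms layer reachability and business context on top of this correlation.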
Industry Context
The 4x critical risk growth figure is consistent with observations from other 2026 security research:
- Cloudflare's 2026 Threat Report documented record-breaking DDoS volumes and a 40% increase in application-layer attacks
- Google's GTIG reported 90 zero-days exploited in 2025, many targeting application-layer vulnerabilities
- CISA KEV catalog growth has accelerated, with more critical exploited vulnerabilities added per quarter in 2026 than in any prior year
The OX Security report adds quantitative weight to a trend that practitioners have observed qualitatively: the combination of AI-assisted development, complex cloud infrastructure, and sophisticated threat actors is creating a risk environment that requires a fundamentally different operational model than traditional vulnerability management.
Source: The Hacker News