A new report from OX Security has analyzed 216 million security findings across 250 organizations over a 90-day period, and the primary takeaway is stark: while raw security alert volume grew by 52% year-over-year, the volume of findings classified as prioritized critical risk grew by nearly 400% — a 4x increase.
The divergence between total alert volume and critical risk volume points to a deepening problem in modern application security: AI-assisted development is generating code — and vulnerabilities — faster than security teams can meaningfully address them.
The Velocity Gap
The report introduces the concept of a "velocity gap" — the widening distance between the speed at which new code (and new vulnerabilities) is introduced and the speed at which security teams can remediate findings.
AI coding assistants — GitHub Copilot, Claude, Cursor, and similar tools — have dramatically accelerated developer output, but security tooling and remediation capacity on the other side of the equation have not kept pace. The result: organizations are drowning in alerts while the truly dangerous findings get lost in the noise.
| Metric | Value |
|---|---|
| Total security findings (year-over-year) | +52% |
| Prioritized critical risk (year-over-year) | +400% (~4x) |
| Organizations studied | 250 |
| Findings analyzed | 216 million |
| Study period | 90 days |
Why Critical Risk Is Growing Faster Than Alert Volume
The 4x growth in critical risk findings — outpacing the 52% growth in total volume — is not simply a matter of more code producing more bugs. The OX Security report identifies several structural factors driving the disproportionate growth in high-severity issues:
1. AI-Generated Code Quality Gaps
AI coding tools accelerate feature development but can introduce subtle security flaws — injection vulnerabilities, insecure default configurations, improper secret handling — that are difficult to detect through conventional code review. As AI-generated code enters production at scale, the density of security-relevant issues within large codebases is increasing.
2. Supply Chain Exposure Amplification
The surge in supply chain attacks throughout 2025-2026 means that critical risks increasingly originate not from first-party code but from dependencies, development tools, and CI/CD pipeline components. A single compromised dependency can introduce critical risk across hundreds of downstream organizations — OX Security's multi-org dataset captures this amplification effect.
3. Prioritization Model Maturity
As security tooling gets better at distinguishing genuinely exploitable, high-impact findings from theoretical vulnerabilities, more findings are correctly classified as critical. Part of the measured growth therefore reflects improved detection fidelity: better prioritization surfaces real critical risks that earlier models would have left buried among lower-confidence, lower-severity noise.
4. Attack Surface Expansion
Cloud-native development, microservices architectures, serverless deployments, and widespread API proliferation have expanded the attack surface faster than many organizations' security controls have adapted. New infrastructure patterns introduce new classes of vulnerability.
What 216 Million Findings Reveal
The scale of the dataset — 216 million findings across 250 organizations over 90 days — provides statistical validity that smaller studies cannot match. Key patterns from OX Security's analysis:
Finding distribution:
- The vast majority of findings are low-to-medium severity and require routine remediation workflows
- A small percentage of findings accounts for the majority of actual exploitable risk — the familiar Pareto pattern in vulnerability management: a small head of genuinely exploitable issues and a long tail of low-risk noise
- Critical findings are concentrated in specific technology stacks and code patterns, suggesting targeted remediation focus is more effective than broad-sweep approaches
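The concentration pattern in the bullets above can be made concrete with a toy calculation — the numbers below are synthetic, not the report's data:

```python
def share_of_risk(risks: list[float], top_fraction: float) -> float:
    """Fraction of total risk carried by the top `top_fraction` of findings."""
    ordered = sorted(risks, reverse=True)
    k = max(1, round(len(ordered) * top_fraction))
    return sum(ordered[:k]) / sum(ordered)

# Synthetic heavy-tailed distribution: two findings dominate total risk.
risks = [100.0, 90.0] + [1.0] * 98   # 100 findings in all
print(share_of_risk(risks, 0.02))    # top 2% of findings carry ~66% of the risk
```

Remediating the small high-risk head first retires far more risk per hour of analyst time than working through the queue in arrival order.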
Remediation lag:
- The average time from a critical finding being surfaced to a validated fix being merged has not improved in proportion to the growth in finding volume
- Teams with automated fix pipelines (AI-assisted remediation, automated PR generation) show meaningfully better critical-risk closure rates
AI code impact:
- Organizations with higher AI-assisted development adoption show both higher total finding counts and higher critical finding densities — consistent with the velocity gap hypothesis
- The same organizations tend to show faster remediation when they also deploy AI-assisted security tooling, suggesting the most effective response is to apply AI on both the development and security sides
The Prioritization Imperative
The 4x critical risk growth makes one thing clear: security teams cannot treat all findings equally. Trying to remediate 216 million findings at uniform priority is impossible — the velocity gap ensures that approach will always result in critical issues being buried under lower-priority noise.
Effective approaches highlighted by the report:
Risk-Based Prioritization
Not all CVSS 9+ vulnerabilities carry equal real-world risk. A CVSS 9.8 vulnerability in a library that is never actually called in a reachable code path poses less risk than a CVSS 7.5 issue in a directly internet-exposed, frequently-called API endpoint. Reachability analysis and exploitability context are essential to separating theoretical risk from actual risk.
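As a sketch of how reachability and exposure context can reorder CVSS-ranked findings — the weights below are illustrative assumptions, not OX Security's scoring model:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cvss: float             # base CVSS score, 0.0-10.0
    reachable: bool         # is the vulnerable code on a reachable path?
    internet_exposed: bool  # is the affected component internet-facing?

def risk_score(f: Finding) -> float:
    """Contextual risk: CVSS dampened by unreachability, amplified by
    exposure. The multipliers are illustrative, not a standard."""
    score = f.cvss
    if not f.reachable:
        score *= 0.2   # unreachable code path: mostly theoretical risk
    if f.internet_exposed:
        score *= 1.5   # directly exposed components are attacked first
    return min(score, 10.0)

# The article's example: a CVSS 9.8 flaw in a never-called library ranks
# below a CVSS 7.5 flaw in an internet-exposed, frequently-called endpoint.
unreachable_lib = Finding(cvss=9.8, reachable=False, internet_exposed=False)
exposed_api = Finding(cvss=7.5, reachable=True, internet_exposed=True)
assert risk_score(unreachable_lib) < risk_score(exposed_api)
```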
Automated Fix Generation
AI-assisted remediation tools that automatically generate pull requests for common vulnerability patterns allow security teams to close high-confidence findings faster without consuming human review bandwidth.
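A minimal sketch of the pattern, assuming a hypothetical advisory table mapping vulnerable pins to patched versions — real tools query vulnerability databases and open the pull request through the forge's API:

```python
# Hypothetical advisory data for illustration only.
ADVISORIES = {("requests", "2.25.0"): "2.32.3"}

def propose_fix(requirements: str) -> tuple[str, list[str]]:
    """Rewrite vulnerable pins in requirements.txt-style text and return
    (patched text, human-readable changelog lines for the PR body)."""
    patched_lines, changelog = [], []
    for line in requirements.splitlines():
        name, _, version = line.partition("==")
        fix = ADVISORIES.get((name.strip(), version.strip()))
        if fix:
            patched_lines.append(f"{name.strip()}=={fix}")
            changelog.append(f"Bump {name.strip()} {version.strip()} -> {fix}")
        else:
            patched_lines.append(line)
    return "\n".join(patched_lines), changelog

patched, log = propose_fix("requests==2.25.0\nflask==3.0.0")
```

High-confidence mechanical fixes like version bumps are where automation pays off; ambiguous logic flaws still need human review.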
Supply Chain Focus
Given the disproportionate impact of supply chain findings, organizations benefit from prioritizing dependency management — automated SBOM generation, dependency pinning, and supply chain integrity verification — before broader application scanning.
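Dependency pinning, at least, is easy to enforce mechanically. A minimal check for requirements.txt-style files — the exact-pin rule here is an assumption, since teams using lockfiles or hash pinning enforce integrity differently:

```python
import re

# A dependency counts as pinned only if it uses an exact '==' version.
PINNED = re.compile(r"^[A-Za-z0-9._-]+==[A-Za-z0-9._+!-]+$")

def unpinned(requirements: str) -> list[str]:
    """Return requirement lines that float (>=, ~=, unversioned, etc.)."""
    offenders = []
    for line in requirements.splitlines():
        line = line.split("#")[0].strip()   # drop comments and whitespace
        if line and not PINNED.match(line):
            offenders.append(line)
    return offenders

assert unpinned("requests==2.32.3\nflask>=2.0\npyyaml") == ["flask>=2.0", "pyyaml"]
```

A check like this can run in CI and fail the build, turning a supply chain policy into an enforced invariant rather than a convention.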
Developer Security Enablement
Security findings discovered post-merge are more expensive to fix than findings caught during development. Shifting detection left — IDE plugins, pre-commit hooks, automated PR scanning — reduces the total volume of findings that reach production while improving developer security awareness.
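A toy version of the pre-commit detection idea — the two patterns below are illustrative stand-ins for the large rule sets that real secret scanners such as gitleaks or trufflehog ship with:

```python
import re

# Illustrative patterns only; production scanners add hundreds of rules
# plus entropy analysis to catch generic high-randomness tokens.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan(text: str) -> list[str]:
    """Return the names of secret patterns matched in a blob of text."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

# In a pre-commit hook, run scan() over each staged file and exit
# non-zero on any hit to block the commit (git wiring omitted here).
assert scan("token = 'AKIAABCDEFGHIJKLMNOP'") == ["AWS access key"]
assert scan("print('hello')") == []
```

Catching a leaked credential before it ever enters history is far cheaper than rotating it after a post-merge scan finds it.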
Implications for Security Teams
The report's findings have direct operational implications:
For security leadership (CISOs/VPs of Security):
- The 4x critical risk growth outpaces typical team size and budget growth — organizations need to evaluate AI-assisted security tooling to maintain viable remediation rates
- Traditional KPIs (MTTR, finding closure rate) need to be supplemented with risk-reduction metrics that account for finding quality, not just quantity
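One way to see the difference between a volume KPI and a risk-reduction metric, with illustrative numbers:

```python
def closure_rate(closed: int, opened: int) -> float:
    """Traditional KPI: fraction of findings closed, severity-blind."""
    return closed / opened if opened else 0.0

def risk_reduction(closed_risk: float, opened_risk: float) -> float:
    """Risk-weighted alternative: fraction of *risk* retired, so closing
    one critical counts for more than closing many informational findings."""
    return closed_risk / opened_risk if opened_risk else 0.0

# A team closing mostly low-severity noise looks strong on closure rate
# but weak on risk reduction (numbers are illustrative).
print(closure_rate(closed=900, opened=1000))                # 0.9
print(risk_reduction(closed_risk=50.0, opened_risk=400.0))  # 0.125
```

Reporting both numbers side by side makes it visible when a team is clearing the queue without actually lowering exposure.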
For application security teams:
- Alert fatigue is worsening — teams need better triage tooling that surfaces genuinely actionable findings without overwhelming analysts
- Supply chain risk should be treated as a first-class priority in 2026, not a secondary concern
For developers:
- AI coding assistants require security-aware configuration — guardrails, security linting, and prompt engineering for secure code generation should be part of standard development toolchains
- Security findings generated by AI code merit extra scrutiny during code review
The Bigger Picture
The OX Security report arrives at a moment when the security industry is grappling with the paradox of AI: the same technology that is enabling faster, more capable development is also increasing the volume and complexity of security findings. Organizations that treat AI as purely a developer productivity tool — without also deploying it on the security side — are likely to see their critical risk exposure continue to grow.
The 216 million findings across 250 organizations provide the most comprehensive empirical view yet of how this dynamic is playing out in production environments. The 4x increase in critical risk is not a projection or a model — it is the measured reality for hundreds of organizations already.
Sources: The Hacker News