
Happy New Year 2026!
As we turn the page on another year and raise our glasses to new beginnings, we at FIRST have been busy doing what we do best: thinking quantitatively about what lies ahead. And our forecast for 2026 is both sobering and, we hope, useful.
Our prediction: 2026 will be the year we cross 50,000 published CVEs.
In fact, our median forecast sits at approximately 59,000 vulnerabilities for the year—a number that should give pause to anyone responsible for patch management, detection engineering, or coordinated vulnerability disclosure.
| Year | Median Forecast | Lower 90% CI | Upper 90% CI |
|---|---|---|---|
| 2026 | 59,427 | 30,012 | 117,673 |
| 2027 | 51,018 | 18,765 | 138,703 |
| 2028 | 53,289 | 14,714 | 192,993 |
While our central estimate for 2026 hovers around 59,000, we believe it is entirely realistic that we reach 70,000 to 100,000 vulnerabilities this year. The upper bound of our 90% confidence interval sits at nearly 118,000—a number that would represent a paradigm shift in vulnerability management workloads. We think the outcome is more likely to be closer to 60,000, but it is important that we prepare for more extreme scenarios such as 70,000 or 80,000 as well.
These forecasts are not an exercise in academic curiosity. They exist to help us prepare our patch and signature writing capabilities and capacities for the year ahead.
Whether you are:
...it helps to think about capacity at this time of year. Much like a city planner considering population growth before commissioning new infrastructure, security teams benefit from understanding the likely volume and shape of vulnerabilities they will need to process in the coming year. So ask yourself: are my people and processes ready to handle 50,000+ CVEs next year?
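To make that question concrete, here is a back-of-envelope capacity check. All of the rates here (working days, per-analyst triage throughput) are assumptions for illustration, not figures from our forecast:

```python
# Back-of-envelope capacity check (all rates below are assumptions)
cves_per_year = 59_000          # median forecast for 2026
working_days = 250              # assumed working days per year
triage_per_analyst_day = 30     # CVEs one analyst can triage daily (assumed)

cves_per_day = cves_per_year / working_days
analysts_needed = cves_per_day / triage_per_analyst_day

print(round(cves_per_day), "CVEs/day,",
      round(analysts_needed, 1), "analysts needed")
```

Swap in your own triage rate and the upper-bound scenario of ~118,000 to see how sensitive your staffing needs are to the forecast range.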
We believe a forecast worth its salt should embody several principles:
Be quantified, with confidence intervals. A single point estimate is almost useless for planning. Our forecast includes ranges precisely because the range is more important than the mean. It is the range that helps us plan for a spectrum of possible outcomes throughout the year. The difference between preparing for 30,000 vulnerabilities and 100,000 is not merely operational—it is strategic.
Aid decision making and capacity planning. A forecast should incorporate uncertain factors honestly. We don't pretend to have a crystal ball, but we do have statistical models that capture the fundamental uncertainty in vulnerability publication rates. Giving you a realistic sense of the range of possibilities matters more than the precision of any single number.
Get into the details. Aggregate counts are a starting point, but a truly useful forecast should eventually decompose into vendor, product, and CVSS vector breakdowns. These details become actionable when paired with good asset registers—if you know you run a lot of Microsoft Exchange, a forecast weighted by vendor exposure is considerably more useful than a raw count. There’s no reason you can’t train up someone on your team to do forecasting; time series analysis tools and models are easy to explore in this vibecoding era.
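As a flavour of how approachable this is, here is a minimal forecasting sketch: a linear trend fitted on the log scale (since CVE growth looks multiplicative), extrapolated one year ahead. The yearly counts below are illustrative stand-ins, not our data, and this is far simpler than the model behind our published numbers:

```python
import numpy as np

# Illustrative yearly CVE counts (stand-in values, not FIRST's data)
years = np.array([2019, 2020, 2021, 2022, 2023, 2024, 2025])
counts = np.array([17305, 18325, 20161, 25082, 28902, 40009, 45000])

# Fit a straight line to log(counts): multiplicative growth
# becomes a linear trend on the log scale
coeffs = np.polyfit(years, np.log(counts), deg=1)

# Point forecast for 2026: evaluate the trend, exponentiate back
forecast_2026 = np.exp(np.polyval(coeffs, 2026))
print(round(forecast_2026))
```

From here you can experiment with proper time-series libraries, residual diagnostics, and prediction intervals—exactly the kind of skill worth growing in-house.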
Respect that each reader's risks are different. Your asset priorities, your threat landscape, and your risk appetite are unique. A good forecast should allow you to re-prioritise based on your own circumstances while still providing enough information to support strategic decisions in the boardroom.
Be reviewed for accuracy. We hold ourselves accountable. Our 2025 forecast achieved a MAPE of 7.48% for the yearly prediction and 4.96% for Q4—results we are proud of, and results that give us confidence in the methodology underpinning this year's projections.
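For readers unfamiliar with the metric, MAPE (Mean Absolute Percentage Error) is straightforward to compute yourself. The quarterly values below are made up for illustration and are not our actual forecasts:

```python
def mape(actuals, forecasts):
    """Mean Absolute Percentage Error, in percent."""
    return 100 * sum(abs(a - f) / a
                     for a, f in zip(actuals, forecasts)) / len(actuals)

# Hypothetical quarterly actuals vs forecasts (illustrative only)
actuals   = [10000, 11000, 12000, 13000]
forecasts = [ 9500, 11500, 12600, 12700]

print(round(mape(actuals, forecasts), 2))  # percentage error across quarters
```

A MAPE of 7.48% on a yearly total means the forecast landed within roughly ±7.5% of the published count.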
We’re using a new model. Within the vulnerability forecasting community, it’s widely accepted that 2017 was the toughest year to forecast. It marks a new epoch, one in which the upward trend in CVE publication can be much steeper. Because of this structural change in the CVE data, many people believe models should only include data from after 2017. We’ve chosen a model that includes the earlier data and accounts for the change, which gives us more realistic prediction intervals. In other words, we chose to optimise our forecasts for the range of possibilities rather than the accuracy of the point prediction. Of course, we also needed to back-test the model to see how it performs, and you can see the results below. Please note: this is the performance of the new model, NOT the accuracy of all our previous blog posts. In short, we’re willing to let the model be wrong up to 2018 in training exercises so that it can more realistically reflect the possible outcomes of 2026.

We’ve included those testing results here, and the model really does perform better for every year after that 2017–18 epoch change. It also gives asymmetric confidence intervals, which is realistic: we’re more likely to land above the median than below it, because CVE counts just keep growing.
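One common way asymmetric intervals arise—and a plausible intuition for the shape of the table above—is modelling on the log scale: a symmetric interval around the log-median back-transforms to an asymmetric one on the count scale. The standard deviation below is an assumed value chosen for illustration, not a parameter of our actual model:

```python
import math

log_median = math.log(59427)   # 2026 median forecast, on the log scale
log_sd = 0.42                  # forecast std. dev. on the log scale (assumed)
z90 = 1.645                    # z-score bounding a central 90% normal interval

# Symmetric on the log scale...
lower = math.exp(log_median - z90 * log_sd)
upper = math.exp(log_median + z90 * log_sd)

# ...but asymmetric on the count scale: the gap above the median
# is larger than the gap below it
print(round(lower), round(upper))
```

With these illustrative parameters the bounds land near the 30,012–117,673 interval in the table—consistent with growth that compounds rather than adds.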
Throughout 2026, we will publish quarterly forecast updates that refine our predictions as new data arrives. These updates will begin to incorporate more granular analysis, including expected CVSS v3 vector distributions amongst the CVEs we anticipate. Understanding not just how many vulnerabilities but what kind—their attack vectors, complexity, and impact profiles—will further sharpen capacity planning efforts.
The widening confidence intervals as we look further into the future (note how 2028's upper bound approaches 193,000) remind us that forecasting is not about predicting the future with certainty. It is about preparing for a range of plausible futures. Like the branching of a great oak, the further out we look, the more paths remain possible.
Here's to a year of thoughtful preparation, rigorous analysis, and—we hope—fewer surprises than there might otherwise have been.
From all of us at FIRST, happy 2026!