By Éireann Leverett
Monday, December 29, 2025
As 2025 draws to a close, we find ourselves in the satisfying position of reviewing forecasts that worked. Next year's forecast will look and feel a bit different, but you can expect that in January; we like to keep the two separate. This review is about building confidence in the community: both that forecasting a year in advance is accurate enough to be useful, and that the community is growing and resilient, not dependent on only a few people or proprietary code.
With two days remaining in the calendar year, we have 49,183 CVEs published—and both our yearly and quarterly predictions landed within their confidence intervals. It is possible the final count will exceed those intervals in the next few days, but there is usually a lull in CVE publication around the New Year. What can I say, I like to live life on the edge: I'm adding the frisson of gambling on these predictions to the joy of our NYE party.
This is worth celebrating, not because we got lucky with the numbers, but because it builds confidence that vulnerability forecasting is maturing into a useful tool for defenders. As we'll see later in this post, it's not just FIRST that is forecasting vulnerabilities now, but many other organisations too. By giving this knowledge away, we believe we can really help each other internationally.
Back in February, we predicted 45,505 ± 4,363 CVEs for calendar year 2025, giving a 90% confidence interval of 41,142 to 49,868. The actual count of 49,183 falls comfortably within this range, though toward the upper bound—a pattern we've observed in previous years.
| Metric | Value |
|---|---|
| Predicted Mean | 45,505 |
| 90% CI Lower | 41,142 |
| 90% CI Upper | 49,868 |
| Actual (2 days remaining) | 49,183 |
| MAPE from Mean | 7.48% |
| MAPE from Upper CI | 1.39% |
The Mean Absolute Percentage Error of 7.48% from our central estimate tells one story, but the MAPE of just 1.39% from our upper confidence bound tells another: the actual count tracked remarkably close to what we considered the reasonable upper limit. This suggests that our models may be slightly conservative, and that defenders planning for the higher end of our intervals are doing sensible risk management.
Our Q4 forecast predicted 12,972 ± 1,157 CVEs, with a 90% confidence interval spanning 11,815 to 14,129. The actual count of 12,359 sits neatly within this range, slightly below the mean.
| Metric | Value |
|---|---|
| Predicted Mean | 12,972 |
| 90% CI Lower | 11,815 |
| 90% CI Upper | 14,129 |
| Actual | 12,359 |
| MAPE from Mean | 4.96% |
| MAPE from Lower CI | 4.40% |
A MAPE under 5% for quarterly forecasting is precisely the kind of accuracy that makes forecasts actionable. If you can tell your patch management team, with reasonable confidence, that they should prepare for roughly 13,000 new CVEs next quarter—and be right within 5%—that changes how you plan sprints, allocate analyst time, and budget for tooling.
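The error figures in both tables above can be reproduced directly from the published numbers. A minimal sketch in Python (note that with a single observation, MAPE reduces to a plain absolute percentage error, computed here relative to the actual count):

```python
def mape(actual: float, predicted: float) -> float:
    """Absolute percentage error of one forecast, relative to the actual count.
    (With a single observation, MAPE reduces to this single term.)"""
    return abs(actual - predicted) / actual * 100

# Yearly 2025: actual 49,183 vs. predicted mean 45,505 and upper CI 49,868
print(f"{mape(49_183, 45_505):.2f}%")  # 7.48%
print(f"{mape(49_183, 49_868):.2f}%")  # 1.39%

# Q4 2025: actual 12,359 vs. predicted mean 12,972 and lower CI 11,815
print(f"{mape(12_359, 12_972):.2f}%")  # 4.96%
print(f"{mape(12_359, 11_815):.2f}%")  # 4.40%
```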
Perhaps more important than any single forecast is what we're seeing across the ecosystem. The whole point of open-sourcing the Vuln4Cast project was to help defenders get ahead of the problem and break out of the siege mentality that characterises so much of vulnerability management. We believe vulnerabilities are foreseeable, and that exploits and exploitation will follow similar patterns we can learn to predict.
Others are now taking up this work and pushing it forward. Jerry Gamblin at Cisco has built CVEForecast.org, applying XGBoost and other machine learning approaches to the problem with a transparent model comparison framework. The CIRCL team in Luxembourg continues to expand their Vulnerability-Lookup platform, including new sightings tracking that captures when and where vulnerabilities are discussed across the internet, and comprehensive statistics pages that make the data accessible to researchers and practitioners alike.
This is exactly what we hoped would happen. A single forecasting team can only explore so many methodological variations. When the community starts iterating—trying different algorithms, different features, different time horizons—we all learn faster. As this exposure science develops, defence can be better planned to match capacity.
There's plenty more work to be done before forecasting becomes truly predictive of risk in this space. Volume forecasting is necessary but not sufficient—rather like knowing tomorrow's temperature without knowing whether it will rain. We need forecasts that tell us about vendor distributions, CVSS vectors, CWEs, and ultimately, which vulnerabilities are likely to be exploited. If a forecast doesn't change how you allocate resources, we haven't yet made it useful enough.
We welcome students and grizzled industry veterans alike to this work. The methodological challenges are fascinating—time series with structural breaks, fat-tailed distributions, regime changes in disclosure behaviour—and the practical applications are immediate. Every improvement in forecasting accuracy is another few hours your team can spend on remediation rather than triage. Improvements are being made in CWE root cause analysis, exploit prediction, exploitation prediction, CNA forecasting, CVSS vector forecasting, and CVSS prediction. FIRST has slowly and carefully nurtured this community since our initial conference three years ago, and it is starting to bear fruit.
If you're interested in vulnerability forecasting, exposure science, or the broader questions of predictive security, we'd love to see you at VulnOptiCon next year in Luxembourg, generously hosted by CIRCL. It's an opportunity to share methodologies, compare notes, and push this field forward together.
Until then, a happy new year to all the defenders out there—and may your 2026 patches be well-prioritised and your cyber risks well managed.
Éireann Leverett and the vulnerability forecasting team