The rising tide of vulnerabilities…might be more predictable than you think.

By Eireann Leverett
Wednesday, November 22nd, 2023

Over two days in late September, attack surface management teams, incident responders, data scientists, and vulnerability management practitioners gathered in Cardiff, Wales. It was the first Vuln4Cast conference, hosted by Matilda Rhode, Art Manion, and Éireann Leverett. We are an informal bunch, and the first day held presentations from a variety of well-known organisations in the coordinated vulnerability disclosure space. The day opened with keynote realism from Ceri Jones, advocating an approach to security work that recognises other people have to do their jobs. She reminded us that there’s a cognitive load on people’s desks before we even arrive to dictate security advice, and that plain English and less FUD will take us much farther in security operations than playing the expert card. She’s a wonderful speaker, now employed by Lego, and perfectly set the tone: casual, approachable, thoughtful. We know it wasn’t what our audience expected, and we like that.

Joshua ‘Kernelsmith’ Smith went next, detailing a short history of the Zero Day Initiative and the kinds of patterns they see in the data they receive. He shared many inside stories of good and bad submissions to ZDI, changes in the types of vulnerabilities submitted, and the submissions to Pwn2Own. I found his insights particularly helpful, and he shared some ways for other researchers to collaborate with ZDI on the data sets it has around disclosure; a significant thing when most people are studying the NVD and ignoring other datasets. Since ZDI submissions have a particularly high rate of exploits written, they are also an interesting resource for those more interested in exploitation than in vulnerability discovery.

Next the style and tone switched to a more academic presentation from Carlos E. Budde. His starting point was source code and an understanding of external library dependencies…how can you foresee how many vulnerabilities you may need to fix? He shared a deep understanding of the ways in which dependencies can influence the presence, discoverability, and accessibility of vulnerabilities. Using source code and dependency information can provide a greater level of detail and accuracy, and is primarily useful to the producers, maintainers, and deployers of the software itself. I think the work he presented on behalf of his team is the state of the art in that category of vulnerability prediction for software developers.
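One small ingredient of this kind of dependency-aware forecasting is simply knowing how much third-party code you actually pull in. The toy sketch below enumerates a project's transitive dependency closure; the graph, package names, and the use of raw dependency counts as an exposure proxy are all invented for illustration, and real models like the one presented use far richer signals.

```python
def transitive_deps(graph, root):
    """Return every library reachable from `root` via the dependency graph."""
    seen, stack = set(), [root]
    while stack:
        pkg = stack.pop()
        for dep in graph.get(pkg, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

# Hypothetical dependency graph for a small application.
graph = {
    "app": ["web-framework", "json-lib"],
    "web-framework": ["http-core", "json-lib"],
    "http-core": ["tls-lib"],
}

# Each transitive dependency is a place a vulnerability could surface.
print(sorted(transitive_deps(graph, "app")))
```

Even this crude count makes the point: the code you wrote is a small fraction of the code you ship.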

Switching tracks, we started to focus more on exploitation patterns and prioritising incident response. Here Roel van der Jagt gave us insight into a new piece of software from Tesorion. When working an incident, and in particular a lateral movement investigation, you may be presented with a machine that you suspect was compromised. You may not know how it was compromised, and you want to know where to start…By gathering the CPE strings, you can input them into the vulnerability explorer, and it will rank the associated vulnerabilities by EPSS score. Ranking machines for DFIR isn’t the goal of EPSS, which is instead to estimate how likely a vulnerability is to be exploited in the wild, but it is a wonderful side effect and a clever use case. They are continually updating the software, and we hope to see more of this DFIR work prioritisation in the future.
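The core of that triage idea can be sketched in a few lines. This is not Tesorion's implementation, just a minimal illustration: the CVE IDs and scores below are made-up placeholders, whereas in practice you would map CPE strings to CVEs and fetch current scores from the EPSS data FIRST publishes daily.

```python
# Hypothetical sketch: rank CVEs found on a suspect host by EPSS score,
# so the investigation starts with the vulnerability most likely exploited.

def rank_by_epss(cve_scores):
    """Return CVE IDs sorted by EPSS score, highest first."""
    return sorted(cve_scores, key=cve_scores.get, reverse=True)

# Placeholder data: CVEs matched from the machine's CPE strings.
found = {
    "CVE-2023-0001": 0.02,  # unlikely to be exploited
    "CVE-2023-0002": 0.87,  # start the investigation here
    "CVE-2023-0003": 0.41,
}

print(rank_by_epss(found))
```

The point is the ordering, not the numbers: EPSS gives the responder a defensible place to begin looking.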

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) sent Elizabeth Cardona and Kevin Donovan to tell us how CISA prioritises vulnerabilities with evidence. They mainly presented SSVC, and how organisations can use it to fold information specific to whoever is applying the patch into healthy decisions about what and where to patch. However, they also answered questions about the CISA Known Exploited Vulnerabilities catalog, and more generally about how they prioritise which of the vulnerabilities reported to them to publish alerts about.
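SSVC works by walking a small decision tree of evidence-based questions rather than ranking on a single score. The sketch below is a heavily simplified, hypothetical version of that idea: the decision-point names (exploitation, automatable, technical impact) and outcomes (track, attend, act) come from SSVC, but the branching logic here is invented for illustration and is not CISA's official tree.

```python
# Simplified, illustrative SSVC-style decision function (not the official tree).

def ssvc_decision(exploitation, automatable, technical_impact):
    """exploitation: 'none' | 'poc' | 'active'
    automatable: bool
    technical_impact: 'partial' | 'total'
    Returns a priority outcome: 'track' | 'attend' | 'act'."""
    if exploitation == "active":
        # Actively exploited: urgency depends on how far the damage spreads.
        return "act" if automatable or technical_impact == "total" else "attend"
    if exploitation == "poc":
        # Proof of concept exists: escalate only if wormable and severe.
        return "attend" if automatable and technical_impact == "total" else "track"
    return "track"  # no known exploitation: monitor on the normal cadence

print(ssvc_decision("active", True, "total"))
print(ssvc_decision("none", False, "partial"))
```

The attraction for patch-deciders is that each branch is a question they can answer about their own environment, rather than a score computed by someone else.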

Ben Edwards of Cyentia and Sander Vinberg of F5 delivered the kind of presentation we love to see: collaborative across companies, and highly numerate. Their data-visualisation-heavy presentation showed many aspects of the NVD catalog over the last few decades. In particular, Ben posited that a single linear regression for vulnerability growth doesn’t work; one must use non-linear methods, or account for different patterns in different time periods. They presented a model which predicts the number of vulnerabilities in the NVD while taking into account how many CNAs are contributing in a given time period. I believe understanding how the growing number of CNAs influences (or doesn’t influence) the number of published vulnerabilities is a crucial area for research in the coming years. Ben and Sander’s presentation really shone with this audience, and the way they presented their data reflected the research stature of the institutions they represented at the conference.
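To make the shape of that idea concrete, here is a minimal sketch of regressing published-vulnerability counts on both a time trend and the number of contributing CNAs. All the numbers are invented placeholders, and this ordinary least-squares toy is not the presenters' model, just an illustration of treating CNA count as a covariate instead of fitting a single trend line.

```python
# Illustrative only: hypothetical yearly data, not real NVD or CNA figures.
import numpy as np

years = np.array([2018, 2019, 2020, 2021, 2022])
cnas = np.array([90, 110, 150, 220, 260])              # assumed active CNAs
cves = np.array([16500, 17300, 18350, 20100, 25000])    # assumed CVE totals

# Design matrix: intercept, time trend, and CNA count as a covariate.
X = np.column_stack([np.ones_like(years), years - years[0], cnas])
coef, *_ = np.linalg.lstsq(X, cves, rcond=None)

def predict(year, n_cnas):
    """Predict a period's CVE count given an assumed number of active CNAs."""
    return coef @ np.array([1.0, year - years[0], n_cnas])

print(round(predict(2023, 300)))
```

Even in this toy form, the question the model lets you ask is the interesting one: how much of the growth in published CVEs is more vulnerabilities, and how much is more numbering authorities?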

Our final presentation of the day came from Mathew Berning of Marsh McLennan, in which he used various social media and public vulnerability data to ask the question ‘Are vulnerabilities getting worse?’ He defined worse in the manner a cyber insurer would: costly across a portfolio of companies, or across society in general, rather than impactful to a single machine or carrying higher CVSS scores. He applied many approaches, and bravely left the question open-ended. This in turn set us up for collaborative workshopping the next day.

We broke for a walking tour of Cardiff, and the weather was predictably soggy, which made a fitting metaphor for our topics of interest. Even fifty years ago, you were better off using an almanac to predict the weather. Now we have sophisticated models, accuracy two weeks out is very good, and we all knew to bring rain jackets. Why can’t vulnerabilities be forecast with the same level of accuracy? Maybe they can and maybe they can’t, but we’re keen to find out!

However, those who braved the forecasts learned the history of Cardiff and Wales. As hosts, we enjoyed getting people out of London and into other parts of the UK. From there it was pints, fish and chips, and eventually late-night cocktails and discussions. It was great to see people networking and learning from each other, and I think more than a few friendships were born that night.

Once underway again on the second day, we ran some workshops around the NVD and MITRE datasets, as well as ZDI’s data. Matilda and I offered some explanation of our forecasting tools, and the day continued in unstructured learning mode.

We made hard choices to host this event in person, and I believe the way people freely shared works in progress, data sources, and what they got wrong was directly due to that personal experience. I don’t regret it, and while we had many requests for an online version, I’m still not sure I would change it.

We established a new mailing list, Vulnerability Forecasting, which you can join by filling in the Request to join form. We intend to discuss all things related to measuring, modelling, predicting, or forecasting…disclosures, vulnerabilities, exploits, and exploitation.

The main takeaway is that advances are being made in predicting vulnerabilities from source code and from data sets, and even in predicting exploitation. We are moving ever closer to predicting what types of vulnerabilities might occur, or how severe they will be, in some cases up to a year in advance. While more work needs to be done in this area, there’s a growing community of people keen to do it. More importantly, though, we need the practitioners and early adopters to tell us what makes a useful forecast for their work.

We agreed we’d like to hold the conference again in the future, and the location is under discussion on the mailing list. So join us!

Thanks to our sponsors, Cyber Innovation Hub Wales and Concinnity Risks, and to Tramshed Tech for hosting us!