Reevaluating vulnerability management
Things are getting complicated.
With the exception of breach analysis and podcasting, I probably spend more time focused on vulnerability management than anything else I do. Let’s start with a bit of context before diving in.
This article concerns managing patches and vulnerabilities for commercially purchased hardware and software, as well as open source components. Finding and fixing vulnerabilities in your own organization’s code is a very different post for another time; I won’t touch on AppSec at all in this post.
My perspective here stems from a few places.
My research into vulnerabilities. This focuses on documenting the vulnerabilities that cause damage/loss for organizations. I’ve also been spending a lot of time looking at time-to-exploit statistics. My hope is that, by looking at patterns in hindsight, I can extract useful insights to pass on to… 👇🏽
IANS clients that I do advisory work for. Of all the advisory work I do, vulnerability management is the most frequent (sometimes 3+ per week), with AI a close second.
I spent part of my career in offensive security, so I tend to look at vulnerabilities through a “how can I turn this into a compromise” lens.
Complications
Let’s start with a few key complications that will help you understand why I’m concerned about vulnerability management.
Complication #1: Only a small percentage of vulnerabilities are a threat to organizations. This fact has been well known, studied, and documented. This leaves unanswered questions, like:
what do these vulnerabilities have in common?
when are they getting exploited?
why isn’t complication #1 making the work of vulnerability management easier?
Complication #2: Figuring out which vulnerabilities are a threat, and when they pose the greatest threat, is an unsolved challenge. I think this is a solvable problem and it’s something I’m working on, but that’s a post for another day. This complication generated an entire separate market segment: Risk-Based Vulnerability Management (RBVM). Awkwardly and expensively, this market emerged separately from the vendors that build the vulnerability scanners.
Complication #3: Remember the time-to-exploit research I mentioned? The news isn’t good, y’all. The short version is that the majority of exploited vulnerabilities get exploited before disclosure. Meaning, they’re zero-day vulnerabilities. This means:
There’s no CVE yet. That means no CVE enrichment, no way to calculate EPSS.
There’s no patch yet. Nothing to fix or remediate.
I’m going to repeat this, because it’s the primary wrench that has been thrown into the vulnerability management works.
The majority of exploited vulnerabilities get exploited before disclosure.
I see you turning purple and I promise, I’m right there with you. Maybe you have doubts. Maybe you think I must be mistaken. I’d LOVE to be mistaken - my advisory calls would be a lot simpler. Let’s take a look at the data.
Time-to-Exploit Trends
Ellie Sattler: I can see the shed from here. We can make it if we run.
Robert Muldoon: No. We can’t.
Ellie Sattler: Why not?
Robert Muldoon: Because we’re being hunted.
One of the primary goals of vulnerability and patch management is to outrun exploitation. The primary question here is always, “how fast do we have to be to outrun the attack?” Outrunning the attack was once an achievable goal. A few years ago, the ground shifted under our feet.
Research teams tracking the average time-to-exploit for vulnerabilities noticed that it was dropping. Traditional 30/45/60 or 30/60/90 day patching SLAs don’t make much sense when the average time-to-exploit is a moving target! Mandiant has been tracking this trend for a while now and the numbers aren’t encouraging:
In 2019, the average time-to-exploit was 63 days
By 2023, that number dropped to 5 days
In 2024, it dropped to -1 days
How can the average time-to-exploit be less than 0 days? The clock starts counting when the general public becomes aware that a vulnerability exists - when the vulnerability is disclosed. In 2023, Mandiant found that 70% of exploited vulnerabilities were zero-days when first exploited. If a vulnerability is exploited 40 days before it is discovered and disclosed to the public, we could think of it as a -40 day vulnerability. This is what moved the average to the negative side of the number line.
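The negative average falls out of simple date arithmetic. Here’s a minimal sketch in Python, using made-up disclosure and exploitation dates (none of these correspond to real CVEs):

```python
from datetime import date

def time_to_exploit(disclosed: date, first_exploited: date) -> int:
    # Days from public disclosure to first observed exploitation.
    # Negative values mean the vuln was exploited as a zero-day.
    return (first_exploited - disclosed).days

# Illustrative samples only: one zero-day, one N-day.
samples = [
    (date(2024, 3, 1), date(2024, 2, 20)),  # exploited 10 days before disclosure
    (date(2024, 5, 1), date(2024, 5, 4)),   # exploited 3 days after disclosure
]
deltas = [time_to_exploit(d, e) for d, e in samples]
avg = sum(deltas) / len(deltas)  # (-10 + 3) / 2 = -3.5
```

A handful of long-running zero-days can drag the average below zero even when most vulns in the sample are N-days.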
So, 70% of the time, how fast you patch doesn’t matter? Ouch.
I’m regularly taking calls from enterprises complaining that their 5-day or 7-day SLA for criticals is nearly impossible to meet, asking “are we the only ones? Are our peers managing this? If so, how?” They’re not the only ones. Most organizations I advise are wary of fully automating patching, for fear of breaking things. Even those that are allowed to move quickly hit a long tail: 70-80% remediated in 24 hours, then maybe a month to remediate the last 20-30%.
What about the 30% where patching speed does matter? More than half of these vulnerabilities were exploited within a month. 29% within 7 days. 12% within 24 hours.
Adding to the challenge of prioritization, Mandiant found that:
58% of vulnerabilities that received media coverage were not exploited in the wild
72% of vulnerabilities with available exploits or PoCs were not exploited in the wild
And VulnCheck found that 29% of vulns on CISA’s KEV list were exploited on or before CVE publication, neutering any process dependent on CVE details or enrichment and taking CVSS/EPSS-dependent prioritization out of the picture
25% of N-day vulnerabilities weren’t exploited until after the 6 month mark
Hot off the presses is another great time-to-exploit resource, Sergej Epp’s Zero Day Clock.
This is a LOT to take in.
Most of the vuln mgmt problem is now a 0day problem
We have to patch much faster than most of the books, standards, regulations, and best practices would have us believe
Our prioritization processes and models are almost certainly built on some bad assumptions
There’s still another problem to consider, though.
Asset management is still broken
The first time you see the output of a vulnerability scanner, you’re not thinking, “I need more data”. You are missing data, though.
Vulnerability scans miss a lot of critical information. This is because they fail to identify some types of assets - IoT in particular. Once an asset is misidentified, the scanner can’t tell you much that’s useful about it and it tends to get dismissed by analysts. It makes sense in context, when you’re looking at results that feel like (emphasis mine):
WINDOWS 2008 Server OMG WHY IS THIS STILL RUNNING IT HAS BEEN EOL FOR A COON’S AGE CRITICAL CRITICAL ALL IS LOST
WINDOWS 2012 Server OMG WHY IS THIS STILL RUNNING SLIGHTLY LESS CRITICAL, LIKE 98.7% AS CRITICAL, YOU SHOULD STILL BE FREAKING OUT
Something running Linux?
ADOBE FLASH STILL EXISTS ON YOUR SYSTEMS? IS THIS A MUSEUM? WHY IS THIS HERE
Something else maybe running Linux? Port 80 is open. Informational. Don’t bother with this.
ANOTHER WINDOWS 2008 Server I CANT BELIEVE THIS IS REAL LIFE OMG CRITICAL CRITICAL ALL IS LOST
Hmmm, Linux you say? That’s helpful. It could be a mainframe, a toaster, a lightbulb, a web server, a wireless access point, a network firewall with its management console exposed to public Internet, OpenClaw, or a satellite in low earth orbit. Yeah, that really narrows it down.
This is bad, because the vulnerability scanner is trying to prioritize vulnerability remediation workloads with incomplete data. Worse, the data scanners collect on misidentified or unidentified assets actively deprioritizes them. This is a system that makes unknown or unidentified assets look safe by default. Analysts will gladly treat them as safe, since they have 1.2 million critical vulnerabilities to chase down.
The kicker here is that some of these misidentified assets represent the tiny fraction of vulnerabilities that can cause damage. What are the chances that these unknown, possibly unmanaged assets are hardened? That they’re getting patched? That they don’t have default credentials? We know that a large share of the vulnerabilities exploited in recent years affect Linux-based edge devices. These are network devices, file transfer appliances - exactly the types of devices that vulnerability scanners fail to recognize.
Surveys show that security leaders are well aware that critical assets are camouflaged by a lack of data and a lack of certainty. Asset management and/or vulnerability management processes have a gap to fill here.
Yes, some orgs still need traditional vuln mgmt
There are still plenty of ‘N-day’ vulnerabilities, where we don’t see active exploitation until days, weeks, or even months after they are disclosed. Most of the vulnerability and exploit intelligence we’ve been discussing focuses on when exploitation was first seen, but what are we seeing in actual breaches?
When studying breach details, I’ve found it very common to see attackers successfully use exploits months or even years after patches have been available. Vulnerability remediation isn’t always a bell curve with a long tail. It’s quite possible to remediate 100% of vulnerabilities and see a resurgence. So, sometimes it’s a bell curve with a stegosaurus tail?
Perhaps someone clones an old VM and brings it online without patching it. The same can happen with gold images for workstations. People occasionally need old versions of software or old operating systems for various reasons.
Compliance is still very dependent on traditional vulnerability scanning. PCI DSS, SOC 2, ISO27k, and many other standards and regulations have auditors expecting to review traditional scan results.
Sometimes, patching a critical vulnerability requires patching non-critical items, because some systems have linear software updates - you can’t apply update 13 unless you’ve already applied 12.
Vulnerability scanning tools are also commonly used for configuration management - identifying when hardened configurations have drifted, or haven’t been applied.
There are still a lot of reasons to keep old school scanners around, but maybe not for all the same reasons you bought them.
Prioritization is also an ongoing challenge. It made logical sense to prioritize patching vulnerabilities that are exploitable, where exploits are available, and when we see active exploitation. We now have data telling us that only 28% of vulnerabilities with available exploit code were exploited in the wild. Even what is lauded as the best evidence, “active exploitation in the wild” can be unreliable.
Consider a common example: what if the vulnerability is information disclosure, and using the exploit simply returns the internal IP address of a server? Our tools would report “exploit available” and “exploitation seen in the wild”, even though it’s a totally inconsequential vulnerability in most scenarios. At best, it could possibly be chained with several other vulnerabilities.
Building new strategies
Build systems as if there is always a zero day and the patch is never coming.
I now strongly believe that vulnerability management must be divided into two use cases, each with their own set of processes and tools.
Exploitation prevention
Compliance and system/asset management
It should already be clear that even the UK NCSC’s more aggressive 5/7/14 day SLA recommendations aren’t enough to address exploitation that happens prior to disclosure. The only way to address exploits we don’t know about is with preventative, proactive approaches.
Exploitation prevention: 0days
I’ve got a few ideas that I’ve been workshopping. Would love to hear if others have anything to share.
Reduce attack surface: remove/disable unnecessary stuff. Getting hacked is bad enough - getting hacked because you had CUPS installed and running on a web server for no good reason? Ouch.
Regularly scan external infrastructure for insecure, abandoned, and unidentified assets. If you see “Copyright 2011” at the bottom of a webpage, that web server deserves a closer look.
Hardening and passive exploit mitigation
endpoint exploit mitigation
immutable infrastructure
old-school chroot jails, or the same principle applied with newer tech
application control
Detection: If you fail to prevent the exploit, all you’ve got left is to quickly detect and respond to the attack. Since you don’t know what the attack looks like, the best bet is to target behavior. Attackers have to do attacker things and we know what most of those are: gather information, find and abuse credentials, authenticate to other systems, establish persistence, exfiltrate tons of data, etc.
Behavior-based EDR rules
Deception (no guessing required, puts detection on easy mode!)
Large data transfer detection
Anomalous system behavior (in databases, IAM, anywhere the attacker wants or needs to be)
oh, and don’t forget to test your detections to make sure they work!
Last, but not least, get rid of notoriously vulnerable products and protocols
ditch vendors that repeatedly show up on CISA KEV, year after year
get rid of the asbestos of IT - products that have safer alternatives
This list isn’t meant to be exhaustive, but to get other folks thinking and potentially contributing.
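One of the detection items above, large data transfer detection, can be sketched as a per-host baseline comparison. This is a toy illustration, not a production detector - real egress telemetry would come from flow logs or a proxy, and the three-sigma threshold is an assumption you’d tune:

```python
import statistics

def flag_exfil_candidates(daily_egress: dict[str, list[int]], sigma: float = 3.0) -> list[str]:
    """Flag hosts whose most recent daily egress byte count exceeds
    their own historical baseline by `sigma` standard deviations."""
    flagged = []
    for host, counts in daily_egress.items():
        baseline, today = counts[:-1], counts[-1]
        if len(baseline) < 2:
            continue  # not enough history to establish a baseline
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline)
        # Floor the stdev so a flat baseline doesn't alert on trivial noise.
        if today > mean + sigma * max(stdev, 1.0):
            flagged.append(host)
    return flagged

# Illustrative telemetry: bytes out per day, last entry is "today".
egress = {
    "db-01": [100, 110, 90, 105, 100_000],  # sudden spike: exfil candidate
    "web-01": [200, 210, 190, 205],         # within its own baseline
}
suspects = flag_exfil_candidates(egress)
```

The value here is that the rule targets attacker behavior, not a specific exploit, so it works even when the initial access was a zero-day.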
Exploitation prevention: N-days
For the N-Day vulns that are exploited quickly, but after disclosure, it’s clear that a scan-driven approach can’t be effective. We’re not going to wait for a vulnerability check to get created, QA’ed, pushed to production, downloaded by our scanner, wait for the next scheduled scan, and then wait for a human to see it. This could take days or weeks.
An intel-driven approach makes much more sense, though it requires reliable hardware and software asset inventories. The moment a vulnerability is disclosed, an analyst queries asset inventories, analyzes the impact, and sets remediation into motion, based on the severity they’ve determined. This can be completed in minutes after disclosure - no waiting for scans necessary.
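As a rough sketch of that intel-driven flow - the inventory structure, host names, and product keys below are all hypothetical, standing in for a real CMDB or SBOM query:

```python
# Hypothetical asset inventory: (vendor, product) -> hosts running it.
# In practice this would be a CMDB, SBOM store, or agent-reported inventory.
INVENTORY = {
    ("fortinet", "fortios"): ["fw-edge-01", "fw-edge-02"],
    ("apache", "httpd"): ["web-01", "web-02"],
}

def affected_assets(vendor: str, product: str) -> list[str]:
    """Return hosts running the product named in a freshly disclosed advisory."""
    return INVENTORY.get((vendor.lower(), product.lower()), [])

# The moment an advisory drops, an analyst (or a bot) runs the lookup:
hits = affected_assets("Fortinet", "FortiOS")
```

The point is the latency: the lookup happens at disclosure time, in minutes, rather than after a vulnerability check ships and the next scheduled scan completes.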
Compliance
Organizations in regulated industries may find it difficult to get away from traditional vulnerability management tools. These processes are well established and expected by both auditors and standards. While some standards (like PCI DSS) allow for custom scoring to deprioritize non-critical vulnerabilities, others force remediation regardless of prioritization’s impact on scoring. These tools and processes aren’t going away any time soon.
Conclusion
It has always been true that vulnerability management was tightly linked to other processes and teams, but I often find it more isolated than it should be. When Linux admins roll with default RHEL installs, they’re making vulnerability management work more difficult. When SecOps builds detections without consulting with vulnerability analysts, they’re missing opportunities. When the security program assumes the only mitigation is applying a patch, vulnerability management can’t achieve its goals.
We now have the challenge of more tightly linking vulnerability management to SecOps, asset owners, and other groups. On top of this, most organizations still have to run a traditional vulnerability management program. PCI needs quarterly clean scans. SOC 2/ISO27k expect traditional scans to be available for review. Systems still need to be kept up-to-date. That means the clients I’m advising are still considering purchasing RBVM solutions and other prioritization methods. They’re still adding vulnerability intelligence tools and processes on top of their scan-driven processes.
The most common setup I see today is an old school network scanner, running on a schedule, performing a mix of authenticated and unauthenticated scans, perhaps with some agents installed on remote systems. To summarize:
If the data I’ve presented here is correct, the best this setup can do is to address 30% of that exploit prevention goal.
If we assume 40% of the assets being scanned are not correctly identified, this number drops to 18%.
And we can only claim that 18% if we’re doing a perfect job of prioritizing all the right vulnerabilities and getting them remediated within 24 hours.
If we can’t patch this 18% within 7 days (most of the orgs I’m working with cannot), we lose another 29%. That brings us below 13%.
Is the best case scenario that the majority of organizations are struggling to address 13% of the exploit prevention problem? I hope not - please tell me my math is bad.
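For anyone who wants to check the math, the chain of reductions works out like this (the 30% and 29% figures come from the stats cited earlier; the 40% asset misidentification rate is the stated assumption):

```python
n_day_share = 0.30         # 70% of exploited vulns were zero-days (Mandiant)
identified_assets = 0.60   # assume 40% of scanned assets are misidentified
addressable = n_day_share * identified_assets            # 0.18
exploited_within_7_days = 0.29                           # share of N-days hit within a week
remaining = addressable * (1 - exploited_within_7_days)  # ~0.128, i.e. below 13%
```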
I don’t have any great answers on simplifying it, either. It looks to me like vulnerability management is getting more complex than ever. I’m hoping others have some helpful thoughts and suggestions on this.



I thought Risk-Based Vulnerability Management (RBVM) had been overcome by events - namely Kenna getting bought, and I think recently going EOL.
Fixing the right vulnerabilities on the right timeline is just a huge challenge as you point out - for a lot of reasons. The 'fix' I've been focused on, especially in middle-market companies, is resilience and recovery (plus patching where you can).
There's also an organizational function tied to roles and responsibilities - separating the VM identification from the patching fix feels artificial today - 20 years ago we needed the separation for honesty and integrity. Today, and looking at the future, hiding known issues is only going to get harder in most organizations.
If you can’t segment it, can’t monitor it, and can’t patch it fast, it might be time to consider moving that function to SaaS!