Matthew Gamble's Blog
When Critical Isn't Critical

Matthew Gamble
7 min read

Last week I was reviewing a vulnerability scan report for a client when something caught my eye. Buried in a list of "critical" findings was a Log4j vulnerability - you know, the one that broke the internet back in 2021. CVE-2021-44228. CVSS score of 10.0. The scanner was practically screaming at us to drop everything and fix it immediately.

There was just one problem: the "vulnerable" file was a jar sitting in a backup archive of a decommissioned server. Not running. Not deployed. Not even on a system that was powered on. It was a file, on a disk, doing nothing.

But according to the scanner, and by extension the compliance checklist we were working against, this was a Critical Finding that needed immediate remediation.

This is the state of enterprise vulnerability management in 2026, and honestly, I think we've lost the plot.

The CVE Score Trap

Don't get me wrong - the Common Vulnerability Scoring System (CVSS) serves a purpose. It provides a standardized way to communicate the severity of vulnerabilities across the industry. When a new CVE drops, having a quick shorthand for "this is really bad" versus "this is somewhat bad" is genuinely useful.

The problem is when organizations treat these scores as gospel and build entire security programs around chasing numbers rather than understanding risk.

A CVSS score tells you how bad a vulnerability could be under certain assumptions. It doesn't tell you:

  • Whether the vulnerable component is actually deployed in your environment
  • Whether the vulnerable code path is reachable
  • Whether the conditions required for exploitation exist in your specific context
  • Whether compensating controls already mitigate the risk
  • Whether anyone would actually bother exploiting this in your environment

In other words, it tells you the theoretical severity but nothing about the actual risk to your organization.

The Log4j Example in Detail

Let's go back to that Log4j jar file in the backup. The CVSS score of 10.0 is based on the assumption that:

  1. The vulnerable library is running in a Java application
  2. That application processes untrusted input
  3. That input reaches the vulnerable logging function
  4. The system can make outbound LDAP/RMI connections
  5. An attacker can reach the application

In the case of a file sitting in a cold backup? None of these conditions are true. The "vulnerability" isn't a vulnerability at all; it's just a file. You might as well flag a PDF about Log4j as a critical security issue.

Yet here we are, with security teams spending hours documenting why a dormant file doesn't need a patch, writing exception requests, and going through change control processes to delete an archive that posed zero actual risk.
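To make the distinction concrete, here's a rough sketch of the difference between "a log4j-core jar exists somewhere on disk" and "a running process actually has it loaded." It's Python, it assumes a Linux-style host and the third-party psutil package, and the search path and pattern are purely illustrative; treat it as a heuristic, not a substitute for a real software inventory.

    # Rough heuristic: a jar found on disk is not the same as a jar loaded by a
    # running process. Assumes psutil (pip install psutil) on a Linux-style host.
    from pathlib import Path

    import psutil

    SEARCH_ROOT = Path("/data/backups")   # illustrative; point at whatever you actually scan
    PATTERN = "log4j-core-2.*.jar"        # the artifact Log4Shell scanners key on

    # 1. Jars merely present on disk (backups, old archives, stray copies).
    on_disk = list(SEARCH_ROOT.rglob(PATTERN))

    # 2. Jars actually held open or mapped by a live process.
    loaded = set()
    for proc in psutil.process_iter(["pid", "name"]):
        try:
            paths = [f.path for f in proc.open_files()]
            paths += [m.path for m in proc.memory_maps()]
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            continue  # can't inspect every process without privileges; fine for a sketch
        loaded.update((proc.info["pid"], p) for p in paths if "log4j-core" in p)

    print(f"{len(on_disk)} jar(s) on disk, {len(loaded)} loaded by running processes")
    # A jar in the first bucket but not the second (like the one in that cold
    # backup) is inventory cleanup, not a drop-everything incident.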

When a 10.0 Isn't a 10.0

The same logic applies to running systems. I've seen organizations panic over critical CVEs in libraries that:

  • Aren't actually used — The library is bundled in a framework but the vulnerable function is never called. The code path is completely unreachable.
  • Are behind layers of controls — Yes, that internal application has a vulnerability, but it's only accessible from a management VLAN that requires VPN access with MFA from a corporate device. The attack surface is negligible.
  • Require authentication to exploit — The CVE's published score assumes unauthenticated access, but your deployment requires valid credentials. The base score as reported by the scanner doesn't reflect that.
  • Are in non-production environments — A vulnerability in a dev server with synthetic data is not the same as one in production handling customer PII.

Context matters. A lot.
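It's worth knowing that CVSS itself has a mechanism for some of this: the environmental metric group lets you re-score a published CVE for your own deployment (modified attack vector, modified privileges required, and so on). Here's a minimal sketch, assuming the open-source cvss Python package (pip install cvss), of re-scoring Log4Shell for a system that's only reachable from an internal network by authenticated admins:

    # Re-scoring a published CVE for local context with CVSS v3.1 environmental
    # metrics. Assumes the open-source "cvss" package (pip install cvss).
    from cvss import CVSS3

    # Published base vector for CVE-2021-44228 (Log4Shell): base score 10.0.
    published = CVSS3("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H")

    # Same flaw, but in a deployment that's only reachable from an internal
    # network (MAV:A) and only by authenticated admins (MPR:H).
    in_context = CVSS3("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H/MAV:A/MPR:H")

    print("published (base/temporal/environmental): ", published.scores())
    print("in context (base/temporal/environmental):", in_context.scores())
    # The environmental score in the second line is the one that describes your
    # deployment - still worth fixing, just not an automatic drop-everything 10.0.

The scanner will never fill in those environmental metrics for you, which is exactly the point: the context has to come from somewhere other than the scan.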

The Real Problem: Compliance-Driven Security

So why do organizations keep doing this? In my experience, it comes down to compliance-driven security theatre.

Auditors want to see that you have a vulnerability management program. They want metrics. They want to see that critical vulnerabilities are remediated within X days. They don't necessarily have the technical depth to understand that not all criticals are created equal, and frankly, nuance doesn't fit neatly into a compliance checklist.

So organizations optimize for the audit, not for actual security. They chase CVSS scores because that's what gets measured. They spend cycles on low-risk "critical" findings while actual risks go unaddressed because they don't trigger the right scanner alerts.

I've seen teams spend weeks patching a theoretical vulnerability in an obscure library while ignoring the fact that their admin portal was using "admin/admin" as the default credentials. One had a CVE. The other didn't. Guess which one was actually more likely to get them breached?

What Should We Do Instead?

I'm not saying throw out your vulnerability scanners. They're useful tools. But they need to be inputs to a risk-based decision process, not the final word.

Here's what a more sensible approach looks like:

1. Add Context Before Prioritization

Before treating a CVE as critical, ask:

  • Is the vulnerable component actually deployed and running?
  • Is the vulnerable code path reachable from untrusted input?
  • What compensating controls exist?
  • What's the actual blast radius if this were exploited?
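None of this requires fancy tooling. Even a crude triage filter sitting between the scanner export and the ticket queue captures most of the value. Here's a minimal sketch; every field name and threshold in it is mine, purely for illustration, not taken from any particular scanner:

    # Crude context-aware triage: downgrade scanner severity when a finding
    # fails basic deployment/reachability checks. Field names are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Finding:
        cve_id: str
        cvss_base: float            # what the scanner reports
        running: bool               # is the vulnerable component actually executing?
        reachable: bool             # can untrusted input reach the vulnerable path?
        internet_facing: bool
        compensating_controls: bool
        production: bool

    def triage(f: Finding) -> str:
        """Turn a raw CVSS number into an action bucket using deployment context."""
        if not f.running:
            return "housekeeping"      # a file on disk, like that jar in the backup
        if not f.reachable:
            return "backlog"           # the vulnerable code path can't be hit
        score = f.cvss_base
        if f.compensating_controls:
            score -= 2.0               # arbitrary illustrative discounts
        if not f.internet_facing:
            score -= 1.5
        if not f.production:
            score -= 1.5
        if score >= 9.0:
            return "fix now"
        if score >= 7.0:
            return "fix this sprint"
        return "scheduled patching"

    log4shell_in_backup = Finding("CVE-2021-44228", 10.0, running=False,
                                  reachable=False, internet_facing=False,
                                  compensating_controls=False, production=False)
    print(triage(log4shell_in_backup))   # -> housekeeping, not a fire drill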

2. Focus on Exploitability, Not Just Severity

CVSS has an "Exploitability" component, but it's still theoretical. Look at:

  • Is there a public exploit?
  • Is it being actively exploited in the wild?
  • Does it require conditions that don't exist in your environment?

Resources like CISA's Known Exploited Vulnerabilities (KEV) catalog are more useful for prioritization than raw CVSS scores.
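As a concrete example, CISA publishes KEV as a JSON feed, so checking whether a finding is on it takes a few lines. The URL and field names below are what CISA publishes at the time of writing; check cisa.gov for the current location if it has moved.

    # Check scanner findings against CISA's Known Exploited Vulnerabilities (KEV)
    # catalog. Feed URL and field names as published at the time of writing.
    import json
    import urllib.request

    KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
               "known_exploited_vulnerabilities.json")

    with urllib.request.urlopen(KEV_URL) as resp:
        kev = json.load(resp)

    actively_exploited = {item["cveID"] for item in kev["vulnerabilities"]}

    # Findings straight out of the scanner export (second ID is a made-up placeholder).
    scanner_findings = ["CVE-2021-44228", "CVE-2199-00001"]

    for cve in scanner_findings:
        if cve in actively_exploited:
            print(f"{cve}: in KEV, known exploitation in the wild - prioritize")
        else:
            print(f"{cve}: not in KEV - weigh against context before escalating")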

3. Build Threat Models, Not Just Scan Reports

Understand how your applications actually work. Where does untrusted data enter? What's the attack surface? Which components are internet-facing versus buried behind layers of controls?

A basic threat model will help you understand which vulnerabilities actually matter in your environment.

4. Push Back on Checkbox Security

If your auditors or compliance team don't understand why a dormant jar file isn't a critical risk, educate them. Document your risk-based rationale. Good auditors will appreciate the nuance, and if they don't, you have a different problem.

5. Track What Actually Matters

Instead of measuring "time to remediate critical CVEs," measure things like:

  • Reduction in internet-facing attack surface
  • Coverage of critical assets by compensating controls
  • Mean time to detect actual security incidents

These metrics are harder to game and more closely aligned with actual security outcomes.
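They're also measurable without much ceremony. Here's a minimal sketch of the last one, mean time to detect, computed from incident records; the field names and sample data are hypothetical placeholders for whatever your incident tracker actually stores.

    # Mean time to detect (MTTD) from incident records. Field names and sample
    # data are hypothetical; adapt to whatever your incident tracker exports.
    from datetime import datetime, timedelta

    incidents = [
        {"occurred": datetime(2026, 1, 4, 9, 30),   "detected": datetime(2026, 1, 4, 14, 0)},
        {"occurred": datetime(2026, 2, 11, 22, 15), "detected": datetime(2026, 2, 13, 8, 45)},
        {"occurred": datetime(2026, 3, 2, 6, 0),    "detected": datetime(2026, 3, 2, 7, 10)},
    ]

    gaps = [i["detected"] - i["occurred"] for i in incidents]
    mttd = sum(gaps, timedelta()) / len(gaps)

    print(f"Mean time to detect: {mttd}")
    # Harder to game than "critical CVEs closed this quarter", and it tells you
    # something real about whether your monitoring actually works.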

The Bottom Line

Vulnerability management is supposed to reduce risk, not generate busywork. When we treat CVE scores as absolute truth divorced from context, we end up with security teams drowning in noise while real risks go unaddressed.

A CVE score is a starting point for a conversation, not the end of one. The vulnerability in your environment isn't the same as the vulnerability in the CVE description - it's filtered through your architecture, your controls, your threat model, and your actual deployment.

Security teams need to get comfortable saying "yes, this is a critical CVE, and no, it's not a critical risk to us" and being able to defend that position with evidence. That takes more work than just running a scanner and chasing red items. But it's the only way to actually improve security rather than just checking boxes.

That Log4j jar file in the backup? We deleted it anyway. Took five minutes. But the hours spent documenting, justifying, and processing it through change control? That's time we could have spent on things that actually matter.

Let's stop optimizing for audit theatre and start optimizing for security.
