
The gap between vulnerabilities that are present and vulnerabilities that matter is wider than most teams realize. Reachability analysis is how you tell the difference.
Here is a number that security teams rarely say out loud: the Cyentia Institute found that teams can realistically remediate roughly one in 10 vulnerabilities in their environment in a given month. That is not a failure of process. It is a reflection of how many findings modern scanning tools generate relative to how many engineers exist to address them, how long it takes to safely validate and deploy a fix, and how often teams are simply waiting on upstream maintainers to ship a patch at all. The assumption that you can simply work faster to close the gap is wrong. The question is whether you are working on the right things.
The underlying issue is that most scanners answer the wrong question. They tell you what vulnerabilities are potentially present in your dependency versions. They do not tell you which of those vulnerabilities can actually be reached in your application. A library can be listed in your dependency graph, flagged as critical, and assigned to an engineer for remediation, while sitting in a code path that is never invoked at runtime, never loaded in production, and unreachable by any request the application will ever receive.
Reachability analysis addresses this directly. The question it answers is not “is this vulnerability present?” but “can this application’s execution reach the vulnerable code?” These are different questions with different answers, and the gap between them is where most remediation effort gets wasted.
Reachability Is Not Exploitability. The Difference Matters.
Exploitability is a property of a vulnerability: a working proof-of-concept exists in the wild, and an attacker can use it. Reachability is a property of your specific deployment: given how this application is actually built and executed, can an attacker get from an entry point to the vulnerable function? A vulnerability can be exploitable in theory and completely unreachable in your environment. Prioritizing by CVSS score and EPSS alone does not tell you which situation you are in.
What reachability analysis actually answers is a set of questions that maps directly to what an AppSec team needs to make a decision: which assets are impacted, whether the vulnerable functionality appears in your code paths, where it appears with file and line evidence, how the application reaches it through call chains, and what action to take next. This is not a theoretical framework. It is the difference between triaging thousands of findings by severity score and knowing, for a specific vulnerability in a specific service, whether it is something that requires an engineer’s attention today.
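Those decision-relevant fields can be sketched as a simple record. The shape below is illustrative, not any particular tool's schema; the asset name, file path, and call chain are hypothetical, though the CVE and vulnerable symbol are the real Log4Shell details.

```python
from dataclasses import dataclass, field

@dataclass
class ReachabilityFinding:
    """Illustrative shape of a reachability-aware finding (not a real tool's schema)."""
    cve_id: str                       # the vulnerability in question
    asset: str                        # which asset is impacted
    vulnerable_symbol: str            # the function the CVE affects
    evidence: list = field(default_factory=list)    # (file, line) pairs
    call_chain: list = field(default_factory=list)  # entry point -> vulnerable symbol
    reachable: bool = False
    recommended_action: str = "close"  # "fix now" | "schedule" | "close"

# A hypothetical finding for Log4Shell in a made-up payments service:
finding = ReachabilityFinding(
    cve_id="CVE-2021-44228",
    asset="payments-service",
    vulnerable_symbol="org.apache.logging.log4j.core.lookup.JndiLookup.lookup",
    evidence=[("src/main/java/PaymentLogger.java", 41)],
    call_chain=["POST /charge", "PaymentController.charge", "PaymentLogger.log"],
    reachable=True,
    recommended_action="fix now",
)
```

Every field maps to one of the questions above: impacted assets, file-and-line evidence, the call chain that reaches the vulnerable code, and the action to take.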
The Gap in Every Scanner You’re Already Using
Static analysis tools and SCA platforms are fast, repeatable, and foundational. They are not going away, and this is not an argument against using them. It is an argument about what they cannot do.
Most tools trace call graphs and ask whether a vulnerable function appears in a reachable path. This is useful as far as it goes. The limitation is that it treats reachability as a code analysis problem when it is equally a vulnerability analysis problem. These tools do not ask what conditions the vulnerability actually requires to be triggered, and whether those conditions exist in this codebase. A vulnerability description contains implicit requirements: a specific input format, an authenticated session, a particular configuration state, a code path that processes untrusted data in a specific way. Translating that description into a concrete set of conditions, then cross-referencing those conditions against the code, is fundamentally different from tracing execution paths alone.
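Condition-based analysis can be sketched in a few lines. Everything here is illustrative: the CVE identifier is invented, and the fact model (what a static-analysis pass reports about one service) is an assumption, not a real scanner's output. The vulnerability pattern, unsafe YAML deserialization via `yaml.load`, is the kind of case where the conditions matter as much as the call path.

```python
# Triggering conditions as predicates over facts extracted from the codebase.
# The CVE ID and fact keys are hypothetical, for illustration only.
VULN_CONDITIONS = {
    "CVE-EXAMPLE-0001": [
        lambda facts: "yaml.load" in facts["called_functions"],  # vulnerable sink is invoked
        lambda facts: facts["untrusted_input_reaches_sink"],     # attacker data flows to it
        lambda facts: not facts["safe_loader_configured"],       # no mitigating configuration
    ],
}

def is_potentially_reachable(cve_id: str, facts: dict) -> bool:
    """A finding stays open only if every triggering condition holds."""
    return all(cond(facts) for cond in VULN_CONDITIONS[cve_id])

# Facts for one service: the vulnerable call exists, but the YAML it parses
# comes from a bundled config file, not from untrusted input.
facts = {
    "called_functions": {"yaml.load", "json.loads"},
    "untrusted_input_reaches_sink": False,
    "safe_loader_configured": False,
}

is_potentially_reachable("CVE-EXAMPLE-0001", facts)  # False: close the finding
```

A path-tracing tool sees `yaml.load` on a reachable path and flags it. Condition analysis sees that the second requirement, untrusted data reaching the sink, is not satisfied, and closes the finding with a stated reason.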
The consequence is that existing tools either flag everything conservatively, generating the volume problem described above, or miss the cases where the vulnerability conditions are satisfied in non-obvious ways. Neither outcome helps a security team make better decisions faster.
LLMs Can Reason About Vulnerability Conditions. With the Right Inputs.
This is where applying LLMs to reachability analysis changes the picture, with a significant caveat about how they are applied. An LLM given a raw code snippet and asked whether a vulnerability is reachable will frequently produce a confident wrong answer. The model lacks the context to reason about control flow, conditional branches, or the specific conditions the vulnerability requires. The output looks authoritative and is not reliable.
The approach that works is structured context. Provide the model with an abstract syntax tree, a call graph, and targeted code slices scoped to the relevant execution paths, alongside the vulnerability description translated into a set of triggering conditions, and the analysis changes qualitatively. The model can evaluate whether a conditional branch is ever satisfied given the code’s logic. It can reason about whether the data flow required by the vulnerability actually exists in the paths that reach the vulnerable function. It can identify paths that static tools flag as critical but that cannot execute given the code’s actual structure.
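What "structured context" means in practice can be sketched with Python's standard `ast` module on a toy source file. The function names and the vulnerability are invented; the point is the shape of what reaches the model: a call chain, code slices scoped to that chain, and the triggering conditions, rather than a raw file.

```python
import ast
import textwrap

# Toy application source. Names are invented for illustration.
SOURCE = textwrap.dedent("""
    def handler(request):
        data = request["body"]
        return parse(data)

    def parse(data):
        return vulnerable_deserialize(data)

    def unused_admin_tool():
        vulnerable_deserialize(open("x").read())
""")

tree = ast.parse(SOURCE)
functions = {n.name: n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)}

# Name-level call graph: function -> functions it calls directly.
call_graph = {
    name: sorted({
        c.func.id for c in ast.walk(fn)
        if isinstance(c, ast.Call) and isinstance(c.func, ast.Name)
    })
    for name, fn in functions.items()
}

def path_to(fn, target, graph, seen=()):
    """Depth-first search for a call chain from fn to target."""
    if fn in seen:
        return None
    callees = graph.get(fn, [])
    if target in callees:
        return [fn, target]
    for callee in callees:
        rest = path_to(callee, target, graph, seen + (fn,))
        if rest:
            return [fn] + rest
    return None

chain = path_to("handler", "vulnerable_deserialize", call_graph)

# The context handed to the model: the chain, code slices scoped to it
# (not the whole file), and the vulnerability's triggering conditions.
context = {
    "call_chain": chain,
    "code_slices": {n: ast.unparse(functions[n]) for n in chain if n in functions},
    "triggering_conditions": ["attacker-controlled bytes reach vulnerable_deserialize()"],
}
```

Note what the slice excludes: `unused_admin_tool` also calls the vulnerable function, but it is not on any path from the entry point, so it never enters the context. The model reasons only about code that can actually execute.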
This is not theoretical. Comparative testing across models, including open-weight models like Llama 3.1 and Qwen3, and security-fine-tuned models like Foundation-Sec-8B, shows measurable differences in false-positive reduction and consistency across repeated analyses. The honest finding from that benchmarking is that model choice matters, and context quality matters more. There are also categories of failure, particularly around missing runtime assumptions and framework-managed control flow, where human validation remains essential. The goal is not to automate judgment out of the process. It is to direct that judgment toward the findings that actually warrant it, rather than distributing it thinly across thousands of results that a rule-based tool could not prioritize.
This is also the point that most vendors discussing reachability do not make. Endor Labs uses call graphs. Orca uses agentless side-scanning. Mend uses what it calls Effective Usage Analysis. All of them present reachability as primarily a code traversal problem. The framing that vulnerability conditions need to be analyzed and cross-referenced against code, not just that code paths need to be traced, represents a meaningfully different approach to the problem.
Containers Make the Problem Worse. Reachability Helps Here Too.
The same principle extends into containerized environments, where the gap between what is present and what is reachable is often even wider. Container scanners flag vulnerable code in image layers that never execute in production. Multi-stage builds strip runtime dependencies. Entrypoint configurations exclude entire sections of an image from the executable path. A vulnerable library present in a build stage that does not survive into the runtime image is not a risk to a running application. A scanner that does not account for container structure treats it as one anyway.
Consider a microservice built with a multi-stage Docker build. The build stage includes a C compiler toolchain, several build-only dependencies, and a utility library with a known critical CVE. None of these survive into the runtime image. The final image contains only the compiled binary and its runtime dependencies. A standard container scanner returns the critical finding on the utility library. Reachability analysis that accounts for container structure closes it immediately. The engineer who would have spent time on that finding instead works on something that can actually be reached.
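The suppression logic for that scenario can be sketched in a few lines, assuming the scanner already knows which image layers survive into the runtime stage and which layer installed each package. Both inputs are illustrative, not a real scanner's API, and the package names and CVE identifiers are invented.

```python
# Layers that survive the multi-stage build into the runtime image (assumed input).
RUNTIME_LAYERS = {"layer-3", "layer-4"}

# package -> (layer that installed it, path in the image). Illustrative data.
PACKAGE_LOCATIONS = {
    "gcc":       ("layer-1", "/usr/bin/gcc"),              # build toolchain only
    "liblegacy": ("layer-2", "/usr/lib/liblegacy.so"),     # critical CVE, build stage only
    "libssl":    ("layer-3", "/usr/lib/libssl.so"),        # present at runtime
}

def filter_findings(findings):
    """Drop findings on packages whose layer never reaches the runtime image."""
    kept = []
    for cve, package in findings:
        layer, _path = PACKAGE_LOCATIONS[package]
        if layer in RUNTIME_LAYERS:
            kept.append((cve, package))
    return kept

raw = [
    ("CVE-EXAMPLE-1111", "liblegacy"),  # standard scanner flags this as critical
    ("CVE-EXAMPLE-2222", "libssl"),
]
filter_findings(raw)  # only the libssl finding survives
```

The critical finding on `liblegacy` closes not because its severity was argued down, but because the layer containing it does not exist in the running container.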
An attacker does not care what is in your build layer. They care what runs. Security tooling should reflect the same reality.
The Right 10 Percent
The operational outcome of reachability analysis is a smaller, more reliable, more defensible set of vulnerabilities to act on. Not because risk was deprioritized. Because the analysis finally reflects how the application actually behaves, where it runs, what it executes, and what conditions would have to be true for a vulnerability to be exploited in this specific deployment.
If you can realistically remediate one in 10 vulnerabilities per month, the question is not how to go faster. It is how to make sure you are fixing the right 10 percent. Combining static analysis, container-aware scanning, and LLM-based reasoning with structured vulnerability context is the most accurate and scalable path to answering that question. The volume of findings is not going down. The tooling needs to catch up to the reality of how applications are built and deployed, not just report on what is technically present in a dependency graph.
___
About the author
Alexandra Selldorff is Head of Engineering at Manifest, leading work on SBOM vulnerability scanning. Previously, she was an Engineering Manager at Rula and a Forward Deployed Engineer at Palantir. She has built and operated software in highly regulated environments, including healthcare and government, and is passionate about delivering mission-critical systems quickly and securely. Lexi enjoys getting deep into data, and her work at Manifest focuses on the real-world challenges of vulnerability matching, package identification, and reducing noise in vulnerability management.