
3 dimensions of application risk you need to prioritize to reduce your alert backlog

Educational | October 5, 2023 | 7 min read

Not every vulnerability is a risk to your business.

Traditionally, application security testing (SAST, DAST, SCA, etc.) tools and processes simply cross-reference vulnerability databases, industry lists, and policy libraries to flag security issues. But not every XSS, exposed secret, or unencrypted S3 bucket is worth your team’s time and focus. Unfortunately, distinguishing which issues are important can be like panning for gold in a rushing river—never-ending and demanding diligence, patience, and a touch of luck.

With the pace at which modern applications evolve and the sheer complexity of all their interconnected components (data flows, pipelines, environments, infrastructure, technologies, etc.), identifying real, business-critical risks is more challenging than ever. But it’s also more important than ever. Taking a one-dimensional approach to identifying application security risks creates noisy backlogs of false positives that under-resourced security teams can’t possibly handle.

To keep up, teams need a holistic way to prioritize risk based on their application architecture, the nature of their business, and overall risk tolerance. Only when you can determine what is and isn’t a real risk can you allocate your resources to more efficiently and effectively improve your application security posture.

In this post, we’ll cover the three dimensions of risk prioritization—from basic risk indicators to the likelihood and impact of risk—to make sense of your backlog and eliminate risk.

Dimension 1: Basic risk indicators

The first dimension of risk includes indicators that are well-defined by various industry frameworks and on which first-generation AppSec tools over-rely to flag and prioritize potential risks (a short sketch of how these indicators can combine follows the list).

  • OWASP Top Ten: Globally recognized as one of the most important standards for developers and application security, the OWASP Top Ten and affiliated projects such as the OWASP API Security Top 10 and cheatsheets such as the Secure Product Design Cheatsheet include categories of risks that have informed tools and processes for decades.
  • CVSS Calculator: Common Vulnerabilities and Exposures (CVE) records identify and track specific security vulnerabilities and exposures, including a brief description of the vulnerability, its severity, affected software, potential impact, and references to related information. A CVSS score quantifies the potential impact and exploitability of a CVE, assigning it a severity score between 0 and 10.
  • CWSS Scoring System: The Common Weakness Enumeration (CWE) catalogs and describes common weaknesses, providing details about the type of vulnerability, common manifestations, consequences, and potential mitigations. The CWSS score is a community-based effort to “prioritize software weaknesses in a consistent, flexible, open manner.”
  • CIS Benchmarks: These benchmarks provide configuration guidelines for cloud providers and software supply chains and are leveraged in IaC, CI/CD, and SCM security tools to build policies to enforce these best practices.
  • Known Exploited Vulnerabilities (KEV): Maintained by CISA, this catalog includes known vulnerabilities with confirmed exploits in the wild.
  • Exploit Prediction Scoring System (EPSS): EPSS analyzes data from CVE details, CVSS v3 vectors, security scanners, published exploit code, etc., and assigns a probability score between 0 and 1 (0% to 100%). The higher the score, the greater the probability that a vulnerability will be exploited in the next 30 days.
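
To make the first dimension concrete, here is a minimal sketch of how these indicators might be pulled together into a naive triage decision. The data model, thresholds, and function names are illustrative assumptions, not the scoring logic of any particular tool.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """A vulnerability finding enriched with first-dimension indicators."""
    cve_id: str
    cvss_score: float  # 0.0-10.0 severity from the CVSS calculator
    epss_score: float  # 0.0-1.0 probability of exploitation in the next 30 days
    in_kev: bool       # listed in CISA's Known Exploited Vulnerabilities catalog

def basic_priority(finding: Finding) -> str:
    """Naive triage using only basic indicators, with no application context."""
    if finding.in_kev or finding.epss_score >= 0.5:
        return "urgent"   # confirmed or highly probable exploitation
    if finding.cvss_score >= 7.0:
        return "high"
    return "backlog"

# A severe CVE with a tiny exploit probability still lands in "high" here,
# which is exactly the kind of noise the next two dimensions help cut.
print(basic_priority(Finding("CVE-2023-0001", 8.8, 0.02, False)))  # -> "high"
```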

While these examples are the bedrock of application security best practices, focusing on these findings without the context of your specific application and business can lead to a massive amount of noise and a never-ending backlog.

Determining whether or not a finding is actually risky to your organization—and to what degree—depends on factors like the potential likelihood of a risk materializing given your unique application architecture and the nature of the finding, as well as the potential impact such a risk would have on your business or application.

Dimension 2: Likelihood of a risk materializing

To evaluate the likelihood of each security issue materializing into a risk given your application architecture and environment, consider factors like the following:

  • Deployed: If the risk is deployed in production or test, potential consequences are amplified. Any vulnerability in production has the potential to affect users directly, expose data, and cause downtime, loss of revenue, and reputational damage.
  • Internet exposed: Internet-exposed vulnerabilities add an extra layer of risk and urgency since there’s increased accessibility for potential attackers.
  • Imported in code: If the vulnerable package or module is actually imported and used in your code, it can significantly expand your attack surface.
  • Behind a gateway, like an API Gateway or Web Application Firewall (WAF): Gateways might mitigate risk to some extent, which can influence urgency and priority, depending on how effective the gateway’s protection is against the specific vulnerability. However, having a vulnerability behind a gateway doesn’t eliminate the risk—it shifts the focus of defense to a specific layer. Gateways can still be bypassed, misconfigured, and exploited, so deeply understanding the vulnerability and its context is critical in order to prioritize.
  • Exposure timeline: Knowing how long a vulnerability or weakness has been exposed is an important factor in determining the likelihood that it will actually be exploited.

Determining these factors will help shed light on the general nature and context of that specific risk and how it relates to your unique environment. These signals, along with domain-specific signals such as whether an exposed secret is valid or a vulnerability is exploitable, will determine the likelihood of it posing a risk to your business. Evaluating them lets you eliminate low-likelihood alerts, effectively downgrading findings that are unlikely to present significant risk, as sketched below.
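
As a minimal sketch of how these likelihood signals might be folded into a single multiplier, consider the following. The factor names and weights are illustrative assumptions, not a prescribed formula.

```python
from dataclasses import dataclass

@dataclass
class DeploymentContext:
    """Where and how the vulnerable code actually runs."""
    deployed: bool          # running in production or test
    internet_exposed: bool  # reachable by external attackers
    imported_in_code: bool  # package or module is actually imported and used
    behind_gateway: bool    # fronted by an API gateway or WAF
    days_exposed: int       # how long the weakness has been present

def likelihood_multiplier(ctx: DeploymentContext) -> float:
    """Scale a base severity up or down based on application context."""
    multiplier = 1.0
    if not ctx.deployed:
        multiplier *= 0.2   # not running anywhere: far less likely to matter
    if ctx.internet_exposed:
        multiplier *= 1.5   # directly reachable by potential attackers
    if not ctx.imported_in_code:
        multiplier *= 0.3   # declared but unused dependency: smaller attack surface
    if ctx.behind_gateway:
        multiplier *= 0.8   # partial mitigation, not elimination of the risk
    if ctx.days_exposed > 90:
        multiplier *= 1.2   # long exposure windows raise the odds
    return multiplier

# An internet-exposed, long-lived issue behind a WAF still ends up amplified.
ctx = DeploymentContext(deployed=True, internet_exposed=True,
                        imported_in_code=True, behind_gateway=True,
                        days_exposed=120)
print(round(likelihood_multiplier(ctx), 2))  # -> 1.44
```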

Dimension 3: Impact of a risk

The third dimension of risk assesses the broad impact of a potential risk—from both a business and a remediation impact perspective. Determining impact requires looking at the bigger picture, assessing connections between application and supply chain components, and taking the business as a whole into consideration. Indicators you can use to ascertain the overall impact include whether the risk is found in…

  • High Business Impact (HBI) applications: Risks in repositories and applications that have a significant impact on revenue, user experience, or business continuity, or that handle sensitive data, need to be addressed with a greater sense of urgency.
  • Shared modules: A vulnerability in a shared module or library can have far-reaching consequences beyond just the codebase where it is initially identified. Whether that’s a vulnerability in a module being used in multiple locations or a vulnerability that repeats in multiple locations, understanding the potential blast radius of a vulnerability and its root cause can help identify and prioritize the issues that have the most significant impact on your security posture. Remediating the root cause will also help solve other connected security issues across your codebase.
  • Toxic combinations: On their own, some vulnerabilities might carry risk, but when combined with other issues, this risk can be compounded into a high-severity risk. For example, if there’s an API that is exposed to the internet, doesn’t have authentication and authorization mechanisms in place, and also exposes sensitive data, this creates a high-severity risk that needs to be addressed urgently. Exposing toxic combinations in your codebase helps identify and prioritize critical issues that independently may not pose a threat, but given their architecture and dependencies, become a severe risk.

Understanding a risk’s impact will reveal the true risk to the organization and inform how it should be prioritized accordingly against other risks and security priorities.
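
To make the toxic-combination idea concrete, here is a hypothetical sketch that flags the exposed-API example above. The attribute names are illustrative, not a real product’s data model.

```python
from dataclasses import dataclass

@dataclass
class ApiEndpoint:
    """Simplified view of an API entry point and its surrounding controls."""
    path: str
    internet_exposed: bool
    has_authn: bool              # authentication enforced
    has_authz: bool              # authorization enforced
    returns_sensitive_data: bool

def is_toxic_combination(api: ApiEndpoint) -> bool:
    """Individually tolerable issues that together form a high-severity risk."""
    return (
        api.internet_exposed
        and not (api.has_authn and api.has_authz)
        and api.returns_sensitive_data
    )

# An exposed, unauthenticated endpoint returning sensitive data gets escalated.
endpoint = ApiEndpoint("/v1/users/export", True, False, False, True)
print(is_toxic_combination(endpoint))  # -> True
```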

Automating and operationalizing risk-based prioritization at scale

Ultimately, by taking all three dimensions into account, you’re able to cut through the noise to focus on high-severity risk, ensuring you’re able to move quickly without sacrificing quality or security. To operationalize risk prioritization and effectively manage attack surface risk, you need rich, consistent context and a way to take action on risks across the development lifecycle.

  • Build and maintain a comprehensive and continuous inventory: The only way to get the context you need to prioritize based on likelihood and impact is to have a solid foundational inventory. Maintaining an up-to-date eXtended Software Bill of Materials (XBOM), including all code modules, dependencies, code contributors, pipelines, entry points, etc., will ensure you have insight into all the various components, controls, data, tools, and processes that modern applications and supply chains are made up of, their connections and associated risks, and how they change over time. While code is the single source of truth for your application, gathering as much context as you can from your runtime environment is important for understanding several likelihood factors.
  • Streamline remediation and embed risk-based developer guardrails: The whole point of being able to detect high business impact risks is to cut down the backlog and noise of irrelevant security issues that may distract your team and waste your precious resources. That noise also creates friction between security and developers when low-priority or false positive alerts get in the way of a release. By automatically prioritizing risk through the contextual lens of the organization, you can right-size the actions to take when issues are flagged. For example, you may want to block a build only for a business-critical risk that involves sensitive data, not for every vulnerability with a CVSS score of 8 or higher (see the sketch below). This risk-based approach empowers your organization to balance development velocity and security.
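
One way to picture such a risk-based guardrail is a small policy that decides the pipeline action from combined severity, business impact, and likelihood. The function, thresholds, and action names below are illustrative assumptions, not a specific product’s policy engine.

```python
def guardrail_action(cvss_score: float,
                     high_business_impact: bool,
                     involves_sensitive_data: bool,
                     likelihood: float) -> str:
    """Pick a pipeline action from combined severity, impact, and likelihood."""
    if high_business_impact and involves_sensitive_data and likelihood >= 1.0:
        return "block-build"  # business-critical risk: stop the release
    if cvss_score >= 8.0:
        return "open-ticket"  # severe but lacking business context: track it
    return "log-only"         # low priority: keep developers unblocked

# A CVSS 9.1 finding without business context does not block the release...
print(guardrail_action(9.1, False, False, 1.0))  # -> "open-ticket"
# ...but a lower-scored one touching sensitive data in an HBI app does.
print(guardrail_action(6.5, True, True, 1.5))    # -> "block-build"
```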

Using deep, real-time code analysis and runtime intelligence, Apiiro uniquely contextualizes security findings—from third-party tools or native Apiiro solutions—to determine their impact and likelihood of risk. That context determines how findings are prioritized, which workflows are triggered, and how feedback is surfaced to developers. With risk-specific insights, such as whether a secret is valid or a vulnerability is exploitable, along with actionable remediation guidance, Apiiro is built to make AppSec efficiency a reality.

To see how prioritization can reduce your backlog, schedule an Apiiro demo, or learn how to build and scale a risk-based program in this comprehensive guide.

Payton O'Neal