To keep pace with the speed of AI code generation and vulnerability scanning, enterprises need an AI control plane.
In April 2026, GitHub’s CEO reported that weekly commit volume had climbed to roughly 275 million, up from a total of 1 billion commits in all of 2025, putting the platform on pace for 14 billion commits by year-end: a 14x jump in a single year.
How is this exponential growth possible? Agentic coding.
That is just the production side of the curve. The attack side is moving just as fast, and in the same direction.
It’s the wild west of AI coding. We need to regain control.
1 – AI coding agents are producing more code, and more of it is exploitable.
This is the signal we have been broadcasting for a year: 4x more code, 10x more risk.
AI-generated code contains vulnerabilities at 2.74x the rate of human-written code, with AI co-authored code showing 1.7x more major issues, 75% higher misconfiguration rates, and security vulnerabilities occurring at 3x the baseline.
Code churn is up, duplication has quadrupled, and refactoring – the hygienic practice that keeps codebases healthy – has collapsed from 25% of developer activity to roughly 10% by 2024.
Mean time-to-exploit (TTE) is now 10 hours.
Not only is there more code, the code being generated is more prone to exploitability than ever before. And if AI is writing the code, who’s securing it?
Three years from now, we will still have a lot of legacy code and engineers with endless backlogs, and we will still need to unlock productivity responsibly. Achieving and managing that growth will take AI agents to produce code, AI agents to review code, and AI agents to secure code, like Apiiro’s Guardian Agent.
2 – AI is accelerating the discovery of vulnerabilities faster than defenders can triage them.
AI systems can now generate working CVE exploits in 10 to 15 minutes at roughly $1 per exploit, and 67% of CVEs in 2026 are zero-days (many discovered by AI), up from 16% in 2018. March 2026 alone saw 35 CVEs attributed to AI-generated code, up from just 6 in January.
NIST’s National Vulnerability Database is falling behind on CVE enrichment and pushing a growing share into its backlog, meaning the federal baseline for vulnerability context is no longer timely; enrichment has become work enterprises must do for themselves.
Open-source maintainers cannot keep up with the flood of AI-generated bug reports they now receive. The average application depends on hundreds of third-party packages; what happens when those packages are abandoned by their maintainers, or overwhelmed by unactionable vulnerability reports?
“AI is industrializing both the rate of generated code and the discovery of vulnerabilities at a rate that defenders cannot match. Open-source maintainers, the stewards of components running in 97% of commercial codebases, cannot triage all their reports. The institutional scaffolding we built over the last two decades is fracturing.”
The honest diagnosis: you cannot fully inventory, let alone fully secure, your attack surface.
Risk elimination is a losing battle. The attack surface is now expanding faster than any team, federal agency, or maintainer community can track. The only path forward is twofold:
1. Reduce the backlog: through risk-based prioritization and automatic fixes.
2. Stop the bleeding: by preventing vulnerable and non-compliant code from ever being generated, turning every prompt into a secure prompt.
Every CISO operating in this environment now needs these two capabilities working in tandem.
At a high level, this is a mindset shift from “what are my most critical vulnerabilities?” to “how can I continuously patch and minimize my blast radius?” Use SCPs (service control policies), data boundaries, and approved Terraform modules to enforce cloud guardrails, shifting to an “assume breach” mindset.
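As a concrete illustration, here is a minimal sketch of one such guardrail, assuming AWS: a service control policy that denies every principal in the organization the ability to disable CloudTrail audit logging. The policy is illustrative only; real guardrail sets are broader and tuned to each environment.

```python
import json

# Illustrative AWS Service Control Policy (SCP): deny any principal in the
# organization from disabling CloudTrail audit logging. A real guardrail set
# would cover many more actions (public buckets, open security groups, etc.).
scp_guardrail = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyDisablingAuditLogs",
            "Effect": "Deny",
            "Action": ["cloudtrail:StopLogging", "cloudtrail:DeleteTrail"],
            "Resource": "*",
        }
    ],
}

print(json.dumps(scp_guardrail, indent=2))
```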
Progress here requires risk-based prioritization grounded in exploitability, reachability, and business impact, paired with automated remediation that doesn’t impede developers.
This is what Apiiro’s AutoFix Agent is built for. It operates directly inside developer IDEs, without plugins, and decides when to fix a vulnerability, when to enforce a guardrail, and when to generate an audit-ready risk acceptance, all based on runtime context and organizational policy.
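To make that decision logic concrete, here is a minimal sketch of risk-based prioritization. The Finding fields, weights, and thresholds are invented for illustration; they are not Apiiro’s actual scoring model.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    exploitability: float    # 0..1, e.g. an EPSS-style likelihood
    reachable_in_prod: bool  # derived from code-to-runtime analysis
    internet_facing: bool
    business_impact: float   # 0..1, e.g. data sensitivity of the service

def risk_score(f: Finding) -> float:
    # Exploitability weighted by business impact, amplified by exposure.
    score = f.exploitability * (0.5 + 0.5 * f.business_impact)
    if f.reachable_in_prod:
        score *= 2.0
    if f.internet_facing:
        score *= 1.5
    return score

def next_action(f: Finding) -> str:
    s = risk_score(f)
    if s >= 1.0:
        return "auto-fix"           # open a fix PR immediately
    if s >= 0.4:
        return "enforce-guardrail"  # block changes that widen the exposure
    return "risk-acceptance"        # document and defer, with audit trail

findings = [
    Finding("CVE-2026-0001", 0.9, True, True, 0.8),
    Finding("CVE-2026-0002", 0.3, False, False, 0.2),
]
for f in sorted(findings, key=risk_score, reverse=True):
    print(f.cve_id, round(risk_score(f), 2), next_action(f))
```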
Reducing the backlog is only half the equation if vulnerable and non-compliant code keeps flowing into the codebase. The cheaper and more scalable intervention is to prevent that code from being generated in the first place.
This is what Apiiro’s Guardian Agent does. It rewrites developer prompts into secure prompts before the AI coding agent executes them, dynamically injecting security and compliance instructions based on intent, code-to-runtime context, and organizational policy.
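The pattern is easier to see in code. Below is a minimal sketch of secure-prompt rewriting; the intent keywords and policy text are hypothetical placeholders, and a production system would derive them from code-to-runtime context and organizational policy rather than keyword matching.

```python
SECURITY_POLICIES = {
    "auth": "Use the organization's approved OIDC library; never roll custom session tokens.",
    "sql": "Use parameterized queries; string-built SQL is non-compliant.",
    "secrets": "Read credentials from the secrets manager; never hardcode them.",
}

# Toy intent detection; a real agent infers intent from far richer context.
INTENT_KEYWORDS = {
    "auth": ["login", "auth", "session"],
    "sql": ["sql", "query", "database"],
    "secrets": ["api key", "password", "token"],
}

def detect_intents(prompt: str) -> list:
    lowered = prompt.lower()
    return [intent for intent, words in INTENT_KEYWORDS.items()
            if any(w in lowered for w in words)]

def rewrite_to_secure_prompt(prompt: str) -> str:
    rules = [SECURITY_POLICIES[i] for i in detect_intents(prompt)]
    if not rules:
        return prompt
    preamble = ("Follow these organizational security requirements:\n"
                + "\n".join(f"- {r}" for r in rules))
    return f"{preamble}\n\nTask: {prompt}"

print(rewrite_to_secure_prompt("Write a login endpoint that queries the users database"))
```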
In summary: Code is being written faster than humans can review it, and the reviewers themselves are increasingly AI. This is widely known, but here is the part that determines whether AppSec AI actually works at enterprise scale:
A secure prompt and an accurate fix both require the same underlying asset: a deep, deterministic understanding of each customer’s unique software architecture, from code to runtime.
Apiiro calls this the Software Graph and Risk Graph, both generated continuously by our patented Deep Code Analysis (DCA) technology. It maps repositories, services, APIs, data flows, trust boundaries, runtime deployments, compensating controls, and ownership.
Without this deterministic Data Fabric, an AppSec AI agent is guessing. With it, the agent can tell you not just that a vulnerability exists, but whether it is actually reachable in production, whether the affected service is internet-facing, who owns the code, what compensating controls already sit in front of it, and whether a fix can be applied safely.
DCA answers why security findings matter to the business, and what to do next.
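To ground this, here is a toy example of one question such a graph answers: whether a vulnerable dependency is reachable from an internet-facing entry point. The services and edges are invented; a real graph also carries data flows, trust boundaries, compensating controls, and ownership.

```python
from collections import deque

# Invented service graph: the same vulnerable dependency appears twice,
# but only one instance sits on an internet-reachable path.
edges = {
    "internet-gateway": ["payments-api"],
    "payments-api": ["billing-service"],
    "billing-service": ["log4j@2.14.1"],  # reachable from the internet
    "batch-worker": ["log4j@2.14.1"],     # internal-only path
}

def reachable(graph: dict, start: str, target: str) -> bool:
    # Breadth-first search over the directed dependency graph.
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(reachable(edges, "internet-gateway", "log4j@2.14.1"))  # True: prioritize
print(reachable(edges, "internet-gateway", "batch-worker"))  # False: internal-only
```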
Last decade, enterprises adopting multiple clouds discovered that cloud-specific tools were not enough; they needed a cross-cloud control plane to govern identity, policy, and posture consistently. The same architectural moment has arrived for AI-assisted development.
Enterprises are not standardizing on a single coding agent. They are running Copilot, Claude Code, Cursor, Windsurf, internal agents, and increasingly autonomous agent swarms, across an engineering organization that spans thousands of repositories and dozens of business units.
Governing this environment tool-by-tool, model-by-model, IDE-by-IDE, is the same mistake the industry made with per-cloud security tools a decade ago. The answer then, and the answer now, is an AI control plane: a single layer that sees across the estate, understands the architecture, enforces organizational policy consistently, and ties every automated decision back to a source of truth.
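A minimal sketch of that control-plane shape, under stated assumptions: the agents, policies, and "diff" format below are invented, but the structure is the point, since every proposed change passes through one shared policy layer no matter which agent produced it.

```python
# Invented policy checks; a real control plane evaluates architecture-aware
# context, not string matches.
POLICIES = [
    lambda change: "hardcoded AWS key" if "AKIA" in change["diff"] else None,
    lambda change: "raw SQL string" if "SELECT * FROM" in change["diff"] else None,
]

def control_plane_review(change: dict) -> dict:
    # Collect every policy violation for this proposed change.
    violations = [msg for policy in POLICIES if (msg := policy(change))]
    return {"agent": change["agent"], "allowed": not violations,
            "violations": violations}

# The same verdicts apply no matter which coding agent wrote the change:
for agent in ("copilot", "claude-code", "cursor"):
    print(control_plane_review({"agent": agent, "diff": 'key = "AKIA0000EXAMPLE"'}))
```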
The curve is not going to bend back. Code volume will keep rising and AI-assisted exploitation will keep getting cheaper. What leaders can control is the operating model they put in place to deal with it.
Prioritize ruthlessly, based on real exploitability and business impact, because chasing every finding is no longer a credible plan.
Prevent at the source by governing AI coding agents with architecture-aware secure prompts. The cheapest vulnerability is the one that is never generated.
Unify the control plane, because a fleet of coding agents without a governance layer is just chaos under the illusion of control.
This is what Apiiro’s Agentic Application Security Platform is designed to be, consolidating the essential AppSec pillars of ASPM, AST, software supply chain security, threat modeling, and secure coding under one graph, one policy engine, and one set of trusted agents.
See the Apiiro AI Control Plane in action → Get a Demo