In Q3 2024, Google’s CEO Sundar Pichai revealed that more than 25% of new code at Google was being generated by AI. By Q4 2025, that number had doubled to roughly 50%.
What Google has accomplished reflects what’s happening across the enterprise software world. And it raises a question every security leader needs to answer: as AI generates more of our code, how do we ensure the security model keeps pace?
The productivity benefits of AI coding assistants are real and well-documented. Developers are shipping features faster, with less friction, and organizations are seeing measurable gains in output.
Our own research found that AI-assisted teams produce 3–4× more commits than their non-AI peers. That’s the upside. The same research found that those teams also ship 10× more security findings.
More code means more APIs, more dependencies, more integration points, and more potential vulnerabilities accumulating faster than security teams can manually review.
Most security leaders are in the same position: output is growing faster than their capacity to review it.
Traditional AppSec was designed for a world where humans wrote code and scanners evaluated it after the fact. That model worked when development cycles were measured in weeks or months. It struggles when AI can generate a feature in minutes.
When code is generated at machine speed, there is no natural moment for a human security review to occur before vulnerabilities are introduced. By the time SAST, SCA, or penetration testing surfaces an issue, the code has already been written, reviewed, merged, and often deployed. Remediation at that stage is expensive and increasingly difficult to sustain at scale.
The implication isn’t to slow down AI adoption. It’s to evolve security so it operates at the same speed as development.
Every major security domain has followed this arc: from detection to prevention.
Network security moved from intrusion detection to firewalls. Endpoint security moved from incident response to prevention. Web security evolved from monitoring to WAFs. Application security has historically remained detection-heavy, and the rapid growth of AI-generated code is accelerating its evolution.
The organizations that will manage AI-era development most effectively are those that shift security left of the code itself, embedding architectural context, organizational policy, and threat intelligence directly into the AI coding workflow before a vulnerable line is ever generated.
Google’s trajectory is a leading indicator, not an outlier. As AI coding becomes the norm across the enterprise, three principles will define how security leaders stay ahead:
Embrace the productivity, then secure it. AI-generated code is not inherently less secure, but it requires security infrastructure designed for its scale and velocity.
Shift investment toward prevention. Detection tools remain essential, but cannot be the primary defense against AI-scale code generation. The earlier security is embedded, the less expensive it is.
Demand architecture-aware security. Generic scanning tools cannot understand the full context of AI-generated code. Security must be grounded in how your systems are actually built from code to runtime.
Apiiro built Guardian Agent specifically for this moment.
Rather than scanning AI-generated code after it’s written, Guardian Agent operates at the prompt level. It rewrites developer prompts with security guidelines, threat context, organizational policies, and real architecture signals before AI coding assistants generate a single line of code.
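To make the prompt-level idea concrete, here is a minimal sketch of what rewriting a developer prompt with security context might look like. This is purely illustrative: the function name, parameters, and structure are hypothetical and do not reflect Apiiro's actual implementation or API.

```python
def rewrite_prompt(prompt: str, policies: list[str], arch_signals: list[str]) -> str:
    """Illustrative only: prepend organizational security context to a
    developer's prompt before it reaches an AI coding assistant, so the
    assistant generates code under those constraints from the start."""
    context = ["Follow these security requirements:"]
    context += [f"- {p}" for p in policies]
    if arch_signals:
        context.append("Relevant architecture context:")
        context += [f"- {s}" for s in arch_signals]
    # The original request is preserved verbatim after the injected context.
    return "\n".join(context) + "\n\n" + prompt


# Hypothetical usage: policies and signals would come from org config,
# a CMDB, or runtime analysis rather than hard-coded lists.
secured = rewrite_prompt(
    "Add a login endpoint to the user service",
    policies=["Use parameterized queries", "Never log credentials"],
    arch_signals=["user-service is internet-facing"],
)
```

The key design point the sketch illustrates: the security context travels with the prompt itself, so no separate review step has to catch up with generated code afterward.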
Guardian Agent fills a gap that has existed in application security for decades. Network, endpoint, and web security all evolved from detection to prevention. AppSec was the last holdout, and Guardian Agent is its first real prevention layer.
This means that as AI generates a growing share of enterprise code, the organizations that stay ahead will be those that govern AI at the source, before risk is introduced.
See Guardian Agent in action. Book a demo →