
Webinar Recap: The Evolution of AppSec for the AI Era

Timothy Jung
Marketing
Published October 9, 2025 · 6 min read

AI isn’t arriving. It’s already here, living inside our applications, accelerating how software gets built, and reshaping the relationship between developers and security. In a recent webinar, Apiiro’s Idan Plotnik and IDC analyst Katie Norton joined Application Security Weekly’s Mike Shema to unpack what this shift means for AppSec leaders and engineering teams on the ground: the risks, the bottlenecks, and the path forward.


The reality: AI boosts output—and compounds risk

Katie opened with a clear observation from IDC’s research: AI coding assistants are materially changing the dynamics of software creation. Developers self-report notable productivity gains, which maps to what we’re seeing in the wild: more code, changed review patterns, and a dramatic rise in findings. 

In Apiiro’s own analysis of Fortune 20 enterprises, developers using AI assistants produced 3–4× more commits, and security findings (OSS + SAST) spiked more than 10×, especially as AI-assisted changes consolidated into larger, more complex pull requests. These shifts overwhelmed traditional review gates and legacy scanning workflows.

Volume is only half the story. Idan emphasized that context—where code runs, what compensating controls exist, and which policies actually matter—is the difference between a “vulnerability” and a business risk worth fixing now. AI code assistants don’t inherently understand your architecture or organizational standards; left unguided, they can introduce unvetted dependencies, licensing risks, secrets exposure, and design-level flaws at unprecedented speed.
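
To make this concrete, here is a hypothetical example of the kind of change an unguided assistant can produce: it runs and “works,” but it hardcodes a secret, disables TLS verification, and builds a shell command from interpolated input. The endpoint, names, and key below are invented for illustration, not taken from the webinar.

```python
# Hypothetical AI-suggested change: functional, but risky in several ways.
import ssl
import subprocess
import urllib.request

API_KEY = "sk-live-REDACTED-example"  # hardcoded secret; should come from a vault or environment


def fetch_invoice(invoice_id: str) -> bytes:
    # Disabling certificate verification silences an error quickly,
    # but removes transport security entirely.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    req = urllib.request.Request(
        f"https://billing.example.internal/invoices/{invoice_id}",
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    return urllib.request.urlopen(req, context=ctx).read()


def archive_invoice(invoice_id: str) -> None:
    # shell=True with interpolated input is a classic command-injection pattern.
    subprocess.run(f"gzip /tmp/invoices/{invoice_id}.json", shell=True, check=True)
```

None of these issues require a sophisticated attacker to become a problem, and at AI speed they arrive in bulk rather than one review at a time.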

Bottom line: AI accelerates change. Without intelligent guardrails and runtime-aware context, your AppSec backlog will grow faster than human teams can manage.

Trust, provenance, and the “chain of intent”

A subtle but important shift in the AI era has to do with trust. With human-written code, there’s at least a chain of intent and authorship. With LLM-generated code, provenance can be opaque. Was that snippet hallucinated? Copied from an example with a problematic license? Does it duplicate capabilities you already have? The lack of provenance and architectural awareness introduces governance blind spots, especially as AI suggests frameworks or patterns that don’t align with your standards.

As Idan explained, you can’t assess risk by analyzing code in isolation. Your tools (and any AI working alongside developers) must understand:

  • Where the code will run and how it’s exposed
  • Compensating controls (e.g., in gateways, IAM, runtime)
  • Organizational policies and the risk acceptance lifecycle

When that context is missing, you get generic fixes that may be technically “correct” but operationally unsafe, or simply irrelevant to the real risk.
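
As a purely illustrative sketch (not Apiiro’s implementation), here is how that kind of context can flip the decision for the very same finding. The field names, rules, and thresholds are assumptions made up for the example:

```python
# Minimal, hypothetical context-aware triage: the finding alone doesn't
# decide the outcome; exposure, compensating controls, and policy do.
from dataclasses import dataclass


@dataclass
class Finding:
    rule: str       # e.g. "sql-injection", "hardcoded-secret"
    severity: str   # "low" | "medium" | "high" | "critical"


@dataclass
class Context:
    internet_exposed: bool        # is the affected service reachable externally?
    handles_sensitive_data: bool  # PII, payments, secrets, etc.
    gateway_mitigates: bool       # compensating control already in place (WAF, IAM, gateway)?
    policy_blocks_merge: bool     # does org policy treat this class as merge-blocking?


def triage(finding: Finding, ctx: Context) -> str:
    if ctx.policy_blocks_merge:
        return "fix-now"  # policy says this class never ships, regardless of context
    if finding.severity in ("high", "critical") and ctx.internet_exposed:
        if ctx.gateway_mitigates and not ctx.handles_sensitive_data:
            return "accept-with-audit"  # real finding, already mitigated at runtime; document it
        return "fix-now"
    if ctx.handles_sensitive_data:
        return "guardrail"  # steer the developer to the approved pattern in the IDE
    return "accept-with-audit"


# The same finding gets two different answers once context is applied.
f = Finding(rule="sql-injection", severity="high")
print(triage(f, Context(internet_exposed=True, handles_sensitive_data=True,
                        gateway_mitigates=False, policy_blocks_merge=False)))  # fix-now
print(triage(f, Context(internet_exposed=False, handles_sensitive_data=False,
                        gateway_mitigates=True, policy_blocks_merge=False)))   # accept-with-audit
```

The specific rules don’t matter; the point is that the decision comes from where the code runs, what already mitigates it, and what policy says, not from the finding in isolation.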

Why legacy checklists (still) break under AI velocity

Mike noted the industry’s tendency to hand out checklists—“do threat modeling,” “follow the OWASP Top 10”—despite 20 years of evidence that lists alone don’t scale. Katie agreed: anything not automated simply erases the productivity gains created by AI. Teams are hitting friction because code volume and complexity now outpace manual reviews and siloed scanners.

Idan’s diagnosis is blunt: when a company with 10,000 developers effectively behaves as if it has 30,000 or more, its processes, scanners, and guardrails collapse. Threat modeling, security reviews, and remediation queues balloon while AppSec headcount remains flat. To keep up, organizations need a fundamental redesign of AppSec programs: one that delivers guidance and fixes at the point of change (in the IDE) and uses real runtime context to decide what truly matters.

Three buckets of “AI + Security” to keep straight

Katie and Idan underscored a source of confusion that slows decision-making. “AI security” isn’t one problem—it’s three:

  1. Securing AI in runtime

Protecting LLM-powered features your product exposes (e.g., prompt injection defenses, output controls).

  2. Securing AI usage in code

Governing how developers and AI assistants change your codebase (dependencies, patterns, secrets, design risk).

  3. Using AI for security

Applying AI to traditional AppSec tasks: automated threat modeling, code review assistance, fix recommendations.

The tooling, data, and guardrails for each bucket are different. Conflating them leads to gaps in coverage and misplaced investments. The second bucket—governing and fixing risks introduced by AI-assisted development—was the focus of this discussion.

What “good” looks like: guardrails + context in the IDE

Katie pointed to platform engineering trends and “golden paths.” In the AI era, durable progress comes from guardrails that make the secure path the easy path, enforced as close to the developer as possible. Idan took it further: the IDE is the new perimeter. If your agent in the IDE understands your software graph, policies, and runtime controls, it can prevent unnecessary noise and guide developers to policy-aligned fixes—automatically where safe, and through informed risk acceptance when not.

This is exactly where Apiiro has invested. The Apiiro AI Agent operates directly inside developers’ IDEs (no plugins required) to evaluate each change against company-specific policies and runtime-aware context, then decide whether to fix, enforce a guardrail, or generate audit-ready risk acceptance—all without slowing down development.

Data makes the difference: from “context” to software intelligence

Most tools promise “context.” Very few have the software intelligence required to autofix safely at scale. Apiiro’s platform builds a dynamic, real-time map of your software architecture—from code to runtime—and correlates it with your policies and workflows. Three capabilities power this:

  • Deep Code Analysis (DCA): Continuously discovers and inventories your changing software architecture (APIs, data flows, OSS, GenAI frameworks, models).
  • Code-to-Runtime Matching: Connects runtime components and findings back to code and code owners, so prioritization reflects actual exploitability and business impact.
  • Risk Graph & Policy Engine: Correlates, deduplicates, and weighs findings from scanners and third-party tools using your architecture, policies, blast radius, runtime context, and risk acceptance processes.

This is the difference between a tool that suggests generic code edits and an agent that autofixes in alignment with how your systems actually work.

Autofix—done responsibly

“Autofix” can be a scary word for teams that have been burned by over-eager bots. Apiiro’s approach is intentionally conservative where it must be and assertive where it can be:

It understands when not to fix—for example, when runtime mitigations already neutralize a code-level finding or when a change would violate policy. It delivers policy-compliant fixes directly in the IDE when remediation is warranted, tailored to your architecture and past secure patterns (including what’s already passed review). And it generates audit-ready risk acceptance when a finding is real but not a business risk—reducing noise and aligning to your governance processes.

The result is trust: fewer false positives, fewer contextless “red lights,” and faster throughput on the issues that matter.

What to measure in the next six months

Katie recommended leaders move past vanity metrics and track outcomes that reflect developer experience and risk reduction:

  • Autofix acceptance rate in the IDE and PRs (a proxy for accuracy and developer trust; see the sketch after this list).
  • Class-level elimination (e.g., eradicate XSS across the portfolio through targeted campaigns + automated remediation).
  • Backlog burn-down for material risks and mean time to remediate by exploitability.
  • DORA-adjacent indicators (e.g., change failure rate, MTTR) to ensure security isn’t trading off resiliency.
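
As a small, hypothetical illustration of two of these (the numbers and data shape are invented, not a product export), the calculations themselves are simple:

```python
# Autofix acceptance rate: fixes merged as suggested over fixes proposed.
from datetime import datetime
from statistics import mean

suggested_fixes = 120
accepted_fixes = 96
print(f"Autofix acceptance rate: {accepted_fixes / suggested_fixes:.0%}")  # 80%

# Mean time to remediate, split by exploitability.
remediated = [
    {"exploitable": True,  "opened": datetime(2025, 9, 1), "closed": datetime(2025, 9, 4)},
    {"exploitable": True,  "opened": datetime(2025, 9, 2), "closed": datetime(2025, 9, 9)},
    {"exploitable": False, "opened": datetime(2025, 9, 1), "closed": datetime(2025, 9, 20)},
]
for flag in (True, False):
    days = [(r["closed"] - r["opened"]).days for r in remediated if r["exploitable"] is flag]
    if days:
        label = "exploitable" if flag else "non-exploitable"
        print(f"MTTR ({label}): {mean(days):.1f} days")
```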

Idan added a north star: a steady decline in new risks landing in code repos—even as AI-driven throughput grows. That’s how you know security is keeping pace with the business.

Why Apiiro is leading the way

IDC’s recent ASPM research—where Katie’s team analyzed the space that Apiiro has helped define—aligns with what we’re seeing from customers: the winners will integrate seamlessly into the IDE, bring software intelligence that reflects the real architecture and runtime, and automate risk decisions in ways that developers trust and security leaders can audit.

Apiiro’s Agentic Application Security Platform with the AutoFix AI Agent is built for exactly this moment:

  • AutoFix design and code risks using runtime context
  • AutoGovern with policy enforcement and secure coding guardrails
  • AutoManage risk lifecycle and measurement across the SDLC—without adding headcount or slowing delivery

The agent operates like a dedicated AppSec engineer per team—triggering automated threat modeling before code is written, evaluating every change in real time, and deciding whether to fix, enforce, or accept based on exploitability and business impact. And it does this inside the developer’s IDE with no plugins to maintain.

See the full discussion — [watch the webinar] to hear Katie, Idan, and Mike walk through real-world patterns and where leaders are investing now.

Final thought

Security’s mandate hasn’t changed: protect the business while enabling innovation. What’s changed is where and how we deliver it. In the AI coding era, you won’t keep up by adding scanners or hiring your way out. You’ll keep up by embedding software-intelligent, runtime-aware automation directly into the developer’s daily flow—so the secure path is the fastest path.

Ready to see it in your environment?

Apiiro helps Fortune-scale enterprises reconcile AI velocity with real risk reduction. If you’re exploring ASPM or rethinking how to govern AI-assisted development, we’d love to show you what’s possible.

→ Get a demo of Apiiro’s AutoFix AI Agent.