Event

OWASP Israel Panel: AI Velocity and the Breaking Point of Security Frameworks

Timothy Jung
Marketing
Published February 19, 2026 · 5 min. read

At a recent OWASP Israel panel, four leaders across software development, application security, and enterprise risk gathered to address a hard truth: AI is not just accelerating development, it is straining the foundations of modern security frameworks.

The panel featured:

  • Thomas Dohmke, Strategic Advisor at Apiiro and former CEO of GitHub
  • Idan Plotnik, CEO & Co-Founder at Apiiro
  • Sean M. Lyons, SVP of Application & Infrastructure Security at Akamai
  • Moran Ashkenazi, Investor at Khemin and former two-time CSO

The discussion focused on how AI-driven development is reshaping velocity, ownership, governance, and risk prioritization.

Session Highlights

1. Where does application security rank for CISOs right now, and has that changed in the past year?

Moran Ashkenazi, Cybersecurity Investor @ Khemin and former 2x CSO:

“Everything runs so fast… every software engineer is now a team lead of agents.” For her, application security and supply chain security are now “one of the three major risks” in cyber, precisely because AI-driven pace makes old gatekeeping models hard to sustain.

She noted a concrete shift: “One of the KPIs… was just block releases with critical vulnerabilities… You cannot do that anymore. You just need to narrow it down… it has become almost impossible.”

Idan Plotnik, CEO & Co-founder @ Apiiro:

Idan pushed the conversation upstream of vulnerabilities: “When an agent writes the code… and expand[s] the attack surface, we don’t have the fundamental thing which is inventory to know what this agent introduces.” His diagnosis: speed has created a visibility gap – teams are approving and moving on before they understand what changed. And when velocity jumps, “it breaks the physics of AppSec. You cannot ask the developers to deal with fixing more and more vulnerabilities in every sprint.”

Sean M. Lyons, SVP of Application & Infrastructure Security @ Akamai:

Sean called the moment “the wild, wild west,” pointing to both internal and external acceleration: “What is the visibility of the actual use of these agents within the organization?” and, more urgently, “SLAs used to be… minutes. Now SLAs are at the speed of a prompt – milliseconds.”

Thomas Dohmke, Strategic Advisor @ Apiiro and former CEO of GitHub:

Thomas offered the counterweight: AI increases risk, but also improves defense. “It has never been easier to find out about vulnerabilities… you can just ask your favorite agent… why this code has a SQL injection… and then… ‘fix it for me.’” His framing: it’s still a cat-and-mouse game, but defenders finally have comparable leverage.
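
For readers who want to picture the kind of fix Thomas is describing, here is a minimal, generic illustration (not from the panel) of a SQL injection and its parameterized repair in Python:

```python
import sqlite3

def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    # Vulnerable: user input is concatenated straight into the SQL string,
    # so an input like "' OR '1'='1" changes the meaning of the query.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_fixed(conn: sqlite3.Connection, username: str):
    # Fixed: a parameterized query keeps user input as data, never as SQL.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```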

2. Is AppSec finally getting more budget now that AI has accelerated development?

Idan Plotnik:

“It changed the priorities.” He described a move from bottom-up tool justification to top-down urgency: “Now it’s a top down approach where the CTO, CIO, and CISO are going down to the team and saying, ‘how are we dealing with the expansion of the attack surface?’ and ‘how do we prevent these risks from reaching production?’”

Thomas Dohmke:

Thomas added nuance: total budgets often stay flat, but the economics of AI are changing planning. “What has been rising massively is your token cost… there’s a direct function of your productivity tied to how much you’re using an agent.” He shared a blunt example: teams can spend “$50 to $100 per engineer per day” on tokens, turning AI usage into an operational line item that competes with everything else, including security review.

3. Are AppSec teams using AI to build internal solutions – and what are they actually building?

Moran Ashkenazi:

“AI or die. You cannot think about, ‘should I adopt AI as an AppSec pillar?’ You have to.” But she emphasized buy-vs-build realism: teams aren’t building because it’s fun – they’re building to inject missing context. Her example was practical AppSec triage: “I found a vulnerability, but who is the owner? Where should I go to reach out to the person that will fix it as fast as we can?”
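
As a rough sketch of the context injection Moran describes, the snippet below routes a finding to an owning team by matching the affected file against a CODEOWNERS-style file. The matching logic is deliberately simplified, and the file paths are hypothetical:

```python
import fnmatch
from pathlib import Path

def load_codeowners(path: str = "CODEOWNERS") -> list[tuple[str, list[str]]]:
    """Parse a CODEOWNERS-style file into (pattern, owners) rules."""
    rules = []
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        pattern, *owners = line.split()
        rules.append((pattern, owners))
    return rules

def owners_for_finding(file_path: str, rules: list[tuple[str, list[str]]]) -> list[str]:
    """Return the owners of the file a finding landed in.

    Simplified glob matching; as in real CODEOWNERS semantics,
    the last matching rule wins.
    """
    matched: list[str] = []
    for pattern, owners in rules:
        glob = pattern.lstrip("/")
        if glob.endswith("/"):
            glob += "*"
        if fnmatch.fnmatch(file_path, glob):
            matched = owners
    return matched

# Example: route a SAST finding in payments code to its owning team.
# rules = load_codeowners()
# print(owners_for_finding("services/payments/handler.py", rules))
```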

Idan Plotnik:

He argued large enterprises still aren’t freely building new tools; policy and budget gates make that hard. Instead, he sees AppSec using AI for the thing they’re most starved of: situational awareness. Developers “do not understand because the velocity doesn’t allow them to understand the software architecture. So they need another agent to understand that.”

Sean M. Lyons:

“Everybody’s a believer now.” His view was that AI becomes leverage for sophisticated practitioners – an augmentation that keeps improving each generation – eventually pushing organizations toward more agentic execution.

Thomas Dohmke:

Thomas reframed “junior vs senior” in an AI-native world: new graduates will arrive already fluent in these workflows. “You’re no longer a junior developer from 10 years ago, you’re at the skill set of a senior developer… show us the way.”

4. With teams adopting AI tools and embedding GenAI in products, what visibility and control mechanisms are emerging?

Thomas Dohmke:

He argued software engineering has a head start because DevOps already normalized guardrails: branch protection, CI/CD, scanning, review. “We don’t actually trust our human developers either. If your CI/CD fails… let’s not merge that code whether it comes from a human or from AI.” The bigger problem is the spectrum of legacy: many environments still lack the foundations that make those guardrails enforceable.
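
A minimal sketch of the kind of gate Thomas describes might look like the script below, run as a CI step that fails the pipeline on blocking findings regardless of who, or what, authored the change. The findings-report format is an assumption for illustration, not any particular scanner’s output:

```python
import json
import sys

# Merge gate run in CI: exit nonzero (blocking the merge) when the scan
# report contains findings at or above a blocking severity, no matter
# whether the change came from a person or a coding agent.
BLOCKING_SEVERITIES = {"critical", "high"}

def gate(report_path: str = "scan-results.json") -> int:
    with open(report_path) as f:
        # Expected (hypothetical) shape: [{"severity": ..., "rule": ..., "file": ...}, ...]
        findings = json.load(f)
    blocking = [item for item in findings
                if item.get("severity", "").lower() in BLOCKING_SEVERITIES]
    for finding in blocking:
        print(f"BLOCKING: {finding['rule']} in {finding['file']} ({finding['severity']})")
    return 1 if blocking else 0  # a nonzero exit fails the pipeline

if __name__ == "__main__":
    sys.exit(gate())
```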

Idan Plotnik:

He made a key distinction: “We confuse the risks of using AI coding agents with adding GenAI frameworks or agents inside our code.” These are “two different attack surfaces,” but they are intertwined: an agent can add a framework and create new data egress paths. He also described a governance trend driven by customer scrutiny: policies in source control that limit which GenAI frameworks and versions are allowed. “You cannot sell a deal above $1 million without an inventory of all AI in our code and guard rails.”
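
A source-controlled policy of that kind could be as simple as the sketch below: an allowlist file checked against a Python dependency manifest in CI. The policy file name, format, and the set of packages treated as GenAI frameworks are all illustrative assumptions, not a prescribed standard:

```python
import json
import re
import sys

def load_policy(path: str = "genai-policy.json") -> dict[str, str]:
    """Allowed package -> version glob, e.g. {"openai": "1.*", "langchain": "0.2.*"}."""
    with open(path) as f:
        return json.load(f)

def check_requirements(req_path: str, policy: dict[str, str], genai_packages: set[str]) -> list[str]:
    """Flag pinned GenAI dependencies that are missing from, or outside, the allowlist."""
    violations = []
    with open(req_path) as f:
        for line in f:
            line = line.split("#")[0].strip()
            match = re.match(r"([A-Za-z0-9_.-]+)==([A-Za-z0-9_.-]+)", line)
            if not match:
                continue
            name, version = match.group(1).lower(), match.group(2)
            if name not in genai_packages:
                continue
            allowed = policy.get(name)
            if allowed is None:
                violations.append(f"{name} is not on the GenAI allowlist")
            elif not re.fullmatch(allowed.replace(".", r"\.").replace("*", ".*"), version):
                violations.append(f"{name}=={version} violates allowed range {allowed}")
    return violations

if __name__ == "__main__":
    # Which packages count as GenAI frameworks is itself policy; hard-coded here for brevity.
    problems = check_requirements("requirements.txt", load_policy(),
                                  {"openai", "anthropic", "langchain", "transformers"})
    for p in problems:
        print("POLICY VIOLATION:", p)
    sys.exit(1 if problems else 0)
```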

Moran Ashkenazi:

She shared her own concrete governance program, called “Trusted Ops,” built on phased policy and enforced attestations. “Software cannot be released to production without attestations… SAST… code owners… [and] vulnerabilities under SLA must be remediated.” Her point: assurance can’t be vibes; it needs measurable gates, but those gates aren’t the whole solution when findings explode.
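
In the spirit of the gates she describes, a release check might look roughly like the sketch below: block the release unless the required attestations are present and no open finding has outlived its severity’s SLA. The attestation names, SLA windows, and data shapes are illustrative assumptions:

```python
from datetime import date, timedelta

REQUIRED_ATTESTATIONS = {"sast", "code_owners_review"}
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90}

def release_allowed(attestations: set[str], open_findings: list[dict], today: date) -> bool:
    # Gate 1: every required attestation must be present.
    missing = REQUIRED_ATTESTATIONS - attestations
    if missing:
        print("Blocked: missing attestations:", ", ".join(sorted(missing)))
        return False
    # Gate 2: no open finding may be older than its severity's SLA window.
    overdue = [
        f for f in open_findings
        if today - f["opened"] > timedelta(days=SLA_DAYS.get(f["severity"], 90))
    ]
    if overdue:
        print(f"Blocked: {len(overdue)} finding(s) past their remediation SLA")
        return False
    return True

# Example:
# release_allowed({"sast", "code_owners_review"},
#                 [{"severity": "high", "opened": date(2026, 1, 2)}],
#                 date(2026, 2, 19))
```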

5. What happens when “non-engineers” (like marketing) spin up software and connect it to sensitive data?

Idan Plotnik:

He called it “a huge challenge”: teams can onboard tools themselves and “connect an Excel spreadsheet that has all the PII.” His framing: it’s not purely AppSec – it’s visibility into SaaS usage and data governance so sensitive data doesn’t get poured into unsafe workflows.

Thomas Dohmke:

He rejected the instinct to “prevent”: “How amazing is it for anyone in a company to use AI tools to build software themselves?” His answer wasn’t prohibition, but enablement plus standards: “We gotta skill people, give them the right tools. If we expect you to follow the processes and the policies, we can’t say marketing is not allowed to ship software – yes they should be, as long as they’re following the same processes.”

The Final Word

AI isn’t just writing more code. It’s compressing the time available for understanding architecture, assigning ownership, and making risk decisions. Across the panel, the common thread was clear: visibility and context have become prerequisites, and the human bottlenecks (review, governance, bureaucracy) won’t survive without agentic help.

“AI is creating an opportunity for prevention – to guard the AI coding agent from generating non-compliant and vulnerable code.”

Idan Plotnik