At a recent OWASP Israel panel, four leaders across software development, application security, and enterprise risk gathered to address a hard truth: AI is not just accelerating development, it is straining the foundations of modern security frameworks.
The panel featured:
Moran Ashkenazi, Cybersecurity Investor @ Khemin and former 2x CSO
Idan Plotnik, CEO & Co-founder @ Apiiro
Sean M. Lyons, SVP of Application & Infrastructure Security @ Akamai
Thomas Dohmke
The discussion focused on how AI-driven development is reshaping velocity, ownership, governance, and risk prioritization.
1. Where does application security rank for CISOs right now, and has that changed in the past year?
Moran Ashkenazi, Cybersecurity Investor @ Khemin and former 2x CSO:
"Everything runs so fast… every software engineer is now a team lead of agents." For her, application security and supply chain security are now "one of the three major risks" in cyber, precisely because the AI-driven pace makes old gatekeeping models hard to sustain.
She noted a concrete shift: "One of the KPIs… was just block releases with critical vulnerabilities… You cannot do that anymore. You just need to narrow it down… it has become almost impossible."
Idan Plotnik, CEO & Co-founder @ Apiiro:
Idan pushed the conversation earlier than vulnerabilities: "When an agent writes the code… and expand[s] the attack surface, we don't have the fundamental thing, which is inventory, to know what this agent introduces." His diagnosis: speed has created a visibility gap; teams are approving and moving on before they understand what changed. And when velocity jumps, "it breaks the physics of AppSec. You cannot ask the developers to deal with fixing more and more vulnerabilities in every sprint."
Sean M. Lyons, SVP of Application & Infrastructure Security @ Akamai:
Sean called the moment "the wild, wild west," pointing to both internal and external acceleration: "What is the visibility of the actual use of these agents within the organization?" and, more urgently, "SLAs used to be… minutes. Now SLAs are at the speed of a prompt: milliseconds."
Thomas Dohmke:
Thomas offered the counterweight: AI increases risk, but it also improves defense. "It has never been easier to find out about vulnerabilities… you can just ask your favorite agent… why this code has a SQL injection… and then… 'fix it for me.'" His framing: it's still a cat-and-mouse game, but defenders finally have comparable leverage.
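As a concrete illustration of the kind of fix he describes asking an agent for, here is a minimal Python sketch (not from the panel): a string-concatenated query that is injectable, next to the parameterized form a reviewer or agent would swap in.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: attacker-controlled input is concatenated into the SQL text,
    # so an input like "x' OR '1'='1" changes the query's meaning (SQL injection).
    query = "SELECT id, email FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Fixed: a parameterized query keeps the input as data, never as SQL syntax.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()
```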
2. Is AppSec finally getting more budget now that AI has accelerated development?
Idan Plotnik:
"It changed the priorities." He described a move from bottom-up tool justification to top-down urgency: "Now it's a top-down approach where the CTO, CIO, and CISO are going down to the team and saying, 'how are we dealing with the expansion of the attack surface?' and 'how do we prevent these risks from reaching production?'"
Thomas Dohmke:
Thomas added nuance: total budgets often stay flat, but the economics of AI are changing planning. "What has been rising massively is your token cost… there's a direct function of your productivity tied to how much you're using an agent." He shared a blunt example: teams can spend "$50 to $100 per engineer per day" on tokens, turning AI usage into an operational line item that competes with everything else, including security review.
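For a sense of how that line item adds up, here is a rough back-of-the-envelope calculation in Python; the prices and volumes are hypothetical, not figures from the panel.

```python
# Hypothetical numbers for illustration only; real model pricing and usage vary widely.
price_per_million_tokens_usd = 10.0   # assumed blended input/output token price
tokens_per_agent_task = 200_000       # assumed tokens consumed by one agent run
tasks_per_engineer_per_day = 30       # assumed agent runs per engineer per day

daily_tokens = tokens_per_agent_task * tasks_per_engineer_per_day
daily_cost = daily_tokens / 1_000_000 * price_per_million_tokens_usd
print(f"~${daily_cost:.0f} per engineer per day")  # ~$60 with these assumptions
```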
3. Are AppSec teams using AI to build internal solutions, and what are they actually building?
Moran Ashkenazi:
"AI or die. … You cannot think about, 'should I adopt AI as an AppSec pillar?' You have to." But she emphasized buy-vs-build realism: teams aren't building because it's fun; they're building to inject missing context. Her example was practical AppSec triage: "I found a vulnerability, but who is the owner? Where should I go to reach out to the person that will fix it as fast as we can?"
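One way that missing context gets injected in practice is by joining a finding's file path against ownership metadata. The sketch below assumes a GitHub-style CODEOWNERS file and uses a simplified prefix match; the paths and team names are hypothetical.

```python
from pathlib import Path

def load_codeowners(path: str = ".github/CODEOWNERS") -> list[tuple[str, list[str]]]:
    """Parse CODEOWNERS into (pattern, owners) pairs; later rules take precedence."""
    rules = []
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        pattern, *owners = line.split()
        rules.append((pattern, owners))
    return rules

def owner_for(finding_path: str, rules: list[tuple[str, list[str]]]) -> list[str]:
    """Return the owners of the last matching rule (simplified prefix matching, not full glob)."""
    matched: list[str] = []
    for pattern, owners in rules:
        if finding_path.startswith(pattern.lstrip("/").rstrip("*")):
            matched = owners
    return matched

# Example: route a SAST finding in src/payments/api.py to its owning team,
# e.g. owner_for("src/payments/api.py", load_codeowners()) -> ["@org/payments-team"]
```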
Idan Plotnik:
He argued large enterprises still aren't freely building new tools; policy and budget gates make that hard. Instead, he sees AppSec using AI for the thing they're most starved of: situational awareness. Developers "do not understand because the velocity doesn't allow them to understand the software architecture. So they need another agent to understand that."
Sean M. Lyons:
"Everybody's a believer now." His view was that AI becomes leverage for sophisticated practitioners, an augmentation that keeps improving with each generation and eventually pushes organizations toward more agentic execution.
Thomas Dohmke:
Thomas reframed "junior vs. senior" in an AI-native world: new graduates will arrive already fluent in these workflows. "You're no longer a junior developer from 10 years ago, you're at the skill set of a senior developer… show us the way."
4. With teams adopting AI tools and embedding GenAI in products, what visibility and control mechanisms are emerging?
Thomas Dohmke:
He argued software engineering has a head start because DevOps already normalized guardrails: branch protection, CI/CD, scanning, review. "We don't actually trust our human developers either. If your CI/CD fails… let's not merge that code, whether it comes from a human or from AI." The bigger problem is the spectrum of legacy: many environments still lack the foundations that make those guardrails enforceable.
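A minimal sketch of that kind of merge gate, assuming scanner findings have already been exported to a JSON report (the file name and format are assumptions): the pipeline exits non-zero on any critical finding, regardless of whether a person or an agent authored the change.

```python
import json
import sys

BLOCKING_SEVERITY = "critical"  # assumed policy: any critical finding blocks the merge

def gate(report_path: str = "scan-results.json") -> int:
    """Return a non-zero exit code if the scan report contains blocking findings."""
    with open(report_path) as fh:
        findings = json.load(fh)  # assumed shape: [{"id": ..., "severity": ...}, ...]
    blocking = [f for f in findings if f.get("severity") == BLOCKING_SEVERITY]
    for finding in blocking:
        print(f"BLOCKING: {finding.get('id')} ({finding.get('severity')})")
    return 1 if blocking else 0

if __name__ == "__main__":
    sys.exit(gate())
```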
Idan Plotnik:
He made a key distinction: "We confuse the risks of using AI coding agents with adding GenAI frameworks or agents inside our code." These are "two different attack surfaces," but they are intertwined: an agent can add a framework and create new data egress paths. He described a governance trend: policies in source control limiting which GenAI frameworks and versions are allowed, driven by customer scrutiny: "You cannot sell a deal above $1 million without an inventory of all AI in our code and guardrails."
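What such a source-controlled policy could look like in miniature (the allowlist, the watched package names, and the manifest format are illustrative assumptions): a check that fails when a dependency manifest pulls in a GenAI framework that has not been approved.

```python
# Illustrative policy check; the allowlist and the set of watched packages are assumptions.
APPROVED_GENAI_FRAMEWORKS = {"openai", "langchain"}
WATCHED_GENAI_PACKAGES = {"openai", "anthropic", "langchain", "llama-index", "transformers"}

def genai_policy_violations(requirements_path: str = "requirements.txt") -> list[str]:
    """Return GenAI packages found in the manifest that are not on the approved list."""
    violations = []
    with open(requirements_path) as fh:
        for line in fh:
            # Take the package name before any version specifier, e.g. "langchain==0.2.1" -> "langchain".
            name = line.split("#")[0].split("==")[0].split(">=")[0].split("<")[0].strip().lower()
            if name in WATCHED_GENAI_PACKAGES and name not in APPROVED_GENAI_FRAMEWORKS:
                violations.append(name)
    return violations
```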
Moran Ashkenazi:
She shared her own concrete governance program, called "Trusted Ops," built on phased policy and enforced attestations. "Software cannot be released to production without attestations… SAST… code owners… [and] vulnerabilities under SLA must be remediated." Her point: assurance can't be vibes; it needs measurable gates, but those gates aren't the whole solution when findings explode.
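A sketch of how such attestation gates could be evaluated at release time; the field names and structure below are assumptions for illustration, not the actual Trusted Ops data model.

```python
from dataclasses import dataclass

@dataclass
class ReleaseAttestations:
    # Assumed attestation fields; the real program's schema was not described in detail.
    sast_passed: bool
    code_owner_approved: bool
    vulns_past_sla: int  # count of known vulnerabilities that exceeded their remediation SLA

def can_release(att: ReleaseAttestations) -> tuple[bool, list[str]]:
    """Evaluate the gates and report every failure, not just the first one."""
    failures = []
    if not att.sast_passed:
        failures.append("SAST attestation missing or failing")
    if not att.code_owner_approved:
        failures.append("no code-owner approval recorded for the release")
    if att.vulns_past_sla > 0:
        failures.append(f"{att.vulns_past_sla} vulnerabilities past their remediation SLA")
    return (not failures, failures)

# Example: can_release(ReleaseAttestations(True, True, 0)) -> (True, [])
```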
5. What happens when "non-engineers" (like marketing) spin up software and connect it to sensitive data?
Idan Plotnik:
He called it "a huge challenge": teams can onboard tools themselves and "connect an Excel spreadsheet that has all the PII." His framing: it isn't purely an AppSec problem; it's visibility into SaaS usage and data governance, so sensitive data doesn't get poured into unsafe workflows.
Thomas Dohmke:
He rejected the instinct to "prevent": "How amazing is it for anyone in a company to use AI tools to build software themselves?" His answer wasn't prohibition, but enablement plus standards: "We gotta skill people, give them the right tools. If we expect you to follow the processes and the policies, we can't say marketing is not allowed to ship software; yes, they should be, as long as they're following the same processes."
AI isn't just writing more code. It's compressing the time available for understanding architecture, assigning ownership, and making risk decisions. Across the panel, the common thread was clear: visibility and context have become prerequisites, and the human bottlenecks (review, governance, bureaucracy) won't survive without agentic help.
"AI is creating an opportunity for prevention: to guard the AI coding agent from generating non-compliant and vulnerable code."
Idan Plotnik