Guardian Agent: Guard AI-generated code
Featuring Thomas Dohmke (Apiiro Strategic Advisor & former CEO of GitHub), Liora Shechter (CEO of Mataf at First International Bank), Yevgeny Dibrov (CEO and Co-Founder of Armis), and Idan Plotnik (Co-Founder and CEO of Apiiro).
AI-driven software development is forcing a fundamental shift in development speed and a rethink of application security. In this executive forum at the Microsoft offices in Herzliya, senior technology and security leaders share their insights into AI-powered AppSec models for an AI-driven world.
See the full exclusive executive panel for deep insights into managing risk across an expanding attack surface, determining responsible ownership of AI adoption, and preparing the enterprise for a fully AI-driven development process. Summarized highlights below:
1. AI increases velocity by 4 times, but risk by 10 times – so what is the first thing to break in today’s bank risk frameworks?
Yevgeny Dibrov, CEO / Co-Founder @ Armis:
“AI is expanding the attack surface 10x, sometimes 100x, and you have ephemeral assets that compound the risk factor. So the question becomes: how do you remediate, and how do you prioritize, when you have all these security findings based on vulnerabilities in code?”
Idan Plotnik, CEO / Co-Founder @ Apiiro:
“CISOs today understand that hardware expands the attack surface – more cameras, more laptops – but what they don’t understand is that a quadrupling of code volume doesn’t fully capture the growth of the attack surface, because inside those code repositories are APIs, open source dependencies, and data models.
When banking organizations adopt AI to out-innovate other banks, they do so to release more features – and these features, behind the scenes, introduce more code. But not every code element is actually exploitable. Only when you understand the software architecture can you accurately assess risk.”
Thomas Dohmke, Strategic Advisor @ Apiiro:
“I think the first thing that broke in risk frameworks – that has already broken – are processes.
All of a sudden, mid-process, AI is producing code so quickly that code review becomes impossible.
You have more and more developers using more and more agents, running in parallel, all writing code. And if you have a stable dev process, the next thing after writing code is reviewing code, and that’s a human process. So the human factor is slowing you down even though AI has accelerated all the development work. And that gap is the break in the risk framework.”
Liora Shechter, CEO of Mataf @ First International Bank:
“There are four steps to reduce the risk of frameworks breaking down. First is to have an organizational methodology and policy for how to adopt AI. Second is to have the means to enact that policy. Third is to train people how to use AI wisely. Fourth is to actually review all the code they generate, without slowing the process down.
Right now we have no regulation of how to develop AI securely, and the finance sector is supposed to be a highly, highly regulated sector. The responsibility is on us to secure customers’ money, and so using AI in an intelligent way, without slowing innovative deployment, is essential.”
2. How has AI code reshaped exposure management across large enterprises?
Yevgeny Dibrov
“AI creates this huge expansion of asset categories, like we had for laptops, servers, mobile, cloud, SaaS, code, etc. It means organizations need to take a very general view of all their potential exposures and then prioritize what’s important, which they can’t do without context – that AI agent in your retail store is more important than the AI agent in the distribution center, for example. Regulatory frameworks that are currently in place were designed for humans, but now AI is writing code, so accountability is in question.”
Thomas Dohmke
“We’re already living in a world where 90% of the code that you’re using in your projects is actually not owned by your developers – it’s owned by the world’s open source community. And every time you’re putting a new open source library into your project, you’re effectively giving access to somebody you don’t know, who’s not in your org chart, who doesn’t follow any of your policies.
We have accepted this risk for the last 20 years because your organization couldn’t be competitive if it didn’t rely on OSS – because everybody else, especially Silicon Valley startups, built everything on the shoulders of open source communities. We have accepted the risk because we cannot live without the innovation, and the same is true with AI.”
Liora Shechter
“Organizations must approach responsible AI usage and security from the risk perspective. Who can assess the risk best and say, ‘the bank’s exposure in the AI space is at this level, and this is how we mitigate it’? Because you can’t stop this proliferation of AI. It is happening, it will happen, same as cloud and IoT.”
3. What are some of the AppSec issues and change motivators that board members should know about, and should they be involved in setting policies or standards?
Liora Shechter
“All C-level roles in the organization need to leverage their expertise and collaborate – the CISO on security, the risk management team on prioritization. Business impact is most important, in my view. Velocity and tool adoption are nice, but what we really want is to drive our advantages in the market.”
Thomas Dohmke
“There is a spectrum of board-level conversations depending on company size, but budgets in general are being shifted towards AppSec thanks to board interest.”
Idan Plotnik
“I got two requests from CISOs just last week, saying they need to report to the board about AI coding assistant adoption. They want to see the velocity of code development vs. risk in their development lifecycle. We see that as a compelling event. Application security was the shiny new tool for the practitioner, but now board members hear about it and know about it.”
4. If we look ahead a few years, will AI coding agents reduce enterprise risk, or will they concentrate it into rarer but more catastrophic failures?
Yevgeny Dibrov
“Cyber is always a game of cat and mouse – attacker vs. defender. In a few years, AI will expand the attack surface, making life easier for attackers, but it will also empower defenders with the right tools to protect an expanded attack surface. Knowing the full context of their software architecture enables defenders to better leverage AI to defend their environment. I believe CISOs today see code repositories as a big part of the attack surface. They view expanding codebases as an important asset to manage, and put them in the CMDB alongside all workflows. I wouldn’t say it’s yet the first thing on their minds, but we’re getting there.”
Liora Shechter
“In 3 years we’ll still have a mainframe and still have COBOL, still have CICS, and so on. However, I believe that some percentage of our code – perhaps 30% – will be fully developed by AI, with every development task on it handled by an AI agent.”
Thomas Dohmke
“In 3 years we’re still going to have a lot of legacy code. We’re still going to have engineers with endless backlogs, and we’re going to unlock productivity responsibly. We will achieve and manage growth with AI agents to produce code, AI agents to review code, and AI agents to secure code and prevent security vulnerabilities – like Apiiro Guardian Agent.”
Idan Plotnik
“All the risk and exposure management that organizations have implemented in the last 10 years will be copied and pasted to software. Same MO, same mindset, same concept of discovering, assessing, preventing. This is my prediction: a few specialized agents, all operating in concert, each with their own area of expertise.
The evolution of application security into the prevention side is going to be critical in the coming years. When you prevent vulnerabilities from the get-go, and you train AI agents writing code not to build those vulnerabilities into the code, then you minimize the attack surface.”