Back in 2023, we shared how Apiiro helps organizations uncover shadow GenAI frameworks in their codebases using our Deep Code Analysis (DCA) technology. That visibility-first approach gave AppSec leaders clarity into which frameworks and exit points were being introduced, where they lived in the codebase, and what risks they carried.
Fast forward to today: AI adoption has accelerated dramatically. Teams are no longer just experimenting with frameworks: they’re building AI Agents, deploying MCP Servers, and embedding AI models into production code. For CISOs and AppSec leaders, this creates a pressing challenge: how do you enable fast innovation while managing new risks such as insecure inputs and outputs, secrets exposure, and data leakage? The stakes are clear. You can’t govern what you can’t see.
AppSec leaders have lived this story many times by now. A new technology emerges — APIs, open source, secrets, containers, infrastructure as code — and the first response is to build a specialized scanner. Those tools identify narrow issues, but because they don’t connect to the broader risk landscape, teams are left with fragmented visibility and endless alert backlogs.
AI is no different. Looking at it through a siloed lens obscures fundamental questions: Where does AI live in your codebase? What does it touch? Who owns it?
Unsurprisingly, most approaches to AI risk detection have repeated the mistakes of the past: narrow scanners that identify issues in isolation but fail to show how those risks connect across your code and infrastructure.
Apiiro takes a fundamentally different path.
Our Deep Code Analysis (DCA) continuously discovers, inventories, and contextualizes AI-specific resources within the broader software architecture graph — not as outliers, but as first-class citizens alongside APIs, open source, containers, and sensitive data. These resources include:
- AI Agents
- MCP Servers
- AI models
- Datasets
- AI frameworks

A simplified sketch of what this discovery step might look like follows the list.

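This sketch is purely illustrative: Apiiro's DCA performs semantic code analysis rather than simple import matching, and the package watchlist and output shape below are assumptions, not the product's actual logic.

```python
# Illustrative only: a naive import scan over Python files.
# The AI package watchlist here is an assumption for demonstration.
import ast
from pathlib import Path

AI_PACKAGES = {
    "openai": "LLM client",
    "anthropic": "LLM client",
    "transformers": "model framework",
    "langchain": "agent framework",
    "huggingface_hub": "model hub client",
}

def discover_ai_resources(repo_root: str) -> list[dict]:
    """Record which files import packages from the AI watchlist."""
    findings = []
    for path in Path(repo_root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except (SyntaxError, UnicodeDecodeError):
            continue  # skip files that fail to parse
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                modules = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                modules = [node.module]
            else:
                continue
            for module in modules:
                top_level = module.split(".")[0]
                if top_level in AI_PACKAGES:
                    findings.append({
                        "file": str(path),
                        "package": top_level,
                        "category": AI_PACKAGES[top_level],
                    })
    return findings

if __name__ == "__main__":
    for finding in discover_ai_resources("."):
        print(finding)
```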
Because they’re mapped into the software graph, AI risks can be correlated with vulnerabilities from SAST, SCA, DAST, CSPM, and API security tools — normalized, deduplicated, and enriched with runtime and business context.
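Conceptually, the normalization and deduplication step might look like the following minimal sketch; the tool names, finding fields, and grouping key are illustrative assumptions, not Apiiro's actual schema.

```python
# Illustrative only: merging findings from different scanners that
# describe the same issue at the same location.
from collections import defaultdict

raw_findings = [
    {"tool": "sast", "repo": "payments", "file": "api/charge.py", "rule": "CWE-89"},
    {"tool": "dast", "repo": "payments", "file": "api/charge.py", "rule": "CWE-89"},
    {"tool": "sca", "repo": "payments", "file": "requirements.txt", "rule": "CVE-2024-0001"},
]

def normalize_and_dedupe(findings):
    """Group findings sharing (repo, file, rule) into one deduplicated record."""
    merged = defaultdict(set)
    for f in findings:
        merged[(f["repo"], f["file"], f["rule"])].add(f["tool"])
    return {key: sorted(tools) for key, tools in merged.items()}

for key, tools in normalize_and_dedupe(raw_findings).items():
    print(key, "reported by", tools)
# The two CWE-89 findings collapse into one record confirmed by two tools.
```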

Take a practical example: a service that uses the Hugging Face Python client and also exposes a sensitive API. On its own, each signal may look minor. But once correlated in the graph with findings from API security and SAST, the combination is revealed to be toxic, and the real risk score jumps. That's how DCA turns noisy alerts into actionable insights.
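To make the idea concrete, here is a toy scoring model of such a toxic combination; the signal names, weights, and scoring rule are invented for illustration and do not reflect Apiiro's real risk engine.

```python
# Toy model of "toxic combination" scoring: signals that look minor in
# isolation compound when they co-occur in one service. All values here
# are invented for illustration.
BASE_SCORE = {"hf_client": 2, "sensitive_api": 3}              # minor on their own
TOXIC_COMBOS = {frozenset({"hf_client", "sensitive_api"}): 9}  # together: critical

def risk_score(signals: set[str]) -> int:
    """Sum base scores, then boost when a known toxic combination is present."""
    score = sum(BASE_SCORE.get(s, 0) for s in signals)
    for combo, boost in TOXIC_COMBOS.items():
        if combo <= signals:  # every signal in the combo is present
            score += boost
    return score

print(risk_score({"hf_client"}))                   # 2  -> low, likely noise
print(risk_score({"hf_client", "sensitive_api"}))  # 14 -> jumps once correlated
```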

Finally, inventory without accountability is insufficient. That’s why Apiiro integrates with CMDBs like ServiceNow, tying AI components to business services and owners. Security leaders can not only see where AI lives in the codebase, but also who is responsible for governing it.
At Apiiro, we’ve built our platform on the principle that security requires context. We embed AI directly into the software graph alongside open source dependencies, infrastructure, APIs, secrets, and runtime signals.
This means AI Agents, MCP servers, models, datasets, and frameworks aren’t treated as standalone objects. They are mapped, inventoried, and continuously analyzed with the same visibility, governance, and remediation workflows as any other technology in your stack.
The result is an AI Bill of Materials (AI BOM) that sits within a unified view of your entire application ecosystem — one that powers not just detection, but governance, policy enforcement, and threat modeling.
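For intuition, a single AI BOM entry might carry fields like the following; Apiiro's actual schema is not public, so every field name here is an illustrative assumption.

```python
# Hypothetical shape of one AI BOM entry; all field names are
# illustrative assumptions, not Apiiro's published schema.
ai_bom_entry = {
    "resource_type": "mcp_server",        # or: ai_agent, model, dataset, framework
    "name": "internal-search-mcp",
    "repository": "org/search-service",
    "introduced_in": "src/mcp/server.py",
    "owner": "platform-team",             # resolved via CMDB, e.g. ServiceNow
    "business_service": "Customer Search",
    "linked_risks": ["exposed-secret", "insecure-model-output"],
}
```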


These capabilities translate into concrete benefits for developers and security leaders alike.
Rather than drowning in AI-specific alerts, developers see AI risks in the same context as other security feedback — integrated into pull requests, linked to ownership, and tied to remediation guidance.
By treating AI as part of the software graph, security leaders can avoid reinventing the wheel for each new technology trend. Instead of separate silos, you get a multidimensional risk view that:
- Correlates AI findings with results from SAST, SCA, DAST, CSPM, and API security tools
- Deduplicates and prioritizes alerts using runtime and business context
- Ties every AI component to an owner and a business service for accountability


The history of security tells us the same story again and again: cloud, containers, open source — each wave brought innovation and risk. Organizations that chased technology-specific scanners ended up with fragmented tools and unsustainable backlogs. Those that modeled the entire graph of software risk, and governed it holistically, were able to scale.
AI is the latest chapter, but not a unique one. By embedding AI into the software graph from the start, organizations can innovate with confidence, govern with clarity, and avoid repeating the mistakes of past technology waves. See how in a demo with our team.