📣 Introducing AI Threat Modeling: Preventing Risks Before Code Exists
AI writes your code now, but your security tools still assume a human wrote every line. We built a CLI that works with your AI coding assistant, scanning, analyzing, and remediating security risks as part of the development flow.
Development teams now deliver code at four times the velocity of three years ago. AI coding assistants – Claude Code, Cursor, Copilot – aren’t autocomplete tools anymore. They architect features, refactor entire services, and ship modules end-to-end in a single session. Half of Google’s code is AI-generated.
The attack surface has expanded accordingly. Along with that 4x speed comes 10x more risk, and Apiiro research from a Fortune 20 enterprise shows that adding more scanners, rules, and extensions to an AI-accelerated environment can broaden the attack surface by 6x.
AI agents can write code, search the web, and run tests. They cannot, however, check your security risk posture, scan for leaked credentials, or consult your security platform. Security is a blind spot for every AI coding assistant today; not because they lack the capability, but because security platforms were never built to talk to them.
That gap is compounding with every line of AI-generated code.
Turning developers into mediators between their coding assistants and their security tools leaves humans too deep in the loop. We need an AI-native security platform: one that puts coding assistants directly in touch with the security capabilities developers and engineers use every day.
The industry’s response to AI-driven risk has largely been to layer more AI on top of legacy detection workflows. The current cycle remains:
Code is generated → Scanned → Triaged → Fixed (sometimes).
This creates a compounding security backlog that grows faster than any team can address. Detection does not scale at the speed of AI.
True AI-native security looks like a security platform that AI agents can interact with directly, querying risks, scanning code, and getting security guidance as part of their natural workflow. The AI agent becomes a first-class user of your security platform, not an afterthought.
This is the goal of Apiiro CLI.
The Apiiro CLI brings the full Apiiro platform to your terminal and to your AI coding assistants, giving them six native security capabilities: scanning, risk management, remediation, an AI security analyst (via Apiiro Guardian Agent), AI Threat Modeling, and prompt enrichment. It installs in seconds on macOS, Linux, and Windows via brew, direct download, or RPM.
Apiiro CLI ships with agent skills – structured capability definitions that AI coding assistants like Claude Code and Cursor can read and invoke autonomously. These install with one simple command –
npx skills add apiiro/cli-releases
– and, once installed, they give your AI assistants a clear understanding of what Apiiro can do and let them invoke the right capability with the right software graph context.
No memorized commands. No context switching. No dashboard. Just tell your AI assistant what you need, and security becomes part of the conversation.
The traditional security workflow (find → report → ticket → fix) takes days to weeks. When vulnerabilities get exploited in minutes, that cycle length is unacceptable.
But when security is built into the AI coding assistant, the loop becomes: enrich → prevent → verify. Security findings get surfaced within developer workflows, remediation time collapses, and vulnerable patterns are never generated in the first place. This prevention occurs at every commit, across every repo, without adding headcount.
Here are the six security skills that ship with Apiiro CLI:
1. Fast Scan
Trigger: When the user mentions scanning code, secrets detection, or OSS vulnerabilities.
Fast local scanning for leaked secrets and open-source vulnerabilities, with results in seconds. After your AI assistant generates code, it can run a scan on changed files, report any findings, and apply fixes, all before a single line reaches a commit. For CI/CD pipelines, diff-scan compares git references and blocks on critical findings, creating an auditable security gate whether the code was written by a human or an agent.
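As a sketch, the CI/CD gate described above might look like the following. The `diff-scan` capability is named in this post, but the flag names (`--base`, `--head`, `--fail-on`) are illustrative assumptions, not documented options:

```shell
# Hypothetical CI step: compare two git references and block the pipeline
# on critical findings. Flag names are illustrative assumptions.
apiiro diff-scan --base origin/main --head HEAD --fail-on critical \
  || { echo "Critical security findings detected; blocking merge."; exit 1; }
```

Because the command exits non-zero when it blocks, the same invocation works as a merge gate in any CI system that fails a job on a non-zero exit status.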
Outcome: Secrets and known CVEs caught at the moment of generation, not weeks later in a ticket queue.
2. Risk Management
Trigger: When the user asks about security risks, vulnerabilities, or findings.
Your AI assistant queries Apiiro’s full risk inventory, filtered by severity, category, or finding type, and explains each finding in the context of your codebase. No dashboard. No spreadsheet. No context switch. Risk data reaches developers through the tool they already use, reducing mean time to remediate (MTTR) by transforming vulnerability investigation into part of the coding conversation.
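A developer (or their assistant) might query the risk inventory along the lines of the sketch below; the subcommand and filter flags are illustrative assumptions, not the documented CLI surface:

```shell
# Hypothetical risk queries; subcommand and flag names are assumptions.
apiiro risks list --severity critical     # highest-severity findings first
apiiro risks list --category secrets      # narrow to a single finding type
```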
Outcome: Developers engage with security findings inside their workflow, not in a backlog they never open.
3. Fix
Trigger: When the user wants to fix, remediate, or resolve a security risk.
Apiiro’s risk intelligence connects to your AI assistant’s coding ability. It retrieves risk details, pulls remediation instructions tailored to the finding type, and applies the fix directly in your codebase. For secrets, it removes the exposure. For vulnerable dependencies, it upgrades to a patched version. For code-level findings, it rewrites the vulnerable pattern. When automated remediation isn’t available, it falls back to Apiiro Guardian Agent for guided advice, and applies the fix either way.
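The remediation flow above might be driven from the terminal like this; the `fix` subcommand and the risk ID format are illustrative assumptions:

```shell
# Hypothetical remediation flow; subcommand and ID format are assumptions.
apiiro risks list --severity critical     # identify the risk to address
apiiro fix RISK-1234                      # apply the tailored remediation
```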
Outcome: Remediation collapses from days to minutes, without requiring a developer to leave their IDE.
4. Guardian Agent
Trigger: When the user wants AI-powered security analysis or asks questions about codebase security.
Guardian is Apiiro’s AI security agent. It knows your codebase, your dependencies, and your risk history. Its answers are specific to your repository, not generic advice. Ask it anything: “Is my auth implementation secure?” “What’s the attack surface of this service?” “How should I handle file uploads safely?”
For security leaders, Guardian’s org-wide mode answers natural-language posture questions across all repositories: “What are our top critical risks this week?”
No dashboards. No query languages. No waiting for a weekly report.
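The two modes might be invoked like this from the terminal; the subcommand name and the `--org` flag are illustrative assumptions, not documented options:

```shell
# Hypothetical Guardian invocations; names and flags are assumptions.
apiiro guardian "Is my auth implementation secure?"          # repo-scoped
apiiro guardian --org "What are our top critical risks this week?"  # org-wide
```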
Outcome: Every developer has an AppSec engineer on demand. Every security leader has instant, org-wide visibility.
5. AI Threat Modeling
Trigger: When the user wants threat analysis or STRIDE review of a design or feature spec.
Give the CLI a feature description, spec, or architectural change, and it returns a STRIDE-based threat analysis before code generation begins. This is prevention at its earliest possible point.
The real power is chaining the threat-model with Apiiro Secure Prompt: describe a feature, receive a structured threat analysis, then feed those threats into Secure Prompt to generate security-hardened implementation requirements for each countermeasure.
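A sketch of that chain from the terminal, with subcommand names and flags as illustrative assumptions:

```shell
# Hypothetical chaining of the two skills; names and flags are assumptions.
apiiro threat-model feature-spec.md > threats.md    # STRIDE analysis of the spec
apiiro secure-prompt --threats threats.md \
  "Implement the file upload service"               # hardened requirements per threat
```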
Outcome: Threat modeling shifts from a quarterly exercise to a per-feature habit, with zero additional overhead on the developer. The workflow becomes: describe → threat-model → secure-prompt → build.
6. Secure Prompt
Trigger: When the user wants to add security requirements to a coding task.
Give the CLI a development task, and it returns that same task enriched with security requirements specific to your repo’s stack, dependencies, and known risk profile. The business intent is preserved, but security guardrails are added around it, before the AI agent writes a single line.
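In its simplest form, enrichment might look like the sketch below; the subcommand name is an illustrative assumption:

```shell
# Hypothetical prompt enrichment; subcommand name is an assumption.
# The task comes back with stack- and risk-profile-specific guardrails added.
apiiro secure-prompt "Add a file upload endpoint to the API"
```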
Outcome: Vulnerable patterns are never generated in the first place. The cost to fix drops to zero.
For developers and AppSec practitioners, Apiiro CLI turns secure development into a trusted conversation with your AI coding agents. Once your assistants have access to Apiiro security capabilities, development scenarios can be secured in seconds with a simple secure prompt:
| Scenario | What to ask your AI Assistant |
| --- | --- |
| Writing new code | “Add security requirements to this task.” Secure Prompt enriches the prompt before code is generated. |
| Designing a new feature | “Threat model this feature.” STRIDE analysis plus countermeasure specs via AI Threat Modeling. |
| Before committing | “Scan for secrets.” Context-enriched Fast Scan catches secrets and vulnerable dependencies in seconds. |
| Investigating a finding | “What risks does this repo have?” Guardian Agent leverages the Apiiro Risk Graph to explain findings in context. |
| Fixing a vulnerability | “Fix this risk.” Your AI coding assistant leverages the Fix skill from Apiiro CLI and adds application context from Apiiro to apply the remediation. |
| Cross-repository questions | “What are our top risks this week?” Guardian Agent’s org-wide mode answers security posture questions. |
AI is rewriting how software is built. Security platforms that weren’t designed for AI agents will become irrelevant because AI agents can’t interact with them.
The Apiiro CLI is proof that being AI-native means more than using AI inside your platform. It means building a platform that AI can use. One where the AI agent that writes the code can also scan it, risk-assess it, threat-model it, and fix it, before it ever reaches production.
Security should be easily accessible to the developer and visible to the leader. The CLI is how we make that real.
Book a demo and we’ll show you exactly how the Apiiro CLI integrates into your development workflow.
Or, try it out for yourself – get started in 4 steps:
# Install
brew tap apiiro/tap && brew install apiiro
# Authenticate
apiiro login
# Add skills for your AI assistant
npx skills add apiiro/cli-releases
Then ask your AI assistant: “What risks does this repo have?”
Not using an AI coding assistant yet? The CLI works directly from your terminal too, giving AppSec practitioners the same scanning, risk management, and threat modeling capabilities from the command line. Start with apiiro --help
CLI releases and skills available here.