Wallarm's Q2 2025 API ThreatStats Report just dropped some numbers that should make any CISO sweat. 639 API-related CVEs in three months. That's seven new vulnerabilities every single day, and the majority are Critical or High severity.
But here's what's really scary: 34 of these vulnerabilities specifically target AI-powered APIs. That's not just theoretical attack surface - that's active exploitation of the infrastructure running your machine learning models, AI agents, and automated decision systems.
The AI Attack Surface Is Real Now
When companies started slapping AI onto everything, security teams knew this day would come. AI systems need APIs to function - they call external services, access training data, receive user inputs, and return responses. Every one of those touchpoints is a potential attack vector.
Ivan Novikov, Wallarm's CEO, hits the nail on the head: "Attackers are no longer just scanning for outdated libraries, they're exploiting the way APIs behave, especially those powering AI systems and automation."
The vulnerabilities aren't just traditional SQL injection and authentication bypasses. We're seeing logic-layer attacks that exploit how AI systems make decisions. Attackers send carefully crafted inputs that cause AI models to behave unpredictably, leak training data, or bypass security controls.
Real Breaches Are Already Happening
The report documents actual breaches hitting SaaS collaboration platforms and cloud infrastructure. These aren't lab experiments - they're production systems getting owned because their AI APIs had insecure defaults, weak authentication, or no runtime monitoring.
One particularly nasty case involved an AI agent vulnerability where attackers manipulated the agent's decision-making process through API calls. Instead of performing legitimate tasks, the compromised agent started executing unauthorized operations with elevated privileges.
The Traditional Security Model Is Broken for AI
Traditional API security focuses on known attack patterns and signatures. But AI-powered APIs behave differently. They're stateful, context-aware, and make decisions based on complex inputs that change over time.
Static security testing misses these dynamic vulnerabilities entirely. You can't scan an AI model's API the same way you'd test a CRUD application. The vulnerability might only surface when the AI encounters specific input combinations or decision trees.
Runtime protection becomes critical because AI systems can fail in unexpected ways. A language model might start outputting sensitive training data when prompted correctly. An image recognition API might misclassify malicious payloads as benign content. A recommendation engine might be tricked into promoting harmful content.
Logic-Layer Attacks Are The New Normal
The report highlights a "dramatic rise in logic-layer vulnerabilities." These aren't coding bugs - they're flaws in how applications implement business logic and decision-making processes.
For AI systems, logic-layer attacks are particularly dangerous because they exploit the core intelligence of the system. Attackers aren't just trying to break in - they're trying to manipulate the AI's reasoning process itself.
Prompt injection is one example that everyone knows about, but the real threats are more sophisticated. Attackers are finding ways to:
- Manipulate training data through APIs to poison model behavior
- Extract proprietary model architectures through carefully crafted queries
- Bypass content filtering by exploiting edge cases in AI decision trees
- Escalate privileges by tricking AI agents into performing unauthorized actions
The Numbers Tell The Story
639 API vulnerabilities in one quarter represents a 25% increase over Q1 2025. The acceleration is clear, and AI-powered systems are driving much of the growth.
More concerning is the severity distribution. The majority of these CVEs are Critical or High severity, meaning they provide immediate pathways to system compromise. These aren't low-risk theoretical vulnerabilities - they're actively exploitable attack vectors.
The 34 AI-specific vulnerabilities represent a new category that barely existed two years ago. As AI adoption accelerates across enterprise applications, this number will only grow.
What Security Teams Need to Do Right Now
First, inventory your AI-powered APIs. If you don't know which systems are using AI models or AI agents, you can't protect them. Many organizations have AI functionality embedded in third-party services they don't even realize are AI-powered.
Second, implement runtime API monitoring specifically designed for AI systems. Traditional WAFs and API gateways won't catch logic-layer attacks against AI models. You need solutions that understand AI behavior patterns and can detect anomalous decision-making.
Third, assume your AI systems will be targeted. The attack surface is expanding faster than security controls, and attackers are already adapting their techniques. Plan incident response procedures specifically for AI system compromises.
The Bigger Picture
This isn't just about API security - it's about the fundamental challenge of securing AI systems that we don't fully understand. When a neural network makes a decision, can you explain why? When an AI agent performs an action, can you trace the reasoning?
The complexity of AI systems creates blind spots that attackers are learning to exploit. Traditional security monitoring looks for known bad patterns, but AI systems can fail in novel ways that have never been seen before.
Wallarm's report is documenting the early stages of what will likely become the dominant cybersecurity challenge of the next decade. As AI becomes critical infrastructure, the attacks targeting these systems will become more sophisticated and more damaging.
The 639 vulnerabilities in Q2 2025 are just the beginning.