Is Vibe Coding Safe? Here's What the Research Actually Shows
Only 10.5% of AI-generated code is both functional and secure. A Wiz study found 20% of vibe-coded apps have serious vulnerabilities. Here's what the data says.
Vibe coding — describing what you want in natural language and letting AI build it — has gone from a novelty to the default way many people build software. But "does it work?" and "is it safe?" are different questions, and a growing body of research is providing answers.
The short version: most AI-generated code that works is not secure. The gap between functional and safe is massive, and most developers don't know it exists.
The Numbers: Functional vs. Secure
A 2025 research paper from Cornell ("Is Vibe Coding Safe?", published on arXiv) benchmarked AI coding agents on real-world tasks. The finding: 61% of solutions generated by SWE-Agent with Claude Sonnet were functionally correct, but only 10.5% were both functional and secure. In other words, roughly five out of six working AI-generated solutions contain at least one security vulnerability.
A separate study by Wiz found that 20% of vibe-coded applications have serious vulnerabilities or configuration errors. And Aikido.dev's analysis concluded that 45% of AI-generated code contains vulnerabilities from the OWASP Top 10.
These aren't scare numbers from anti-AI advocates. These are from researchers and security companies who use AI tools themselves. The data is consistent: AI tools generate functional code reliably but secure code rarely.
Why Functional Code Isn't Secure Code
When a developer writes code manually, they bring context that AI doesn't have. They know the user settings endpoint needs authorization because they designed the data model. They add rate limiting because they've seen brute force attacks in production. They validate file uploads because they've read about SSRF.
Vibe coding removes that context. You describe the feature, the AI builds it, and the output looks correct because it does what you asked. The security gaps are invisible because they're not about what the code does — they're about what the code doesn't prevent.
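The gap is easy to see in miniature: both handlers below pass a happy-path functional test, and only one enforces ownership. This is a hypothetical sketch (the `get_settings_*` functions, `User` model, and `SETTINGS` store are invented for illustration, not taken from any cited study):

```python
from dataclasses import dataclass

@dataclass
class User:
    id: int

# Toy per-user settings store standing in for a database.
SETTINGS = {1: {"theme": "dark"}, 2: {"theme": "light"}}

def get_settings_functional(current_user: User, target_user_id: int) -> dict:
    # "Works": returns the requested settings and passes a happy-path test.
    # Nothing stops user 1 from reading user 2's data (an IDOR flaw).
    return SETTINGS[target_user_id]

def get_settings_secure(current_user: User, target_user_id: int) -> dict:
    # The invisible difference: refuse requests for other users' data.
    if current_user.id != target_user_id:
        raise PermissionError("cannot read another user's settings")
    return SETTINGS[target_user_id]
```

A test that only checks "does it return my settings?" passes for both versions, which is exactly why the flaw survives review.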
The Monoculture Problem
When one developer writes insecure code, one application is vulnerable. When an AI model generates insecure patterns, every application built with that model shares the same blind spots. AI models are trained on the same data, produce the same patterns, and make the same omissions.
This creates a monoculture of vulnerability. Attackers figure out the pattern once — "AI-built Next.js apps usually don't have authorization on API routes" — and apply it across thousands of targets. The scale of AI code generation turns individual vulnerabilities into systemic risks.
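Why a shared blind spot is systemic rather than individual can be sketched in a few lines: one probe, applied across every app built with the same model. Everything here is hypothetical and purely in-memory; no real routes or endpoints are contacted:

```python
# Routes that (hypothetically) recur across apps generated by the same model.
COMMON_AI_ROUTES = ["/api/users", "/api/admin", "/api/settings"]

def probe(app_routes: dict[str, bool], route: str) -> bool:
    """Return True if `route` exists and answers without authentication.

    Each app is modeled as a dict mapping route -> "responds unauthenticated".
    """
    return app_routes.get(route, False)

def vulnerable_routes(apps: list[dict[str, bool]]) -> int:
    # The attacker writes the pattern check once, then loops over targets.
    return sum(1 for app in apps for r in COMMON_AI_ROUTES if probe(app, r))
```

The loop is the point: the marginal cost of attacking the thousandth app with the same blind spot is near zero.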
Who's Actually at Risk
The people most at risk aren't experienced developers using AI as a productivity tool. They have the security knowledge to review AI output and catch gaps. The people most at risk are the new wave of builders — designers, product managers, entrepreneurs — who are building real applications with AI tools and don't have the background to evaluate security.
These builders are shipping apps that handle real user data, real payments, and real business logic. They're moving fast because the tools let them. And the research shows that the tools' own security checks (like Lovable's built-in scan) miss a third of vulnerabilities.
Is It Safe? It Depends on One Thing
Vibe coding is as safe as your verification process. If you build with AI and test what it produces, you catch the roughly five out of six working solutions that aren't secure. If you ship without testing, you're betting on the remaining one-in-six chance that the AI happened to generate code that is both functional and secure.
The answer isn't to stop vibe coding. The productivity gains are real and the trend is irreversible. The answer is to add one step to the workflow: scan before you ship. It takes less time to run a security scan than it took to read this article.
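Even before reaching for a dedicated scanner, a pre-ship check can start as something this small. The rules below are illustrative red-flag patterns, not a real scanner, and the pattern names are invented for this sketch:

```python
import re

# Minimal pre-ship grep, not a substitute for a proper security scan.
# Each rule is an illustrative heuristic for a class of common flaw.
RED_FLAGS = {
    "hardcoded_secret": re.compile(r"(api_key|password|secret)\s*=\s*['\"]"),
    "sql_concat": re.compile(r"execute\(.*\+.*\)"),
}

def scan(source: str) -> list[str]:
    """Return the names of red-flag rules that match the source text."""
    return [name for name, pat in RED_FLAGS.items() if pat.search(source)]
```

A real scanner does far more (data-flow analysis, dependency checks, configuration review), but even a two-rule grep beats shipping unread AI output.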
Find out what your AI tool missed — free scan at nullscan.io