The OWASP Top 10 in AI-Generated Code: Where Vibe Coding Goes Wrong
How each OWASP Top 10 vulnerability specifically shows up in code generated by AI tools like Cursor, Lovable, and Bolt — with real patterns and fixes.
The OWASP Top 10 is the definitive list of web application security risks. Every security audit and penetration test references it. But the standard explanations assume code written by humans. AI-generated code has its own patterns — it produces certain OWASP vulnerabilities far more often than others, and in predictable ways.
Here's how each OWASP item specifically manifests in AI-generated code, with the patterns to look for and the fixes that work.
1. Broken Access Control — The #1 AI Code Vulnerability
This is the OWASP item AI tools fail at most. AI generates API endpoints that fetch data by ID correctly but don't verify the requesting user has permission to access that ID. The route works, the data returns, but any authenticated user can access any other user's data by changing the ID in the URL.
The AI pattern: you ask for "a settings page." The AI builds GET /api/users/[id]/settings that fetches by ID. It doesn't verify that the authenticated session actually belongs to that ID. The fix: always filter by the authenticated user's ID, never by a URL parameter.
```typescript
// AI generates this (vulnerable): params.id is attacker-controlled via the URL
const settings = await db.settings.findUnique({ where: { userId: params.id } })

// What it should generate: scope the query to the authenticated session
const session = await getServerSession()
const settings = await db.settings.findUnique({ where: { userId: session.user.id } })
```

2. Cryptographic Failures
AI tools rarely make classic cryptography mistakes (they won't hash passwords with MD5). But they make a subtler error: storing sensitive data in the wrong place. Environment variables with the NEXT_PUBLIC_ prefix get bundled into client-side JavaScript. AI tools sometimes put API keys or configuration secrets there because that's the simplest way to make the code work.
The fix: any secret, key, or connection string must be a server-side-only environment variable without the NEXT_PUBLIC_ prefix.
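One way to make this rule mechanical is a guard that rejects secret names carrying the client-exposed prefix. The helper below is a hypothetical illustration, not a Next.js API:

```typescript
// Sketch: refuse to treat a NEXT_PUBLIC_-prefixed variable as a secret.
// NEXT_PUBLIC_* values are inlined into the client JavaScript bundle at build
// time, so anything under that prefix is readable by every visitor.
function assertServerOnly(name: string): void {
  if (name.startsWith("NEXT_PUBLIC_")) {
    throw new Error(`${name} is shipped to the browser and must never hold a secret`)
  }
}

assertServerOnly("DATABASE_URL")       // fine: server-side only
// assertServerOnly("NEXT_PUBLIC_STRIPE_SECRET") would throw
```

A check like this can run at server startup over whatever list of secret names the app expects, failing fast instead of leaking quietly.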
3. Injection
Modern AI tools usually generate ORM-based queries (Prisma, Drizzle), which handle SQL parameterization automatically. The risk appears when AI uses raw SQL for complex queries the ORM doesn't support easily, or when building search functionality with dynamic filters.
Watch for: any Prisma $queryRaw, Drizzle sql``, or Mongoose $where that includes user input. Also watch for dangerouslySetInnerHTML in React — AI tools reach for this when asked to render formatted text or markdown content.
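Why the tagged-template forms are safe can be shown with a toy parameterizer. `buildQuery` below is illustrative, not Prisma's implementation: the point is that the SQL text stays fixed while user input travels separately as a bound value, so the payload can never change the query's structure.

```typescript
// Minimal sketch of SQL parameterization: literal fragments become the query
// text with numbered placeholders; interpolated values never touch that text.
type ParamQuery = { text: string; values: unknown[] }

function buildQuery(strings: TemplateStringsArray, ...values: unknown[]): ParamQuery {
  const text = strings.reduce((sql, part, i) => sql + `$${i}` + part)
  return { text, values }
}

const userInput = "'; DROP TABLE users; --"
const q = buildQuery`SELECT * FROM posts WHERE title = ${userInput}`
// q.text is "SELECT * FROM posts WHERE title = $1"; the payload sits inert in q.values
```

String concatenation (`$queryRawUnsafe` with interpolated input) collapses both channels into one, which is exactly what injection exploits.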
4. Insecure Design
AI tools build what you ask for, not the security constraints around it. A password reset flow will work (send email, click link, set new password) but the reset token might never expire and could be reusable. A file upload will accept and store files but won't validate file types server-side.
This is the hardest OWASP category to test for automatically. You need to think about abuse cases: what if someone requests 1,000 password resets? What if they upload a 10GB file? What if they submit the same form 10,000 times? AI doesn't ask these questions.
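For the password-reset case above, the missing constraints are small: tokens must expire and must be single-use. A minimal sketch, using an in-memory store for illustration (a real app would persist tokens server-side and store only a hash of them):

```typescript
import { randomUUID } from "node:crypto"

interface ResetToken { userId: string; expiresAt: number; used: boolean }

const tokens = new Map<string, ResetToken>()

// Issue a token with a short TTL (15 minutes here; pick what fits your flow)
function issueResetToken(userId: string, ttlMs = 15 * 60 * 1000): string {
  const token = randomUUID()
  tokens.set(token, { userId, expiresAt: Date.now() + ttlMs, used: false })
  return token
}

// Redeem exactly once, and only before expiry; otherwise return null
function redeemResetToken(token: string): string | null {
  const record = tokens.get(token)
  if (!record || record.used || Date.now() > record.expiresAt) return null
  record.used = true // single-use: a second redemption fails
  return record.userId
}
```

Both checks fit in a handful of lines; the failure mode is that nobody asks for them, not that they're hard to write.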
5. Security Misconfiguration
AI tools generate application code but rarely modify framework-level configuration. This means apps ship with CORS set to allow all origins, debug/verbose error messages enabled, GraphQL introspection accessible in production, and no security headers. The AI wrote your features within the framework's defaults, and those defaults are optimized for development, not security.
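For Next.js specifically, a baseline can be set in next.config.js through the framework's headers() option. The values below are a conservative starting point, not a complete policy (a real Content-Security-Policy in particular needs to be tailored to the app):

```javascript
// next.config.js — security headers the framework defaults leave out
const securityHeaders = [
  { key: "X-Frame-Options", value: "DENY" },                  // block clickjacking via iframes
  { key: "X-Content-Type-Options", value: "nosniff" },        // stop MIME-type sniffing
  { key: "Referrer-Policy", value: "strict-origin-when-cross-origin" },
  { key: "Strict-Transport-Security", value: "max-age=63072000; includeSubDomains" },
]

module.exports = {
  async headers() {
    // Apply to every route
    return [{ source: "/(.*)", headers: securityHeaders }]
  },
}
```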
6. Vulnerable Components
AI tools install packages to solve problems but don't evaluate those packages for security. They might install an abandoned package with known CVEs because it was popular in training data. Run npm audit after any AI-generated project setup and address critical findings before deploying.
7. Authentication Failures
AI generates working auth flows (login, register, password reset) without the security hardening: no rate limiting on login endpoints, session tokens in localStorage instead of httpOnly cookies, no account lockout after failed attempts, missing CSRF protection. The auth works for normal usage but collapses under any adversarial testing.
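The smallest version of the missing rate limiting is a fixed-window counter. The sketch below is in-memory and keyed by IP for illustration; production setups typically use Redis or edge middleware, and the limits are whatever your threat model calls for:

```typescript
// Fixed-window rate limiter: allow `limit` login attempts per IP per window.
const attempts = new Map<string, { count: number; windowStart: number }>()

function allowLoginAttempt(ip: string, limit = 5, windowMs = 60_000): boolean {
  const now = Date.now()
  const entry = attempts.get(ip)
  if (!entry || now - entry.windowStart > windowMs) {
    // New IP or expired window: start counting again
    attempts.set(ip, { count: 1, windowStart: now })
    return true
  }
  entry.count += 1
  return entry.count <= limit
}
```

Called at the top of the login handler, this turns a credential-stuffing run from thousands of guesses per minute into five.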
8-10: Integrity, Logging, SSRF
Software integrity failures: AI-generated CI/CD configurations and build scripts are usually insecure by default. Logging: AI almost never adds security event logging — failed logins, access denials, and input validation failures go unrecorded. SSRF: any AI-generated feature that accepts a URL (link preview, image import, webhook) fetches it without validating that it points to a public resource.
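For the SSRF case, a first-pass guard looks like the sketch below. It checks scheme and hostname only; a production check should also resolve DNS and verify the resulting IP, since an attacker can point a public hostname at a private address:

```typescript
// Reject URLs that obviously target internal resources before fetching them.
function isSafePublicUrl(raw: string): boolean {
  let url: URL | null = null
  try { url = new URL(raw) } catch { /* malformed input */ }
  if (url === null) return false
  // Only plain HTTP(S) — no file:, gopher:, etc.
  if (url.protocol !== "http:" && url.protocol !== "https:") return false
  const host = url.hostname
  if (host === "localhost" || host.endsWith(".internal")) return false
  // Loopback, RFC 1918 private ranges, and link-local (incl. cloud metadata IPs)
  if (/^(127\.|10\.|192\.168\.|169\.254\.|172\.(1[6-9]|2\d|3[01])\.)/.test(host)) return false
  return true
}
```

The classic target is 169.254.169.254, the cloud metadata endpoint: an unguarded link-preview feature will happily fetch it and hand back instance credentials.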
The Pattern
Items 1, 5, and 7 (Broken Access Control, Security Misconfiguration, Authentication Failures) account for the vast majority of vulnerabilities in AI-generated code. They're also the most testable — an automated scanner catches all three categories in minutes. You don't need to memorize the OWASP Top 10; you need to scan for it.
See how your AI-generated code scores against the OWASP Top 10 — free scan at nullscan.io