Security · February 8, 2026 · 6 min read

The Most Common Vulnerabilities in Vibe Coded Apps

A breakdown of the security issues that show up most often in apps built with AI coding tools — and why AI keeps making the same mistakes.

"Vibe coding" has become the shorthand for building applications with AI assistance — describing what you want in natural language and letting tools like Cursor, Lovable, Bolt, or v0 generate the code. It's fast, it's accessible, and it's producing a wave of new applications from people who couldn't build them before.

But it's also producing a wave of security vulnerabilities. Not because the tools are bad, but because security requires a mindset that AI coding assistants don't have.

SQL Injection: Still the Classic

SQL injection has been on the OWASP Top 10 for over two decades, and AI-generated code still produces it. The issue typically shows up when AI writes database queries using string concatenation instead of parameterized queries.

Modern ORMs like Prisma, SQLAlchemy, and Drizzle generally protect against this, but AI tools sometimes bypass the ORM for complex queries or use raw SQL when the ORM doesn't support what's needed. Those raw queries are often vulnerable.

The fix is straightforward — always use parameterized queries or your ORM's query builder. But the point is that AI doesn't always default to the safe approach.
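To make the difference concrete, here is a minimal sketch using Python's stdlib `sqlite3` driver (the table and function names are illustrative, not from any particular app). The unsafe version concatenates input into the query; the safe version lets the driver bind the value:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # VULNERABLE: string concatenation lets input rewrite the query
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # SAFE: the driver binds the value, so input can never become SQL
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

# Demo with an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
conn.executemany("INSERT INTO users (username) VALUES (?)",
                 [("alice",), ("bob",)])

payload = "' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # the classic payload matches every row
print(len(find_user_safe(conn, payload)))    # matches nothing
```

The same principle carries over to Postgres, MySQL, and every ORM's raw-query escape hatch: parameters travel separately from the query text.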

Cross-Site Scripting (XSS)

XSS vulnerabilities appear when user input is rendered in the browser without proper sanitization. Modern frameworks like React and Next.js provide built-in XSS protection through JSX escaping, but there are common ways AI-generated code bypasses these protections.

The most frequent pattern: using dangerouslySetInnerHTML (React) or v-html (Vue) to render user-generated content. AI tools reach for these when asked to display formatted text, markdown content, or HTML from an API. Each usage is a potential XSS vector if the content isn't sanitized server-side.
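The server-side fix is to escape (or sanitize) before the content ever reaches the template. A minimal Python sketch, assuming plain text is being displayed — if you genuinely need to allow some HTML, reach for an allowlist sanitizer library rather than hand-rolling one:

```python
import html

def render_comment_unsafe(comment: str) -> str:
    # VULNERABLE: user content dropped straight into markup
    return f"<div class='comment'>{comment}</div>"

def render_comment_safe(comment: str) -> str:
    # SAFE: escape before interpolating, so tags render as inert text
    return f"<div class='comment'>{html.escape(comment)}</div>"

payload = "<script>alert('xss')</script>"
print(render_comment_unsafe(payload))  # script tag survives intact
print(render_comment_safe(payload))    # &lt;script&gt;... displays as text
```

This is exactly what JSX escaping does for you automatically — which is why bypassing it with dangerouslySetInnerHTML reopens the hole.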

Broken Authentication

AI tools can set up authentication flows quickly — login pages, JWT tokens, session management. What they often miss is the security around those flows:

  • No rate limiting on login attempts, allowing brute force attacks
  • Password reset tokens that don't expire or can be reused
  • Session tokens stored in localStorage instead of httpOnly cookies
  • Missing CSRF protection on state-changing requests
  • JWT tokens with overly long expiration times
  • No account lockout after repeated failed attempts
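Taking the first item on that list, a sliding-window rate limiter is only a few lines. This is an illustrative in-memory sketch (the class name and limits are assumptions, and a real deployment would back this with Redis or similar so it survives restarts and works across instances):

```python
import time
from collections import defaultdict, deque

class LoginRateLimiter:
    """Allow at most `max_attempts` per `window` seconds per key (e.g. IP)."""

    def __init__(self, max_attempts=5, window=60.0):
        self.max_attempts = max_attempts
        self.window = window
        self._attempts = defaultdict(deque)  # key -> timestamps of attempts

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        attempts = self._attempts[key]
        # Drop attempts that have aged out of the window
        while attempts and now - attempts[0] > self.window:
            attempts.popleft()
        if len(attempts) >= self.max_attempts:
            return False  # over the limit: reject before checking credentials
        attempts.append(now)
        return True

limiter = LoginRateLimiter(max_attempts=5, window=60)
results = [limiter.allow("203.0.113.7", now=i) for i in range(6)]
print(results)  # [True, True, True, True, True, False]
```

The check runs before password verification, so brute-force traffic never reaches the expensive (and attackable) part of the flow.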

Insecure Direct Object References (IDOR)

IDOR is one of the most common vulnerabilities in AI-built apps because it requires understanding authorization context that AI often doesn't have.

Here's the typical pattern: AI generates a REST API with endpoints like /api/users/123/settings. It correctly fetches and returns the data for user 123. But it doesn't check whether the currently authenticated user IS user 123. Any logged-in user can access any other user's settings by changing the ID.

This happens because AI implements the happy path — getting the right data for the right ID — without implementing the security check of verifying the requester has permission to access that data.
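Stripped of framework noise, the missing check is a one-line comparison. A hypothetical handler pair (the data store and function names are made up for illustration):

```python
# Toy data store: settings keyed by user ID
USERS = {
    123: {"owner_id": 123, "theme": "dark"},
    456: {"owner_id": 456, "theme": "light"},
}

def get_settings_unsafe(user_id):
    # VULNERABLE: returns data for whatever ID the caller puts in the URL
    return USERS[user_id]

def get_settings_safe(authenticated_user_id, user_id):
    # SAFE: verify the requester owns the resource before returning it
    if authenticated_user_id != user_id:
        raise PermissionError("403: cannot access another user's settings")
    return USERS[user_id]
```

The authenticated user ID must come from the verified session or token — never from the request path or body, which the attacker controls.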

Missing Security Headers

This is the single most common issue we see. AI-generated applications almost never include security headers because headers don't affect functionality. Your app works identically with or without them.

But security headers are your browser-level defense layer. Content-Security-Policy prevents XSS from executing. HSTS forces HTTPS connections. X-Frame-Options prevents clickjacking. Without them, every other vulnerability becomes easier to exploit.
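A reasonable baseline set of headers, expressed framework-agnostically as a Python dict you'd merge into every response via middleware. The values here are a hedged starting point, not a finished policy — a strict Content-Security-Policy in particular needs tuning per app:

```python
# Baseline security headers; tighten or extend per application.
SECURITY_HEADERS = {
    # Restrict where scripts, styles, images, etc. may load from
    "Content-Security-Policy": "default-src 'self'",
    # Tell browsers to use HTTPS only, for the next two years
    "Strict-Transport-Security": "max-age=63072000; includeSubDomains",
    # Refuse to be embedded in frames (clickjacking defense)
    "X-Frame-Options": "DENY",
    # Never MIME-sniff responses into executable types
    "X-Content-Type-Options": "nosniff",
    # Limit referrer leakage to other origins
    "Referrer-Policy": "strict-origin-when-cross-origin",
}

def apply_security_headers(response_headers):
    """Merge the baseline in without clobbering explicit per-response values."""
    return {**SECURITY_HEADERS, **response_headers}
```

Most frameworks have a middleware hook where this merge belongs, so the headers ship on every response rather than being remembered per route.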

Server-Side Request Forgery (SSRF)

SSRF shows up in AI-built apps that accept URLs as input — for features like link previews, image imports, webhook configurations, or PDF generation. AI implements the feature (fetch this URL and return the content) without implementing the protection (validate that the URL points to a public resource, not an internal service).

In cloud environments, SSRF can be particularly dangerous because it can be used to access cloud metadata endpoints that contain credentials and configuration data.
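A sketch of the missing validation step, using only the Python stdlib: resolve the hostname and reject anything that lands on a private, loopback, or link-local address (which covers the 169.254.169.254 metadata endpoint). This is a starting point, not a complete defense — a production version also has to re-validate after redirects and pin the resolved IP for the actual request to resist DNS rebinding:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_public_http_url(url):
    """Return True only if the URL is http(s) and resolves to public IPs."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False  # rejects file://, gopher://, and schemeless input
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False  # unresolvable hostname: refuse rather than guess
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if not ip.is_global:  # blocks 10/8, 127/8, 169.254/16, ::1, etc.
            return False
    return True

print(is_public_http_url("http://169.254.169.254/latest/meta-data/"))  # False
print(is_public_http_url("file:///etc/passwd"))                        # False
```

Crucially, the check runs on the resolved addresses, not the hostname string — blocklisting "localhost" alone is trivially bypassed with an attacker-controlled DNS name.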

The Pattern

Every vulnerability on this list follows the same pattern: AI builds the feature correctly but doesn't build the security around it. It implements what you asked for without considering what an attacker might ask for.

This isn't a limitation that will be fixed soon. Security requires adversarial thinking — imagining how someone would misuse a feature — and that's fundamentally different from the generative approach AI tools take.

The solution isn't to stop using AI tools. It's to test what they produce. Automated security scanning catches the vast majority of these issues in minutes.

Check your app for these vulnerabilities — free scan at nullscan.io
