May 11, 2026 · lovable · bolt · cursor · security · cve

Why AI Code Generators Keep Producing the Same Security Vulnerabilities

CVE-2025-48757 wasn't a one-off. The same RLS pattern shows up across Lovable, Bolt, and Cursor apps. Here's why — and what has to change.

CVE-2025-48757 exposed a specific vulnerability in Lovable-generated apps: Row Level Security policies configured with USING (true), which made tables readable by anyone — authenticated or not.
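
In Supabase terms, the pattern looks roughly like this. A minimal sketch with placeholder table and policy names, not any tool's exact output:

  -- RLS is technically enabled, but the policy matches every row
  -- for every request, authenticated or anonymous.
  alter table profiles enable row level security;

  create policy "Enable read access for all users"
    on profiles for select
    using (true);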

What made it a CVE rather than a one-off mistake was the scale. The same pattern appeared in 170+ apps, all generated by the same tool, all with the same flaw in the same place.

That's not a bug. That's a training distribution problem. And it applies to every AI code generator, not just Lovable.


Why AI tools produce the same vulnerabilities at scale

1. They optimize for "it works," not "it's safe"

AI code generators are trained on human-written code and evaluated on functional correctness. A model that generates insecure code passing every test scores the same as one that generates secure code passing every test. Security doesn't show up in the training signal.

USING (true) works. The app runs. Queries return data. No error is thrown. From the model's perspective, the output is correct.

2. Security requirements are implicit, not stated

When you prompt "add a users table with row level security," the intent is usually "users can only see their own rows." But you didn't say that. The model generates the least-constrained interpretation that satisfies the literal instruction: RLS is enabled, policy exists, done.

Implicit intent doesn't survive the compression of a language model's training. Explicit, specific constraints do.

3. Training data contains vulnerable patterns

Public GitHub repositories — a primary training source for all major code models — contain millions of examples of misconfigured RLS, hardcoded credentials, and missing security headers. Not because developers intended to be insecure, but because tutorial code, prototypes, and first drafts get committed and never cleaned up.

The model learned from that code. It reproduces those patterns.

4. One prompt, millions of identical outputs

The same scaffolding prompt, run a million times, produces essentially the same code. A single misconfigured pattern in the training data or output template propagates to every app generated with that prompt. This is the multiplication effect that made CVE-2025-48757 a CVE rather than a personal mistake.


The pattern across tools

The specific manifestation differs by tool, but the underlying cause is the same:

Lovable: USING (true) in RLS policies. Tables accessible to anonymous users. Direct consequence of the model generating syntactically correct but semantically permissive SQL.

Bolt: Similar RLS patterns. Also observed: the Supabase service role key used where the anon key was intended, granting broader table access than the developer expected.

Cursor / Copilot: Credentials autocompleted from context. The model sees a nearby .env file reference and fills in a plausible-looking key value — sometimes a real one from the same file.

All tools: Missing security headers. No AI scaffolding tool generates next.config.ts security header configuration, Content-Security-Policy, or Strict-Transport-Security by default. These require explicit prompting, and most tutorials don't include them.
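
For reference, a minimal next.config.ts sketch of the kind of header configuration these scaffolds leave out. The specific header values are illustrative starting points, not a complete policy:

  // next.config.ts
  import type { NextConfig } from "next";

  const securityHeaders = [
    { key: "Content-Security-Policy", value: "default-src 'self'" },
    { key: "Strict-Transport-Security", value: "max-age=63072000; includeSubDomains" },
    { key: "X-Content-Type-Options", value: "nosniff" },
    { key: "X-Frame-Options", value: "DENY" },
    { key: "Referrer-Policy", value: "strict-origin-when-cross-origin" },
  ];

  const nextConfig: NextConfig = {
    async headers() {
      // Apply the headers to every route.
      return [{ source: "/(.*)", headers: securityHeaders }];
    },
  };

  export default nextConfig;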


What the data shows

In two weeks of VibeScan scans across 61 apps:

  • 87% had no security headers
  • 9 apps had Twilio credentials hardcoded in client JavaScript
  • 2 apps had live Stripe secret keys in their bundle
  • Multiple apps had Supabase tables open to unauthenticated reads

These aren't advanced vulnerabilities. They're the kind of thing a security-aware developer catches on first review. AI-built apps often skip that review step because the code "came from the AI" and feels more authoritative than it is.


What has to change

At the tool level

AI builders need to treat security-sensitive operations differently from functional code generation:

  • Default deny for database policies. Generate USING (auth.uid() = user_id) as the default, not USING (true). Require explicit justification for permissive policies. (See the sketch after this list.)
  • Credential placement checks. Refuse to autocomplete credentials into client-side files. Warn when a variable named SECRET_KEY, AUTH_TOKEN, or similar appears in a frontend component.
  • Security header templates. Include security header configuration in every web app scaffold, not as an advanced option.
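
As a sketch of that ownership-scoped default, with illustrative table and column names (profiles, user_id):

  -- Rows are invisible unless they belong to the requesting user.
  alter table profiles enable row level security;

  create policy "Users read their own rows"
    on profiles for select
    to authenticated
    using (auth.uid() = user_id);

  create policy "Users update their own rows"
    on profiles for update
    to authenticated
    using (auth.uid() = user_id)
    with check (auth.uid() = user_id);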

At the developer level

Until tools change, the gap has to be closed manually:

  1. Assume the first draft is insecure. Every AI-generated backend needs an RLS audit before shipping. (A quick audit query follows this list.)
  2. Check your JavaScript bundle. Open DevTools → Sources → search for sk_live, SID, secret, key. If you find a real credential, rotate it immediately and move the call server-side.
  3. Add security headers explicitly. They take 15 minutes and prevent a category of attacks that AI tools routinely leave apps open to.
  4. Run an automated scan. VibeScan checks for all of these patterns in 30 seconds without requiring access to your source code.
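
For step 1, one quick check is to list policies whose row filter is the constant true. A sketch against PostgreSQL's pg_policies view, runnable in the Supabase SQL editor:

  -- Any row returned here deserves a closer look.
  select schemaname, tablename, policyname, cmd, qual, with_check
  from pg_policies
  where qual = 'true' or with_check = 'true';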

CVE-2025-48757 as a test case

The CVE was published nine days ago. Since then, some affected app owners have applied the RLS fix. The vulnerability in their specific apps is patched.

But new Lovable apps built today with the same scaffolding prompt produce the same USING (true) pattern. The CVE fixed individual instances. It didn't change what the model generates.

That's the structural problem. Individual disclosures don't fix training distributions. The scale of the fix has to match the scale of the vulnerability.

Until AI tools change how they generate security-sensitive code by default, this is a developer responsibility — and most developers building with these tools don't know what they don't know.

Scan your app before you ship →
