Security for AI-built apps

Practical guides on the vulnerabilities AI tools generate — and how to fix them.

Why AI-built apps need a different kind of security review

AI coding tools — Lovable, Bolt, Cursor, Claude Code, v0 — ship working code fast. The problem is that “working” and “secure” are not the same thing. AI models are trained to produce code that passes tests and satisfies prompts. They are not trained to reason about attacker intent, data exposure at scale, or the gap between what a developer meant and what the code actually permits.

The result is a consistent set of vulnerability classes that show up across every independent audit of AI-generated code: credentials left in client bundles, Row Level Security policies that are syntactically valid but semantically open, authentication logic with a missing ! operator that inverts the check, and security headers that are simply absent because no one asked for them.
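The inverted-check pattern is easiest to see in code. Below is a minimal sketch using a hypothetical `isOwner` helper and `Doc` type; the names are illustrative, not taken from any specific audited app:

```typescript
// Hypothetical record type for illustration.
interface Doc {
  ownerId: string;
}

function isOwner(doc: Doc, userId: string): boolean {
  return doc.ownerId === userId;
}

// BUGGY: the intent was "deny when NOT the owner", but the missing `!`
// inverts the logic: the owner is denied and everyone else is allowed.
function canDeleteBuggy(doc: Doc, userId: string): boolean {
  if (isOwner(doc, userId)) { // should be `!isOwner(doc, userId)`
    return false;
  }
  return true;
}

// FIXED: deny access unless the caller owns the document.
function canDeleteFixed(doc: Doc, userId: string): boolean {
  if (!isOwner(doc, userId)) {
    return false;
  }
  return true;
}

const doc: Doc = { ownerId: "alice" };
console.log(canDeleteBuggy(doc, "alice"));   // false: the owner is locked out
console.log(canDeleteBuggy(doc, "mallory")); // true: any other user may delete
console.log(canDeleteFixed(doc, "mallory")); // false
```

Note that the buggy version often survives manual testing: the developer tests as the owner, sees the delete blocked or allowed in some flow, and never tries a second account.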

What you'll find in this research

Supabase RLS deep dives

Row Level Security is the primary data access control in Supabase-backed apps, and AI tools routinely generate RLS policies that look correct but are wide open. We cover the exact policy patterns to audit, what USING (true) really means, and how to write policies that actually scope rows to the authenticated user.
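The two policy shapes side by side, sketched against a hypothetical notes table with a user_id column (table and policy names are illustrative):

```sql
-- WIDE OPEN: syntactically valid and passes "does the app work?" testing,
-- but USING (true) lets any client that can reach the API read every row.
create policy "notes_select" on public.notes
  for select using (true);

-- SCOPED: only returns rows whose user_id matches the caller's
-- authenticated Supabase user ID (auth.uid()).
create policy "notes_select_own" on public.notes
  for select using (auth.uid() = user_id);
```

Both policies "work" in a single-user demo, which is exactly why the first one survives into production.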

CVE analysis and incident breakdowns

When a vulnerability in an AI platform is disclosed, we break down what happened, why the AI generated it, and what developers should look for in their own apps. CVE-2025-48757 (Lovable), CVE-2025-54136 (Cursor MCP RCE), and related disclosures are covered with technical detail.

Pre-launch security checklists

Actionable checklists for developers shipping AI-built apps to real users, covering secrets management, header configuration, auth logic review, dependency auditing, and Supabase-specific attack surface. Each checklist is designed to be run in one session before a launch.
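The header-configuration item can be partly automated. Below is a sketch of a baseline header set and a helper that reports which ones a response is missing; the specific values are illustrative starting points (the CSP in particular must be tuned to the app's real script and asset origins), not a complete or universal policy:

```typescript
// Illustrative baseline of security headers to verify before launch.
const baselineHeaders: Record<string, string> = {
  "Content-Security-Policy": "default-src 'self'",
  "Strict-Transport-Security": "max-age=63072000; includeSubDomains",
  "X-Content-Type-Options": "nosniff",
  "X-Frame-Options": "DENY",
  "Referrer-Policy": "strict-origin-when-cross-origin",
};

// Given a map of response headers, return the baseline headers that are
// absent. Header names are compared case-insensitively, as HTTP requires.
function missingHeaders(responseHeaders: Record<string, string>): string[] {
  const present = new Set(
    Object.keys(responseHeaders).map((h) => h.toLowerCase())
  );
  return Object.keys(baselineHeaders).filter(
    (h) => !present.has(h.toLowerCase())
  );
}

// Example: a bare HTML response with no security headers at all.
console.log(missingHeaders({ "content-type": "text/html" }));
```

Feeding this the headers from a `curl -sI` of your deployed app turns "check the headers" from a vague checklist line into a pass/fail result.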

Platform-specific guidance

Security defaults differ across Lovable, Bolt, Cursor, and Claude Code. Each platform has its own patterns and failure modes. Our guides are written for the specific stack and scaffolding each tool generates.

The research methodology

Our analysis draws from published third-party audits (PreBreach, VibeWrench, ShipSafe, Lorikeet Security), CVE disclosures, community-reported incidents, and direct scanning of public app surfaces. We do not exploit live apps or access private data. All vulnerability examples come from published sources or apps whose owners have consented to analysis.

Where we make a statistical claim, such as the figure that 87% of AI-built apps carry high- or critical-severity vulnerabilities, we cite the specific audit or dataset the number comes from. The findings are consistent across multiple independent sources, which gives us confidence the patterns are real and widespread rather than artifacts of a single methodology.