Your AI-built app works great in testing. Then someone changes a number in the URL and suddenly sees everyone's data.
I've reviewed security for over 50 AI-generated codebases in the last year. Same pattern every time: fast initial progress, then a preventable security disaster kills all momentum right when the product starts working.
The good news is that most security holes follow the same few patterns. Fix these three things and you'll stop 95% of the attacks that actually hit early-stage startups.
1. Row-Level Security (Let the Database Protect Your Users) #
Row-Level Security means your database automatically prevents users from seeing each other's data. When you query the database, it only returns rows that specific user is allowed to see. The enforcement happens at the database level, not in your application code, which is critical because AI-generated code often handles permissions inconsistently: one endpoint checks permissions, the next forgets.
Last month I changed a single URL parameter in a founder's dashboard and immediately saw 400 other users' data. The founder had no idea this was even possible. This was two weeks before their investor demo.
Copy this prompt into Claude or Cursor:
```
Implement Row-Level Security in my Supabase database.
Tables: [list them]. Each row only accessible to the user who created it.
Generate SQL policies to enable RLS on all tables.
Restrict access based on auth.uid().
Include policies for SELECT, INSERT, UPDATE, DELETE.
```
After Claude generates the policies, test them yourself: log in as one user, then try to access another user's ID in the URL. If you can see their data, RLS isn't working. Go back to Claude with the specific failure and ask it to fix the policy.
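For reference, the generated policies should look roughly like this. This is a sketch assuming a hypothetical `notes` table with a `user_id` column; swap in your own table and column names:

```sql
-- Turn RLS on. Without this line, the policies below are ignored.
alter table notes enable row level security;

-- Users can read only their own rows.
create policy "notes_select_own" on notes
  for select using (auth.uid() = user_id);

-- New rows must be stamped with the creator's own id.
create policy "notes_insert_own" on notes
  for insert with check (auth.uid() = user_id);

-- Updates and deletes are limited to rows the user owns.
create policy "notes_update_own" on notes
  for update using (auth.uid() = user_id);

create policy "notes_delete_own" on notes
  for delete using (auth.uid() = user_id);
```

Note that inserts are validated by `with check`, not `using`. A policy that only covers SELECT still leaves writes wide open, which is why the prompt asks for all four operations.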
This takes 20 minutes to set up properly and prevents the data leak that ends your company.
2. Rate Limiting (Protect Your Costs) #
Rate limiting caps how many requests a single IP address can make per hour. Without it, one attacker can generate thousands of fake accounts, fill your database with garbage, burn through your email quota, and rack up massive API costs while you sleep.
I watched this happen to a founder last quarter. Woke up to a $600 AWS bill from a single night of bot traffic. The app worked fine. But their entire monthly runway disappeared because they didn't have rate limiting configured.
Give this prompt to your AI coding assistant:
```
Add rate limiting to all my API routes. Limit each IP address to
100 requests per hour. Apply globally to all API routes.
Return a clear error message when the limit is exceeded. Show me
where to add this and how it will work.
```
Start strict at 100 requests per hour. Real users never hit this limit—bots do. You can always increase it later if legitimate users complain, but in 50+ implementations I've never seen that happen. This also protects you when things go right: your app goes viral, traffic spikes 1000x overnight, and the rate limits keep your costs predictable instead of bankrupting you.
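The core idea is simple enough to sketch. Here's a minimal fixed-window limiter in TypeScript, assuming a single Node process with an in-memory counter; a real deployment with multiple instances would need a shared store like Redis instead:

```typescript
// Minimal fixed-window rate limiter: at most LIMIT requests per IP
// per window. In-memory, so counts reset on redeploy and are
// per-instance. Fine as a sketch; use a shared store in production.
const WINDOW_MS = 60 * 60 * 1000; // one hour
const LIMIT = 100;

type Window = { count: number; resetAt: number };
const hits = new Map<string, Window>();

export function checkRateLimit(
  ip: string,
  now = Date.now()
): { allowed: boolean; remaining: number } {
  const w = hits.get(ip);
  if (!w || now >= w.resetAt) {
    // First request from this IP, or the previous window expired.
    hits.set(ip, { count: 1, resetAt: now + WINDOW_MS });
    return { allowed: true, remaining: LIMIT - 1 };
  }
  if (w.count >= LIMIT) return { allowed: false, remaining: 0 };
  w.count += 1;
  return { allowed: true, remaining: LIMIT - w.count };
}
```

In an API route you'd call `checkRateLimit` with the client's IP and return a 429 with a clear message when `allowed` is false.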
3. Keep API Keys Out of Your Code #
API keys in your codebase will get stolen. GitHub has bots scanning for exposed credentials 24/7. When they find your Stripe key or AWS credentials, someone will use them within hours. During security reviews, I find exposed keys in about 40% of AI-generated repositories, usually because Claude or Cursor wrote them directly into the code during development.
Use this prompt to fix it:
```
Move all my API keys to environment variables. Find every place
in my code using API keys directly (Stripe, AWS, database URLs,
third-party services). Show me: 1) how to create a .env.local file,
2) how to update code to use process.env, 3) what to add to .gitignore,
4) how to set these in Vercel/my hosting platform.
```
After this, your actual keys never appear in your repository. Set a calendar reminder to rotate all keys every 90 days: generate fresh keys in each provider's dashboard, then ask Claude to help you update your environment variables without breaking anything.
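The `process.env` pattern from step 2 is worth getting right: read each key through a helper that fails loudly at startup if the variable is missing, instead of letting `undefined` leak into a Stripe call and fail mysteriously at runtime. A sketch, with illustrative variable names:

```typescript
// Fail-fast accessor for required environment variables.
// Throws immediately if a key is missing, rather than passing
// `undefined` to a third-party SDK and failing later.
export function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Example usage (the variable names are illustrative):
// const stripeKey = requireEnv("STRIPE_SECRET_KEY");
// const dbUrl = requireEnv("DATABASE_URL");
```

A nice side effect: when a teammate clones the repo without a `.env.local`, they get one clear error naming the missing key instead of a cryptic SDK failure.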
The Bottom Line #
Security breaches kill momentum. These three controls prevent the disasters that actually hit early-stage startups. They take one afternoon to implement properly and work consistently after that. Do it before you launch, not after someone finds everyone's data.
