Your app is live. Users are signing up. Revenue is trickling in.
Then you wake up to an email: "We found a security vulnerability in your app. Here's proof."
Attached is a screenshot of your production database. Or customer data. Or admin dashboard.
This is what happens when you skip the security audit.
AI code tools (Cursor, Claude, v0, Bolt) make building fast. They don't make building *secure*.
Because AI models are trained on public code — including thousands of examples of insecure code. When Claude writes an API route, it might copy patterns from outdated Stack Overflow posts. When Cursor autocompletes an authentication check, it might skip edge cases human developers learn the hard way.
Here's how to audit your AI-generated code before someone else does it for you.
Step 1: Check for Hardcoded Secrets and API Keys
The risk: AI code frequently hardcodes API keys, database URLs, and secret tokens directly in source files.
Why AI does this: Training data includes thousands of code examples with hardcoded credentials (often placeholders, but not always). AI models reproduce this pattern by default.
What to check:
- Search your codebase for `apiKey`, `secretKey`, `password`, `token`, `API_KEY`
- Grep for patterns like `"sk-"` (OpenAI keys), `"pk_"` (Stripe publishable keys), `"Bearer "`
- Check that `.env` files are in `.gitignore` (AI often creates `.env.example` but forgets `.gitignore`)
- Verify your git history doesn't contain old committed secrets (even if you deleted them later, they're still in history)
How to fix:
- Move all secrets to environment variables
- Use `.env.local` (Next.js) or platform environment variables (Vercel, Cloudflare, Railway)
- Rotate any keys that were committed to git (assume they're compromised)
- Add a pre-commit hook to block commits containing common secret patterns
Tool: Use git-secrets or truffleHog to scan your repo history for accidentally committed credentials.
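Before reaching for a dedicated scanner, a quick pattern sweep catches the obvious cases. A minimal sketch — the regexes below are illustrative, not exhaustive, and git-secrets/truffleHog cover far more:

```javascript
// Illustrative secret patterns — extend for the providers you actually use
const SECRET_PATTERNS = [
  /sk-[A-Za-z0-9]{20,}/,          // OpenAI-style secret keys
  /pk_(live|test)_[A-Za-z0-9]+/,  // Stripe publishable keys
  /Bearer\s+[A-Za-z0-9\-_.]+/,    // bearer tokens in headers
  /(api[_-]?key|secret[_-]?key|password)\s*[:=]\s*['"][^'"]+['"]/i,
];

// Return every suspicious match found in a string of source code
function findSecrets(source) {
  return SECRET_PATTERNS
    .map((re) => source.match(re))
    .filter(Boolean)
    .map((m) => m[0]);
}
```

Run it over each file's contents in a pre-commit hook and block the commit when it returns anything.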
Step 2: Audit API Route Authentication
The risk: AI-generated API routes often have no authentication checks, or broken authentication that can be bypassed.
Common vulnerabilities:
- No auth check at all (route is public when it should be private)
- Auth check exists but can be bypassed by manipulating headers
- Session validation happens client-side only (can be faked)
- API accepts user input without verifying the user owns that resource (IDOR vulnerability)
What to check:
Check #1: Does every private API route verify the user?
Look for routes that accept sensitive actions (delete, update, admin operations) without first verifying the current user's session. Fix: Add session/token verification at the top of every private route.
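The pattern looks like this — a framework-agnostic sketch where `getSession` and the `res` shape stand in for your auth library and framework (Next.js, Express, etc.):

```javascript
// Verify the session before doing ANY work in a private route
function deleteAccountHandler(req, res, getSession) {
  const session = getSession(req); // placeholder for your cookie/JWT lookup
  if (!session) {
    // Deny before touching the database — anonymous callers do no work
    return res.status(401).json({ error: 'Unauthorized' });
  }
  // ...only now is it safe to perform the sensitive action...
  return res.status(200).json({ deleted: session.userId });
}
```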
Check #2: Can a user modify someone else's data?
Routes that accept a userId in the request body can often be exploited to modify other users' data if you don't verify session.userId === userId. Fix: Always verify the session user matches the target user before allowing updates.
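A minimal ownership check for that pattern (names are illustrative):

```javascript
// Reject any update where the target userId doesn't match the session user
function canModify(session, targetUserId) {
  // Never trust a userId from the request body — compare against the session
  return Boolean(session) && session.userId === targetUserId;
}
```

Call this before every update/delete that takes a resource owner from the request.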
Check #3: Are admin routes actually admin-only?
AI often generates admin route checks but forgets the else block — non-admins can still access the resource. Fix: Use early returns and explicit denials, not just if-blocks.
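A sketch of the early-return shape (the `role` field is an assumption — use whatever your auth system stores):

```javascript
// Every non-admin path hits an explicit deny — nothing falls through
function adminGuard(session) {
  if (!session) return { allowed: false, status: 401 };           // not logged in
  if (session.role !== 'admin') return { allowed: false, status: 403 }; // explicit deny
  return { allowed: true, status: 200 };                          // admin only
}
```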
Step 3: Test Input Validation and SQL Injection
The risk: AI code accepts user input directly without sanitization, enabling SQL injection, XSS, and command injection.
Why AI does this: Training data includes millions of lines of code written before modern security practices. AI doesn't inherently understand "this input is untrusted."
What to check:
SQL Injection
If your app uses raw SQL queries (even through ORMs), test for injection:
```javascript
// ⚠️ Vulnerable — user input is interpolated into the SQL string
const result = await db.query(`SELECT * FROM users WHERE email = '${email}'`);

// ✅ Safe — parameterized query keeps input out of the SQL text
const result = await db.query('SELECT * FROM users WHERE email = ?', [email]);
```
Test: Try logging in with the email `' OR '1'='1` — if you get in, you're vulnerable.
XSS (Cross-Site Scripting)
If your app renders user-submitted content (reviews, comments, bios), test for script injection. React escapes content by default, but raw dangerouslySetInnerHTML bypasses it.
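If you must render raw HTML, escape user content first. A minimal sketch — for rich HTML, prefer a vetted sanitizer like DOMPurify over rolling your own:

```javascript
// Escape the five HTML-significant characters (& must go first)
function escapeHtml(text) {
  return text
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```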
Command Injection
If your app runs shell commands based on user input, it's vulnerable. Never pass user input to shell commands. Use libraries that don't invoke shells.
Step 4: Review File Upload Security
The risk: AI-generated file upload code often allows arbitrary file types and doesn't validate content, enabling malware uploads and remote code execution.
What to check:
- File Type Validation: Validate MIME type AND magic bytes (file signature), not just the extension. Attackers can name a PHP file `malware.php.jpg` to bypass extension checks.
- File Size Limits: Enforce a `maxFileSize` at the upload handler level. Without limits, users can upload 10GB files to crash your server.
- Storage Location: Never store uploaded files in a publicly executable directory. Use random filenames (not user-supplied names) to prevent overwriting critical files.
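Checking magic bytes is a few lines. A sketch covering only PNG and JPEG — extend the table for the types you actually accept:

```javascript
// First bytes of each accepted file type
const SIGNATURES = {
  png: [0x89, 0x50, 0x4e, 0x47], // \x89PNG
  jpeg: [0xff, 0xd8, 0xff],      // JPEG SOI marker
};

// Return the detected type, or null if the signature matches nothing we accept
function sniffImageType(buffer) {
  for (const [type, sig] of Object.entries(SIGNATURES)) {
    if (sig.every((byte, i) => buffer[i] === byte)) return type;
  }
  return null; // unknown signature — reject the upload
}
```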
Step 5: Check for Exposed Debug Info
The risk: AI code often leaves debugging information exposed in production (error messages, stack traces, database queries, internal paths).
What attackers learn from debug info:
- Your database structure (from SQL error messages)
- Your file paths (from stack traces)
- Your framework version (from headers/errors)
- Your third-party dependencies (from error logs)
How to fix: Error responses in production should return generic "Something went wrong", not detailed stack traces. Log errors server-side only, never return them to the client.
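One common shape for this: log the full error server-side with a correlation id, and return only the id to the client. A sketch (the response shape is an assumption):

```javascript
// Full details stay in server logs; the client gets a generic message + id
function toClientError(err, logger = console) {
  const errorId = Math.random().toString(36).slice(2, 10); // correlation id
  logger.error(errorId, err.stack || String(err));         // server-side only
  return { error: 'Something went wrong', errorId };       // no stack, paths, or SQL
}
```

Users can quote the `errorId` to support, and you can find the real stack trace in your logs.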
Step 6: Test Session and Cookie Security
The risk: AI-generated authentication code often stores sessions insecurely or uses weak cookie settings.
Your session cookies should have:
- `HttpOnly: true` (prevents JavaScript access)
- `Secure: true` (HTTPS only)
- `SameSite: 'strict'` or `'lax'` (prevents CSRF)
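With Express you'd pass `{ httpOnly: true, secure: true, sameSite: 'lax' }` to `res.cookie`; this sketch builds the raw `Set-Cookie` header those options produce, so you can see every flag:

```javascript
// Build a Set-Cookie header value with all three flags plus an expiry
function sessionCookie(name, value, maxAgeSeconds) {
  return [
    `${name}=${encodeURIComponent(value)}`,
    `Max-Age=${maxAgeSeconds}`, // sessions should expire
    'HttpOnly',                 // no document.cookie access
    'Secure',                   // HTTPS only
    'SameSite=Lax',             // CSRF protection
    'Path=/',
  ].join('; ');
}
```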
Also check:
- Sessions should expire (not last forever)
- Logout should invalidate the session server-side (not just delete the cookie client-side)
- Password changes should invalidate all existing sessions
When to Call a Professional
Run a professional security audit if:
- You're handling payments (PCI-DSS compliance required)
- You're storing health data (HIPAA) or EU user data (GDPR)
- Your app has more than 1,000 users
- You're raising funding (investors will ask for a security audit)
- You've made revenue and can't afford a breach
A professional audit includes penetration testing (simulated attacks), code review by security engineers, automated vulnerability scanning, compliance verification, and remediation support.
Cost: $2,000–$10,000 for a small app audit from traditional firms. Worth it to avoid a breach that costs 10x more.
Your 30-Minute Security Audit Checklist
Authentication & Authorization
- [ ] All private API routes verify user session
- [ ] Users can only access/modify their own data
- [ ] Admin routes explicitly check for admin role
- [ ] Password reset tokens expire and are single-use
Input Validation
- [ ] All user input is validated and sanitized
- [ ] SQL queries use parameterized statements (no string interpolation)
- [ ] XSS protection enabled (Content Security Policy headers)
- [ ] File uploads validate type, size, and content
Data Protection
- [ ] No API keys or secrets in source code
- [ ] `.env` files in `.gitignore`
- [ ] Passwords hashed with bcrypt (not MD5 or plain text)
- [ ] Database connections use TLS/SSL
Production Security
- [ ] Error messages don't leak stack traces or internal details
- [ ] Session cookies use `HttpOnly`, `Secure`, `SameSite` flags
- [ ] HTTPS enabled and enforced (no HTTP access)
- [ ] Security headers set (CSP, X-Frame-Options, X-Content-Type-Options)
Monitoring
- [ ] Failed login attempts are logged
- [ ] Unusual API activity triggers alerts
- [ ] Error logging captures security-relevant events
Ready to Secure Your App?
You built something. Now protect it.
Get an AI Code Security Audit →
We audit your codebase for the 23 most common vulnerabilities in AI-generated code, provide a prioritized fix list, and offer remediation support. Fixed-price, 5-day turnaround.
Or run this checklist yourself first — it'll catch 80% of issues.
Either way, don't wait for the breach to happen. The email from the hacker is not the wake-up call you want.
Related Tools:
- AI Code Health Checker — Grade your codebase's overall health
- AI Code Vulnerability Explainer — Understand common security issues
- AI Bug Report Generator — Document security bugs for your developer