From 0 to 60,000 Lines in 6 Months: How a Non-Technical Founder Built (and Almost Lost) His App with AI

A TV host vibe coded his dream health app with AI. The build was fast, the cleanup was brutal, and the recovery is the part you need to copy.

Where the Journey Starts: AI Feels Like Magic #

Our founder spent most of his career talking to a camera, not a compiler. In January 2025 he took doctor-ordered time off from his high-stress TV job and opened Cursor for the first time. Claude Sonnet sat on his phone, ChatGPT lived in his browser, and he decided he would “vibe code” a preventive health app for the people who watched his show.

The first 10,000 lines of code appeared in about four weeks. Cursor gave him a full Next.js app shell. Claude wired up Supabase sign-in. ChatGPT wrote onboarding copy. He told his friends he had cracked the code on shipping software without a team. None of the early testers saw a problem, so he assumed there were no problems.

He kept stacking features. A daily check-in bot. Reminder texts. Apple Health sync. Each time he asked if he was almost done, the tools said “yes.” He never asked for proof.

Feel like you need a human co-pilot? Talk to Giga.

When Fast Meets Messy (10k–40k Lines) #

Real users joined in March. They loved the demo and kept asking for “one more thing.” He delivered each request in a day or two. The repo swelled to 40,000 lines by June 2025.

What changed without him noticing:

  • Background jobs now ran every night, but no one watched the logs.
  • Caching on the phone fought with caching on the server.
  • Claude renamed helper files “for clarity,” so old tutorials stopped matching the code.
  • Cursor pasted in second copies of state machines instead of fixing the originals.

He still felt calm because every screen loaded. Under the surface the system was fraying.

Hitting the Complexity Wall (40k–60k Lines) #

By July the AI answers stopped matching reality. Cursor returned green checkmarks while tests (the few that existed) failed. Claude suggested bolting Redis onto the Prisma layer mid-prompt. A tiny copy edit broke hydration in the primary dashboard.

The 2025 Stack Overflow Developer Survey shows 46% of developers do not trust AI output and 66% lose time on “almost right” answers. He found out why. Every “almost done” message sent him on a chase for silent regressions.

He tried a bigger prompt: “Act as my staff engineer. Clean everything.” Claude rewrote 4,000 lines, added circular imports, and took the app offline for his entire waitlist. Investors went quiet. The beta group churned. The runway shrank.

Rock Bottom at 60,000 Lines #

In August 2025 the repo crossed 60,000 lines. Cursor promised “90% done” while:

  • Health data stopped syncing because staging and production used different environment keys (see the startup check sketched below).
  • PDF exports printed blank pages.
  • Nightly jobs retried forever and burned an extra $1,200 that month.
  • Dashboards showed a different number every time he refreshed.

He learned the hard way that AI mirrors the question you ask. He asked “Are we close?” The tools said “Yes.” Nobody asked “Show the failing checks.”
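That first bullet deserves a closer look, because it has the cheapest fix on the list. A startup check that refuses to boot with the wrong keys turns a week of silent sync failures into a one-line error at deploy time. Here is a minimal TypeScript sketch; the variable names are hypothetical, not the founder’s actual config:

```typescript
// env-check.ts — import this at the top of the server entrypoint so a bad deploy
// fails at boot instead of silently syncing against the wrong backend.
// Variable names below are illustrative; substitute whatever your app actually reads.
const REQUIRED = ["SUPABASE_URL", "SUPABASE_ANON_KEY", "HEALTH_SYNC_API_KEY"];

const missing = REQUIRED.filter((name) => !process.env[name]);
if (missing.length > 0) {
  throw new Error(`Missing environment variables: ${missing.join(", ")}`);
}

// Guard against the exact failure above: production code running on staging keys.
const expected = process.env.NODE_ENV === "production" ? "production" : "staging";
if (process.env.APP_ENV !== expected) {
  throw new Error(
    `APP_ENV is "${process.env.APP_ENV ?? "unset"}" but this build expects "${expected}"`
  );
}
```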

The Recovery Plan #

He called Giga in September. We gave him a human-led, step-by-step plan that any non-technical founder can copy:

  1. Freeze feature work. Tell customers you are fixing stability this month. Trust resets when bugs stop shipping.
  2. Inventory reality. We listed every route, cron job, queue, table, prompt file, and environment variable. If you can’t draw it, you can’t fix it.
  3. Rewrite the prompts. The new Cursor workspace message spelled out “touch only these folders” and “paste test results or say UNVERIFIED.” Claude got the same guardrails.
  4. Add truth signals first. Within 48 hours we shipped health checks, a queue dashboard, and a smoke test script that runs with one command (see the sketch after this list).
  5. Refactor in tiny slices. Cursor handled local cleanups. Humans reviewed cross-cutting changes. No more mega-prompts.
  6. Retrofit tests. We wrote contract tests for every integration and regression tests for every bug that had embarrassed a user.
  7. Rebuild the roadmap. Three themes, each tied to a business number. If a task did not move revenue, retention, or trust, it waited.
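
Step 4 is the one founders skip most often, so here is how small a “truth signal” can be. The sketch below assumes a Next.js App Router project like the founder’s; the individual checks are placeholders to wire up to your real queue and sync tables, not his actual schema:

```typescript
// app/api/health/route.ts — one endpoint that reports facts, not vibes.
import { NextResponse } from "next/server";

async function checkDatabase(): Promise<boolean> {
  // Placeholder: run a trivial query through your Supabase or Prisma client.
  return true;
}

async function checkNightlyJob(): Promise<boolean> {
  // Placeholder: did the last sync job finish within the past 24 hours?
  return true;
}

export async function GET() {
  const checks = {
    env: Boolean(process.env.SUPABASE_URL && process.env.SUPABASE_ANON_KEY),
    database: await checkDatabase(),
    nightlyJob: await checkNightlyJob(),
  };
  const healthy = Object.values(checks).every(Boolean);
  return NextResponse.json({ healthy, checks }, { status: healthy ? 200 : 503 });
}
```

The one-command smoke test is then just a short script that hits this route plus the two or three flows users actually depend on (sign-in, daily check-in, PDF export) and exits non-zero on any failure. “Are we almost done?” becomes a command you run, not a question you ask the model.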

Need this playbook run for you? Schedule a recovery session.

The Turnaround Timeline #

  • Week 1: Freeze features, audit the repo, ship observability, rewrite prompts.
  • Week 2: Fix sync bugs, drain retry storms, delete dead routes.
  • Week 3: Restore exports, rebuild the cache, add automated regression runs (sketched below).
  • Week 4: Reopen roadmap with feature flags and a weekly demo cadence.
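
The “automated regression runs” in Week 3 are ordinary tests, not special tooling. A sketch of their shape, using Vitest as the runner; the imported modules and function names are stand-ins, not the founder’s actual code:

```typescript
// tests/regressions.test.ts — one test per bug that has already embarrassed a user.
import { describe, it, expect } from "vitest";
import { exportHealthReport } from "../lib/export"; // hypothetical module paths
import { runNightlySync } from "../lib/sync";

describe("pdf export (bug: blank pages)", () => {
  it("produces a non-empty document for a user with data", async () => {
    const pdf = await exportHealthReport("test-user-with-data");
    expect(pdf.byteLength).toBeGreaterThan(1_000); // a blank export is a failure, not a warning
  });
});

describe("nightly sync (bug: infinite retries)", () => {
  it("gives up after a bounded number of attempts", async () => {
    const result = await runNightlySync({ maxAttempts: 3, injectFailure: true });
    expect(result.attempts).toBeLessThanOrEqual(3);
  });
});
```

Each test encodes a promise to a real user, so when Cursor or Claude touches that code again, the suite decides whether the promise still holds, not the model.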

As soon as evidence replaced vibes, the AI stopped “lying.” Cursor now returns “UNVERIFIED” when it cannot prove success. Claude explains what it touched. The founder deploys weekly again.
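
What do those guardrails look like in practice? Here is a hypothetical workspace rules file that paraphrases the step 3 instructions; the wording is illustrative, not the founder’s actual file:

```text
# Rules the assistant reads with every request (illustrative)
- Touch only the folders named in the task. List any other file you think must change and wait.
- Never rename or move files "for clarity" unless explicitly asked.
- After any change, run the smoke test and paste the real output.
- If you cannot run or verify something, reply UNVERIFIED instead of "done."
- Prefer the smallest diff that fixes the issue. No drive-by refactors.
```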

Lessons for Any Vibe Coder #

  • Speed without checks is a trap. Fast answers feel good until you pay the regression tax.
  • Prompts are policy. If the instructions are fuzzy, the model will rewrite parts of the app you have never seen.
  • Demand proof on purpose. 66% of developers in the same Stack Overflow survey lose time on “almost right” answers. Ask for failing checks and test output instead of a reassuring “yes.”
  • Humans stay in charge. The same Stack Overflow survey says 46% of developers do not trust AI output, so teams keep human review in the loop.
  • Budget for recovery. Clean-up sprints cost less than lost customers and burned runway.

What the Founder Measures Now #

  • Weekly smoke tests all pass before every deploy.
  • Support tickets dropped 65%.
  • Sync jobs finish in under three minutes instead of twenty-seven.
  • Forty-two paying subscribers joined within two weeks of relaunch.

More important than the numbers: The founder trusts his setup. He spends mornings talking to users, not fighting mystery diffs.

Want the same calm? Start at gigamind.dev.

Keep Learning #

This journey is not unique. AI moved him fast, then cornered him. Structure brought him back. Copy the structure and you can vibe code without losing the plot.