The Two-Step Method for Debugging with AI: Analyze First, Fix Second

How a software engineer at a San Francisco startup developed a reliable AI debugging workflow by forcing analysis before fixes, catching bad assumptions early and producing more targeted solutions.

Most engineers use AI coding tools the same way for debugging: paste the error, ask it to fix it, hope for the best. Sometimes it works. Often it doesn't, or it fixes the immediate issue while introducing new problems.

Jack Collins, a founding engineer at Develop Health, uses a completely different approach. His team builds healthcare automation, where software that only works half the time isn't acceptable; it has to be reliable. After months of iteration, he's developed a debugging workflow with AI that consistently produces better results.

The core insight: never ask the AI to fix a bug directly. Make it understand the problem first.

The Two-Step Process #

Jack's workflow has two distinct phases, and keeping them separate is what makes it reliable.

Step 1: Gather and analyze

When Jack encounters a bug, he copies the stack trace from Sentry or DataDog and pastes it into his AI coding tool (he uses Zed, but this works in Cursor or Claude Code too). Alongside the stack trace, he adds context from his own understanding—what the customer reported, his hypothesis about what's wrong, or any patterns he's noticed.

His first prompt is never "fix this." It's something like: "Summarize this but be thorough" or "Tell me every single way this thing is being used and try to make sure it's covering every edge case."

This forces the AI to analyze before acting. It has to reason through the problem, identify all the places the code is used, and think through potential causes.

Jack explained why this matters: "I find sometimes if you just say fix this thing it can go a little awry. So I've just gotten in the habit now of first gather and then get the agent to fix it."

That gathering step does two things. First, it helps Jack verify his own understanding of the bug. Second, it ensures the AI has properly analyzed the issue before jumping to a solution.
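
As a concrete illustration, here is a minimal sketch of what that first, analysis-only prompt might look like if you assembled it programmatically. The stack trace, customer report, and hypothesis are hypothetical placeholders, not Develop Health code; in practice Jack pastes the equivalent pieces straight into his editor's agent panel.

```python
# Minimal sketch of assembling a "gather and analyze" prompt.
# The stack trace, customer report, and hypothesis below are
# hypothetical placeholders, not from any real codebase.

stack_trace = """\
Traceback (most recent call last):
  File "app/billing/invoice.py", line 88, in finalize_invoice
    total = sum(line.amount for line in invoice.lines)
TypeError: unsupported operand type(s) for +: 'int' and 'NoneType'
"""

customer_report = "Customer says some invoices have failed to finalize since yesterday."
hypothesis = "Suspect a recently added line-item type can have a null amount."

analysis_prompt = f"""Here is a stack trace from Sentry:

{stack_trace}

Context: {customer_report}
My hypothesis: {hypothesis}

Summarize what's going on, but be thorough: tell me every single way
finalize_invoice is being used and whether each call path covers this edge case.
Do NOT propose a fix yet.
"""
```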

Step 2: Fix it

Only after reviewing the analysis does Jack move to the second prompt: "Now go ahead and fix that."

By this point, both Jack and the AI have a clear picture of what's actually wrong. The fix is informed by the analysis, not a guess based on a stack trace.
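
For readers who want to see the two phases as explicit turns of a single conversation, here is a hedged sketch of the same flow driven from a script rather than an editor agent, using the Anthropic Python SDK as one possible client (an assumption; Jack works interactively in Zed, and any agentic editor gives you the same checkpoint). The model name and prompt contents are placeholders.

```python
import anthropic

# Sketch only: assumes the `anthropic` package and an ANTHROPIC_API_KEY in the
# environment; the model name is a placeholder for whichever model you use.
client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-20250514"

def ask(messages):
    """Send the running conversation and return the assistant's reply text."""
    response = client.messages.create(model=MODEL, max_tokens=2048, messages=messages)
    return "".join(block.text for block in response.content if block.type == "text")

# The gather-and-analyze prompt assembled in the previous sketch.
analysis_prompt = "Summarize this but be thorough... (stack trace and context here)"

# Step 1: analysis only. No code changes requested yet.
messages = [{"role": "user", "content": analysis_prompt}]
analysis = ask(messages)
print(analysis)

# Human checkpoint: review the analysis before anything touches the code.
if input("Does this analysis look right? [y/N] ").strip().lower() == "y":
    # Step 2: the fix request rides on top of the reviewed analysis,
    # not on a bare stack trace.
    messages += [
        {"role": "assistant", "content": analysis},
        {"role": "user", "content": "Now go ahead and fix that, and add tests."},
    ]
    print(ask(messages))
```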

The Middle Step: Narrowing Focus #

For complex bugs where multiple things could be wrong, Jack often adds a refinement step between analysis and fixing.

After the AI's initial analysis identifies several potential causes, Jack will respond with something like: "Okay, it's obviously not those two issues, so forget about those. Let's go deeper on these three."

This narrows the AI's focus. Instead of trying to address everything at once or picking the wrong path, it concentrates on the most likely causes. Then Jack asks for the fix.

This three-step variation—gather, narrow, fix—works particularly well when the stack trace alone doesn't reveal the root cause or when the bug could stem from several different code paths.
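
Expressed in the same scripted form as above, the narrowing step is just one more exchange inserted before the fix request. This is a standalone sketch under the same assumptions; `ask` is the helper from the previous sketch, and the ruled-out causes are whatever the engineer decides to discard after reading the analysis.

```python
def debug_with_narrowing(ask, analysis_prompt: str, ruled_out: list[str]) -> str:
    """Run gather -> narrow -> fix as three turns of a single conversation."""
    # 1. Gather: thorough analysis only, no code changes yet.
    messages = [{"role": "user", "content": analysis_prompt}]
    analysis = ask(messages)

    # 2. Narrow: discard the causes the engineer has already ruled out.
    messages += [
        {"role": "assistant", "content": analysis},
        {
            "role": "user",
            "content": (
                f"It's obviously not {', '.join(ruled_out)}, so forget about those. "
                "Go deeper on the remaining causes."
            ),
        },
    ]
    narrowed = ask(messages)

    # 3. Fix: the fix request builds on the narrowed analysis.
    messages += [
        {"role": "assistant", "content": narrowed},
        {"role": "user", "content": "Now go ahead and fix that, and add tests."},
    ]
    return ask(messages)
```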

Always Add Tests #

When Jack has the AI generate a fix, he always includes one more instruction: add tests.

The tests serve multiple purposes. They verify the fix actually works. They document the expected behavior. They prevent the same bug from reappearing later.

Jack's philosophy is to push testing as early in the feedback loop as possible. Type annotations catch some bugs right in your editor. Unit tests catch others during local development or CI. The goal is finding issues before they reach production.

Having the AI write tests alongside fixes plays to the strengths of AI coding tools: they're good at following patterns, and test patterns are usually well-established in a codebase. You just have to remember to ask.
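
To make "add tests" concrete, here is the kind of regression test that could accompany a fix for the hypothetical null-amount bug from the earlier sketch. Everything here (the dataclasses, the fixed function, the names) is invented for illustration and defined inline so the file runs on its own; in a real codebase the tests would follow the existing test patterns.

```python
# Hypothetical regression tests for the null-amount bug sketched earlier.
# finalize_invoice is a stand-in for the fixed production function.
from dataclasses import dataclass, field


@dataclass
class LineItem:
    amount: int | None = None


@dataclass
class Invoice:
    lines: list[LineItem] = field(default_factory=list)


def finalize_invoice(invoice: Invoice) -> int:
    """Fixed behavior: treat missing line amounts as zero instead of crashing."""
    return sum(line.amount or 0 for line in invoice.lines)


def test_missing_amount_does_not_crash_finalization():
    # Documents the edge case that originally raised TypeError.
    invoice = Invoice(lines=[LineItem(amount=1200), LineItem(amount=None)])
    assert finalize_invoice(invoice) == 1200


def test_normal_invoices_still_sum_correctly():
    # Guards against the fix changing behavior for the common case.
    invoice = Invoice(lines=[LineItem(amount=500), LineItem(amount=250)])
    assert finalize_invoice(invoice) == 750
```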

Why This Works #

The two-step process works because it changes the AI's role from "fixer" to "research assistant first, then fixer."

When you paste an error and immediately ask for a fix, the AI has to simultaneously understand the problem and generate a solution. That's where things go wrong. It might misunderstand what's broken. It might fix a symptom instead of the root cause. It might make assumptions about parts of the codebase it hasn't properly analyzed.

Separating analysis from fixing gives the AI space to think through the problem properly. It also gives you a checkpoint—you can review the analysis and steer before the AI starts modifying code.

Jack uses the same principle beyond debugging. When working on complex features where he hasn't fully mapped out the logic yet, he follows a similar interactive pattern: have the AI scaffold something, review it, manually adjust based on what he learned, then have the AI continue from that clearer understanding.

The pattern is consistent: understand before acting, and involve the human at key decision points.

What Changes With This Approach #

You'll notice a few things shift when you debug this way.

You catch incorrect assumptions early. If the AI's analysis reveals it misunderstood the problem, you know before it writes bad code.

Fixes are more targeted. The AI isn't guessing—it's working from a clear understanding of what's broken.

You understand the codebase better. Reviewing the analysis teaches you about code paths and interactions you might have missed.

The back-and-forth feels more collaborative. You're working with the AI as a thinking partner, not just using it as a code generator.

The Broader Principle #

This two-step debugging workflow reflects something larger about working effectively with AI coding tools: they're most reliable when you structure the work to play to their strengths.

AI agents are excellent at analysis, pattern recognition, and implementation of well-understood solutions. They struggle when asked to simultaneously figure out a problem and solve it, especially in complex codebases with lots of context.

Give them space to analyze. Review their thinking. Guide them to the right solution. Then let them implement.

Jack's team ships healthcare software that needs to work consistently. If this workflow holds up under those stakes, it should serve most engineering teams well. You just have to resist the temptation to ask for fixes directly and build in that analysis step first.