Development · 3 min read

AI Pair Programming: My Real-World Workflow

3 months using an AI coding assistant at work. Here's what got faster and what actually got slower.

I thought delegating code to AI would speed everything up

Three months ago, my company provided AI coding assistant licenses. My first thought was "coding time just got cut in half." Half right, half wrong.

Some things definitely got faster. Boilerplate code, type definitions, test code drafts. The AI cranks these out in 5 seconds. What used to take 15 minutes now finishes in 1.

But review time got added

You can't just use AI-generated code as-is. You have to read it. Once, an AI-generated API handler was missing error handling, and I shipped it straight to production. Got hit with 500 errors. The code looked plausible, so I skimmed it. My fault.
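To make the failure mode concrete, here's a hypothetical sketch (the names and shapes are invented, not the actual incident code). The AI's version assumed the upstream call always succeeds, so any rejection or malformed body bubbled up as an unhandled exception, i.e. a 500; the reviewed version degrades explicitly:

```typescript
// Hypothetical sketch of the bug shape. AI version: no error handling at all.
async function getUserUnsafe(fetchUser: () => Promise<string>) {
  const body = await fetchUser(); // rejection here -> unhandled -> 500
  return { status: 200, body: JSON.parse(body) }; // bad JSON throws too
}

// Reviewed version: catch the failure and return an explicit error response.
async function getUser(fetchUser: () => Promise<string>) {
  try {
    const body = await fetchUser();
    return { status: 200, body: JSON.parse(body) };
  } catch {
    // Upstream failed or returned garbage: report it, don't crash the handler.
    return { status: 502, body: { error: "upstream failed" } };
  }
}
```

The point isn't the specific status code; it's that the plausible-looking happy path was the only path the AI wrote.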

After that, I review AI-generated code more carefully than code I write myself. This review time is longer than you'd think. The AI generates 100 lines in 3 seconds, but reviewing takes 10 minutes.

(It's kind of ironic that I trust AI-generated code less than my own.)

Finding what actually works

At first, I'd say "build this entire feature." The output was usable maybe 30% of the time. The other 70% was either not what I wanted or had bugs.

Now I make small, specific requests. "Add error handling to this function." "Look at this type definition and build an API response parser." "Refactor this code, but keep the interface the same." Narrowing the scope pushed accuracy to about 80%.
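As an illustration of the "type definition in, parser out" request, here's roughly what I'd expect back (the type and names here are made up for the example, not from a real project):

```typescript
// Hypothetical type definition handed to the AI as context.
type ApiResponse = { ok: boolean; items: string[] };

// The kind of narrow, checkable output a scoped request produces:
// validate the shape at runtime instead of trusting a cast.
function parseApiResponse(raw: string): ApiResponse | null {
  try {
    const data = JSON.parse(raw);
    if (
      typeof data?.ok === "boolean" &&
      Array.isArray(data?.items) &&
      data.items.every((i: unknown) => typeof i === "string")
    ) {
      return { ok: data.ok, items: data.items };
    }
  } catch {
    // Malformed JSON falls through to the null return below.
  }
  return null; // wrong shape or unparseable: caller decides what to do
}
```

A function this size takes a minute to review end to end, which is exactly why the narrow request works.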

My workflow

Step 1: I do the design. Decide what functions are needed and what types flow between them.
Step 2: Ask the AI to implement each function. Include related types and existing code as context.
Step 3: Review and revise the generated code.
Step 4: Ask the AI to write tests.

Following this order, my perceived productivity is about 1.4x. The number sounds small, but it adds up daily.

When AI actually gets in the way

Complex business logic, code deeply entangled with the existing codebase, performance optimization. For these, asking the AI just returns irrelevant answers. It doesn't know our project's context, so that's expected.

Same goes for debugging. Ask "why is this error happening?" and you get 5 generic causes listed. The real cause is almost always something specific to our project's unique setup.

Honest thoughts after 3 months

I don't want to go back to a world without AI coding assistants. They definitely help. But I don't agree with "AI is replacing coding." Design, review, and debugging are still on humans.

A new tool got added to the belt. The fundamental way I work hasn't really changed. Not yet, anyway.
