Two Months with GitHub Copilot Workspace
What it's really like when an AI reads your issue and generates code, after 2 months of daily use
It Writes Code From an Issue?
I first heard about GitHub Copilot Workspace through a colleague's Slack message. "It reads the issue, creates an implementation plan, and writes the code." I was skeptical. Copilot autocomplete is wrong half the time. A whole implementation from a single issue?
After two months, I can say the claim wasn't completely wrong: half impressive, half underwhelming.
The Moment I Was Genuinely Impressed
I filed a GitHub issue: "Error message not displayed on login failure." Copilot Workspace automatically started analyzing. It found the relevant files, created a plan for which files needed changes, and proposed actual code modifications.
The first issue's suggested code actually worked. The error handling logic was missing a catch block with a setState call and an error message component render. It identified the 3 files that needed changes and got them nearly all right: 2 of the 3 suggestions were perfect, and 1 needed tweaking.
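To make that fix concrete, here's a minimal sketch of the kind of change it proposed. All names here (`login`, `setErrorMessage`, the state shape) are illustrative stand-ins, not my project's actual code, and the React setState call is simulated with a plain setter:

```typescript
// Hypothetical login flow. setErrorMessage stands in for a React
// setState call; the component render would display state.errorMessage.
type State = { errorMessage: string | null };

const state: State = { errorMessage: null };

// Stand-in for React's setState: records the message for the UI to show.
function setErrorMessage(msg: string | null): void {
  state.errorMessage = msg;
}

async function login(authenticate: () => Promise<void>): Promise<void> {
  try {
    await authenticate();
    setErrorMessage(null);
  } catch (err) {
    // The missing piece: catch the failure and surface it to the UI
    // instead of silently swallowing it.
    setErrorMessage(err instanceof Error ? err.message : "Login failed");
  }
}
```

The whole bug was that nothing caught the rejected promise, so the UI never re-rendered with an error.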
This Is Where Reality Set In
Simple bug fixes? It handles those well. New feature implementation? Still a long way off. When I filed an issue saying "add coupon discount functionality to the payment module," the generated plan looked reasonable. But the actual code conflicted with the existing payment flow, and the coupon validation logic was client-side only, which was a security issue.
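The security problem was that only the client checked the coupon, and any client-side check can be bypassed by calling the payment API directly. A hedged sketch of the server-side re-check that was missing; the coupon shape and names are hypothetical, not our actual payment code:

```typescript
// Hypothetical server-side coupon validation: the server must re-check
// the coupon before applying a discount, regardless of what the client
// already validated or displayed.
interface Coupon {
  code: string;
  discountPct: number;
  expires: Date;
}

// Illustrative coupon store; in practice this would be a database lookup.
const validCoupons = new Map<string, Coupon>([
  ["SAVE10", { code: "SAVE10", discountPct: 10, expires: new Date("2099-01-01") }],
]);

function applyCoupon(total: number, code: string, now = new Date()): number {
  const coupon = validCoupons.get(code);
  // Reject unknown or expired coupons server-side; fall back to the
  // undiscounted total rather than trusting the client's claim.
  if (!coupon || coupon.expires < now) return total;
  return Math.round(total * (1 - coupon.discountPct / 100));
}
```

Workspace's generated code did the equivalent of this only in the browser, which is exactly the check an attacker can skip.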
The most frustrating thing is that it doesn't understand project conventions. Our team handles errors using a custom Result type, but Workspace generated code with try-catch, and I had to fix this every time. Over 2 months, code that went straight to merge without modifications: about 23%. The remaining 77% needed changes.
The Planning Feature Is Actually Great Though
The planning capability is far more valuable than the code generation. It analyzes an issue and tells you "this function in this file needs modification, a new file is needed here, and tests should be written like this." That saves more time than you'd think.
Before Workspace, I'd spend 30 minutes to an hour digging through the codebase to figure out what needed changing for a given issue. With Workspace's plan as a reference, that drops to 10 minutes. The plan isn't 100% accurate, but as a starting point, it's more than enough.
Two-Month Summary
Code generation: usable for simple bug fixes, not ready for complex features. Planning feature: genuinely good, clearly reduces issue analysis time. Price: $19/month per person on the team plan, and the planning feature alone justifies the cost.
Downsides are the lack of project convention awareness and poor performance in monorepos. In our monorepo, Workspace sometimes tries to modify files from unrelated packages. That'll presumably improve, but right now it's a real annoyance.
The biggest takeaway is that AI coding tools aren't "replacing coding" but "providing a starting point for coding." Decisions and judgment are still human work. But just having that starting point is surprisingly valuable.