When I Didn't Know What to Test
I chased 100% coverage and wasted weeks. Turns out there's an actual testing strategy.
Chasing 100% coverage
At my first job, when I wrote tests for the first time, a senior engineer told me to "get coverage up." So I tested everything: utility functions, component rendering, button clicks, even whether the right CSS classes were applied. I wrote 248 tests in 3 weeks.
Hit 94% coverage. Felt great. Then the next month, 3 production bugs shipped, and not a single one was caught by my tests.
The bugs my tests missed
The first bug was an API response format change that the frontend wasn't expecting. The second was a redirect that didn't fire under a specific condition in the payment flow. The third was a race condition that corrupted data during concurrent requests.
What I'd been diligently testing was stuff like "clicking the button opens the modal" and "entering a value in the input updates the state." Not that these don't matter, but the real problems were happening somewhere else entirely.
(248 tests, and zero of them caught an actual bug.)
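That first bug, an unexpected API response format, is the kind of thing a thin runtime shape check can catch before it reaches the UI. Here's a minimal sketch of the idea; the endpoint shape and field names (`id`, `totalCents`, `status`) are hypothetical, not from any real API:

```typescript
// Hypothetical response shape the frontend expects.
type OrderResponse = { id: string; totalCents: number; status: string };

// Validate an untrusted payload at the boundary. If the backend renames or
// drops a field, this throws in a test instead of silently breaking the UI.
function parseOrderResponse(raw: unknown): OrderResponse {
  if (typeof raw !== "object" || raw === null) {
    throw new Error("response is not an object");
  }
  const { id, totalCents, status } = raw as Record<string, unknown>;
  if (
    typeof id !== "string" ||
    typeof totalCents !== "number" ||
    typeof status !== "string"
  ) {
    throw new Error("unexpected response shape");
  }
  return { id, totalCents, status };
}

// A test can feed it both a good payload and a "format changed" payload:
const ok = parseOrderResponse({ id: "ord_1", totalCents: 4200, status: "paid" });
console.log(ok.totalCents); // 4200

let rejected = false;
try {
  // Simulate the backend renaming totalCents to total.
  parseOrderResponse({ id: "ord_2", total: 4200, status: "paid" });
} catch {
  rejected = true;
}
console.log(rejected); // true
```

A test asserting on this parser fails the moment the contract drifts, which is exactly what my 248 UI tests never did.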
The moment I figured out what to test
It boils down to this: test wherever money changes hands, wherever user data gets modified, and wherever there's complex business logic. Those are the priorities. When I started asking "Would I be scared to deploy without this test?", it became clear what actually needed testing.
Fifteen integration tests validating core flows are often worth more than 200 unit tests.
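To make that concrete, here's a rough sketch of what I mean by an integration test on a core flow: it drives a small cart module end to end through its public API instead of asserting on internals. The module and its business rule (discounts cap at 50%) are invented for illustration:

```typescript
type Item = { sku: string; priceCents: number; qty: number };

// A tiny cart with one real business rule worth protecting.
class Cart {
  private items: Item[] = [];

  add(item: Item): void {
    this.items.push(item);
  }

  totalCents(): number {
    return this.items.reduce((sum, i) => sum + i.priceCents * i.qty, 0);
  }

  // Business rule: a discount can never exceed 50%.
  discountedTotalCents(percent: number): number {
    const capped = Math.min(percent, 50);
    return Math.round(this.totalCents() * (1 - capped / 100));
  }
}

// The "integration test": exercise the whole flow in one pass.
const cart = new Cart();
cart.add({ sku: "book", priceCents: 1200, qty: 2 });
cart.add({ sku: "pen", priceCents: 300, qty: 1 });
console.log(cart.totalCents());              // 2700
console.log(cart.discountedTotalCents(10));  // 2430
console.log(cart.discountedTotalCents(90));  // 1350 (capped at 50%)
```

One test like this covers adding items, totaling, and the discount cap together, which is the behavior a user (and the business) actually depends on.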
The time I tried TDD and gave up
I did try TDD. For about 2 weeks. Write the test first, then the code. The problem is that in an environment where requirements keep changing, the tests change right along with them.
Every time the PM said "can we change this feature a bit?", I had to fix the tests before the code. The time spent updating tests started exceeding the time spent on actual feature work. Cart before the horse. I gave up.
Though I'll say TDD works well for pure functions and utilities. When inputs and outputs are clear-cut.
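For those cases, the loop is cheap: write the assertions first, then implement until they pass. A small sketch with a made-up currency formatter (the helper name and rules are mine, not from any library):

```typescript
// Step 2: the implementation, written after the assertions below existed.
function formatCents(cents: number): string {
  const sign = cents < 0 ? "-" : "";
  const abs = Math.abs(cents);
  const dollars = Math.floor(abs / 100);
  const rem = (abs % 100).toString().padStart(2, "0");
  return `${sign}$${dollars}.${rem}`;
}

// Step 1: the assertions came first. Because input and output are clear-cut,
// they rarely change when the PM changes the feature around them.
console.log(formatCents(1999)); // "$19.99"
console.log(formatCents(5));    // "$0.05"
console.log(formatCents(-250)); // "-$2.50"
```

When the contract is this stable, writing the test first costs almost nothing, which is why TDD held up for utilities even after I dropped it everywhere else.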
My current testing strategy
Integration tests at the center, with unit tests for critical business logic. E2E only for critical flows like payments and sign-up. Roughly 60% integration, 30% unit, 10% E2E.
I don't look at coverage numbers anymore. Instead, I add tests based on "what could break in this PR." After making this shift, production bugs dropped from about 3 per month to 0-1.
I'm still not sure
Honestly, I don't know if this is the right answer. It probably depends on the project and the team size. I've become skeptical of anyone who claims there's one true testing strategy. Everyone just has what works for their situation.