
When to Make the Jump from Docker to Kubernetes

After holding out with Docker Compose, we switched to Kubernetes. Here's how I think about the timing.

We Survived 2 Years on Docker Compose

I'll be honest: I was in the camp that thought Kubernetes was overkill. A single Docker Compose file handles most services just fine.

In practice, our team ran 2 API servers, Redis, PostgreSQL, and Nginx from one docker-compose.yml for two years.

Were there problems? Plenty. But none of them were problems caused by not having Kubernetes.

When We Hit the Wall

The first signal was deployment time.

As services grew from 3 to 7, a single docker-compose up -d started taking over 5 minutes. I wanted to redeploy just one service, but dependencies caused others to restart too.
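For what it's worth, Compose does offer a partial workaround for this: you can recreate a single service without touching the services it depends on. A minimal sketch, assuming a service named `api` in the compose file (the name is hypothetical):

```shell
# Rebuild and restart only the `api` service, without
# restarting the services listed under its depends_on:
docker compose up -d --no-deps --build api

# Confirm that only `api` was recreated:
docker compose ps
```

This helps with targeted redeploys, but it doesn't solve the scaling problem that comes next.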

The second was scaling. During a promotional event, we needed to go from 3 API servers to 6. We did it manually -- spinning up EC2 instances, copying the docker-compose file, and running it.

At 2 AM. In pajamas.

When K8s Is Still the Wrong Answer

Here's the important nuance: the situation above doesn't automatically mean K8s is the right move.

If you have 5 or fewer services, traffic fluctuation isn't dramatic, and nobody on the team is dedicated to infrastructure, K8s is over-engineering.

Managed container services like ECS Fargate or Cloud Run are a far better intermediate step.

We considered ECS first. But we already had 7 services, each needing different scaling policies, and environment-specific configuration was getting complex.

What the Transition Was Actually Like

We went with EKS. The transition took 3 months.

It took longer than expected because of YAML hell. Deployment, Service, Ingress, ConfigMap, Secret... each service needed at least 4-5 YAML files.

Seven services times five files is 35 YAML files. That's not trivial to manage.
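To make the multiplication concrete, here's a sketch of what just two of those five files look like for a single service (the `api` name, image, and ports are hypothetical):

```yaml
# Minimal Deployment + Service pair for one service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: registry.example.com/api:1.0.0
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
    - port: 80
      targetPort: 8080
```

Add an Ingress, a ConfigMap, and a Secret per service, and the file count climbs fast.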

Introducing Helm charts helped, but the learning curve was steep. Two of our four team members were new to K8s entirely, and just getting comfortable with kubectl commands took 2 weeks.
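The way Helm helped was by collapsing those per-service files into one templated chart, with each service supplying only its own values. A rough sketch of a per-service values file, assuming a shared in-house chart (all names here are hypothetical):

```yaml
# values/api.yaml -- per-service overrides for a shared chart.
replicaCount: 3
image:
  repository: registry.example.com/api
  tag: "1.0.0"
service:
  port: 80
  targetPort: 8080
```

Deploying then becomes a one-liner per service, e.g. `helm upgrade --install api ./chart -f values/api.yaml`, instead of hand-editing five YAML files.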

The first month felt less like operating K8s and more like fighting it.

What Changed After the Switch

The clear win was autoscaling. With HPA configured, Pods scaled up automatically during traffic peaks.

No more manually spinning up servers at 2 AM. That alone made the switch worth it.
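An HPA that covers exactly the 3-to-6 scaling range from our promotional event is only a few lines. A sketch using the `autoscaling/v2` API (the Deployment name `api` and the 70% CPU target are assumptions, not our actual config):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 3
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Note that resource-based HPA requires the metrics-server (or an equivalent metrics pipeline) to be running in the cluster.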

Rolling updates also became seamless. Zero downtime during deploys, and when something goes wrong, kubectl rollout undo takes you back in one command.

But operational complexity definitely went up. When a Pod falls into CrashLoopBackOff, tracing the cause is harder than it was with Docker.

You need to figure out which Pod first, then distinguish between a node issue and an app issue. Network policies occasionally blocked inter-service communication too.
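The triage sequence we settled into looks roughly like this (the pod name below is hypothetical):

```shell
# Find Pods that aren't Running:
kubectl get pods -A --field-selector=status.phase!=Running

# Check events for the suspect Pod: image pull errors,
# OOMKilled, failing liveness/readiness probes?
kubectl describe pod api-7d4b9c6f8-x2k4l

# Logs from the previous (crashed) container instance:
kubectl logs api-7d4b9c6f8-x2k4l --previous

# Recent cluster events help separate node issues from app issues:
kubectl get events --sort-by=.lastTimestamp
```

With Docker Compose, `docker compose logs` was usually the whole story; here the signal is spread across several layers.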

My Rule of Thumb: When These 3 Overlap, Switch

Based on experience, here are the criteria I've settled on.

First, you have 5 or more services. Second, each service has different scaling requirements. Third, the team has at least one person who can invest time in infrastructure.

If any of these three is missing, go with managed services instead of K8s. Look at your team's reality, not the tech's coolness factor.

In Retrospect

Switching to K8s was the right call for our team.

But if we'd done it 6 months earlier, the learning pain would have been worse. Six months later, and we'd have had more outages.

That's how timing works. There's no perfect moment -- only less bad ones.

When Docker Compose starts buckling, don't jump straight to K8s. Evaluate intermediate steps first.

And when the time truly comes, don't be afraid to make the leap. The struggle is unavoidable, but that struggle is what grows the team.
