The Chip War and What It Means for Software Developers
How the semiconductor battle is reshaping the software development ecosystem
I Thought Semiconductors Were Someone Else's Problem
For a software developer, semiconductors seem like a distant topic. I felt the same way. Hardware people handle the hardware stuff, right? Then I had a project delayed because we couldn't get GPUs, and my perspective completely changed.
No GPUs, No AI Project
The wait time for NVIDIA H100s was 6-9 months as of early 2024. Things got a bit better in 2025, but next-gen chips like the H200 and B100 are still hard to come by. Reserving an H100 instance on AWS requires a minimum 1-year commitment at $30+ per hour. At 8 hours a day, that's about $7,200 per month — over 10 million won.
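The arithmetic above is easy to sanity-check. A quick sketch using the figures from the paragraph (the 30-day month and the 1,400 KRW/USD exchange rate are assumptions for round numbers):

```python
# Rough monthly cost of a reserved H100 instance, using the
# figures above: $30+/hour, 8 hours of use per day.
# The 30-day month and 1,400 KRW/USD rate are assumptions.
hourly_rate_usd = 30
hours_per_day = 8
days_per_month = 30

monthly_usd = hourly_rate_usd * hours_per_day * days_per_month
monthly_krw = monthly_usd * 1400  # illustrative exchange rate

print(f"${monthly_usd:,}/month is about {monthly_krw:,} KRW")
# → $7,200/month is about 10,080,000 KRW
```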
Even with a great idea, you can't start without the capital to secure GPUs. The barrier to entry in AI development is being set by capital. Jensen Huang talked about AI's iPhone moment, but the iPhone was accessible to everyone — GPUs are not. (That difference is pretty significant.)
Cloud GPU platforms like Lambda Labs and CoreWeave have emerged, and they ease the availability problem, but they're not a fundamental solution: they resell access to the same scarce chips rather than change the underlying supply constraint.

ARM Is Changing the Dev Environment
Since Apple's M-series chips, ARM has been growing rapidly in the server market too: AWS Graviton, Google Axion, and the like. Graviton-based instances reportedly offer up to 40% better price-performance than equivalent x86 instances, meaning more servers for the same budget.
But there are compatibility issues. Code that ran fine on x86 might not work on ARM. When I recently migrated to Graviton, about 5% of our dependencies had ARM compatibility problems. Mostly minor issues, but hunting them down one by one took a week. Docker images need multi-architecture builds now, and CI/CD pipelines need to build and test for both x86 and ARM. Build times increased by 1.5x.
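A surprising amount of the migration friction came from normalizing architecture names: `uname` and `platform.machine()` report strings like `x86_64` or `aarch64`, while Docker multi-arch manifests use labels like `amd64` and `arm64`. A minimal sketch of that normalization (the alias table is a simplification, not exhaustive):

```python
import platform

# Map raw machine strings to the architecture labels used by
# Docker multi-arch manifests. (Illustrative, not exhaustive.)
ARCH_ALIASES = {
    "x86_64": "amd64",
    "amd64": "amd64",
    "aarch64": "arm64",
    "arm64": "arm64",
}

def docker_arch(machine: str = "") -> str:
    """Normalize a machine string (e.g. from platform.machine())
    to a Docker platform architecture like 'amd64' or 'arm64'.
    Unknown architectures pass through unchanged."""
    machine = (machine or platform.machine()).lower()
    return ARCH_ALIASES.get(machine, machine)
```

Something like `docker_arch("aarch64")` then feeds a `--platform linux/arm64` build flag, which is how the CI pipeline ends up building and testing both targets.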
Still, the cost savings were substantial enough that we recouped the migration cost within 3 months. Looking ahead, RISC-V deserves attention too. Its server-market presence is still minimal, but it's growing fast in IoT and embedded.
The US-China Chip War Affects Us Too
The impact reaches Korean developers directly. US export restrictions on chips to China have pushed China to develop its own AI chips — Huawei's Ascend series being the prime example.
What this means for software developers is framework fragmentation. If you're targeting the Chinese market, you might need to support Huawei's CANN instead of CUDA. Korean companies building global services are already facing this issue. PyTorch and TensorFlow support various backends, but when you factor in performance optimization, true abstraction is still far off.
Software Still Runs on Hardware
Model optimization techniques are getting a lot of attention. Since GPUs are expensive, quantization, pruning, and knowledge distillation — techniques for getting similar performance from smaller models — have become essential skills. On-device AI is spreading too, and Intel and AMD have started putting NPUs in desktop CPUs.
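To make the quantization idea concrete, here is a minimal sketch of symmetric int8 quantization in plain Python: weights are scaled into the signed 8-bit range, stored as integers (4x smaller than float32), and scaled back at inference time with some precision loss. Real frameworks handle per-channel scales, activations, and calibration; this shows only the core transform.

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map floats into [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid div-by-zero
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

weights = [0.82, -1.27, 0.003, 0.51]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Each restored weight is close to, but not exactly, the original;
# the worst-case rounding error is half of one quantization step.
errors = [abs(a - b) for a, b in zip(weights, restored)]
```

The trade-off is exactly the one the paragraph describes: a smaller, cheaper model at the cost of a bounded loss of precision.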
It's clear that semiconductors are no longer just a hardware engineer's concern. If you can't understand the chip, you can't optimize. If you can't optimize, you fall behind the competition. But how deep software developers should actually go — that's honestly a bit fuzzy. You can't dig into everything.