
AI Regulation: Where Does Each Country Stand?

From the EU AI Act to Korea's AI Basic Law -- a 2026 overview of global AI regulation

We've Reached the Point Where Regulation Is Unavoidable

I was shocked to see statistics showing that deepfake-related crimes increased about 340% year-over-year in 2025: voice phishing using AI voice cloning, stock manipulation with fake videos, fake news generation. As the technology accelerates, the abuse cases are exploding right alongside it.

As a developer, my stance on AI regulation is complicated. I worry that regulation could stifle innovation, but I also feel that things can't keep going unregulated.

EU: Leading the Pack

The EU AI Act was enacted in 2024 and has been phased in since August 2025. The core idea is classifying AI by risk level.

Prohibited: Social scoring systems, real-time remote biometric identification (with some exceptions). Chinese-style social credit systems can't be used in the EU.

High-risk: Hiring AI, credit scoring AI, medical diagnostic AI. These require transparency reports, human oversight, and data quality management.

From what I can see, the biggest issue with the EU AI Act is actual enforcement. Estimates suggest compliance costs for small AI companies could run 200-500 million won. For startups, that's effectively a barrier to entry.

US: Slowly Moving from Self-Regulation

The US still doesn't have a comprehensive federal AI law. Instead, individual states are creating their own regulations, which is a bit chaotic. California, New York, and Colorado have each passed their own AI bills.

The Biden administration had an AI executive order, but the Trump administration rolled back significant portions. Currently, it's closer to industry self-regulation. Meta, Google, and OpenAI are setting their own safety standards, which... feels a bit like asking the fox to guard the henhouse.

Korea: AI Basic Law in Effect

Korea's AI Basic Law went into effect in January 2026. It includes mandatory impact assessments for high-risk AI, AI ethics guidelines, and personal data processing standards.

Honestly, the on-the-ground changes so far haven't been major. The notable difference at my company is that the legal team now requires an impact assessment whenever we integrate AI features into our service. That added about two weeks of paperwork.

Korea's AI Basic Law is considerably more lenient than the EU's. It's described as "innovation-friendly," but flip that around and it also means regulation is weak. Personally, I think deepfake-related regulations need to be stronger.

Impact on Developers

There are more things to consider when integrating AI models into services. Whether personal data is included in AI training data, whether bias testing was done, whether AI decisions are explainable.

Just testing a hiring AI for gender and age bias took about three weeks. In the past, you only had to worry about accuracy -- now you also need to track fairness metrics.
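To make "fairness metrics" concrete, here's a minimal sketch of one common check, demographic parity: comparing the rate at which a model advances candidates across two groups. The function names, the ~0.1 threshold, and the toy data are all illustrative assumptions, not anything prescribed by the EU AI Act or Korea's AI Basic Law.

```python
# Illustrative fairness check for a hiring model's outputs.
# All names, thresholds, and data here are hypothetical examples.

def selection_rate(decisions):
    """Fraction of candidates the model marked as 'advance' (1)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_a, decisions_b):
    """Absolute difference in selection rates between two groups.
    A common rule of thumb flags gaps above ~0.1 for investigation."""
    return abs(selection_rate(decisions_a) - selection_rate(decisions_b))

# Toy example: model decisions (1 = advance, 0 = reject) split by group.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # selection rate 5/8 = 0.625
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # selection rate 3/8 = 0.375

gap = demographic_parity_gap(group_a, group_b)
print(f"demographic parity gap: {gap:.3f}")  # 0.250 -- worth a closer look
```

In practice you'd run this per protected attribute (gender, age band, and so on) on a held-out evaluation set, and track the gap over time the same way you track accuracy.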

It's more work, that's undeniable. But I know it's a necessary process. The frustrating part is that the specific regulatory standards are vague in places, and I often find myself thinking "is this enough?" I wish the guidelines were more concrete.
