
Where Humanoid Robots Are in 2025

I went down a rabbit hole researching the current state of humanoid robotics. Here's what I found.

A CES Video Started This Rabbit Hole

This year at CES, I watched Figure 02 make coffee. It picked up a cup, placed it in the machine, and pressed the button, all with natural-looking motion. Two years ago, this kind of manipulation was insanely difficult for robots. That's when curiosity kicked in: "how far have humanoids actually come?" (Developer brain. Can't let a question go.)

The Major Players

Tesla Optimus: Musk announced limited sales in 2025, but the reality is factory pilot tests doing repetitive logistics tasks. Price TBD, though "cheaper than a car" suggests a sub-$20K target.

Figure AI: Collaborating with OpenAI to build robots that work through conversation. Say "give me something edible from the table" and it picks up an apple and hands it over. Natural language understanding + object recognition + manipulation simultaneously. Impressive.

Boston Dynamics Atlas: Switched to electric actuation. Quieter and more precise than hydraulic. But the price is the problem -- estimated at several hundred thousand dollars per unit. Commercialization is far off.

China's Unitree: Their H1 model went viral running at 3.3 m/s. Starting at $90,000. Dramatically cheaper than competitors. But fine manipulation still lags behind Figure and Tesla.

But Is Any of This Actually Usable?

Honestly, not yet. Demo footage and real-world performance are different things. Making coffee in a controlled environment works, but doing dishes in a random kitchen is still far away.

Three biggest technical barriers: hand precision for diverse objects (picking up an egg without cracking it), responding to unexpected situations (what if there's water on the floor?), and battery life (most last 2-4 hours currently).

What's Interesting from a Developer's Perspective

Robot control software merging with LLMs is changing the programming paradigm. Before, you'd program low-level commands like "raise arm 30 degrees." Now, you give high-level instructions like "clear the cups from the table," the LLM interprets it, creates an action plan, and the robot executes.
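That pipeline can be sketched in a few lines. This is a toy illustration, not any real robot stack: the LLM call is stubbed out with a hardcoded lookup, and the `Action` type, `plan_from_instruction`, and `execute` names are all hypothetical. In a real system the planner would call an actual language model and ground the instruction against camera input.

```python
# Toy sketch: high-level instruction -> action plan -> execution.
# All names here are hypothetical; the "LLM" is a hardcoded stub.

from dataclasses import dataclass


@dataclass
class Action:
    primitive: str  # low-level motor primitive, e.g. "grasp"
    target: str     # object the primitive acts on


def plan_from_instruction(instruction: str) -> list[Action]:
    """Stand-in for the LLM: turn a high-level instruction into a plan."""
    # A real planner would ground the instruction in perception;
    # here we hardcode one example for illustration.
    if "clear the cups" in instruction:
        return [
            Action("locate", "cup"),
            Action("grasp", "cup"),
            Action("move_to", "tray"),
            Action("release", "cup"),
        ]
    return []


def execute(plan: list[Action]) -> list[str]:
    """Stand-in for the controller: dispatch each primitive."""
    return [f"{a.primitive}({a.target})" for a in plan]


commands = execute(plan_from_instruction("clear the cups from the table"))
print(commands)
# ['locate(cup)', 'grasp(cup)', 'move_to(tray)', 'release(cup)']
```

The point is the division of labor: the model handles interpretation and planning, while the control layer only ever sees simple primitives.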

This feels analogous to the shift from jQuery to React in web dev. Going from low-level DOM manipulation to declarative UI. Robot control is similarly moving from low-level motor commands to intent-based instructions.

Worrying About Jobs Is Premature

"Will robots take our jobs?" I get asked this a lot. Honest answer: "Not yet, but eventually." Repetitive logistics will be replaced first, then simple manufacturing. Creative or socially interactive work comes last.

Developers? I think robots replacing physical labor will arrive before AI fully replaces code-writing. But five years out, who knows. How many people predicted ChatGPT five years before it happened?

What's certain right now is that robotics engineering and software engineering are converging fast. I briefly considered studying ROS 2. (Realistically, I won't get to it.)
