
AI That Talks vs. AI That Acts: Why Robots Still Don’t Do What We Want

  • Writer: 양필승
  • Feb 9
  • 3 min read

By Dr. Phil Yang, CEO of MAILab


Image generated by AI (ChatGPT)

Every few months, a familiar question resurfaces.


If AI has become so intelligent—capable of writing, reasoning, coding, and conversing—why do robots still struggle with something as simple as crossing a doorstep or picking up an object?


ChatGPT responds fluidly to a single sentence. Yet a household cleaning robot can still get stuck on a rug.


This is not a contradiction. And it is not a failure of robotics.


It is the result of two fundamentally different kinds of automation pursuing entirely different goals.


Two Automations, Two Worlds


Most people talk about “AI” as if it were one thing. In reality, we are dealing with two very different systems:

  • Automation of intelligence

  • Automation of the physical world

They look similar on the surface, but beneath that surface, they operate on opposite assumptions.


Intelligent AI Lives in a Probabilistic World


Large language models (LLMs) and other intelligent AI systems operate in a world of probabilities.


When a human intervenes—“That’s wrong. Try thinking this way instead.”—the system adjusts its probability distributions and improves its output.


This flexibility is not a bug. It is the core feature.


In my talks, I often call this process “tweaking.”

Human feedback is welcomed. It helps the model learn.
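The idea behind “tweaking” can be sketched as a toy model: candidate answers carry scores, feedback penalizes a rejected option, and the probability mass shifts to the alternatives. The function names and the fixed `penalty` value are illustrative assumptions, not how production language models actually incorporate feedback.

```python
import math

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def tweak(scores, rejected_index, penalty=2.0):
    """Apply human feedback by penalizing the rejected option's score."""
    adjusted = list(scores)
    adjusted[rejected_index] -= penalty
    return softmax(adjusted)

# Three candidate answers with raw scores.
scores = [2.0, 1.0, 0.5]
before = softmax(scores)
after = tweak(scores, rejected_index=0)  # human: "option 0 is wrong"

# The rejected option's probability drops; the others absorb the mass.
assert after[0] < before[0]
assert after[1] > before[1]
```

The key property is graceful degradation: a correction shifts probabilities rather than breaking anything, which is exactly the flexibility the physical world does not allow.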


In this domain, humans are not just users. We are co-authors, shaping reasoning together with the machine.


Physical AI Lives in a Deterministic World


Robotics, or what is now often called Physical AI, lives in a very different universe.

Here, a deviation of one millimeter can be catastrophic.


A robot arm grasping an object cannot afford ambiguity:

  • Too much force → the object breaks

  • Too little force → the object slips

  • A slight misalignment → mechanical failure or safety risk
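The three failure modes above can be sketched as hard acceptance bounds. Every number and name here is hypothetical, not taken from any real robot controller; the point is only that a grasp is a binary pass/fail decision, with no probabilistic middle ground.

```python
# Illustrative sketch of a deterministic grasp check; the force limits
# and alignment tolerance are hypothetical, not from a real robot API.
GRIP_FORCE_MIN_N = 4.0   # below this, the object slips
GRIP_FORCE_MAX_N = 9.0   # above this, the object may break
ALIGN_TOL_MM = 1.0       # misalignment beyond this risks mechanical failure

def grasp_is_safe(force_n: float, misalignment_mm: float) -> bool:
    """A grasp is accepted only inside hard, pre-verified bounds."""
    return (GRIP_FORCE_MIN_N <= force_n <= GRIP_FORCE_MAX_N
            and abs(misalignment_mm) <= ALIGN_TOL_MM)

assert grasp_is_safe(6.0, 0.3)       # within bounds: proceed
assert not grasp_is_safe(12.0, 0.3)  # too much force: object breaks
assert not grasp_is_safe(2.0, 0.3)   # too little force: object slips
assert not grasp_is_safe(6.0, 1.8)   # misaligned: abort
```

Unlike the language model, this check cannot be "talked into" a borderline grasp; any input outside the envelope is rejected outright.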


This is why robotics treats unexpected human intervention not as help, but as noise—sometimes even as danger.


In physical systems, flexibility is often sacrificed for safety. Control matters more than creativity.

This is not conservatism. It is engineering reality.


Moravec’s Paradox: Why the Easy Things Are Hard


This gap between talking AI and acting robots is best explained by Moravec’s Paradox.


Tasks that humans find effortless—walking, grasping, balancing—are among the hardest problems for machines.


Why?


Because physical action requires continuous real-time negotiation with gravity, friction, momentum, and collision. The physical world does not forgive approximation.


Language lives in symbols.


Physics lives in consequences.


Why Physical AI Is Suddenly Everywhere


So why is Physical AI suddenly back in the spotlight?

The answer is foundation models.

For decades, robots were programmed through rigid rule-based systems—thousands of lines of code defining every possible movement.

Today, that paradigm is changing.


Robots are beginning to:

  • Learn from data

  • Generalize from experience

  • Integrate perception, reasoning, and action


In other words, robots are finally getting something like a brain.


But there is an important limitation.


Even with foundation models, real-time human intervention remains tightly restricted.

The physical risks are simply too high.


This is why progress in Physical AI feels slower than progress in conversational AI—even when the underlying intelligence is improving rapidly.


Co-Authors vs. Supervisors

This leads to a crucial distinction.

With intelligent AI, humans are co-authors.

We shape thought together.

With physical AI, humans are still closer to supervisors.

We monitor, constrain, and intervene only when absolutely necessary.

One day, when robots can fully internalize physical uncertainty—when they can reason about force, balance, and risk as fluently as they reason about language—this relationship may change.

Only then will we be able to “tweak” robots the way we tweak language models.


Beyond AI That Only Talks


Physical AI is still learning how to walk.


But its direction is clear.


We are moving beyond AI that merely speaks intelligently, toward AI that can act intelligently in the real world.


The future of AI is not just about better answers. It is about safer, more precise, and more embodied intelligence.


The true challenge—and the true opportunity—lies at the intersection of control and cognition.


That is where the next era of AI will be built.

 
 
 
