Will Artificial Intelligence Need a Physical Body to Achieve Human-Like Intelligence?

Does AI need a body to ever come close to achieving something like human intelligence? And if it does, what kind of body would it need?

The first robot I recall is Rosie from The Jetsons, quickly followed by the sophisticated C-3PO and his loyal companion R2-D2 in The Empire Strikes Back. But the first AI I encountered without a physical form was Joshua—the computer in WarGames that nearly triggered a nuclear war, only to change course after learning about mutually assured destruction and opting to play chess instead.

When I was seven, everything changed. I began to wonder: Could a machine grasp ethics, emotions, or what it means to be human? Did AI need a physical form? These questions grew more compelling as portrayals of artificial intelligence became increasingly complex—with characters like the android Bishop in Aliens, Data in Star Trek: The Next Generation, and more recently, Samantha in Her and Ava in Ex Machina.

These questions are no longer just science fiction. Today, roboticists are actively exploring whether artificial intelligence requires a physical body—and if it does, what form that body should take.

There’s also the question of how to achieve this. If embodied intelligence is essential for reaching true artificial general intelligence (AGI), could soft robotics be the breakthrough that takes us there?

Apple Study Shows Advanced AI Models Falter as Problem Complexity Increases

Recent research is starting to reveal the limitations of today’s most advanced—yet disembodied—AI systems. A new study from Apple looked at so-called “Large Reasoning Models” (LRMs), which attempt to generate reasoning steps before providing answers. While these models often outperform standard large language models (LLMs) on various tasks, they struggle significantly as problems grow more complex. Rather than merely plateauing, their performance can completely break down—even with ample computing power.

More troubling is their lack of consistent or logical reasoning. Their “reasoning traces,” or the way they process problems, often lack coherence. In fact, the more complicated the task, the less effort they seem to invest. According to the study’s authors, these models don’t actually “think” like humans do.

“What we’re building right now are systems that take in words and predict the most likely next word,” said Nick Frosst, former Google researcher and co-founder of Cohere, in an interview with The New York Times. “That’s very different from how you and I think.”
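For a concrete sense of what Frosst means, here is a deliberately tiny, hypothetical sketch of next-word prediction: a bigram counter that, like an LLM at a vastly smaller scale, learns only which word tends to follow which. The corpus and the predict() helper are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny corpus,
# then answer queries with the most frequent continuation. A drastic
# simplification of an LLM, but the same kind of objective: predict the
# most likely next word, with no grounding in the physical world.
corpus = "the robot moved the arm and the robot sensed the world".split()

next_word = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word[prev][nxt] += 1

def predict(word):
    # Return the single most frequent continuation seen in training.
    return next_word[word].most_common(1)[0][0]

print(predict("the"))  # -> 'robot' ('robot' follows 'the' most often)
```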

GOFAI and the Era of Symbolic Logic: When AI Didn’t Need a Body

How did we arrive at this point? For much of the 20th century, artificial intelligence was guided by a framework known as GOFAI—“Good Old-Fashioned Artificial Intelligence”—which viewed thinking as a form of symbolic logic. Early AI pioneers believed intelligence could be created by manipulating symbols, similar to how a computer runs code. Under this approach, abstract reasoning didn’t require a physical body.
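To make the symbolic picture concrete, here is a minimal sketch in the GOFAI spirit: a few lines of forward-chaining inference that derive new facts from if-then rules, with no sensors or body involved. The facts and rules are invented for illustration; classic systems were far larger and typically written in languages like Lisp or Prolog.

```python
# Tiny forward-chaining inference engine in the GOFAI spirit:
# intelligence as rule-driven symbol manipulation, nothing more.
facts = {"bird(tweety)"}
rules = [
    ({"bird(tweety)"}, "has_wings(tweety)"),     # if bird, then wings
    ({"has_wings(tweety)"}, "can_fly(tweety)"),  # if wings, then flight
]

changed = True
while changed:  # keep applying rules until no new fact is derived
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
# ['bird(tweety)', 'can_fly(tweety)', 'has_wings(tweety)']
```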

However, this idea began to unravel when early robotic AI systems struggled in unpredictable, real-world environments. Experts from psychology, neuroscience, and philosophy started exploring a different perspective—one influenced by studies of animal and plant intelligence. These lifeforms adapt, learn, and respond by interacting physically with their surroundings, not by processing abstract symbols.

Take humans, for instance: our gut is managed by the enteric nervous system, often called the “second brain” because it uses the same types of cells and neurotransmitters as our brain. Interestingly, octopus tentacles rely on these same components to sense and respond independently within each limb.

All of this points to a deeper question: what if true, flexible intelligence isn’t centralized in a brain alone, but spread throughout the body, deeply rooted in physical interaction with the environment?

Embodied Cognition: Intelligence Emerges Through the Body’s Interaction with the World

This is the core principle behind embodied cognition: thinking, sensing, and acting aren’t isolated functions—they’re deeply interconnected parts of a single process. As Rolf Pfeifer, Director of the Artificial Intelligence Laboratory at the University of Zurich, explained to EMBO Reports, “Brains have always evolved alongside bodies that engage with the world in order to survive. There’s no abstract, algorithmic space where brains simply emerge on their own.”

To bring AI closer to true intelligence, we may need to give it smarter—and softer—bodies. Cecilia Laschi, a pioneer in soft robotics, shifted her focus from rigid humanoid robots to soft-bodied machines inspired by the octopus, whose limbs move independently without a skeleton.

“In human-like robots, every step must be precisely controlled,” Laschi told New Atlas. “But if the terrain changes, you have to reprogram it.” Animals, by contrast, adapt naturally—our knees, for instance, adjust to uneven ground without involving the brain.

This idea, known as embodied intelligence, suggests that the body itself handles some aspects of thinking. From an engineering view, it lightens the load on a robot’s main processor, making it more adaptable to real-world challenges.

As Laschi wrote in Science Robotics, motor behavior is shaped partly by the body and its environment—not just by code. Intelligence, then, isn’t only about computation but about interaction. Soft robotics, using flexible materials, allows machines to move and learn more like living organisms. A soft robotic arm, like an octopus tentacle, can grasp and respond in real time without complex calculations.
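That offloading is easy to illustrate in simulation. Below is a minimal, hypothetical sketch: a robot body reduced to a mass riding on a passive spring-damper leg. When the terrain suddenly rises mid-run, the controller issues no new commands at all; the passive mechanics absorb the change and settle at a new equilibrium on their own. All parameters are invented for illustration, not drawn from any real robot.

```python
# A mass on a passive spring-damper leg meets a 2 cm step in the terrain.
# No control update ever happens; compliance alone absorbs the change.
m, k, c = 1.0, 400.0, 15.0   # mass (kg), leg stiffness (N/m), damping (N*s/m)
g = 9.81                     # gravity (m/s^2)
rest = 0.30                  # natural leg length (m)
dt, steps = 0.001, 4000      # 4 seconds of simulated time

ground = 0.0
z = ground + rest - m * g / k   # start at static equilibrium height
v = 0.0
trace = []

for t in range(steps):
    if t == steps // 2:
        ground += 0.02          # terrain rises; nothing is reprogrammed
    stretch = (z - ground) - rest
    a = (-k * stretch - c * v) / m - g   # passive spring-damper plus gravity
    v += a * dt
    z += v * dt
    trace.append(z)

print(f"height before step: {trace[steps // 2 - 1]:.4f} m")
print(f"height after settling: {trace[-1]:.4f} m")  # ~2 cm higher, no control
```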

To make soft robots as adaptable as an octopus tentacle, engineers are shifting from rigid programming to materials that can sense and respond on their own—a concept called autonomous physical intelligence (API).

UCLA professor Ximin He is a leading figure in this field, developing soft materials like gels and polymers that not only react to stimuli but also self-regulate using internal feedback.

Self-Regulating Materials Show How Soft Robots Can Move and Decide Like Living Systems

Her lab’s 2018 self-moving gel showcased this approach by mimicking biological systems. API uses nonlinear time-lag feedback, embedding decision-making within the material itself to produce smooth, rhythmic motion, like walking, without constant external commands.
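“Nonlinear time-lag feedback” names a well-known route to self-sustained rhythm: when feedback arrives after a delay and saturates, a system can settle into steady oscillation rather than a fixed point. The toy model below, a delayed negative-feedback loop integrated with Euler steps, illustrates only that general mechanism; it is not the actual chemistry or dynamics of He’s gels.

```python
import numpy as np

# Delayed, saturating negative feedback: dx/dt = -gain * tanh(x(t - tau)).
# For gain * tau > pi/2 the resting state is unstable and the system locks
# into a steady limit cycle: rhythm without any external clock or command.
dt = 0.01      # integration step (s)
tau = 1.0      # feedback time lag (s)
gain = 2.0     # feedback strength; gain * tau = 2 > pi/2, so it oscillates
steps = 20000
lag = int(tau / dt)

x = np.zeros(steps)
x[:lag + 1] = 0.1   # small initial perturbation held over the lag window

for t in range(lag, steps - 1):
    x[t + 1] = x[t] - gain * np.tanh(x[t - lag]) * dt

# The tail of the trace is a sustained oscillation, not a decay to zero.
print("late-time peak amplitude:", round(float(np.max(np.abs(x[-2000:]))), 3))
```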

By integrating sensing, control, and movement, her work brings robots closer to lifelike, real-time adaptability.

Soft robotics is still in its early stages, but its promise is significant. Cecilia Laschi points to early uses like endoscopic tools that respond to tissue or rehab devices that adapt to a patient’s needs.

To reach AGI, machines may need bodies—especially soft, flexible ones. Humans and other lifeforms learn through physical interaction: by moving, touching, and adapting. We know an apple not just by definition, but by handling and experiencing it.

This kind of sensory, real-world knowledge is hard to teach an AI that only processes text or images. Physical interaction could help AI develop its own unique understanding of the world. A soft robot with specialized senses—like infrared vision or scent detection—might form a completely different, and possibly valuable, view of life on Earth.

“To build human-like intelligence, a machine must gain its own experiences,” said Giulio Sandini, a bioengineering professor at the University of Genoa. Like children, AI needs a body to truly learn.


Read the original article on: New Atlas

Read more: Single-Material Electronic Skin gives Robots Near-Human Feel
