Is a Physical Form Necessary For AI To Reach Human-Like Intelligence?

The first robot I can recall is Rosie from The Jetsons, soon followed by the sophisticated C-3PO and his loyal companion R2-D2 in The Empire Strikes Back. But the first AI I encountered without a physical form was Joshua, the computer from WarGames—a system that nearly triggered nuclear war until it grasped the concept of mutually assured destruction and opted to play chess instead.

That moment, when I was seven, left a lasting impression. Could a machine grasp ethics? Feel emotions? Understand what it means to be human? These questions only grew more compelling as portrayals of artificial intelligence became more nuanced—whether through the android Bishop in Aliens, Data in Star Trek: The Next Generation, or more recent figures like Samantha in Her and Ava in Ex Machina.

But these questions are no longer purely theoretical. Today, roboticists are actively exploring whether artificial intelligence requires a physical form—and if it does, what kind of embodiment is most suitable.

Then there’s the question of how to achieve it: if embodiment is essential for developing true artificial general intelligence (AGI), could soft robotics hold the key to unlocking that next breakthrough?

The Boundaries Of Bodiless AI

Recent research is beginning to expose the shortcomings of today’s most advanced—yet still disembodied—AI systems. A new study from Apple looked at so-called “Large Reasoning Models” (LRMs), a type of language model designed to generate reasoning steps before delivering answers. While these models outperform traditional LLMs on many tasks, the study found they tend to break down when faced with more complex problems. And rather than merely hitting a ceiling, their performance sharply deteriorates—even when supplied with ample computational resources.

More troubling is their inconsistency in reasoning. Their “reasoning traces,” or the steps they take to solve problems, often lack coherent logic. As tasks grow more difficult, the models appear to exert even less effort. The researchers conclude that these systems don’t actually “think” in a way that resembles human cognition.

“What we’re creating today are systems that process words and predict the most likely next word … which is quite different from how humans think,” said Nick Frosst, a former Google researcher and co-founder of Cohere, in an interview with The New York Times.

Thinking Goes Beyond Mere Computation

How did we arrive at this point? Throughout much of the 20th century, researchers developed artificial intelligence using a framework called GOFAI—’Good Old-Fashioned Artificial Intelligence’—which approached cognition through symbolic logic. Early AI pioneers aimed to build intelligence by manipulating symbols according to formal rules, much as a computer executes code. Under this model, abstract reasoning didn’t require a physical body.

However, this view began to unravel when early robotic systems struggled to operate effectively in the unpredictable, messy conditions of the real world. Researchers from psychology, neuroscience, and philosophy began to reconsider the foundations of intelligence—especially in light of insights from studying animals and even plants, which learn and adapt through physical interaction with their environments rather than through abstract reasoning alone.

In humans, for example, the enteric nervous system—often called the “second brain”—regulates digestion using the same kinds of neurons and chemicals as the brain. Interestingly, octopus tentacles use similar components to sense and react independently, right within the limb.

All of this raises a compelling question: what if adaptable intelligence emerges by spreading throughout the body and staying deeply connected to the physical world, rather than concentrating in a centralized brain?

This is the core principle behind embodied cognition: thinking, sensing, and acting are not distinct functions—they form a single, integrated process. As Rolf Pfeifer, Director of the Artificial Intelligence Laboratory at the University of Zurich, explained to EMBO Reports, “Brains have always evolved alongside bodies that must engage with the world to survive. There’s no abstract, algorithmic void where brains simply emerge.”

Embodied Intelligence

To build truly intelligent systems, we may need to develop smarter bodies alongside smarter AI—and according to Cecilia Laschi, a leading figure in soft robotics, “smarter” often means “softer.” After years of working with rigid humanoid robots in Japan, she turned her focus to soft-bodied machines, drawing inspiration from the octopus—an animal with no skeleton whose limbs can act independently.

“In a humanoid robot, every movement must be precisely controlled,” she explained in an interview with New Atlas. “If the terrain changes, even slightly, you have to adjust the programming.”

In contrast, animals don’t need to consciously plan each step. “Our knees, for example, are naturally compliant,” she noted. “We adapt to uneven surfaces mechanically, without involving the brain.” This concept—where the body itself handles part of the cognitive load—is known as embodied intelligence.

Designing Smarter Bodies

From an engineering standpoint, embodied intelligence offers clear benefits. By shifting perception, control, and decision-making to the robot’s physical design, engineers can reduce the demands on its central processor. This makes robots more adaptable and efficient in unpredictable, real-world environments.

In a May special edition of Science Robotics, Laschi explains it this way: “Motor control isn’t handled solely by the computing system … physical behavior is also shaped mechanically by external forces acting on the body.” In other words, behavior emerges from interactions with the environment, and intelligence is acquired through experience—not hardcoded into a program.

From this perspective, intelligence isn’t simply a matter of faster processors or larger AI models—it’s rooted in interaction. A major driver of progress in this area is soft robotics, which employs materials like silicone or advanced fabrics to create more adaptable, flexible robot bodies. These soft systems can adjust to their surroundings, move fluidly, and learn in real time. Much like an octopus tentacle, a soft robotic arm can grasp, sense, and adapt on the fly—without needing to compute every action in advance.
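The idea that a body can take over part of the control problem is easy to see in a toy simulation. The Python sketch below is a minimal illustration under invented assumptions (the stiffness, damping, and terrain values are made up, and it is not drawn from Laschi's work). It compares a stiff, position-controlled joint, which must compute a correction for every change in the ground, with a passive spring-damper joint whose mechanics settle onto the same terrain without a single commanded correction.

```python
# A toy illustration of "morphological computation": a passive spring-damper
# joint absorbs terrain variation mechanically, so its controller issues no
# corrective commands at all. Every parameter here (stiffness, damping,
# terrain shape) is an invented assumption, not taken from any real robot.
import math
import random

DT = 0.001                      # integration step (seconds)
K, C, MASS = 400.0, 25.0, 1.0   # assumed spring stiffness, damping, mass

def terrain(t: float) -> float:
    """Uneven ground: a slow bump plus small random roughness."""
    return 0.05 * math.sin(math.pi * t) + 0.005 * random.uniform(-1.0, 1.0)

def rigid_joint(steps: int) -> float:
    """Stiff, position-controlled joint: every terrain change must be
    sensed and corrected explicitly. Returns total commanded correction."""
    pos, effort = 0.0, 0.0
    for i in range(steps):
        command = terrain(i * DT) - pos   # controller computes a correction
        pos += command                    # an ideal actuator applies it
        effort += abs(command)
    return effort

def compliant_joint(steps: int) -> float:
    """Spring-damper joint: the mechanics settle onto the terrain by
    themselves, so the controller contributes zero commands."""
    pos, vel = 0.0, 0.0
    for i in range(steps):
        force = K * (terrain(i * DT) - pos) - C * vel   # passive dynamics
        vel += (force / MASS) * DT
        pos += vel * DT
    return 0.0   # no explicit control effort anywhere in the loop

steps = 4000
print(f"rigid joint, commanded effort:     {rigid_joint(steps):.2f}")
print(f"compliant joint, commanded effort: {compliant_joint(steps):.2f}")
```

In the compliant case, the adaptation happens in the physics of the spring and damper rather than in a processor, which is precisely the kind of offloading embodied intelligence describes.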

Living Materials and Feedback: How To Build Self-Thinking Systems

To build soft robots that function as seamlessly as an octopus tentacle, engineers are moving away from programming every potential outcome. Instead, they’re exploring new approaches that enable machines to sense and respond dynamically. Researchers in this field are developing a concept called autonomous physical intelligence (API).

Ximin He, an Associate Professor of Materials Science and Engineering at UCLA, is at the forefront of this research. Her work involves developing soft, responsive materials—such as gels and polymers—that do more than just react to external stimuli. These materials are capable of self-regulating their movements through built-in feedback mechanisms.

“We’re trying to embed more decision-making capabilities directly into the material,” He explained in an interview with New Atlas. “If a material changes shape in response to a trigger, it can also determine how to adapt that trigger based on its deformation—essentially correcting or fine-tuning its next action.”

Embedding Intelligence in Matter

In 2018, He’s team showcased a gel that could control its own movement. Since then, they’ve demonstrated that this concept also works with other soft materials, like liquid crystal elastomers, which function effectively even in open air.

The core principle behind API is nonlinear time-lagged feedback. While conventional robots rely on external control systems to interpret sensory input and direct their actions, Ximin He’s method embeds that decision-making logic directly into the material itself.

“In robotics, it’s not enough to just sense and actuate—you also need decision-making in between,” He tells New Atlas. “We’re building that into the material structure through internal feedback mechanisms.”

She likens this approach to how living organisms function. Biological systems often use negative feedback—like how the body regulates blood sugar or how a thermostat maintains temperature—to correct imbalances. Positive feedback, in contrast, intensifies changes. Nonlinear feedback blends these two, producing stable, rhythmic patterns of behavior, such as those seen in walking or pendulum motion.

“Natural movement—like walking or swimming—is often repetitive and steady,” He explains. “With nonlinear, time-delayed feedback, soft robots can be designed to move forward, reverse, and then move forward again, all without step-by-step external commands.”
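What such a rhythm looks like in the simplest possible setting can be sketched in a few lines of code. The Python toy below is an illustrative assumption rather than He's actual system: a single state variable is driven only by its own time-delayed value, fed back through a saturating nonlinearity. Once the product of gain and delay exceeds pi/2, the resting state becomes unstable and the variable locks into a self-sustained back-and-forth oscillation, with no external commands anywhere in the loop.

```python
# A toy illustration of nonlinear time-delayed feedback: one state variable
# (think of it as a bending angle) is driven only by its own delayed value,
# passed through a saturating tanh nonlinearity. The gain and delay are
# invented assumptions; this is a sketch of the principle, not He's model.
import math

DT = 0.01     # integration step (seconds)
TAU = 1.0     # feedback delay: the system "senses" its state TAU ago
GAIN = 2.0    # feedback strength; oscillation requires GAIN * TAU > pi / 2

delay_steps = int(TAU / DT)
history = [0.1] * delay_steps   # ring buffer holding the recent past
x = 0.1

for step in range(3000):
    x_delayed = history[step % delay_steps]   # state from TAU seconds ago
    # Delayed negative feedback: the correction arrives late, overshoots,
    # reverses, and overshoots again, settling into a steady rhythm
    # instead of coming to rest.
    x += -math.tanh(GAIN * x_delayed) * DT
    history[step % delay_steps] = x           # store the present for later
    if step % 300 == 0:
        print(f"t = {step * DT:4.1f} s   x = {x:+.3f}")
```

The same delayed-overshoot logic, realized chemically or photothermally inside a gel rather than in software, is, roughly, the kind of loop that lets a material advance, reverse, and advance again on its own.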

This marks a significant evolution from earlier soft robots that depended entirely on outside cues to function. As He and her collaborators outlined in a recent review paper, by integrating sensing, control, and actuation directly into the material, they’re building robots capable of not just responding to their environment—but of making decisions, adapting, and acting independently.

The Future Lies In Intelligent Softness

Soft robotics is still an emerging field, but its potential is immense. Laschi highlights early, clear applications such as endoscopic surgical tools capable of simultaneously inspecting and responding to delicate human tissue, as well as rehabilitation devices that can bend or adjust in real time to meet a patient’s needs.

To progress from AI to AGI, machines may need physical forms—especially ones that are soft and adaptable. Most living beings, including humans, gain knowledge through movement, touch, trial and error, and adaptation. We navigate a messy, unpredictable world with ease—something current AIs still find challenging. Our understanding of something as simple as an apple comes not from reading a definition, but from physically engaging with it: holding, tasting, dropping, bruising, slicing, squeezing, and watching it decay.

This kind of embodied, sensory, and context-rich knowledge is difficult to instill in models that rely solely on text or images. By linking AI more directly to the real world through sensory feedback, we can bypass the constraints of language that large language models face. This opens the door for AI to form its own kind of understanding—distinct from a human one. For instance, a soft robot equipped with alternative sensory inputs—like infrared vision, low-frequency hearing, or the ability to detect diseases such as cancer by smell—could develop a unique and potentially valuable perspective on life on Earth.

“If you want to develop something like human intelligence in a machine, the machine has to be able to acquire its own experiences,” explains Giulio Sandini, Professor of Bioengineering at the University of Genoa. Like children, it must learn through interaction with the world—and that almost certainly means it needs a body.


Read the original article on: New Atlas

Read more: Watch: Sneaker-Clad Humanoid Outpaces Barefoot Rival in Gobi Race