
The earliest robot I recall is Rosie from The Jetsons, followed not long after by the polished C-3PO and his loyal partner R2-D2 in The Empire Strikes Back. My first encounter with a bodiless AI, however, was Joshua from WarGames—the computer that nearly triggered nuclear war before discovering the logic of mutually assured destruction and opting to play chess instead.
At seven, everything shifted for me. Could a machine truly grasp ethics, emotions, or what it means to be human? Did AI require a body to achieve that? These questions grew stronger as portrayals of artificial intelligence became more nuanced—through figures like Bishop the android in Aliens, Data in Star Trek: TNG, and later Samantha in Her or Ava in Ex Machina.
These questions are no longer just hypothetical. Roboticists are actively debating whether artificial intelligence requires a body—and, if it does, what form that body should take. Beyond that lies the challenge of “how”: if embodiment is essential for achieving true artificial general intelligence (AGI), could soft robotics be the breakthrough that makes it possible?
The Boundaries of Bodiless AI
Recent research is starting to reveal flaws in today’s most advanced – and notably bodiless – AI systems. A new Apple study looked at so-called “Large Reasoning Models” (LRMs), language models designed to generate reasoning steps before producing an answer. While these models outperform standard LLMs on many tasks, the paper shows that their performance collapses once problems reach higher levels of complexity. Instead of simply plateauing, they break down, even when supplied with ample computing resources.
More troubling, they don’t reason in a consistent or algorithmic way. Their “reasoning traces” – the step-by-step process they follow – often lack internal coherence. And as tasks become harder, the models appear to put in even less effort. The authors conclude that these systems don’t truly “think” in a human-like manner.
Nick Frosst, a former Google researcher and co-founder of Cohere, told The New York Times that today’s systems are essentially designed to take words as input and predict the most probable next word — a process he noted is quite different from how humans think.
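As a toy illustration of that point (a bigram counter, nothing like the architecture of a production LLM — the corpus and names here are invented for the example), next-word prediction can be reduced to "which word most often followed this one in the training text":

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then always predict the most frequent successor.
corpus = "the robot moves the arm and the robot senses the world".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "robot" — it follows "the" twice in the corpus
```

Real models replace the counting with a neural network over billions of documents, but the objective is the same: given the words so far, output the most probable next word.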
Cognition Is More Than Just Computation
How did we arrive at this point? For much of the 20th century, artificial intelligence was guided by GOFAI—“Good Old-Fashioned AI”—which viewed cognition as symbolic logic. The early assumption was that intelligence could be created by manipulating symbols, much like a computer runs code. In that framework, abstract reasoning didn’t require a body.
But cracks began to show when early robots struggled in unpredictable, real-world environments. This pushed researchers in psychology, neuroscience, and philosophy to reconsider the problem, drawing on insights from studies of animal and plant intelligence—systems that adapt, learn, and respond to their surroundings through direct physical engagement rather than symbolic representations.
Even in humans, the enteric nervous system—the so-called “second brain” in the gut—demonstrates this principle. It relies on the same cells and neurotransmitters as the brain to manage digestion, much like an octopus tentacle uses those same components to sense and act independently within a single limb.
This leads to the question—what if true adaptable intelligence comes from being spread across the whole body, rather than existing only in the brain, cut off from the physical world?
This is the core principle of embodied cognition: perception, action, and thought form a single, unified process. As Rolf Pfeifer, Director of the University of Zurich’s Artificial Intelligence Laboratory, explained to EMBO Reports: “Brains have always evolved within bodies that must engage with the world to survive. They don’t emerge in some abstract, algorithmic void.”
Embodied Minds: An Alternative Form of Thought
We may need more adaptable bodies to match advanced AI — and Cecilia Laschi, a leading figure in soft robotics, argues that adaptability comes from softness. After years of working on rigid humanoid robots in Japan, she turned her focus to soft-bodied designs, drawing inspiration from the octopus, a creature without a skeleton whose limbs operate semi-independently.
“With a humanoid robot, every movement has to be precisely controlled,” she told New Atlas. “If the ground changes, you need to adjust the programming.”
By contrast, animals don’t consciously calculate every step. “Our knees naturally yield,” Laschi notes. “We handle uneven surfaces through our bodies’ mechanics, not our brains.” This illustrates embodied intelligence — the notion that parts of cognition can be delegated to the body itself.
From an engineering standpoint, embodied intelligence offers clear benefits: by shifting perception, control, and decision-making into a robot’s physical design, the central processor has less work to do — enabling robots to operate more reliably in unpredictable conditions.
In a May issue of Science Robotics, Laschi explains that motor control isn’t handled solely by a robot’s computing unit—external forces acting on the body also shape its movements. In other words, behavior emerges from interaction with the environment, and intelligence develops through experience rather than being fully pre-coded into software.
From this perspective, progress in intelligence isn’t about faster processors or larger models, but about engagement with the world. Soft robotics plays a central role here, using materials like silicone and advanced fabrics to create flexible, adaptive machines. Such robots can adjust in real time—like a soft robotic arm modeled on an octopus tentacle, which can grasp, explore, and react without calculating every step in advance.
Living Matter and Loops: Teaching Materials To Think
To create soft robots as capable as an octopus tentacle, engineers must move beyond coding for every scenario and instead develop novel methods for sensing and response. Achieving lifelike independence in machines is driving research toward a new idea: autonomous physical intelligence (API).
At UCLA, Associate Professor Ximin He has advanced this field by developing soft materials—such as adaptive gels and polymers—that not only respond to external stimuli but also control their own movement through inherent feedback mechanisms.
He explains to New Atlas that their research focuses on building decision-making into the materials themselves. These materials don’t just shift shape when stimulated — they can also ‘decide’ how to adapt or fine-tune that response based on their own deformation, effectively adjusting their next movement.
Back in 2018, the team showcased this with a gel capable of self-regulating its motion. Since then, they’ve demonstrated that the same concept extends to other soft materials, such as liquid crystal elastomers that perform well in air.
Building Intelligence into the Material Itself
The foundation of API lies in nonlinear, time-delayed feedback. In conventional robots, sensors feed data to a controller that then issues commands; the method He has developed instead weaves that decision-making directly into the material itself.
“In robotics, you need sensing, actuation, and a way to choose between them,” He says. “We’re building that choice physically through feedback loops.”
He likens the idea to biology: negative feedback stabilizes systems, as in glucose regulation or a thermostat, while positive feedback reinforces change. Nonlinear feedback blends the two, enabling stable yet dynamic patterns of motion – such as pendulum swings or walking cycles.
“Much of natural movement – walking, swimming, and so on – depends on rhythmic, repeating patterns,” He explains. “By using nonlinear, delayed feedback, we can engineer soft robots that step forward, step back, and continue moving – all without constant outside control.”
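The mechanism He describes can be sketched numerically (a minimal simulation, with illustrative parameter values — not the team's actual material model): a single variable under delayed, saturating negative feedback. Without the delay the system simply settles to rest; with enough delay, the same negative feedback overshoots on every correction and locks into a self-sustained rhythm:

```python
import math

# x'(t) = -a * tanh(x(t - tau)): negative feedback, applied with a time
# delay, through a saturating (nonlinear) response. For a * tau > pi/2
# the equilibrium at 0 is unstable and the system settles into a steady
# oscillation instead of decaying — a rhythm with no external controller.
a, tau, dt, steps = 2.0, 1.0, 0.01, 20000
delay_steps = int(tau / dt)

x = [0.1] * (delay_steps + 1)  # constant history before t = 0
for _ in range(steps):        # forward Euler integration
    x.append(x[-1] + dt * (-a * math.tanh(x[-1 - delay_steps])))

# After transients, the trajectory keeps crossing zero rather than decaying.
tail = x[-5000:]
crossings = sum(1 for p, q in zip(tail, tail[1:]) if p * q < 0)
print(f"zero crossings in the last 5000 steps: {crossings}")
```

Dropping the delay (`tau = 0`) makes the oscillation vanish and `x` decay smoothly to zero — the stabilizing role of plain negative feedback that He contrasts with the delayed, nonlinear case.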
This marks a significant leap from earlier soft robots that depended entirely on external triggers. As He and colleagues noted in a recent review, embedding sensing, control, and actuation within the material itself pushes robotics toward systems that don’t just respond passively, but can choose, adjust, and act independently.
Softness Is the New Smart
Soft robotics is still emerging, but its potential is immense. Laschi highlights early applications such as surgical instruments—like endoscopes—that can both explore and respond to delicate human tissue, or rehabilitation devices that adjust and move in harmony with a patient’s needs.
To progress from AI to AGI, machines might require bodies—flexible and adaptive ones in particular. After all, most living beings, humans included, learn through movement, contact, trial, and correction. We navigate an unpredictable, messy world with ease, whereas today’s AIs still falter. Our understanding of an apple doesn’t come from reading its definition, but from holding, tasting, dropping, bruising, slicing, squeezing, and watching it decay.
This kind of knowledge—embodied, sensory, and contextual—is difficult to instill in a system trained only on text or images. By interacting directly with the physical world, AI can overcome the limits of language that constrain today’s LLMs and begin to form its own model of reality. That model wouldn’t mirror a human perspective, but could be something altogether different. A soft robot, equipped with unique sensory abilities—like infrared sight, deep-frequency hearing, or detecting diseases through smell—might cultivate a novel (and potentially very valuable) way of perceiving life on Earth.
As Giulio Sandini, Professor of Bioengineering at the University of Genoa, explains: “To create human-like intelligence in a machine, it must gather its own experiences. Like children, it has to learn through doing—and that almost certainly means having a body.”
Read the original article on: New Atlas
