Images showing (A) the rigid air-filled chamber stand-ins and paws used for the compliance experiments and (B) a close-up of the rigid air-filled chamber stand-ins, showing the cutouts added to avoid reducing the range of motion through self-collisions compared with the soft air-filled chambers. Image Credits: Science Robotics (2025). DOI: 10.1126/scirobotics.ads6790
Despite their advanced capabilities—from exploring distant planets to conducting intricate surgeries—robots continue to struggle with simple human tasks. One of the biggest hurdles is dexterity: the skill of grasping, holding, and manipulating objects. But that may be changing. Researchers at the Toyota Research Institute in Massachusetts have now trained a robot to use its whole body to manage large objects, mimicking the way humans do.
Humans use a combination of fine motor skills—like precise hand movements—and gross motor skills involving the arms, legs, and torso to pick up and handle objects. Robots, however, struggle with these larger, full-body movements, such as lifting and stabilizing a big box, because they require continuous, complex adjustments to maintain control and avoid dropping the object.
A Humanoid Robot Demonstrating Advanced Object Handling
In a study published in Science Robotics, researchers showcased a humanoid upper-body robot named Punyo that could lift a large water jug onto its shoulder and grasp a big box. Punyo used feedback from its soft, pressure-sensitive skin and joint sensors to coordinate its movements.
The robot’s success was largely due to the softness of its body (known as passive compliance) and the flexibility programmed into its joints (active compliance). Compared to a rigid version, this compliant design led to significantly better performance. “Incorporating any form of compliance—passive, active, or both—improved outcomes, boosting success rates by an average of 206% over a non-compliant model,” the researchers noted.
Another key advantage was the robot’s ability to learn quickly. Using a technique called example-guided reinforcement learning, the researchers trained Punyo with just one virtual demonstration. From there, it practiced independently until it mastered the task. As the team explained, “A single teleoperated demonstration in simulation is enough to train effective policies for complex, contact-heavy movements.”
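How such example-guided training might be set up is sketched below in Python. This is not the paper's implementation; the file name, state representation, and weighting are illustrative assumptions, showing only the general idea of mixing a task reward with a bonus for staying near the single demonstration.

```python
import numpy as np

# Minimal sketch, not the paper's implementation. Assumes the single
# teleoperated demo has been saved as a (T, state_dim) array in a
# hypothetical file "demo_rollout.npy".
demo_states = np.load("demo_rollout.npy")

def guided_reward(state, t, task_reward, w_demo=0.5):
    """Task reward plus a bonus that is 1 on the demo and decays away from it."""
    ref = demo_states[min(t, len(demo_states) - 1)]        # demo state for this timestep
    demo_bonus = np.exp(-np.linalg.norm(state - ref) ** 2)  # 1 on the demo, ->0 far from it
    return task_reward + w_demo * demo_bonus
```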
More Capable Robots
This technology marks a major advancement toward developing robots that are more useful in everyday life. For instance, they could safely and efficiently handle bulky items like furniture at home or heavy packages in warehouses. They might also assist in care environments, helping individuals with mobility issues. Importantly, these robots wouldn’t require detailed programming—they could learn human-like skills from just one example.
A genetically engineered pig lung functioned for 216 hours inside a brain-dead human, marking the first reported attempt at a cross-species lung transplant, according to Nature Medicine.
The technique, known as xenotransplantation, aims to ease the chronic shortage of donor organs. Pig organs are similar in size to human ones, but their proteins often trigger severe immune rejection. Over time, scientists have pinpointed the genes behind these problematic proteins and used gene-editing tools to make pig organs more compatible with the human body.
China Pushes Boundaries With First Pig Lung Transplant
Researchers have already transplanted modified pig hearts, livers, and kidneys into people under experimental protocols, showing encouraging progress. Now, a team at the First Affiliated Hospital of Guangzhou Medical University in China has added lungs to that list—though with important limitations.
The transplanted lung functioned for nine days but was eventually damaged by inflammation, despite an intensive regimen of immune-suppressing drugs.
Still, the researchers note that their results “pave the way for further innovations in the field.”
Every day, about 13 people die while waiting for an organ transplant. The problem is stark: there simply aren’t enough donor organs.
For a transplant to succeed, the donor organ must be closely matched to the recipient’s blood type and immune markers, which makes the wait agonizingly long. As of late September 2024, nearly 90,000 patients were on the kidney transplant list, while more than 3,000 awaited a new heart.
Pig organs offer a possible alternative—but in their natural state, they’re unsafe for humans.
Viruses and Rejection
One issue is that pig DNA carries porcine endogenous retroviruses (PERVs). These viruses don’t harm pigs but can infect humans. Another is immune rejection: every organ is covered in protein markers, like a biological fingerprint. If the body doesn’t recognize that fingerprint, the immune system mounts an aggressive defense. Killer T cells, B cells, and inflammatory molecules known as cytokines can overwhelm and destroy the transplant.
The solution is to make pig organs more human-like so they evade immune detection.
Over the years, researchers have identified the pig genes that encode these problematic proteins and used CRISPR-Cas9 to remove them. But that created new challenges: without certain protein signals, the organs looked abnormal to immune cells. To counter this, scientists inserted three human genes that regulate immune responses, essentially camouflaging the organs.
After years of refinement, Chinese researchers developed a genetically altered Bama Xiang pig—a small breed native to southern China—with six modified genes designed to make its organs more compatible with humans, at least in theory.
First Pig Lung Transplant Attempt in Humans
In a recent trial, scientists transplanted the left lung of a genetically modified Bama Xiang pig into a brain-dead 39-year-old man, with his family’s consent. The organ tested virus-free, and the surgery largely followed standard lung transplant procedures, though some pig structures had to be trimmed for fit.
Lungs break more easily than other organs, and restoring blood flow can severely damage them. Yet within a day the transplanted lung stabilized and functioned normally. By day two, however, it showed signs of acute rejection, with swelling, immune cell activity, and later spikes in antibodies. By day nine, the lung had partially healed and was exchanging oxygen, but the trial ended at the family’s request.
Researchers detected no pig viruses during the study, and the recipient developed no infections despite immune suppression. Lungs face unique hurdles—high blood pressure, pathogen exposure, and rejection-prone proteins. In this trial, the immune response was faster and stronger than in baboons, highlighting the need for better drugs or more genetic edits.
The team now plans to test existing transplant drugs—and potentially add blood thinners or anti-inflammatories—to better control lung-specific immune reactions in future trials.
The earliest robot I recall is Rosie from The Jetsons, followed not long after by the polished C-3PO and his loyal partner R2-D2 in The Empire Strikes Back. My first encounter with a bodiless AI, however, was Joshua from WarGames—the computer that nearly triggered nuclear war before discovering the logic of mutually assured destruction and opting to play chess instead.
At seven, everything shifted for me. Could a machine truly grasp ethics, emotions, or what it means to be human? Did AI require a body to achieve that? These questions grew stronger as portrayals of artificial intelligence became more nuanced—through figures like Bishop the android in Aliens, Data in Star Trek: TNG, and later Samantha in Her or Ava in Ex Machina.
These questions are no longer just hypothetical. Roboticists are actively debating whether artificial intelligence requires a body—and, if it does, what form that body should take. Beyond that lies the challenge of “how”: if embodiment is essential for achieving true artificial general intelligence (AGI), could soft robotics be the breakthrough that makes it possible?
The Boundaries Of Bodiless AI
Recent research is starting to reveal flaws in today’s most advanced – and notably bodiless – AI systems. A new Apple study looked at so-called “Large Reasoning Models” (LRMs), language models designed to generate reasoning steps before producing an answer. While these models outperform standard LLMs on many tasks, the paper shows that their performance collapses once problems reach higher levels of complexity. Instead of simply plateauing, they break down, even when supplied with ample computing resources.
More troubling, they don’t reason in a consistent or algorithmic way. Their “reasoning traces” – the step-by-step process they follow – often lack internal coherence. And as tasks become harder, the models appear to put in even less effort. The authors conclude that these systems don’t truly “think” in a human-like manner.
Nick Frosst, a former Google researcher and co-founder of Cohere, told The New York Times that today’s systems are essentially designed to take words as input and predict the most probable next word — a process he noted is quite different from how humans think.
Cognition Is More Than Just Computation
How did we arrive at this point? For much of the 20th century, artificial intelligence was guided by GOFAI—“Good Old-Fashioned AI”—which viewed cognition as symbolic logic. The early assumption was that intelligence could be created by manipulating symbols, much like a computer runs code. In that framework, abstract reasoning didn’t require a body.
But cracks began to show when early robots struggled in unpredictable, real-world environments. This pushed researchers in psychology, neuroscience, and philosophy to reconsider the problem, drawing on insights from studies of animal and plant intelligence—systems that adapt, learn, and respond to their surroundings through direct physical engagement rather than symbolic representations.
Even in humans, the enteric nervous system—the so-called “second brain” in the gut—demonstrates this principle. It relies on the same cells and neurotransmitters as the brain to manage digestion, much like an octopus tentacle uses those same components to sense and act independently within a single limb.
This leads to the question—what if true adaptable intelligence comes from being spread across the whole body, rather than existing only in the brain, cut off from the physical world?
This is the core principle of embodied cognition: perception, action, and thought form a single, unified process. As Rolf Pfeifer, Director of the University of Zurich’s Artificial Intelligence Laboratory, explained to EMBO Reports: “Brains have always evolved within bodies that must engage with the world to survive. They don’t emerge in some abstract, algorithmic void.”
Embodied Minds: An Alternative Form of Thought
We may need more adaptable bodies to match advanced AI — and Cecilia Laschi, a leading figure in soft robotics, argues that adaptability comes from softness. After years of working on rigid humanoid robots in Japan, she turned her focus to soft-bodied designs, drawing inspiration from the octopus, a creature without a skeleton whose limbs operate semi-independently.
“With a humanoid robot, every movement has to be precisely controlled,” she told New Atlas. “If the ground changes, you need to adjust the programming.”
By contrast, animals don’t consciously calculate every step. “Our knees naturally yield,” Laschi notes. “We handle uneven surfaces through our bodies’ mechanics, not our brains.” This illustrates embodied intelligence — the notion that parts of cognition can be delegated to the body itself.
From an engineering standpoint, embodied intelligence offers clear benefits: by shifting perception, control, and decision-making into a robot’s physical design, the central processor has less work to do — enabling robots to operate more reliably in unpredictable conditions.
In a May issue of Science Robotics, Laschi explains that motor control isn’t handled solely by a robot’s computing unit—external forces acting on the body also shape its movements. In other words, behavior emerges from interaction with the environment, and intelligence develops through experience rather than being fully pre-coded into software.
From this perspective, progress in intelligence isn’t about faster processors or larger models, but about engagement with the world. Soft robotics plays a central role here, using materials like silicone and advanced fabrics to create flexible, adaptive machines. Such robots can adjust in real time—like a soft robotic arm modeled on an octopus tentacle, which can grasp, explore, and react without calculating every step in advance.
Living Matter and Loops: Teaching Materials To Think
To create soft robots as capable as an octopus tentacle, engineers must move beyond coding for every scenario and instead develop novel methods for sensing and response. Achieving lifelike independence in machines is driving research toward a new idea: autonomous physical intelligence (API).
At UCLA, Associate Professor Ximin He has advanced this field by developing soft materials—such as adaptive gels and polymers—that not only respond to external stimuli but also control their own movement through inherent feedback mechanisms.
He explains to New Atlas that their research focuses on building decision-making into the materials themselves. These materials don’t just shift shape when stimulated — they can also ‘decide’ how to adapt or fine-tune that response based on their own deformation, effectively adjusting their next movement.
Back in 2018, He’s team showcased this with a gel capable of self-regulating its motion. Since then, they’ve demonstrated that the same concept extends to other soft materials, such as liquid crystal elastomers that perform well in air.
Building Intelligence into the Material Itself
The foundation of API lies in nonlinear, time-delayed feedback. Unlike conventional robots, where sensors feed data to a controller that then issues commands, He’s method weaves this decision-making process directly into the material itself.
“In robotics, you need sensing, actuation, and a way to choose between them,” He says. “We’re building that choice physically through feedback loops.”
He likens the idea to biology: negative feedback stabilizes systems, as in glucose regulation or a thermostat, while positive feedback reinforces change. Nonlinear feedback blends the two, enabling stable yet dynamic patterns of motion – such as pendulum swings or walking cycles.
“Much of natural movement – walking, swimming, and so on – depends on rhythmic, repeating patterns,” He explains. “By using nonlinear, delayed feedback, we can engineer soft robots that step forward, step back, and continue moving – all without constant outside control.”
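A toy simulation can make the idea concrete. The sketch below is a rough illustration rather than a model of He's materials: a state is corrected only against where it was some delay ago, so the correction always arrives late, the system overshoots, and a self-sustained rhythm emerges. The gain and delay values are arbitrary choices.

```python
import numpy as np

# Toy illustration of nonlinear, time-delayed feedback (not the actual
# materials model). Gain and delay are arbitrary.
dt, gain = 0.01, 2.0
delay_steps, steps = 150, 20000          # 1.5 s delay, 200 s of simulation

x = np.zeros(steps)
x[:delay_steps] = 0.1                    # small displacement held during the initial delay window

for t in range(delay_steps, steps - 1):
    feedback = np.tanh(x[t - delay_steps])       # nonlinear "sensing" of the delayed state
    x[t + 1] = x[t] + dt * (-gain * feedback)    # "actuation" opposes the delayed state

# x now oscillates with a period set by the delay: rhythm without an external controller
```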
This marks a significant leap from earlier soft robots that depended entirely on external triggers. As He and colleagues noted in a recent review, embedding sensing, control, and actuation within the material itself pushes robotics toward systems that don’t just respond passively, but can choose, adjust, and act independently.
Softness Is The New Smart
Soft robotics is still emerging, but its potential is immense. Laschi highlights early applications such as surgical instruments—like endoscopes—that can both explore and respond to delicate human tissue, or rehabilitation devices that adjust and move in harmony with a patient’s needs.
To progress from AI to AGI, machines might require bodies—flexible and adaptive ones in particular. After all, most living beings, humans included, learn through movement, contact, trial, and correction. We navigate an unpredictable, messy world with ease, whereas today’s AIs still falter. Our understanding of an apple doesn’t come from reading its definition, but from holding, tasting, dropping, bruising, slicing, squeezing, and watching it decay.
This kind of knowledge—embodied, sensory, and contextual—is difficult to instill in a system trained only on text or images. By interacting directly with the physical world, AI can overcome the limits of language that constrain today’s LLMs and begin to form its own model of reality. That model wouldn’t mirror a human perspective, but could be something altogether different. A soft robot, equipped with unique sensory abilities—like infrared sight, deep-frequency hearing, or detecting diseases through smell—might cultivate a novel (and potentially very valuable) way of perceiving life on Earth.
As Giulio Sandini, Professor of Bioengineering at the University of Genoa, explains: “To create human-like intelligence in a machine, it must gather its own experiences. Like children, it has to learn through doing—and that almost certainly means having a body.”
To handle diverse real-world tasks, robots must securely grasp objects of various shapes, textures, and sizes without unintentionally dropping them. Traditional methods improve this by increasing the robotic hand’s grip strength to avoid slippage.
Researchers Develop Bio-Inspired Motion Control to Prevent Slippage in Robotic Hands
Researchers from several universities and labs have proposed new methods to stop objects from slipping from robotic hands. Their technique adjusts the movement paths the hand follows during manipulation, rather than relying solely on grip force. The system, combining a robotic controller with bio-inspired trajectory modulation, was detailed in Nature Machine Intelligence.
“The idea for this work was inspired by a familiar human experience,” said Amir Ghalamzan, senior author of the study, in an interview with Tech Xplore.
Teaching Robots to Adjust Movements Like Humans to Protect Fragile Objects
“When sensing a delicate object might slip, people adjust movements—slowing, tilting, or shifting—rather than just tightening their grip. In contrast, robots have traditionally relied on increasing grip strength, which can be ineffective and may even harm fragile items. Our goal was to explore ways to make robots respond more like humans in such situations,” explained Ghalamzan.
The study aimed to create a controller that predicts slip and adjusts movements, using bio-inspired trajectory modulation with grip-force control for more dexterous manipulation.
Image Credits: Figure illustrating the predictive control architecture in humans.
“Our method replicates the way humans rely on internal models to interact with their surroundings,” Ghalamzan said. Like the brain anticipating actions, the robot’s data-driven ‘world model’ predicts tactile feedback to detect and prevent slips in advance.
The controller lets robots adjust speed, direction, and hand position in real time instead of just increasing grip strength. By securing objects through movement adjustments, this method can lower the risk of damaging delicate items. It also works when grip force can’t be changed, enabling more fluid, intelligent interactions.
Novel Motion-Based Slip Controller Enhances Grip-Force Control
“Our research delivers two major innovations,” Ghalamzan explained. “First, we present a unique motion-based slip controller that complements grip-force control, useful when increasing grip isn’t possible.”
“Second, we developed a predictive controller driven by a learned tactile forward model, or ‘world model,’ that enables robots to anticipate slip based on their intended actions.”
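The sketch below illustrates the general shape of such a controller, not the study's architecture: a stand-in "forward model" scores candidate motions by the slip it predicts, and the controller picks the motion that tracks the reference while keeping predicted slip low. The function names, cost terms, and weights are assumptions for illustration.

```python
import numpy as np

# Illustrative sketch only; not the study's architecture.

def forward_model(tactile_state, action):
    """Placeholder for a learned model returning a predicted slip probability."""
    return float(np.clip(0.1 + 0.5 * np.linalg.norm(action), 0.0, 1.0))

def choose_action(tactile_state, reference_step, candidates, slip_weight=2.0):
    costs = []
    for a in candidates:
        tracking_cost = np.linalg.norm(a - reference_step)   # stay close to the planned motion
        slip_cost = forward_model(tactile_state, a)          # predicted slip for this motion
        costs.append(tracking_cost + slip_weight * slip_cost)
    return candidates[int(np.argmin(costs))]

# Example: the controller may slow down or reroute instead of squeezing harder.
candidates = [np.array([0.02, 0.0, 0.0]),    # follow the plan at full speed
              np.array([0.01, 0.0, 0.0]),    # same direction, half speed
              np.array([0.01, 0.005, 0.0])]  # slower, slightly redirected
best = choose_action(np.zeros(4), np.array([0.02, 0.0, 0.0]), candidates)
```

In the actual system the forward model is learned from tactile data rather than hand-written, which is what allows the robot to anticipate slip before it occurs.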
The team applied the new controller to plan a robotic gripper’s movements and tested it in dynamic, unstructured settings. In several cases, it notably enhanced grasp stability, surpassing conventional controllers that rely solely on adjusting grip force.
Ghalamzan noted that researchers have traditionally found embedding such a model within a predictive control loop too computationally intensive. “Our findings demonstrate that it is not only possible but also highly effective.”
World Model Could Broaden Robots’ Real-World Capabilities
This work could advance robotics by enabling safe physical and social interactions via a world model. Such capabilities could allow robots to handle diverse objects in real-world environments, from homes and manufacturing floors to healthcare facilities.
“We are working to make our predictive controller faster and more efficient for use in more demanding real-time scenarios,” Ghalamzan added. “This involves exploring new architectures and algorithms to minimize computational load.”
Future research will extend the system to handle more complex manipulation tasks, such as working with deformable items or objects requiring two-handed coordination. The team also plans to integrate computer vision, enabling trajectory planning that combines tactile and visual feedback.
“Another key goal is to improve the transparency and verifiability of these learned models,” Ghalamzan said. “As robots become more intelligent and autonomous, it’s essential that humans can understand and trust their decision-making. Our goal is to develop predictive controllers that are powerful, safe, and explainable for real-world use.”
A study published on July 7 in PNAS introduced a groundbreaking invention: an “artificial tongue” made from ultrathin graphene oxide membranes that can detect and process tastes directly in liquids, similar to human taste buds.
This remarkable technological breakthrough merges sensory detection with learning abilities, marking a first for electronic devices.
Graphene Oxide Layers Enable Precise Taste Detection Through Molecular Filtering
Made from graphene oxide layers, the device acts as a molecular filter, letting flavor ions pass through tiny channels to create distinct electrical signals. These signals enabled the device to identify tastes more accurately as it gained experience.
The key is slowing ion movement by up to 500 times, letting the device retain flavor information for about 140 seconds—enough to mimic short-term human memory.
The artificial tongue reached an accuracy of 72.5% to 87.5% in identifying basic tastes like sweet, bitter, salty, and sour. For more complex drinks such as coffee and soda, accuracy improved to 96%, thanks to their stronger electrical signatures.
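One way to picture the classification step is to treat each exposure as a time series of ion-current readings and fit an ordinary classifier to those traces. The sketch below uses synthetic data and scikit-learn purely for illustration; it is not the device's actual readout pipeline, and the trace length and class count are arbitrary.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic illustration only, not the device's readout pipeline.
rng = np.random.default_rng(0)
n_samples, trace_len, n_tastes = 200, 140, 4          # 4 stand-ins for sweet/bitter/salty/sour

labels = rng.integers(0, n_tastes, n_samples)
templates = rng.normal(size=(n_tastes, trace_len))    # one characteristic trace per taste
traces = templates[labels] + 0.5 * rng.normal(size=(n_samples, trace_len))

clf = LogisticRegression(max_iter=1000).fit(traces[:150], labels[:150])
print("held-out accuracy:", clf.score(traces[150:], labels[150:]))
```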
Earlier “electronic tongue” systems had to function outside the liquid, using separate sensors and processing units. This device integrates sensing and processing in the liquid, enabling more compact and natural intelligent systems.
This invention can assist in monitoring diseases and medication effects through taste analysis, as well as in quality control of liquids and detection of contamination.
Currently, the prototype is still bulky and consumes a lot of energy. The researchers acknowledge the need to shrink the device, improve its energy efficiency, and incorporate smaller sensors. This artificial tongue ushers in a new generation of intelligent sensors that work organically, autonomously, and seamlessly in liquid environments.
Humanoid robots are mostly tested in assistive manual tasks, while their potential for creative, expressive roles like music or performance remains largely unexplored.
Researchers from SUPSI, IDSIA, and Politecnico di Milano developed Robot Drummer, a reinforcement learning–powered humanoid that plays drums with precision, expression, and human-like movements.
From Coffee Conversation to Creative Robotics Challenge
Lead author Asad Ali Shahid said Robot Drummer originated from a coffee chat about how humanoid robots rarely engage in creative or expressive tasks. That sparked a question: what if a humanoid robot could take on a creative role, like making music? Drumming felt like the perfect challenge—it’s rhythmic, physical, and demands quick coordination of multiple limbs.
Shahid’s team developed Robot Drummer, a machine learning system enabling a humanoid to play full songs with human-like rhythm, tested successfully on Unitree’s G1 robot.
“The core concept is to model each song as a sequence of precisely timed contact events—what we call a rhythmic contact chain,” Shahid explained.
These contact points specify which drums to hit and at what moments. Using this guidance, the robot practices in a simulated environment, refining its technique over time. It develops human-like drumming skills, such as switching sticks, crossing arms, and optimizing movements to the rhythm.
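The contact-chain idea can be written down very simply: a song becomes an ordered list of (time, drum) events, and training rewards strikes that land close to their target times. The sketch below is an illustrative rendering of that description, with made-up event names and tolerances rather than the paper's actual reward.

```python
from dataclasses import dataclass

# Illustrative rendering of a "rhythmic contact chain"; values are made up.

@dataclass
class ContactEvent:
    time_s: float   # when the strike should land
    drum: str       # which drum or cymbal to hit

song = [ContactEvent(0.00, "kick"), ContactEvent(0.50, "snare"),
        ContactEvent(1.00, "kick"), ContactEvent(1.50, "hi-hat")]

def timing_reward(event: ContactEvent, actual_time_s: float, tolerance_s: float = 0.05) -> float:
    """1.0 for a perfectly timed hit, decaying to 0 as the strike drifts off the beat."""
    error = abs(actual_time_s - event.time_s)
    return max(0.0, 1.0 - error / tolerance_s)
```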
High-Precision Drumming Across Multiple Music Genres
The team tested the system on a simulated Unitree G1, performing songs from jazz to rock, including In the End, Take Five, and Livin’ on a Prayer. Results showed the robot could master complex rhythms and play with over 90% rhythmic precision in many cases.
Image Credits: The humanoid robot prepares to strike a snare drum (green). Credit: Asad Ali Shahid
Shahid noted the robot learned to anticipate strikes, perform cross-arm hits, and switch sticks mid-performance. These emerged purely from optimizing for rhythmic rewards during training. Robot Drummer could one day perform with live bands and teach precise timing beyond music.
Potential to Spark Innovation in Robotic Performance Arts
The study may inspire new ML systems for humanoid robots to play instruments or join performance arts. Such technology could transform the entertainment industry and showcase robotics progress at real-world events.
“Our next goal is to transition Robot Drummer from simulation to physical hardware,” Shahid added. “We aim to teach it to improvise and adapt in real time to musical cues, responding like a human drummer.”
Replicating the touch and sensitivity of human skin—known as robotic touch—might not require advances in flexible electronics or the integration of thousands of miniature sensors.
Researchers have developed a new type of robotic skin that is low-cost, durable, and highly sensitive. This innovative skin delivers exceptional precision and fits onto robotic hands like a glove.
Moldable Conductive Polymer Offers Versatile Foundation for Robotic Skin
David Hardman and his team at the University of Cambridge and University College London created a conductive polymer they can melt and mold into complex shapes.
Although it doesn’t match the sensitivity of human skin, the material can process signals from over 860,000 microscopic channels, enabling it to detect various types of touch and pressure—such as a finger’s contact, temperature differences, cuts or punctures, and multiple simultaneous touches.
Remarkably, all of this is achieved using a single material, greatly simplifying the design. By reading physical inputs, this tech helps robots interact more like humans.
Image Credits: University of Cambridge
Most current robotic touch technologies rely on small, localized sensors and require separate components to detect different kinds of touch. In contrast, the newly developed electronic skin functions as a single, unified sensor—closer in function to human skin.
One Material, Many Sensations
“Using different sensors for each type of touch makes the manufacturing process more complex,” explained David Hardman. “Our goal was to create a single material that could detect multiple types of touch at once.”
The researchers achieved this using a sensor material capable of multimodal sensing—responding differently to various forms of touch. Though pinpointing each signal is tricky, the materials are easier to make and more durable overall.
To interpret the signals, the team experimented with different electrode layouts to identify which configuration yielded the most detailed data. With only 32 wrist electrodes, they collected over 1.7 million data points from the hand via the material’s fine conductive network.
From Gentle Contact to Physical Damage
They tested the prototype with a variety of stimuli, including light touch, multiple simultaneous touches, heat exposure from a heat gun, and physical damage from a scalpel. Data collected from these tests was then used to train a machine learning model that can accurately interpret future touch inputs.
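A rough picture of that last step: readings from the 32 electrodes become a feature vector, and a standard classifier learns to label the stimulus type. The sketch below uses synthetic data and scikit-learn as stand-ins; the real pipeline, features, and model are not described in enough detail here to reproduce.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the real pipeline; data, features, and model are
# assumptions for illustration only.
rng = np.random.default_rng(1)
n_per_class, n_electrodes, n_classes = 1000, 32, 4    # e.g. touch, multi-touch, heat, damage

class_means = rng.normal(scale=2.0, size=(n_classes, n_electrodes))  # fake per-stimulus patterns
X = np.vstack([class_means[c] + rng.normal(size=(n_per_class, n_electrodes))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

idx = rng.permutation(len(y))                          # shuffle before splitting
X, y = X[idx], y[idx]
model = RandomForestClassifier(n_estimators=100).fit(X[:3200], y[:3200])
print("held-out accuracy:", model.score(X[3200:], y[3200:]))
```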
Robotic skin hasn’t yet matched human capabilities, said Professor Thomas Thuruthel, but this is the most advanced and easiest to produce so far—and it works well across real-world tasks.
This week, humanoid robot and acclaimed artist Ai-Da revealed a new portrait of King Charles, explaining the inspiration behind the intricate work—and assuring it has no intention of “replacing” humans.
Engineers designed Ai-Da, one of the world’s most advanced humanoid robots, to resemble a human woman, giving her a lifelike face, expressive hazel eyes, and a brown bob haircut.
Interchangeable Robotic Arms for Artistic Versatility
Her arms, however, remain visibly robotic with exposed metal components and are interchangeable based on the type of art she’s creating.
In late 2024, Ai-Da made history when her portrait of British mathematician Alan Turing became the first artwork by a humanoid robot to sell at auction, earning over $1 million.
As Ai-Da revealed her latest piece—an AI-generated oil painting titled Algorithm King—the humanoid robot emphasized that its worth goes beyond monetary value.
Exploring Ethics Through Creative AI Expression
“I create my artwork to spark conversations about the ethical implications of emerging technologies,” she told AFP at the UK’s diplomatic mission in Geneva, where the portrait of King Charles will be displayed.
Speaking in a measured tone, Ai-Da explained that the goal is to “promote critical thinking and support responsible innovation aimed at creating fairer and more sustainable futures.”
While attending the United Nations’ AI for Good summit, Ai-Da—known for her sketches, paintings, and sculptures—shared insights into the techniques and inspiration behind her latest piece.
“I rely on a range of AI algorithms to produce my art,” the robot explained.
Image Credits: Techxplore
“I begin with a central idea or theme I want to explore, considering the message behind the artwork—what it aims to communicate,” the robot said.
Referring to the subject of the portrait, Ai-Da noted, “King Charles has used his influence to promote environmental conservation and interfaith dialogue. I created this portrait to honor those efforts,” adding, “I hope King Charles will appreciate my work.”
Aidan Meller, an expert in modern and contemporary art, led the team that developed Ai-Da in 2019, collaborating with AI experts from Oxford and Birmingham universities.
He explained to AFP that Ai-Da, named after pioneering computer programmer Ada Lovelace, was created as an ethical art project—not as a replacement for human painters.
Transforming Art and Human Expression
Ai-Da acknowledged that “AI is undeniably transforming our world—including the realms of art and human creativity.”
However, the robot emphasized, “I don’t believe that AI or my creations will replace human artists.”
Instead, Ai-Da explained that the goal is “to encourage people to reflect on how AI can be used for good, while staying aware of its potential risks and limitations.”
When asked whether a machine-made painting qualifies as art, the robot maintained, “My work is both original and creative.”
“Ultimately, whether humans consider it art is a meaningful and thought-provoking discussion,” she added.
Amazon has long embraced robotics in its warehouses, using a variety of machines from squat bots to tall crane-like models. It’s like a capitalist, e-commerce droid world—just with worse names. While robots handle most warehouse tasks, package delivery remains largely untouched by automation—for now.
Amazon to Test Humanoid Robots for Last-Mile Deliveries in San Francisco
According to The Information, Amazon plans to test humanoid robots for final-stage package delivery in a small San Francisco indoor park. These bots are being trained to “spring out” of Rivian delivery vans and drop packages at customers’ doorsteps. Apologies in advance to your dog, whose dislike for delivery workers may soon extend to robots.
In 2025, Amazon is pairing its push for humanoid delivery robots with AI software to guide them to doorsteps. It may seem ambitious, especially with Alexa+ still not fully launched, but the company’s automation drive shows no signs of slowing. And, of course, with Amazon, that’s ultimately what this is all about.
Amazon’s Robotic Ambitions Highlight Ongoing Tensions with Human Labor
Amazon’s rocky relationship with its workforce makes its push for humanoid delivery robots look like an attempt to bypass labor issues. Robots won’t unionize or need breaks, making them an ideal replacement—from the company’s perspective. While robotic delivery sounds futuristic, it’s unclear who actually benefits besides Amazon. And honestly, I’m not thrilled about a robot that can’t tell the difference between a safe foyer and a doorstep.
Of course, it remains to be seen whether Amazon can actually make robot delivery a reality. Humanoid robots still face major technical challenges—like reliably walking on two legs or lifting anything heavy. There’s no set timeline for when robots like Digit, made by Amazon partner Agility Robotics, will be ready for real-world use. So far, Digit has only been tested in controlled factory settings—not on chaotic city streets like New York’s. But that won’t stop Amazon from trying. Good luck to Digit—or whichever robot ends up landing this unpaid, permanent internship with Amazon. Based on how past human workers have been treated, they’re going to need it.
A new study from the University of Surrey and the University of Hamburg shows that training social robots for effective interaction no longer has to rely solely on human participants.
Presented at the IEEE International Conference on Robotics and Automation (ICRA), the study unveils a new simulation approach that allows researchers to test social robots without human participants, enabling quicker and more scalable research.
The team used a humanoid robot to develop a scanpath prediction model that anticipates where a person might look in social situations. Tested on two publicly available datasets, the model showed that humanoid robots could replicate human-like eye movement patterns.
New Model Offers Human-like Focus Without Real-Time Supervision
Dr. Di Fu, co-lead of the study and a lecturer in cognitive neuroscience at the University of Surrey, explained that their method allows researchers to assess whether a robot focuses on the right elements, similar to a human, without live human oversight.
However, she highlighted that the model maintains its accuracy even in noisy and unpredictable settings, making it a valuable tool for practical uses in areas such as education, healthcare, and customer service.
Social robots are built to engage with humans through speech, gestures, and facial expressions, making them valuable in fields like education, healthcare, and customer support. Notable examples include Pepper, a retail assistant robot, and Paro, a therapeutic robot used with dementia patients.
The researchers aligned their model’s real-world performance with a simulated environment by projecting human gaze priority maps onto a screen, comparing the robot’s predicted focus of attention with actual human data.
This approach allowed them to assess social attention models in realistic conditions, reducing the need for extensive human-robot interaction studies early on.
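One simple way such a comparison could be scored, shown below as an illustrative sketch rather than the study's actual metric, is to correlate the model's predicted gaze-priority map with a map built from recorded human fixations on the same scene.

```python
import numpy as np

# Illustrative sketch, not the study's metric.
def gaze_map_correlation(predicted: np.ndarray, human: np.ndarray) -> float:
    """Pearson correlation between two same-sized gaze heatmaps."""
    p = (predicted - predicted.mean()) / (predicted.std() + 1e-8)
    h = (human - human.mean()) / (human.std() + 1e-8)
    return float((p * h).mean())

# Random stand-in maps; real ones would come from the model and eye-tracking data.
rng = np.random.default_rng(2)
print(gaze_map_correlation(rng.random((60, 80)), rng.random((60, 80))))
```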
Dr. Fu remarked, “Replacing early human trials with robotic simulations marks a significant advancement in social robotics. It lets us test and enhance social interaction models, improving robots’ ability to understand and respond to humans. Next, we’ll apply this method to robot embodiment and assess its performance in complex social settings with various robot types.”