AI-Enabled Robotic Hands Become More Dexterous by Imitating Human Hands

Dexterous, tendon-driven robotic hand performing a task learned through imitation. Image Credits: Soft Robotics Lab / ETH Zurich

Entering ETH Zurich’s Soft Robotics Lab feels like stepping into a blend of a children’s playroom, a cutting-edge workshop, and a curiosity cabinet. Workbenches are scattered with foam blocks, plush toys (including a soft squid), and other colorful objects that researchers use to teach robots dexterous skills. Sensors, cables, and measuring instruments cover nearly every surface, while skeletal fingers, displayed in cases or mounted on sturdy robotic arms, seem to reach toward you from every direction.

The lab brings together 19 specialists in robotics engineering, computer science, chemistry, and biology, led by Robert Katzschmann, Professor of Robotics in ETH Zurich’s Department of Mechanical and Process Engineering.

In developing the next generation of robots, Katzschmann draws inspiration from animals and the human body. His latest robotic hands abandon joint-mounted motors in favor of artificial tendons that run through rolling joints. The aim is to create robots that are flexible, gentle, and highly agile. By replacing traditional metals, screws, and motors with hybrid structures that combine soft and rigid materials, the team is building machines capable of handling diverse tasks and smoothly adapting to new environments.

Flexible Robotic Hands

Katzschmann makes extensive use of artificial intelligence—though he favors the more accurate term machine learning, noting that true intelligence remains far off. “Previously, robotics challenges were tackled by simplifying systems and relying on physical models and control theory,” he explains. “Now, machine learning is our main tool.”

This data-centric strategy now underpins almost all areas of robotics, from generative design using 3D simulations to learning skills from video and algorithmic motion control. Katzschmann adds that roughly half of his research team is focused on developing and applying machine-learning techniques.

Conventional approaches such as control engineering work well in predictable, repetitive settings like factory assembly lines, but they struggle in messy, unstructured environments. As Katzschmann notes, even a seemingly simple task—sorting different glass bottles into a crate—poses a major challenge for robots because each bottle varies in shape and size.

To address this problem, his team created a robotic hand with 21 degrees of freedom. Using a mix of reinforcement learning and imitation learning, they integrated this highly dexterous hand into a broader system that, together with a robotic arm, achieves a total of 28 degrees of freedom.

Learning to Grasp with Adaptive Hands

Training the system involves researchers wearing a sensor-equipped glove and camera while demonstrating how to grasp bottles. Their actions are captured by external cameras, producing a detailed dataset that may also include virtual reality imagery. This data is then used to train a transformer model—an architecture akin to those behind today’s large language models. After training, the robotic hand is able to grasp previously unseen objects and place them correctly.
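
To make that pipeline concrete, here is a minimal behavior-cloning sketch in PyTorch: a small transformer that regresses recorded expert actions from sequences of observations. All dimensions, layer sizes, and names are illustrative assumptions; the article only states that glove-and-camera demonstrations are used to train a transformer model, not how that model is built.

```python
import torch
import torch.nn as nn

OBS_DIM = 64    # e.g. glove joint angles plus camera features per time step
ACT_DIM = 28    # matches the article's 28 degrees of freedom (hand plus arm)
SEQ_LEN = 32    # length of the demonstration window fed to the model

class TransformerPolicy(nn.Module):
    def __init__(self, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Linear(OBS_DIM, d_model)
        self.pos = nn.Parameter(torch.zeros(SEQ_LEN, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, ACT_DIM)

    def forward(self, obs_seq):              # (batch, SEQ_LEN, OBS_DIM)
        x = self.embed(obs_seq) + self.pos   # add learned position encodings
        return self.head(self.encoder(x))    # predicted action per time step

# Behavior cloning: regress the expert's recorded actions
# (random tensors stand in for real demonstration data here).
policy = TransformerPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
obs = torch.randn(8, SEQ_LEN, OBS_DIM)         # demonstration observations
expert_act = torch.randn(8, SEQ_LEN, ACT_DIM)  # demonstrator's actions
loss = nn.functional.mse_loss(policy(obs), expert_act)
opt.zero_grad()
loss.backward()
opt.step()
```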

“With traditional techniques, we would have needed to build a 3D point-cloud model of the environment and program every individual finger movement to grasp a bottle,” Katzschmann explains. “Even a slight shift in the bottle or crate would leave the robotic hand unsure how to act.” That’s no longer the case. “Now, the movements for picking up a bottle are fully learned, making the hand highly adaptable,” he adds.

In 2024, this work led to the creation of Mimic Robotics, an ETH spin-off founded by Katzschmann and four of his former doctoral and master’s students. The startup seeks to transform manufacturing and logistics using AI-driven robotic hands.

Cloud-Based Learning

Stelian Coros, a computer scientist, develops algorithms for robotics, visual computing, and computer-aided manufacturing, focusing primarily on the software that serves as a robot’s “brain.” Over the past decade, advances in deep learning—a type of machine learning that employs artificial neural networks—have driven his research. “We’ve reached a point where data and computing power are sufficient to apply deep learning to specific robotic tasks, like automatic object recognition in images,” he explains.

Neural networks also underpin reinforcement learning, a method where robots learn through trial and error. Researchers reward robots for achieving desired outcomes—such as moving forward without falling—and the robots gradually refine their actions to maximize their rewards.
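
That reward idea fits in a few lines of code. The sketch below is a hypothetical, drastically simplified episode loop: the "simulator" is a stub, the policy is random, and the numbers are made up, but it shows the trial-and-error pattern of rewarding forward progress and penalizing a fall.

```python
import random

def step(state, action):
    """Hypothetical one-step dynamics: returns (next_state, fell)."""
    nxt = {"x": state["x"] + action}     # the action nudges the robot forward
    fell = random.random() < 0.05        # small chance of falling over
    return nxt, fell

def reward(prev_state, state, fell):
    r = state["x"] - prev_state["x"]     # reward the distance gained this step
    if fell:
        r -= 10.0                        # large penalty for falling
    return r

state, total = {"x": 0.0}, 0.0
for _ in range(100):                     # one trial-and-error episode
    action = random.uniform(0.0, 0.1)    # a learned policy would choose this
    nxt, fell = step(state, action)
    total += reward(state, nxt, fell)
    if fell:
        break                            # episode ends when the robot falls
    state = nxt
print("episode return:", round(total, 3))
```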

“It’s a form of learning by doing, similar to how people learn to play tennis,” says Coros. “Robots can’t just watch videos of tasks being done—they need to practice them themselves.”

To achieve this, his team produces large volumes of training data via teleoperation, allowing robots to mimic human operators’ movements. They also employ motion-capture technology from the animation industry to track human actions. When processed with the right algorithms, this data allows robots to execute context-aware, human-like motions—a key factor, Coros asserts, for smooth human-robot interaction.

Simultaneous Training

At the Robotics Systems Lab (RSL), led by Professor Marco Hutter, researchers also employ reinforcement learning, but on a massive scale within virtual environments. “We use simulations to train thousands of robots simultaneously,” explains Cesar Cadena, a senior scientist at the lab. “In just one hour, we can generate as much data as we used to in an entire year.”
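
The speedup comes from vectorization: one batched update advances every simulated robot at once. The toy sketch below uses NumPy as a stand-in for a GPU physics engine, and all quantities are illustrative rather than drawn from RSL’s actual setup.

```python
import numpy as np

N_ROBOTS = 4096                          # thousands of robots in parallel
positions = np.zeros(N_ROBOTS)           # one scalar state per robot
actions = np.random.uniform(0.0, 0.1, N_ROBOTS)

for _ in range(1000):                    # each step advances ALL robots at once
    positions += actions                 # batched "physics" update
    rewards = actions.copy()             # reward forward progress
    # a learning algorithm would update the policy from these rewards here

print(f"{N_ROBOTS} robots x 1000 steps = {N_ROBOTS * 1000:,} transitions")
```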

These large-scale simulations are enabled by major advances in microchips and graphics processors. Parallel processing, which lets thousands of computations run at once, is crucial for AI workloads. For this reason, RSL collaborates closely with NVIDIA, a leading developer of graphics processors and chipsets, on research tied directly to the company’s hardware.

This simulation-based reinforcement learning runs in the cloud and demands enormous computing power. However, relying on cloud connectivity can limit a robot’s autonomy in continual-learning scenarios. A factory robot can stay linked to the cloud to enhance its performance on complex tasks, but a rescue robot operating in a remote disaster zone faces a challenge: without network access, how can it make quick, critical decisions?

To address this issue, researchers equip the robot with onboard computing power and preloaded data from the cloud. “We give up some processing capacity,” Cadena explains, “but for well-defined tasks, it’s typically sufficient.”
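
In code, that deployment pattern is straightforward: train in the cloud, serialize the policy, and preload it onto the robot so inference needs no network connection. The PyTorch sketch below is a generic illustration; the network architecture, file name, and dimensions are assumptions, not details from the lab.

```python
import torch
import torch.nn as nn

def make_policy():
    # Hypothetical policy network: 64 sensor inputs -> 28 joint commands.
    return nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 28))

# In the cloud: after training, serialize the learned weights.
trained = make_policy()
torch.save(trained.state_dict(), "policy.pt")

# On the robot: load once at startup, then act entirely offline.
onboard = make_policy()
onboard.load_state_dict(torch.load("policy.pt"))
onboard.eval()

with torch.no_grad():                 # pure inference, no learning onboard
    obs = torch.randn(1, 64)          # a sensor reading
    action = onboard(obs)             # quick, local decision, no network needed
```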

Objective: Multi-Functional Robots

Does the current surge in AI signal a robotics revolution or more of a gradual evolution? Coros argues it’s the latter. “The data AI relies on and the data robotics requires are fundamentally different,” he explains. Robots have physical bodies and must learn through hands-on interaction to generalize movements across diverse environments, whereas AI generalizes using vast streams of data—mostly text, but also images and video.

Some robotics teams still pursue a fully data-driven approach, training robots on terabytes of human motion data. “That just isn’t practical,” Coros insists. He cites research on robots learning to fold shirts, which required around 10,000 hours of demonstration data—and even then, the robots made errors. “If it takes that much data to master a single skill, the method is inherently unscalable.”

Instead, his team combines learned data with physical models to fill in gaps. For example, when a robot arm throws a ball, “we already know the physics of how a ball travels through the air.” By applying these physical principles, the robot can adjust its throw to hit the target accurately—without needing massive amounts of data, Coros notes.
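
The ball example is easy to make concrete. Rather than learning ballistics from thousands of throws, the robot can solve the standard projectile-range equation for the release speed that lands the ball at a measured distance. The sketch below works through that calculation; the function name and numbers are illustrative.

```python
import math

G = 9.81                                 # gravitational acceleration, m/s^2

def launch_speed(distance, angle_deg):
    """Release speed needed to land `distance` meters away at launch height.

    Solves the projectile-range equation R = v^2 * sin(2*theta) / g for v,
    ignoring air resistance.
    """
    theta = math.radians(angle_deg)
    return math.sqrt(G * distance / math.sin(2 * theta))

# The arm only needs to measure the target distance, then compute the throw.
for d in (1.0, 2.0, 4.0):
    v = launch_speed(d, angle_deg=45.0)  # 45 degrees maximizes range
    print(f"target at {d:.1f} m -> release speed {v:.2f} m/s")
```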

In 2023, Coros teamed up with former doctoral students to found the spin-off Flink Robotics. The company combines AI-driven image processing with physical modeling to enhance standard industrial robot arms, allowing them to handle packaging, unloading, and sorting tasks with greater accuracy. Swiss Post, Flink Robotics’ first client, intends to implement this technology to automate its parcel operations.

Tendons Overtake Motors

Back in the Soft Robotics Lab, biologists are creating cell-based tissue for artificial tendons, while chemists bring artificial muscles to life using electrical impulses. Katzschmann believes that conventional motor-driven robots have hit their limits in terms of generalization, regardless of how advanced their AI is. “Those systems simply won’t be flexible enough to handle the variety of real-world situations,” he says.

For him, a robot’s body is just as crucial as its brain, which is why he focuses on musculoskeletal robots modeled on natural anatomy. “Muscles provide softness,” he explains, “and the skeleton gives the structural support needed for complex physical tasks.” Nature has already designed highly stable and versatile systems without relying on motors or metals. “That should be our blueprint,” he insists.


Read the original article on: Tech Xplore

Read more: China Unveiled an AI Humanoid Robot for Traffic Control
