Tag: Robots

  • Japan has Created Technology that lets your Body Control Humanoid Robots

    A Japanese tech startup has created a device that can transmit a person’s full-body movements and physical force.
    Image Credits: The system’s muscle-focused design enables robots to replicate not only a user’s actions but also the intensity behind them.

    H2L’s Capsule Interface enables immersive shared experiences between humans, robots, and avatars, expanding remote interaction possibilities.

    Resembling a massage chair, the system turns the user’s body into a control interface that can operate a humanoid robot. The Tokyo-based company demonstrated its capabilities in a short video.

    Real-Time Human Motion Replication in Humanoid Robots

    In May 2025, researchers from Stanford University and Simon Fraser University introduced TWIST, an AI system that allows humanoid robots to accurately replicate human movements in real time.

    In a video released by H2L, a woman remotely operates a humanoid robot from Unitree Robotics using the Capsule Interface system.

    The robot performs tasks such as cleaning, lifting a box, and interacting with another person, demonstrating the system’s ability to transmit precise body movements and physical force.

    Capturing Intent and Force Through Muscle Sensors

    The Capsule Interface uses advanced muscle-displacement sensors that detect even the slightest changes in muscle tension. The technology captures a user’s intent and force by tracking subtle muscle movements in real time.

    This differs from traditional teleoperation systems, which typically use motion sensors—such as IMUs, exoskeletons, or optical trackers—to replicate a user’s movements.

    H2L argues that motion data alone cannot capture the subtle details required for realistic physical and emotional interaction. While syncing visuals and positions can create a basic sense of control, it does not reproduce the forces applied or the effort felt during the action.

    The Capsule Interface maps real-time muscle activity to robots, enhancing force awareness, haptic realism, and embodiment.

    Replicating Movement and Effort for Enhanced Immersion

    This muscle-focused method enables robots to replicate not only a user’s movements but also the intensity behind them. For example, when lifting a heavy object, the robot reflects the level of effort the user exerts. This feedback enhances immersion and empathy by conveying both motion and force.
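
    H2L has not published the Capsule Interface’s control pipeline, so the sketch below is purely illustrative: it shows the core idea of turning a muscle-displacement reading into a scaled force command. All names here (`MuscleSample`, `map_effort`) and the numbers are assumptions, not H2L’s API.

```python
# Hypothetical sketch only: H2L's actual interface is not public.
from dataclasses import dataclass

@dataclass
class MuscleSample:
    displacement: float  # raw muscle-displacement reading
    baseline: float      # the user's resting-level reading for this sensor

def map_effort(sample: MuscleSample, max_force_n: float = 50.0) -> float:
    """Convert a muscle-displacement reading into a robot force command.

    The reading is normalized against the user's resting baseline,
    clamped to [0, 1], then scaled to the robot's force range.
    """
    effort = (sample.displacement - sample.baseline) / max(sample.baseline, 1e-6)
    effort = min(max(effort, 0.0), 1.0)
    return effort * max_force_n

# A light contraction barely above baseline yields a small force command,
# while a strong contraction saturates at the robot's maximum.
print(map_effort(MuscleSample(displacement=1.1, baseline=1.0)))  # ≈ 5.0
print(map_effort(MuscleSample(displacement=3.0, baseline=1.0)))  # 50.0
```

    The point of normalizing against a per-user baseline is exactly the article’s claim: the same gesture performed gently or forcefully produces different commands, so the robot reflects effort, not just motion.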

    H2L envisions teleoperation as more than simple imitation—turning it into a deeply shared, physical experience.

    The Capsule Interface marks a new step in remote interaction, allowing users to transmit full-body movements and physical force to robots or avatars while sitting or lying down.

    Equipped with speakers, a display, and muscle-displacement sensors, the device detects subtle muscle movements to relay a user’s intent and effort in real time.

    Seamless, Low-Effort Integration into Everyday Furniture

    H2L says the Capsule Interface provides a low-effort, natural experience that fits into beds or chairs, unlike complex traditional systems.

    According to the company, the technology has many potential uses. In business settings, people could attend meetings or complete tasks in distant locations by remotely operating humanoid robots from home or nearby offices.

    It could also allow delivery workers to lift and transport items remotely, reducing physical strain, and enable safe robot operation in dangerous environments such as disaster zones.

    The approach could support dual-income households, assist older adults, and help with everyday chores such as cooking and cleaning. Farmers could also remotely control agricultural robots and share expertise, helping reduce labor demands.

    The interface may also enable more immersive avatar communication in virtual environments, opening possibilities in healthcare, entertainment, and education. In the future, H2L plans to add proprioceptive feedback to enhance realism and expand shared experiences between humans and machines.


    Read the original article on: Interesting Engineering

    Read more: Chinese Robot sets new Milestone by Walking more than 100 km

  • Robots Use Radio Waves and AI To Spot Hidden Objects

    Engineers at Penn have created a system that enables robots to see around corners by analyzing radio waves with AI, a breakthrough that could boost the safety and efficiency of self-driving cars and robots working in crowded indoor spaces such as warehouses and factories.
    HoloRadar uses radio waves to see around corners, allowing it to detect people at T-shaped intersections like the one pictured here. Image Credits: Sylvia Zhang, Penn Engineering

    The technology, known as HoloRadar, allows robots to rebuild three-dimensional scenes beyond their direct line of sight, such as detecting pedestrians coming around a corner. Unlike earlier non-line-of-sight (NLOS) methods that depend on visible light, HoloRadar operates dependably in darkness and changing lighting conditions.

    “Robots and autonomous vehicles must perceive more than what’s immediately ahead of them,” says Mingmin Zhao, Assistant Professor in Computer and Information Science (CIS) and senior author of the study introducing HoloRadar at the Conference on Neural Information Processing Systems. “This ability is critical for enabling robots and self-driving vehicles to make safer, real-time decisions.”

    Using Walls as Reflective Surfaces

    A key breakthrough behind HoloRadar stems from a surprising property of radio waves. Unlike visible light, radio signals have much longer wavelengths—something typically considered a drawback for imaging because it reduces resolution. But Mingmin Zhao and his team recognized that these longer wavelengths are actually beneficial for seeing around corners.

    “Because radio waves are far larger than the tiny imperfections on wall surfaces,” explains Haowen Lai, a CIS doctoral student and co-author of the study, “those surfaces essentially act like mirrors, reflecting radio signals in consistent and predictable ways.”

    In practice, this means flat surfaces such as walls, floors, and ceilings can redirect radio waves around corners, sending information about hidden areas back to the robot. HoloRadar collects these reflected signals and reconstructs scenes beyond its direct line of sight.

    “It’s similar to how drivers use mirrors at blind intersections,” Lai adds. “With radio waves, the entire environment effectively becomes filled with mirrors—without any physical modifications.”

    HoloRadar works by reconstructing 3D scenarios from the bounces of radio waves. Image Credits: WAVES Lab, Penn Engineering

    Built for Real-World Environments

    In recent years, other teams have built systems with similar goals, often relying on visible light. These approaches interpret shadows or indirect reflections, making them heavily dependent on specific lighting conditions. Efforts to use radio signals, meanwhile, have typically required slow, bulky scanning hardware, limiting their practicality outside the lab.

    “HoloRadar is built for the real environments where robots actually function,” says Mingmin Zhao. “It’s mobile, operates in real time, and doesn’t rely on controlled lighting.”

    Rather than replacing existing sensors, HoloRadar enhances the safety of autonomous machines by working alongside them. Self-driving vehicles already use LiDAR—laser-based sensing that detects objects within direct view—but HoloRadar extends perception beyond that range, uncovering hidden hazards and giving robots and vehicles more time to respond.

    HoloRadar relies on compact and nimble scanning equipment, opening up real-world applications. Image Credits: Sylvia Zhang, Penn Engineering

    Analyzing Radio Signals with AI

    A single radio pulse can ricochet several times before reaching the sensor again, producing a complex mix of reflections that traditional signal-processing techniques struggle to separate.

    To address this, the researchers created a specialized AI system that blends machine learning with physics-based modeling. First, it sharpens the raw radio data and detects multiple “returns” from different reflection paths. Then, using a physics-informed model, it traces those signals backward—counteracting the mirror-like effects of surrounding surfaces to rebuild the true 3D scene.

    “In a way, it’s like stepping into a room lined with mirrors,” explains Zitong Lan, a doctoral student in Electrical and Systems Engineering (ESE) and co-author of the study. “You see many reflections of the same object in different spots, and the challenge is figuring out their real positions. Our system learns to reverse that process using physical principles.”

    By explicitly modeling how radio waves reflect off surfaces, the AI can tell apart direct and indirect signals and accurately pinpoint the real-world locations of objects, including people.
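
    The geometric core of this mirror reasoning can be shown in a few lines. A target seen via a single wall bounce appears at its mirror image behind the wall, so reflecting the apparent position across the wall plane recovers the true one. This is only the textbook single-bounce case, not the HoloRadar pipeline itself.

```python
import numpy as np

def reflect_across_wall(apparent: np.ndarray, wall_point: np.ndarray,
                        wall_normal: np.ndarray) -> np.ndarray:
    """Recover the true position of a point seen via one wall bounce.

    A specular reflection makes the target appear at its mirror image
    behind the wall; reflecting across the wall plane undoes that.
    """
    n = wall_normal / np.linalg.norm(wall_normal)
    d = np.dot(apparent - wall_point, n)   # signed distance to the plane
    return apparent - 2.0 * d * n

# Wall: the plane x = 5. A pedestrian actually at (3, 2, 0) appears,
# through the mirror effect, at (7, 2, 0) "behind" the wall.
true_pos = reflect_across_wall(np.array([7.0, 2.0, 0.0]),
                               np.array([5.0, 0.0, 0.0]),
                               np.array([1.0, 0.0, 0.0]))
print(true_pos)  # [3. 2. 0.]
```

    HoloRadar’s harder problem is that real returns mix several such bounces, which is why the team pairs this physics with a learned model to disentangle them.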

    Transitioning from Research to Real-World Use

    The team evaluated HoloRadar on a mobile robot navigating real indoor spaces, such as hallways and building corners. In these tests, the system effectively reconstructed walls, corridors, and even people hidden from the robot’s direct view.

    Looking ahead, the researchers plan to extend testing to outdoor environments, like intersections and city streets, where greater distances and rapidly changing conditions present new challenges.

    “This represents a key step toward giving robots a fuller awareness of their surroundings,” says Mingmin Zhao. “Our ultimate aim is to enable machines to operate safely and intelligently in the complex, dynamic environments humans encounter every day.”


    Read the original article on: Tech Xplore

    Read more: A Modular Robot with Open-Source Design for Investigating Evolution

  • Robots Train with Monks at Shaolin Temple

    An unexpected and unusual sight has emerged at the renowned Shaolin Temple in Henan Province, central China. A new kind of “student” has joined the temple, as humanoid robots now train in the legendary Shaolin Kung Fu alongside monks. This striking blend of ancient tradition and modern technology has quickly drawn global attention, CGTN reports.
    Image Credits: In the viral footage, humanoid robots can be seen moving in sync with Shaolin monks.

    Social Media Reacts to Robot–Monk Training Clips

    Videos showing robots practicing with the monks have spread rapidly across social media. In the widely shared footage, humanoid robots move in harmony with Shaolin monks.

    They copy martial arts stances, take part in training exercises, and carry out traditional kung fu routines within the temple grounds, carefully mirroring each movement and learning step by step like a student from a master.

    Robots Replicating Centuries-Old Shaolin Traditions Astonish Viewers

    These images have surprised many viewers, as they show machines performing practices rooted in centuries of human discipline and spiritual tradition.

    One viewer remarked, “Shaolin Kung Fu techniques are already very hard, yet these robots can imitate them. It’s truly impressive.”


    Read the original article on: NDTV

    Read more: She Moves Naturally, Expresses Emotion, Maintains Eye Contact, and Feels Warm—Yet She’s a Robot

  • Rapid, Accurate Radioactive Material Localization With Drones and Robots

    Chemical, biological, radiological, nuclear, and explosive substances (commonly referred to as CBRNE) can endanger both the public and emergency responders. In 2023, a tiny cesium capsule fell from a truck in Australia, sparking a major search. The growing number of hybrid attacks and other destabilizing activities has further intensified the threat landscape.
    In addition to a gamma detector, the highly automated UAS also has electro-optical and infrared cameras on board. Image Credits: Fraunhofer FKIE

    Two research departments at Fraunhofer FKIE are therefore focusing closely on how drones (unmanned aerial systems, UAS) and robots (unmanned ground vehicles, UGVs) can be used to offer the most effective support to people in such hazardous situations.

    Such systems have been tested for years at EnRicH and ELROB. Both events are organized in alternating years by researchers from the Cognitive Mobile Systems department. They test drones and robots in real-world conditions and guide their development.

    Highly Autonomous UAS for Radioactive Source Detection

    Under a contract with the Bundeswehr Research Institute for Protective Technologies and CBRN Protection (WIS), researchers in the Sensor Data and Information Fusion department are developing a UAS capable of rapidly and accurately detecting and locating radioactive sources. A technology demonstrator has already undergone field trials at the WIS facility in Munster, demonstrating the ability to pinpoint a radioactive source within a few meters in just minutes.

    “The cesium capsule in Australia took days to locate using handheld detectors. Our UAS could have located it much faster,” says Claudia Bender, who co-designed the demonstrator with Torsten Fiolka.

    The experimental CBRNE robot assists in the detection and recovery of radioactive hazardous materials. Image Credits: Fraunhofer FKIE/Fabian Vogl

    Detection Involves an Exploration Phase and a Targeted Search

    The researchers focus on advanced data processing, sensor fusion, and automation. The system largely automates the detection process and performs it in two stages: an exploration phase and a search phase. In the exploration phase, the UAS surveys the target area, continuously gathering environmental data. When it detects a deviation from background radiation, the system transitions into search mode.

    In search mode, the drone’s flight path adjusts dynamically based on both previously collected data and real-time sensor readings. The system uses stochastic methods to estimate the radioactive source’s probable locations.

    “After the pilot launches the drone, it first follows a predetermined flight path. Once sufficient data is gathered, the system switches to adaptive search mode to estimate the source’s location,” the researcher explains. “The drone then creates waypoints, continuing until it pinpoints the hazardous material and reports its exact position.”

    A first technology demonstrator has already been successfully tested. It can precisely detect a radioactive source to within a few meters in only a few minutes. Image Credits: Fraunhofer FKIE

    Advanced Sensing and Mapping Capabilities of the Drone

    A spatial heat map shows radiation levels across the scanned areas, while a probability map highlights the cell most likely to contain the radioactive material.
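
    A probability map of this kind can be sketched as a simple Bayesian occupancy grid. The inverse-square measurement model, the source strength, and the waypoints below are illustrative assumptions; the FKIE stochastic method is not described in that level of detail here.

```python
# Illustrative sketch of grid-based probabilistic source localization.
import numpy as np

GRID = 20                 # 20 x 20 grid of cells over the search area
SOURCE = (12, 7)          # ground-truth cell, used only to simulate readings

def expected_counts(source, pos, strength=500.0, background=1.0):
    """Expected gamma counts at `pos` for a source in `source` (inverse-square)."""
    d2 = (source[0] - pos[0]) ** 2 + (source[1] - pos[1]) ** 2 + 1.0
    return background + strength / d2

def bayes_update(prior, pos, measured, strength=500.0, background=1.0):
    """Reweight every cell by how well it explains the reading taken at `pos`."""
    ii, jj = np.meshgrid(np.arange(GRID), np.arange(GRID), indexing="ij")
    lam = background + strength / ((ii - pos[0]) ** 2 + (jj - pos[1]) ** 2 + 1.0)
    likelihood = np.exp(-0.5 * (measured - lam) ** 2 / lam)
    post = prior * likelihood
    return post / post.sum()

# Exploration phase: a fixed set of waypoints; each reading tightens the map.
prob_map = np.full((GRID, GRID), 1.0 / GRID**2)      # uniform prior
for waypoint in [(0, 0), (19, 0), (0, 19), (19, 19), (10, 10)]:
    reading = expected_counts(SOURCE, waypoint)       # noise-free simulation
    prob_map = bayes_update(prob_map, waypoint, reading)

# Search phase would then fly toward the highest-probability cell.
best_cell = np.unravel_index(np.argmax(prob_map), prob_map.shape)
```

    Even with this crude model, a handful of readings collapses the uniform prior onto the true cell, which mirrors the article’s two-stage exploration-then-search behavior.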

    The drone uses a gamma detector to measure radiation and additional sensors to support detection. It also includes electro-optical and infrared cameras, an Intel NUC, an IMU, and an LTE module for ground monitoring. The cameras capture live video from the drone and identify objects like people, buildings, and vehicles, displaying them on a georeferenced map. The IMU tracks the drone’s 3D position and movement.

    Researchers developed the technology demonstrator through the HUGIYN project (Highly Automated UAS for Detecting and Identifying γ-Emitting Nuclides). In the SLEIPNIR project, researchers aim to boost the UAS’s speed and track multiple moving nuclides simultaneously.


    Read the original article on: Tech Xplore

    Read more: Drone Technology to Reshape Disaster Response, Healthcare, Environment, Farming, and Cybersecurity

  • China is Relying on AI-Powered Robots to Improve Traffic Management

    China is rapidly using AI-powered humanoid robots to manage city traffic and assist law enforcement.
    Image Credits: Xinhua

    The Humanoid Robot Managing Urban Traffic

    A recent example is the R001 Intelligent Police Unit, a humanoid robot integrated with the city’s traffic light network. Wearing a uniform and cap, the robot performs traffic gestures and uses cameras and AI to guide pedestrians autonomously.

    Its AI-driven technology can detect traffic violations by non-motorized vehicles, such as bicycles and scooters, as well as pedestrian infractions and illegal parking. The system processes this data in real time, helping to keep traffic organized and prevent accidents.

    Robots Lightening the Load for Police Officers

    Chinese authorities state that the primary goal of the initiative is to ease the workload of police officers, particularly during peak periods or challenging conditions like extreme weather. The robots serve as operational assistants, enabling human officers to concentrate on more complex duties.

    Cities such as Chengdu have already tested mixed teams of robots, including quadrupeds, wheeled models, and humanoids. According to Xinhua, this technology has been deployed in over 100 different scenarios, covering tasks from reception and security patrols to public service functions.


    Read the original article on: Paraibabusiness

    Read more: Tesla Is Redirecting Its Focus From Classic EVs to Building Robots

  • Tesla Is Redirecting Its Focus From Classic EVs to Building Robots

    Tesla has concluded its 2025 fiscal year earnings call, revealing that its profits nearly halved compared to the previous year. The company reported a GAAP net income of $3.8 billion, down from $7.1 billion in 2024, marking a 46% decrease. Yikes.
    Tesla believes its Optimus humanoid robot will be the first big step towards its AI and robotics-focused future. Image Credits: Tesla

    From Electric Cars to Full Autonomy with AI and Robotics

    That’s a tough situation for any automaker, but Tesla is confident it knows the way forward. The company is shifting its focus from purely electric vehicle production to a broader goal: achieving full autonomy through AI and robotics.

    Driving this strategy is the belief that these technologies will lower the cost of goods and services while becoming increasingly widespread and valuable.

    Signs of this shift are already emerging. For Tesla owners, one immediate change is coming soon: CEO Elon Musk announced that Tesla will stop selling the Full Self-Driving (FSD) suite—currently offering mostly Level 2 driving assistance—as a one-time purchase. Starting February 14, Tesla will offer it only via a monthly subscription, initially likely matching the current $99/month or $999/year pricing, though the company could raise the cost as it adds new features.

    Tesla’s Full Self-Driving suite of driving-assistance features will soon only be available via a recurring subscription. Image Credits: Tesla

    Next, Tesla is discontinuing the Model S luxury sedan and Model X crossover, both roughly $100,000 vehicles, in what Musk called an “honorable discharge,” in order to fully prioritize autonomous technology. Tesla will repurpose the Fremont, California, factory that previously produced these vehicles into a dedicated facility for the Optimus humanoid robot.

    The Model S (top) and Model X (bottom) have each been around for more than 10 years in Tesla’s lineup, and they’re no longer selling well. Image Credits: Tesla

    Tesla Ends Production of Aging Sedans and SUVs

    It’s a significant move, but also somewhat expected. These older models have been on the market for over a decade, and by 2025 their sales had fallen to less than a third of those of the newer Model 3 and Model Y. If you’re interested, grab one of the remaining vehicles now: Tesla won’t produce any more, but it will continue supporting existing units for as long as they remain operational.

    On the robotics side, Tesla has showcased the Optimus in various development stages over the past few years, but it still hasn’t wowed audiences. At a Tesla event in October 2024, the robots appeared to walk, talk with attendees, and even make drinks—but it was later revealed they were entirely controlled remotely by humans.

    The Optimus bot is being designed to help with household chores and manufacturing tasks, and is expected to cost $30,000 each. Image Credits: Tesla

    While companies promote these bipedal robots as capable of doing household chores, carrying up to 45 pounds (20 kg), helping factory workers, and even eventually going to space, they are still far from ready for widespread use.

    Tesla’s Robot Revolution

    Tesla plans to reach an annual production capacity of one million Optimus units at the Fremont factory, with significant output expected by the end of 2026 and sales beginning in 2027. Meeting this goal—and hitting Musk’s $30,000 target price—will be a major challenge.

    On another front, Tesla is preparing to produce the Cybercab, a two-seater robotaxi with no steering wheel. The company debuted the concept in 2024 and plans to start production this year. The company is currently in the tooling stage, with Musk previously indicating production would begin in April. In the meantime, Tesla is testing self-driving taxi services using modified Model Ys in Austin, Texas, so the system will have some real-world miles under its belt by the time the Cybercab launches.


    Read the original article on: New Atlas

    Read more: Skeptical About Robots in your Living Room? A new Friendly Humanoid Aims to Win You Over


  • AI-Driven Robots May Change the Way Tomatoes Are Harvested

    Labor shortages in the agricultural sector are fueling increased interest in robotic solutions for automated harvesting. However, certain crops continue to pose significant challenges for machines. Tomatoes, for instance, grow in clusters, requiring robots to selectively pick only the ripe fruit while leaving unripe ones on the vine. Achieving this consistently demands both accurate decision-making and precise manipulation.
    As labor shortages push agriculture toward automation, harvesting delicate, clustered fruits like tomatoes remains a major challenge for robots. Researchers have now developed a system that allows robots to assess how easy a tomato is to pick before acting, using visual cues and probabilistic decision-making. Image Credits: SciTechDaily.com

    To tackle this challenge, Assistant Professor Takuya Fujinaga of Osaka Metropolitan University’s Graduate School of Engineering developed a technique that enables robots to evaluate how easily each tomato can be harvested before attempting to pick it.

    Robots Face Difficulties With Selective Harvesting

    Fujinaga’s method integrates image recognition with statistical analysis to identify the most effective angle from which to harvest each tomato. The system evaluates visual details such as the fruit’s appearance, the structure and placement of its stems, and whether the tomato is partially obscured by leaves or other plant parts. By considering these elements together, the robot can make better control decisions and choose the approach most likely to result in a successful pick.

    This framework marks a departure from the conventional focus on simple “detection and recognition,” shifting instead toward what Fujinaga describes as “harvest-ease estimation.”

    Rather than asking only whether a robot can pick a tomato, the approach emphasizes assessing the probability of a successful harvest—a perspective that is more practical for real agricultural settings, he noted.

    The left image shows the tomato-picking robot and camera. The right image shows a ‘robot-eye view’ of the tomatoes. Red represents mature fruits, green indicates immature fruits, and blue indicates selected harvesting targets. Image Credits: Osaka Metropolitan University
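
    The exact visual features and weights behind the harvest-ease score are not given in the article, so the logistic scoring below is a hypothetical sketch of the idea: turn visual cues into a success probability for each candidate approach direction, then pick the most promising one.

```python
# Hypothetical sketch of "harvest-ease estimation"; features and weights
# are invented for illustration.
import math

def harvest_ease(ripeness: float, occlusion: float, stem_alignment: float) -> float:
    """Estimate the probability of a successful pick from visual cues.

    All inputs are in [0, 1]: fruit ripeness, how occluded the fruit is by
    leaves or stems, and how well the stem aligns with the approach axis.
    """
    score = 3.0 * ripeness - 4.0 * occlusion + 2.0 * stem_alignment - 1.0
    return 1.0 / (1.0 + math.exp(-score))   # logistic squash to a probability

def pick_best_approach(candidates: dict[str, tuple[float, float, float]]) -> str:
    """Rank candidate approach directions by estimated harvest success."""
    return max(candidates, key=lambda k: harvest_ease(*candidates[k]))

views = {
    "front": (0.9, 0.7, 0.6),   # ripe, but heavily occluded from the front
    "side":  (0.9, 0.2, 0.5),   # same fruit, much clearer from the side
}
print(pick_best_approach(views))  # side
```

    Scoring approaches this way, rather than asking a binary "can it be picked?", is what lets the robot fall back to a side approach when the front view looks unpromising, as happened in about a quarter of the successful picks.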

    Prioritizing Harvest Success over Detection

    In testing, Fujinaga’s system reached an 81% harvest success rate, far exceeding expectations. Notably, about a quarter of the successful picks came from side approaches after initial front attempts failed, showing that the robot was able to adapt its strategy when faced with obstacles.

    The results highlight the complexity of robotic fruit harvesting, where clustered growth, stem structure, surrounding foliage, and visual occlusion all significantly affect performance.

    According to Fujinaga, the study introduces “ease of harvesting” as a measurable metric, advancing the development of agricultural robots capable of making informed, intelligent decisions.

    Toward Human–Robot Farming

    Fujinaga envisions a future in which robots can independently assess crop readiness. “This could lead to a new type of agriculture where humans and robots work together,” he said. “Robots would harvest the easily picked tomatoes, while humans focus on the more difficult ones.”


    Read the original article on: SciTechDaily

    Read more: Robot Learns How to Lip-Sync After Observing YouTube Content

  • A New Artificial Skin Aims to Give Humanoid Robots the Sensation of Pain

    For years, humanoid robots have been built to be strong, precise, and durable. They rely on cameras for vision, sensors to gauge force, and highly accurate systems to carry out tasks. What they’ve long lacked is the ability to sense and respond to their own bodies. That gap is now starting to close thanks to a breakthrough by researchers from universities in Shanghai and Hong Kong.
    Image Credits: © Astrid Eckert/TUM

    The team has created a flexible robotic skin that can detect touch, impact, and physical damage, effectively acting as an artificial nervous system. This development enables robots to identify potentially harmful situations, serving a role similar to how humans experience pain or discomfort.

    Image Credits: tmeier1964

    Unlike conventional sensors that focus on specific spots, this new skin envelops the robot’s entire body, making the arms, legs, and torso act as a single continuous sensor.

    The system relies on flexible, pressure-responsive materials that can detect small changes caused by impacts, deformation, or wear. Rather than depending only on cameras or motor force readings, the robot gains a direct awareness of what is happening to its own body.

    This heightened sensitivity enables quicker and smarter reactions to unexpected events, which is especially important for robots working close to humans.

    Practical Benefits in Everyday Scenarios

    The advantages are easy to imagine in everyday situations. For example, if a robot is carrying heavy furniture and an object drops on its foot, a traditional robot might keep moving, unaware of the damage, increasing the risk of falling or further harm.

    With the new skin, the impact would be sensed instantly. The robot could stop, adjust its position, or activate safety measures to reduce danger to itself and to nearby people.
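
    The researchers have not published the skin’s control interface, so the reflex loop below is a hypothetical sketch of the behavior described: poll pressure readings across the whole body and trigger a protective stop when any patch crosses a safety threshold. The names (`SkinPatch`, `protective_stop`) and the threshold value are assumptions.

```python
# Hypothetical whole-body "pain" reflex; thresholds and names are invented.
from dataclasses import dataclass

@dataclass
class SkinPatch:
    location: str
    pressure_kpa: float

PAIN_THRESHOLD_KPA = 80.0   # illustrative safety limit, not from the paper

def check_skin(patches: list[SkinPatch]) -> list[str]:
    """Return the body locations whose pressure exceeds the safety threshold."""
    return [p.location for p in patches if p.pressure_kpa > PAIN_THRESHOLD_KPA]

def protective_stop(alerts: list[str]) -> str:
    """Decide the reflex action: halt and unload if anything crossed the limit."""
    return f"halt and unload: {', '.join(alerts)}" if alerts else "continue"

# An object dropping on the foot spikes that patch well past the threshold.
readings = [SkinPatch("left foot", 120.0), SkinPatch("torso", 12.0)]
print(protective_stop(check_skin(readings)))  # halt and unload: left foot
```

    Because the skin covers the body as one continuous sensor, the same loop catches impacts anywhere, which is what distinguishes it from the spot sensors the article contrasts it with.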

    Such responsiveness is essential in settings like homes, hospitals, factories, and logistics hubs, where mechanical failures can result in serious accidents.

    Another key advantage is the ability to detect minor, nearly invisible damage. Tiny cracks or deformations in the outer layer can let dust or moisture seep in, gradually harming internal components.

    Early Detection and Modular Design for Easy Maintenance

    The new robotic skin can spot these issues early, before they escalate. It also features a modular design, letting users replace damaged sections with simple “patches” instead of swapping the entire skin.

    This approach lowers maintenance costs, extends the robot’s operational life, and makes humanoid robots more practical for long-term, real-world use.

    Image Credits: koshinuke_mcfly

    While the research is currently centered on humanoid robots, the team notes that the technology has much broader potential. Advanced prosthetics, for instance, could gain from responsive surfaces that deliver tactile feedback to users.

    Other possible applications include protective gear, rescue tools, and medical devices. In high-risk situations, the ability to sense excessive pressure, heat, or impact can be critical for preventing injuries or system failures.

    The researchers stress that the aim is not to give robots human-like emotions. The concept of “pain” in this context is purely functional, not a conscious or subjective sensation.

    Enhancing Safety and Reliability Around Humans

    The ultimate goal is to develop safer, more dependable machines that can operate alongside people in a predictable manner. By detecting risks and damage early, robots can respond proactively, reducing accidents and building trust in these technologies.

    As humanoid robots move beyond the lab and into everyday environments, innovations like artificial skin may play a crucial role—not in humanizing machines, but in making them more physically aware and better adapted to the human world.


    Read the original article on: Gizmodo

    Read more: A Supercomputer Builds one of the Most Lifelike Virtual Brains ever Created

  • With Adaptive Motion, Robots Achieve Human-Like Dexterity from Minimal Data

    With Adaptive Motion, Robots Achieve Human-Like Dexterity from Minimal Data

    Although robotic automation is advancing quickly, most systems have difficulty adjusting their pre-trained movements to environments with objects that vary in stiffness or weight. To address this, a team of researchers in Japan has created an adaptive motion reproduction system based on Gaussian process regression.
    This image depicts the real-time transfer of a human’s motion to a robotic avatar, enabling the latter to perform a dexterous task. Image Credits: Keio University Global Research Institute (KGRI)

    Although robotic automation is advancing quickly, most systems have difficulty adjusting their pre-trained movements to environments with objects that vary in stiffness or weight. To address this, a team of researchers in Japan has created an adaptive motion reproduction system based on Gaussian process regression.

    Their approach models the relationship between human motion and object characteristics, allowing robots to accurately mimic human grasping actions. It can achieve this with minimal training data and handle unfamiliar objects with impressive precision and efficiency.

    Obstacles to Robotic Flexibility

    “Rapid advancements in robotic automation have the potential to transform industries and enhance our lives by taking over tasks that are dangerous, physically strenuous, or monotonous for humans.”

    Although current robots perform exceptionally well in structured settings like assembly lines, the true challenge of automation is operating in unpredictable, dynamic environments, such as cooking, elderly care, or exploration.

    Achieving this requires overcoming a major obstacle: enabling robots to sense and adapt through touch. Unlike human hands, which naturally adjust their grip to objects of varying weight, texture, or stiffness, most robotic systems still lack this essential adaptability.

    Progress in Motion Replication Technologies

    To equip machines with advanced human-like dexterity, researchers have created a variety of motion reproduction systems (MRSs). These systems focus on precisely capturing human movements and replicating them in robots through teleoperation.

    Nevertheless, MRSs often struggle when the characteristics of the object being manipulated differ from those used during the original motion recording. This reduces the flexibility of MRSs and, consequently, limits the broader usability of robots.

    To tackle this core challenge, a research team in Japan has created an innovative system capable of adaptively modeling and replicating intricate human movements.

    The study was spearheaded by Master’s student Akira Takakura from the Graduate School of Science and Technology at Keio University, with contributions from Associate Professor Takahiro Nozaki of the Department of System Design Engineering, Doctoral student Kazuki Yane, Professor Emeritus Shuichi Adachi of Keio University, and Assistant Professor Tomoya Kitamura from Tokyo University of Science, Japan.

    Enhancing Adaptability Through Gaussian Process Regression

    The team’s key innovation was moving beyond linear modeling approaches and adopting Gaussian process regression (GPR), a method capable of capturing complex nonlinear relationships even from limited training data.

    By recording human grasping motions across a variety of objects, the GPR model learned how an object’s “environmental stiffness” relates to the position and force commands applied by humans. This process effectively uncovers the underlying human motion intent, or “human stiffness,” enabling the robot to perform suitable actions with objects it has never handled before.
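    To illustrate the idea, here is a minimal NumPy sketch of Gaussian process regression mapping an object’s stiffness to grasp commands. All function names, data values, and the stiffness-to-command relationships are invented for illustration; they are assumptions, not details from the paper.

    ```python
    import numpy as np

    def rbf_kernel(a, b, length_scale=300.0):
        """Squared-exponential (RBF) kernel between 1-D input arrays."""
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / length_scale) ** 2)

    def gpr_predict(x_train, y_train, x_query, noise=1e-6):
        """Posterior mean of a zero-mean GP; one output column per target column."""
        K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
        k_star = rbf_kernel(x_query, x_train)
        return k_star @ np.linalg.solve(K, y_train)

    # Synthetic training data: "environmental stiffness" [N/m] of five objects,
    # paired with hypothetical human position [m] and force [N] commands.
    stiffness = np.array([100.0, 300.0, 600.0, 900.0, 1200.0])
    position_cmd = 0.02 * np.exp(-stiffness / 800.0)  # softer -> more displacement
    force_cmd = 2.0 + 0.004 * stiffness               # stiffer -> more force
    targets = np.column_stack([position_cmd, force_cmd])

    # Interpolation: predict commands for an unseen object (stiffness 450 N/m).
    pred = gpr_predict(stiffness, targets, np.array([450.0]))
    print("position cmd [m]: %.4f, force cmd [N]: %.2f" % (pred[0, 0], pred[0, 1]))
    ```

    Even with only five training objects, the RBF kernel yields a smooth nonlinear stiffness-to-command map, which is the property that lets the reported system work from minimal data.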

    “Equipping robots with the ability to manipulate everyday objects is crucial for allowing them to interact naturally with their environment and respond appropriately to encountered forces,” says Dr. Nozaki.

    System Testing and Performance Results

    To validate their method, the researchers compared it with traditional MRSs, linear interpolation, and a standard imitation learning model.

    The proposed GPR-based system showed substantially improved performance in generating accurate motion commands for both interpolation and extrapolation tasks.

    For interpolation—predicting motions for objects with stiffness values within the training range—the method reduced the average root-mean-square error (RMSE) by at least 40% for position and 34% for force.

    For extrapolation—handling objects stiffer or softer than those in the training set—the approach remained highly effective, achieving a 74% reduction in position RMSE. Overall, the GPR-based method significantly outperformed all other tested approaches.
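    The error metric used in these comparisons, root-mean-square error (RMSE), can be sketched as follows; the trajectory arrays are invented placeholders standing in for recorded human commands and reproduced robot motion.

    ```python
    import numpy as np

    def rmse(reference, reproduced):
        """Root-mean-square error between two equal-length signals."""
        reference = np.asarray(reference, dtype=float)
        reproduced = np.asarray(reproduced, dtype=float)
        return float(np.sqrt(np.mean((reference - reproduced) ** 2)))

    human_position = [0.000, 0.010, 0.020, 0.030, 0.040]  # m, commanded
    robot_position = [0.000, 0.012, 0.019, 0.031, 0.042]  # m, reproduced

    err = rmse(human_position, robot_position)
    print(f"position RMSE: {err:.4f} m")
    ```

    A 40% reduction in RMSE thus means the reproduced trajectory tracks the human command with errors whose root-mean-square magnitude is 40% smaller than the baseline’s.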

    Applications in Industry and the Evolution of Robotics

    By effectively modeling human–object interactions using minimal training data, this novel approach to MRSs enables the generation of precise and dexterous motion commands for a diverse array of objects. Its capacity to capture and replicate complex human skills allows robots to operate beyond rigid, pre-defined scenarios, paving the way for more sophisticated and adaptable services.

    “Because this technology requires only a small dataset and reduces the costs associated with machine learning, it has broad potential across many industries,” explains Mr. Takakura. “For example, life-support robots, which must adjust their movements to different targets each time, could greatly benefit, and companies that previously struggled to implement machine learning due to large data requirements may now find it more accessible.”


    Read the original article on: Tech Xplore

    Read more: China Unveils a Humanoid Robot with Smooth, Human-like Balance

  • Atlas Humanoid Robots will be Deployed in Hyundai Factories

    Atlas Humanoid Robots will be Deployed in Hyundai Factories

    Boston Dynamics has introduced an industry-ready version of Atlas, a humanoid built for real-world use in warehouses and factories. Designed to work nonstop in harsh conditions, Atlas uses AI to adjust to its surroundings, and production is already underway.
    Image Credits:Atlas is to be trained using new AI foundation models for a wide variety of industrial tasks, beginning in the automotive sector
    Boston Dynamics

    Boston Dynamics has introduced an industry-ready version of Atlas, a humanoid built for real-world use in warehouses and factories. Designed to work nonstop in harsh conditions, Atlas uses AI to adjust to its surroundings, and production is already underway.

    After an attention-grabbing debut filled with eerie visuals, flips, and dance routines, Boston Dynamics is now focusing on practical applications. The company has revealed the production version of Atlas, a humanoid robot designed for demanding industrial tasks. The first units will ship this year, with Atlas taking on its first role at a Hyundai facility—its initial real-world industrial deployment.

    Boston Dynamics CEO calls Atlas the company’s most advanced robot

    “In over 30 years, Boston Dynamics has built some of the world’s most advanced robots,” said CEO Robert Playter. “This is the finest robot we’ve ever made. Atlas aims to revolutionize industry and pave the way for robots that enhance safety, efficiency, and daily life at home.”

    The newest Atlas measures 6.2 ft (1.9 m) in height and offers 56 degrees of freedom across its joints. It can rotate its head and hips, move its fingers independently, and bend its knees and ankles, giving it a 7.5 ft (2.3 m) reach suited to tight industrial spaces.

    Designed to work alongside people, Atlas tackles environments that would quickly exhaust human workers. It endures –4 °F to 104 °F (–20 °C to 40 °C), lifts 66–110 lb (30–55 kg), runs four hours per charge, and swaps its battery in under three minutes on its own.

    Atlas offers versatile operation and intelligent adaptability

    Atlas supports three modes of operation: fully autonomous, remotely controlled by a human operator, or supervised via a tablet. Powered by AI, it moves smoothly, adapts to its environment, and collects data to boost operational efficiency.

    Image Credits:The production-ready Atlas comes with a four-hour battery, which it can swap out for a fresh one in under three minutes
    Boston Dynamics

    Boston Dynamics has also revealed a collaboration with Google DeepMind, Alphabet’s AI research lab, aimed at rapidly advancing Atlas’s abilities. The partnership will focus on faster task learning and better understanding of industrial environments. Once a single Atlas masters a skill, it can instantly share that capability across the entire fleet.

    This Atlas handles industrial tasks and reconfigures for new roles within 24 hours. Production has begun in Boston, with initial deployments set for 2026 at Google DeepMind and Hyundai.

    Hyundai plans large-scale robotics facility with expanded Atlas production

    Hyundai, Boston Dynamics’ majority owner, plans a new facility to produce up to 30,000 robots annually, including Spot and other models. Additional Atlas customers will join starting in 2027.

    Image Credits:”This enterprise-grade humanoid robot offers impressive strength and range of motion, precise manipulation, and intelligent adaptability – designed for manufacturability, reliability, and serviceability. Atlas is built to power the new industrial revolution.”
    Boston Dynamics

    Hyundai will supply Atlas’s actuators, strengthening its hardware-AI integration. “This Atlas is our most production-ready yet,” said general manager Zack Jackowski, highlighting its reduced part count and full compatibility with automotive supply chains. “With the support of Hyundai Motor Group, we expect to deliver industry-leading reliability and cost efficiency at scale.”

    The launch of production signals that U.S. industrial humanoid robotics is now catching up with China. Chinese company UBTech recently showcased its Walker S2 robots deployed across automotive plants, smart factories, logistics, and AI data centers. UBTech says the deployment will proceed in phases, with robots gradually entering active industrial settings.

    This timing shows that the push to deploy humanoid robots on factory floors no longer favors a single player, as American and Chinese companies pursue careful, step-by-step strategies for large-scale adoption.


    Read the original article on: Newatlas

    Read more: Kawasaki’s Four-legged Robotic Horse Vehicle is set to Enter Production