Tag: Robots

  • Robots Complete a Half Marathon — at a Very Slow Pace

    Humanoid robots still have a lot of ground to cover before they can match human runners. On Saturday, Beijing’s E-Town tech hub held what it called the world’s first half-marathon for humanoid robots, featuring 21 robotic participants running alongside thousands of human competitors.
    Image Credits: Kevin Frayer / Getty Images

    According to Bloomberg, the race’s top-performing robot, Tiangong Ultra, was developed by the state-supported research institute X-Humanoid and completed the half-marathon in 2 hours and 40 minutes. While that’s a notable feat for a robot, it pales in comparison to human runners — the event’s fastest male runner finished in just over an hour, and many recreational runners typically clock in under two hours.

    Most Robots, Including Tiangong Ultra, Needed Guidance to Complete the Race

    Tiangong Ultra didn’t go it alone, either. It relied on a human running ahead wearing a signaling device on their back, allowing the robot to mimic their movements. In fact, most of the participating robots required some form of human assistance or remote control, often with operators running alongside them.

    Most Robots Struggled to Complete the Race, With Some Malfunctioning Right Out of the Gate

    Bloomberg reports that all the other humanoid robots took at least three hours to finish the race, and only four managed to cross the finish line before the four-hour cutoff. Some didn’t even make it past the starting area — one robot named Shennong tripped a human guide, crashed into a fence, and broke apart. Another, Little Giant, the shortest robot at just 30 inches tall, came to a stop mid-race as smoke began to rise from its head.

    The event — the Beijing E-Town Humanoid Robot Half Marathon — featured robots built by Chinese companies and student teams. (Unitree’s G1 bot fell at the starting line, though the company said a client ran it without the correct algorithms.)

    To qualify, each robot had to have a humanoid shape and run on two legs. They ran in a separate lane from the human participants, with staggered starts to avoid collisions. Teams were allowed to swap out batteries — Tiangong Ultra needed three changes — and could even replace robots mid-race, though doing so came with a time penalty.

    X-Humanoid’s CTO, Tang Jiang, told Reuters, “I don’t want to boast, but I think no other robotics firms in the West have matched Tiangong’s sporting achievements.”


    Read the original article on: TechCrunch

    Read more: Cosmic Robotics Machines May Accelerate the Installation of Solar Panels

  • People in Japan Respect Robots and AI More Than People in Western Societies Do

    Picture an automated delivery vehicle racing to complete a grocery drop-off as you rush to meet friends for a long-anticipated dinner. You both reach a busy intersection simultaneously. Do you pause to let it navigate the turn, or do you expect it to yield, even if traffic rules suggest it should go first?
    Credit: Pixabay

    Navigating a World with Self-Driving Cars

    “As self-driving technology advances, these daily interactions will shape how we coexist with intelligent machines,” says Dr. Jurgis Karpus from LMU’s Chair of Philosophy of Mind. He notes that fully autonomous vehicles mark a shift from simply using AI tools like Google Translate or ChatGPT to directly engaging with them. The key distinction? In heavy traffic, our priorities won’t always align with those of self-driving cars, yet we’ll still need to navigate these shared spaces—even if we’re not the ones using them.

    A recent study in Scientific Reports by researchers from LMU Munich and Waseda University in Tokyo found that people are much more likely to exploit cooperative AI agents than equally cooperative humans. “After all, cutting off a robot in traffic doesn’t hurt its feelings,” says lead author Dr. Jurgis Karpus.

    Humans vs. Machines

    Using behavioral economics techniques, the team designed game theory experiments where Japanese and American participants had to choose between cooperation or self-interest. The findings showed that when their counterpart was a machine rather than a human, participants were significantly more inclined to act selfishly.
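    The article doesn’t detail which games the study used; as an illustration only, a one-shot prisoner’s dilemma captures the cooperate-versus-self-interest choice participants faced (the payoff values below are hypothetical, not the study’s):

```python
# Illustrative one-shot prisoner's dilemma (hypothetical payoffs, not the
# study's actual protocol). Each entry maps (my_move, their_move) to
# (my_payoff, their_payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def play(my_move: str, their_move: str) -> tuple:
    """Return (my_payoff, their_payoff) for one round."""
    return PAYOFFS[(my_move, their_move)]

# Defecting against a cooperator maximizes individual gain -- the "exploit" choice.
print(play("defect", "cooperate"))      # (5, 0)
print(play("cooperate", "cooperate"))   # (3, 3)
```

    Exploiting a cooperative machine and exploiting a cooperative human yield identical payoffs in such games; the study’s finding is that people choose the exploitative move more often when the counterpart is a machine.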

    However, the study also found that this tendency to exploit cooperative machines is not universal. People in the U.S. and Europe take advantage of robots far more often than those in Japan.

    The researchers attribute this difference to guilt: Westerners tend to feel remorse when exploiting another human but not when taking advantage of a machine. In contrast, people in Japan experience guilt similarly, whether mistreating a person or a cooperative robot.

    These cultural differences may influence the future of automation. “If people in Japan respect robots as much as humans, fully autonomous taxis could become widespread in Tokyo long before they do in Berlin, London, or New York,” Karpus explains.


    Read the original article on: TechXPlore

    Read more: Robotic Rehab and Synced Zaps Restore Movement After Spinal Injuries

  • From Robots to Humans, Good Decisions Require Diverse Perspectives

    At the intersection of robotics and social science, researchers explore how heterogeneity, influence, and uncertainty drive smarter collective decisions—whether in human groups, robot swarms, or biological collectives. Credit: SCIoI

    When groups make decisions—whether humans, robots, or animals—not all members contribute equally. Some have more reliable information, while others hold greater social influence. A new study from the Cluster of Excellence Science of Intelligence highlights how uncertainty and diversity shape collective decision-making.

    Published in Scientific Reports, the research by Vito Mengers, Mohsen Raoufi, Oliver Brock, Heiko Hamann, and Pawel Romanczuk reveals that groups reach faster, more accurate conclusions when individuals consider not just their peers’ opinions but also their confidence levels and social connectivity. However, overconfident individuals with incorrect information can mislead the group.

    Traditional models assume equal influence among group members, but real-world decision-making varies. Experts and well-connected individuals naturally shape discussions, much like social media influencers or key nodes in robotic swarms. The study finds that uncertainty plays a crucial role—knowledgeable individuals become more central, reducing uncertainty in others, while those with broader connections gather more information over time. This dynamic helps filter out weak data and refine conclusions, provided no one becomes overconfident too quickly.

    Modeling Decision-Making: How Uncertainty and Influence Shape Group Consensus

    To test these ideas, researchers modeled decision-making where individuals adjusted beliefs based on new information. Uncertain members relied on peers, while confident ones guided the group. Connection mattered—highly connected agents spread opinions widely, regardless of accuracy. Results showed that diverse perspectives alone weren’t enough; uncertainty-driven weighting led to faster, more accurate decisions. However, when central figures became too confident too soon, they dominated discussions, even when wrong, spreading bias and misinformation.
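    The paper’s model is richer than this, but the core idea of confidence-weighted opinion pooling can be sketched as follows (the agent count, weights, and fully connected network here are illustrative assumptions, not the study’s setup):

```python
import numpy as np

def update_beliefs(beliefs, confidences, adjacency):
    """One round of confidence-weighted opinion pooling.

    beliefs:      (n,) array of current estimates
    confidences:  (n,) positive weights; higher means more certain
    adjacency:    (n, n) 0/1 matrix; adjacency[i, j] = 1 if agent i listens to j
    Each agent averages the beliefs it can see, weighted by the holders'
    confidence, so uncertain agents defer to confident ones.
    """
    n = len(beliefs)
    updated = np.empty(n)
    for i in range(n):
        visible = adjacency[i].astype(bool)
        visible[i] = True  # an agent always counts its own belief
        updated[i] = np.average(beliefs[visible], weights=confidences[visible])
    return updated

# Three agents estimating the same quantity; agent 0 is accurate and confident,
# agent 2 is far off and uncertain, and everyone hears everyone else.
beliefs = np.array([10.0, 14.0, 20.0])
confidences = np.array([5.0, 1.0, 0.2])
adjacency = np.ones((3, 3)) - np.eye(3)
print(update_beliefs(beliefs, confidences, adjacency))
```

    After one round, the uncertain agent is pulled strongly toward the confident agent’s estimate; that is also the failure mode the authors warn about when a central, overconfident agent happens to be wrong.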

    The study has implications for AI, robotics, and human collaboration. Self-driving cars could assess not just data but also confidence levels in sensor readings from nearby vehicles, improving safety. Nature already leverages uncertainty—fish schools, bird flocks, and ant colonies dynamically adjust to new information rather than treating all input equally.

    Ultimately, good decision-making doesn’t eliminate uncertainty—it harnesses it. Whether in human teams, robotic networks, or biological groups, recognizing and adjusting for differences in knowledge and influence leads to smarter, more effective decision-making.


    Read Original Article: TechXplore

    Read More: Nvidia and Google DeepMind to Support Disney’s Development of Adorable Robots

  • Beyond the Uncanny Valley: New Technology Brings Robots to Life

    Snapshots of realized sleepy mood expression on a child android robot. Credit: Hisashi Ishihara

    Osaka University researchers have developed a technology that enables androids to express moods like excitement or drowsiness through dynamic facial movements modeled as overlapping, decaying waves.

    While androids can mimic human expressions, their movements often feel artificial, creating discomfort. Traditional approaches rely on pre-programmed action sequences, requiring complex preparation and careful transition management to avoid unnatural expressions.

    Proposed system. Credit: Hisashi Ishihara

    To overcome these challenges, lead researcher Hisashi Ishihara’s team introduced “waveform movements,” where gestures like blinking, breathing, and yawning are represented as individual waves that combine in real-time. This method removes the need for pre-set action scenarios and ensures smoother transitions.

    Additionally, “waveform modulation” adjusts these waves based on the robot’s internal state, instantly reflecting mood changes in facial expressions. Senior author Koichi Osuka emphasizes that this advancement allows androids to display more natural, responsive emotions, enriching human-robot communication.
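    The team’s actual controller isn’t reproduced in this article; as a rough sketch of the idea, individual gestures can be modeled as decaying sinusoids summed in real time, with a hypothetical `mood_gain` parameter standing in for waveform modulation:

```python
import math

def decaying_wave(t, onset, freq, amplitude, decay):
    """A single gesture as a decaying sinusoid; zero before its onset time."""
    if t < onset:
        return 0.0
    dt = t - onset
    return amplitude * math.exp(-decay * dt) * math.sin(2 * math.pi * freq * dt)

def actuator_command(t, mood_gain):
    """Superpose gesture waves; mood_gain scales movement speed and size.

    mood_gain is a toy stand-in for the paper's waveform modulation by
    internal state, not the researchers' actual parameterization.
    """
    # Slow, sustained breathing plus a quick blink that decays rapidly.
    breathing = decaying_wave(t, onset=0.0, freq=0.25 * mood_gain,
                              amplitude=0.3, decay=0.0)
    blink = decaying_wave(t, onset=1.0, freq=2.0,
                          amplitude=1.0 * mood_gain, decay=3.0)
    return breathing + blink

# A "sleepy" mood (low gain) produces slower, smaller movement than an excited one.
print(actuator_command(1.2, mood_gain=0.5))
print(actuator_command(1.2, mood_gain=1.5))
```

    Because each gesture is an independent wave, new gestures can start at any moment and simply add into the running sum, which is what removes the need for pre-set action sequences and transition management.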

    Ishihara envisions androids whose every movement reflects internal emotions, making them appear as if they have a heart. By enabling adaptive emotional expression, this technology brings robots closer to humanlike interaction, enhancing their role in communication.


    Read Original Article: Scitechdaily

    Read More: Robotic Dogs Handle Bomb Detection, Neutralization, and Disposal

  • RoBoa Slinks Through Disaster Areas That Are Too Perilous for Other Robots

    The soft-bodied RoBoa can snake its way through collapsed buildings in search of survivors
    RoBoa/ETH Zurich

    When disaster strikes, drones and robots can search for survivors in dangerous zones. The student team at ETH Zurich designed the RoBoa to slither through debris that would stop other robots.

    Developed at the Autonomous Systems Lab, RoBoa aids rescue teams in disaster and war zones. Its snake-like movement allows it to navigate rubble while using its sensor-equipped head to locate trapped survivors.

    The robot features an inflatable fabric tube connected to a supply box that provides pressurized air and houses electronics and additional tubing. This machine is controlled remotely via a live camera feed. The latest prototype upgrades its pneumatic tubing from 10 meters to 100 meters, and its diameter can be adjusted to meet specific mission needs.

    RoBoa for Search and Rescue

    RoBoa: A Versatile Rescue Robot for Communication, Supply Delivery, and Hazardous Environments

    This machine can also communicate with survivors through a speaker/microphone and potentially deliver supplies such as food, water, and medicine. Beyond rescue, its head can be configured for tasks like inspection, environmental monitoring, and mapping. This machine handles dirty or slippery surfaces and is safer in environments where sparks may cause explosions.

    This student project has evolved into a startup, with commercial release on the horizon thanks to an ETH Pioneer Fellowship award. The team will present the RoBoa at ETH Zurich’s Industry Day 2024 on November 21.

    The Snake that Saves Lives

    Read Original Article: New Atlas

    Read More: Scitke

  • A New Method Lets Robots Map a Scene and Identify Objects to Complete Tasks

    Picture tidying up a cluttered kitchen, beginning with a counter scattered with sauce packets. If your aim is to clean the counter, you might gather all the packets at once. But if you want to separate the mustard packets first, you'd sort them by type. And if you were specifically looking for Grey Poupon mustard, you'd need to search even more carefully to find that exact brand.
    MIT’s Clio runs in real-time to map task-relevant objects in a robot’s surroundings, allowing the bot (Boston Dynamics’ quadruped robot Spot, pictured) to carry out a natural language task (“pick up orange backpack”). Credit: Massachusetts Institute of Technology

    MIT engineers have developed a method that enables robots to make intuitive, task-specific decisions. Their new system, called Clio, allows a robot to identify the important parts of a scene based on its assigned tasks. Clio processes a list of tasks in natural language, determining the necessary level of detail to interpret its surroundings and “remember” only the relevant aspects.

    In tests, Clio was used in environments like a cluttered cubicle and a five-story building, where the robot segmented scenes based on tasks such as “move rack of magazines” and “get first aid kit.” The system was also tested on a quadruped robot in real-time as it explored an office building, recognizing only objects related to its task, such as retrieving a dog toy while ignoring office supplies.

    A Versatile Tool for Task-Specific Robotics

    Named after the Greek muse of history for its ability to remember key elements, Clio is designed for use in various environments, including search and rescue, domestic tasks, and factory work. According to Luca Carlone, associate professor in MIT’s Department of Aeronautics and Astronautics, Clio helps robots understand their surroundings and focus on what’s necessary to complete their mission.

    The team presents their findings in a study published today in the journal IEEE Robotics and Automation Letters. Carlone’s co-authors include SPARK Lab members Dominic Maggio, Yun Chang, Nathan Hughes, and Lukas Schmid, as well as MIT Lincoln Laboratory researchers Matthew Trang, Dan Griffith, Carlyn Dougherty, and Eric Cristofalo.

    Transitioning from Closed-Set to Open-Set Object Recognition

    Advances in computer vision and natural language processing have enabled robots to identify objects, but this was previously limited to controlled “closed-set” environments with predefined objects. Recently, researchers have adopted an “open-set” approach, using deep learning to train neural networks on billions of images and text. These networks can now recognize new objects in unfamiliar scenes. However, a challenge remains in determining how to segment a scene in a task-relevant way. As Maggio notes, the level of detail should vary depending on the robot’s task to create a useful map.

    With Clio, the MIT team designed robots to interpret their surroundings with detail that adjusts automatically to the task. For instance, if the task is to move a stack of books, the robot should recognize the entire stack, while it should identify just a green book when that’s the focus.

    Integrating Computer Vision and Language Models for Enhanced Object Recognition

    The approach combines advanced computer vision and large language models, using neural networks trained on millions of images and text. They also employ mapping tools that segment images, which the neural network analyzes for relevance.

    By applying the “information bottleneck” concept, they compress image data to keep only the segments relevant to the task, allowing the robot to focus on the necessary items.
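    Clio’s actual pipeline is more involved, but the relevance-filtering step can be sketched as embedding similarity between the task description and each scene segment (the 3-D vectors and threshold below are toy assumptions, not real vision-language features):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def filter_segments(task_embedding, segments, threshold=0.5):
    """Keep only scene segments whose features are similar to the task's.

    segments: list of (name, embedding) pairs. In a real system the embeddings
    would come from a vision-language model; here they are toy 3-D vectors.
    """
    return [name for name, emb in segments
            if cosine(task_embedding, emb) >= threshold]

task = np.array([1.0, 0.0, 0.2])  # stands in for "pick up orange backpack"
segments = [
    ("orange backpack", np.array([0.9, 0.1, 0.3])),
    ("stapler",         np.array([0.0, 1.0, 0.0])),
    ("office chair",    np.array([0.1, 0.8, 0.1])),
]
print(filter_segments(task, segments))  # ['orange backpack']
```

    Only the task-relevant segment survives the filter, mirroring how the robot kept the dog toy in its map while ignoring office supplies.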

    Clio was tested in real-world environments, such as Maggio’s cluttered apartment, where it quickly identified relevant segments for tasks like “move pile of clothes.” The system was also used in real-time on Boston Dynamics’ Spot robot, which mapped and identified objects in an office.

    This method generated maps highlighting only the target objects, enabling the robot to complete tasks efficiently. Running Clio in real-time was a major advancement, as prior methods required hours for processing.

    Looking ahead, the team plans to enhance Clio to handle more complex tasks, like “find survivors” or “restore power,” moving closer to a human-like understanding of tasks.


    Read the original article on: TechXplore

    Read more: Robotic Arm 3D-Prints Two-Story House

  • Head Transplants by Robots Expected Within a Decade

    In what seems straight out of a B-grade sci-fi/horror film, head transplant operations entirely performed by robotic surgeons could be a reality within a decade, according to startup BrainBridge.
    Robots could be surgically swapping heads within the decade
    BrainBridge

    This idea comes from Hashem Al-Ghaili, the Berlin-based molecular biologist turned filmmaker, producer, author, and science communicator, known for his 2022 proposal of a futuristic, dystopian artificial-womb baby-making factory called EctoLife.

    Redefining Transplants

    His latest project, BrainBridge, aims to utilize high-speed robotic systems to maintain brain condition during the process of transplanting a head onto a compatible donor body. (Wouldn’t that technically be a body transplant? There are certainly more critical questions to consider here…)

    We believe it’s appropriate to preface this conceptual video with a content warning…

    Head Transplant Machine – BrainBridge

    In a bold move that makes Neuralink look like a simple scalp massage, Al-Ghaili plans to perform full head and face transplants to provide individuals with severe disabilities a new chance at life.

    During the surgery, AI algorithms would guide multiple robotic arms, overseeing the detachment of the head and its attachment to a different body, while also reconnecting the spinal cord, nerves, and blood vessels. Proprietary chemical adhesives and polyethylene glycol would aid in rejoining the severed neurons.

    “I’m thrilled to announce BrainBridge, the world’s first head transplant system concept, which integrates advanced robotics and artificial intelligence to perform complete head and face transplantation procedures,” Al-Ghaili announced on social media. “This cutting-edge system offers new hope to patients with untreatable conditions such as stage-4 cancer, paralysis, and neurodegenerative diseases like Alzheimer’s and Parkinson’s.”

    Al-Ghaili’s Initial Tease of BrainBridge and the Eight-Year Plan

    Al-Ghaili first teased his ambitious new science project on X late last year, mentioning that the eight-year plan to the first surgery allows him to recruit “top talent to overcome current challenges” in medicine, such as the complexity of spinal cord repair.

    He expressed his intention for BrainBridge to begin with successful spinal cord surgeries before advancing to the head/body transplant operations.

    Yet, the medical science community has been notably reserved in its reaction to the official launch of BrainBridge.

    This isn’t the first time head transplants have been proposed. Italian doctor Sergio Canavero previously made bold claims about performing such surgeries as early as 2017, but those plans never materialized.

    Body Part Exchange in Comparison to “Face/Off” Premise

    His endeavors extended only to exchanging body parts between two cadavers, akin to the premise of the film “Face/Off” starring Nicolas Cage and John Travolta.

    Nevertheless, Al-Ghaili deserves acknowledgment for his inventive strategy to advance in the experimental transplant domain. (Pardon the pun.)

    “BrainBridge aims to perform facial and scalp transplants to restore both function and aesthetics,” the startup’s website indicates, although it lacks detailed information.

    “Utilizing younger donor tissues decreases the likelihood of rejection and improves appearance, coupled with precise suturing and comprehensive post-operative care to facilitate healing and reduce scarring.”

    The website emphasizes that BrainBridge is still “in the conceptual phase,” so there’s no opportunity to join a waiting list at this time.

    To conclude: if you happened to overlook EctoLife’s surreal promotional video for its futuristic baby factory, it’s worth a watch…

    EctoLife: The World’s First Artificial Womb Facility

    Read the original article on: New Atlas

    Read more: Revolutionary Biorobotic Heart: A Breakthrough in Cardiac Research and Surgery

  • The Surprising Reason Robots Can’t Beat the Fastest Animals

    In recent years, advancements in robotics and AI have been substantial, yet we have not succeeded in creating robots that surpass the capabilities of nature's finest. New research delves into the fundamental reasons behind this phenomenon.
    Credit: Pixabay

    Exploring Over a Hundred Studies Reveals a Crucial Integration Barrier in Advancing Robotics

    After examining over a hundred prior studies that compared robots to animals on power, structure, movement, perception, and control, the researchers reached a surprising conclusion. It’s not that our most advanced robots lag significantly in any specific category. The challenge lies in our inability to integrate these diverse elements as effectively as evolution has over millions of years.

    Mechanical engineer Kaushik Jayaram from the University of Colorado Boulder highlights that, at the system level, robots fall short. There are inherent design compromises wherein optimizing for one feature, such as forward velocity, could result in sacrificing another, such as maneuverability.

    Animals were ranked against robots in categories like agility. (Burden et al., Science Robotics, 2024)

    Illustrating this, Jayaram points to a robot inspired by cockroaches, which he helped develop in 2020. While proficient at rapid forward and backward motion, it struggles to change direction or traverse uneven surfaces.

    Unveiling the Hidden Advantages of Trade-offs in Complex Systems

    These trade-offs may also manifest as an advantage when two processes interact in unforeseen ways that benefit the system. While such interactions are more common in complex systems, predicting them proves challenging, if not impossible.

    Furthermore, the researchers highlight that even the tiniest insects surpass most robots in sensing their environment and adjusting their actions accordingly, showcasing a flexibility and agility essential for swift and secure movement.

    Robots can beat animals in certain areas – but not in putting everything together. (Burden et al., Science Robotics, 2024)

    Consider power as another factor. While motors and batteries may outperform tissue and muscle in specific measures, in animals, power is seamlessly intertwined with sensory data within the same cellular units.

    “In some respects, animals epitomize this ultimate design principle—a system that harmoniously operates together,” explains Jayaram. “Nature serves as an invaluable instructor.”

    The driving force behind the new research is its potential to inspire engineers to develop robots that exhibit greater flexibility, agility, and adaptive locomotion, tailored to various scenarios.

    Advancing Robotics Through Integrated ‘Functional Subunits’

    The research team suggests a focus on enhancing the construction of ‘functional subunits,’ akin to the integration seen in animal cells, where different components such as power, sensing, and movement coalesce.

    This approach offers ample opportunity to explore adverse trade-offs and potential emergent properties. Until a deeper comprehension of these aspects is achieved, creatures like cheetahs and cockroaches will maintain their superiority.

    “As an engineer, it’s somewhat disheartening,” reflects Jayaram. “Despite over two centuries of concentrated engineering efforts, including remarkable feats like sending spacecraft to the moon and Mars, we’re still perplexed by the absence of robots significantly surpassing biological systems in natural environment locomotion.”


    Read the original article on: Science Alert

    Read more: AI-Created Gene Editing Tools Successfully Alter Human DNA

  • Video: Tall Humanoid Robots at Amazon Facility

    Tall, proficient, and resembling insects to some extent, a fleet of Digit robots is currently navigating vacant bins within an Amazon research and development facility. This trial marks the initial phase of utilizing these robots to automate repetitive tasks in warehouses.
    Digit is currently being trialed as a way to relieve workers of difficult repetitive tasks, not relieve them of their jobs, says Amazon
    Agility Robotics

    Agility Robotics, a technology company supported by Amazon, provided the robots for the trial program. Their main product is the humanoid Digit robot, standing at 5.7 ft (175 cm) tall. It features legs reminiscent of grasshoppers, which the company describes as “backwards legs,” enabling it to squat to retrieve items from the ground and lift them to nearly six feet high. Digit is capable of lifting packages weighing up to 35 lb (16 kg) and maneuvering in various directions, navigating stairs, uneven terrain, and even walking while crouched.

    Digit’s Role in Amazon’s Workforce Evolution

    Amazon, boasting a workforce of over 750,000 robots, emphasizes that Digit’s role isn’t about replacing jobs but rather about “collaborating with employees.” The company plans for these robots to handle the repetitive task of recycling empty totes that are no longer in use.

    Digit has previously ventured into the workforce, clearly not intended to replace human labor (with a hint of sarcasm). In 2019, a somewhat eerie, headless version of Digit collaborated with Ford in testing autonomous package delivery to households.

    Additionally, just last year, Digit commenced its operations at a warehouse managing fulfillment for the women’s wear brand, Spanx, as depicted in the following video.

    Agility Robotics Partners With GXO

    Digit isn’t the pioneer humanoid robot entering the workforce. Earlier this year, robotics company Figure announced the provision of its sleek metallic humanoid bots to BMW’s plant in Spartanburg, South Carolina, where they will undergo training for planned deployment.

    Advancements in Robotics Fine Motor Skills

    Despite Digit’s remarkable capabilities in crouching, backward walking, and lifting, it still lacks refined fine motor skills. However, advancements may come sooner than expected, as demonstrated by another robot we covered last month from Sanctuary AI. Although it lacks mobility, this bot possesses exceptionally fast and dexterous hydraulically activated hands. One can envision integrating this technology with the motor skills of Digit or other bots from companies such as Boston Dynamics, which may not be too distant in the future.

    Currently, Agility Robotics is quite optimistic about Digit’s potential to contribute to the workforce, evident from the establishment of a 70,000-square-foot (6,503-square-meter) facility in Salem, Oregon. Dubbed the “RoboFab” manufacturing plant, it is expected to have the capability to manufacture over 10,000 Digit robots annually.

    You can observe Digit in action at Amazon’s research and development facility near Seattle, Washington, and listen to insights from Agility Robotics’ Chief Commercial Officer, Rich Bhone, in the video below.

    Agility Robotics Broadens Relationship with Amazon

    Read the original article on: New Atlas

    Read more: ZenRobotics 4.0 Enhances Intelligence in Waste Sorting Automation

  • Humanoid Figure Achieves Autonomous Task Learning and Performance

    Within a year of development, Figure's 01 achieved the remarkable feat of walking—an accomplishment that Adcock considers to be a record-breaking milestone.

    Brett Adcock of Figure announced a significant breakthrough in humanoid robotics over the weekend, which he called a “ChatGPT moment.” The claim becomes clearer now that we know the robot can observe humans performing a task, build its own understanding of the procedure, and then execute the task autonomously.

    Humanoid Figure

    In a groundbreaking development, Figure’s humanoid has reached a remarkable milestone by gaining the ability to independently observe, acquire knowledge, and execute tasks. This advancement marks a significant leap forward in robotics, as it empowers humanoid robots to operate autonomously. No longer limited to pre-programmed instructions, these robots can now dynamically adapt to their surroundings, learn from their experiences, and perform tasks with a level of autonomy previously unseen.

    New Possibilities

    This achievement opens new possibilities for the integration of intelligent robotics into various fields, promising innovative solutions and increased efficiency in tasks ranging from routine activities to complex operations. The era of figures with the capacity to watch, learn, and perform tasks independently has arrived, ushering in a new era of robotics and artificial intelligence.

    Humanoid Robots

    Versatile humanoid robots are required to manage a diverse array of tasks, encompassing the comprehension of tools, devices, objects, techniques, and objectives that humans utilize to accomplish various activities. These robots must exhibit a level of flexibility and adaptability comparable to humans, allowing them to function effectively in an extensive spectrum of dynamic work environments.


    Read the Original Article: New Atlas

    Read More: Historic Launch: First U.S. Lunar Lander in Over 50 Years Sets Course for the Moon