Tag: Machines

  • A Light-Controlled Flexible Lens May Give Soft Machines Vision


    Inspired by the human eye, our biomedical engineering team at Georgia Tech has developed an adaptive lens made from soft, light-sensitive materials.
    Image Credits: Corey Zheng/Georgia Institute of Technology.


    Traditional adjustable cameras rely on bulky, rigid lenses and a pupil to control focus and brightness. In contrast, the human eye achieves this through soft, flexible tissues in a compact design.

    A Soft, Light-Responsive Lens That Mimics the Human Eye

    Our innovation, the photo-responsive hydrogel soft lens (PHySL), replaces solid components with soft polymer “muscles” made from hydrogel — a water-based material. These hydrogel muscles adjust the lens’s shape to change its focal length, mimicking the action of the eye’s ciliary muscles.

    The hydrogel contracts when exposed to light, enabling contact-free control by simply projecting light onto the lens. By targeting specific areas with light, we can finely tune the lens’s shape. Without rigid parts, this flexible system is safer, more adaptable, and better suited for use with living tissue.
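    To see why reshaping a soft lens changes its focus, it helps to recall the thin-lens lensmaker's relation. The sketch below is purely illustrative and not from the study; the refractive index and surface radii are assumed placeholder values for a water-based, plano-convex hydrogel lens.

```python
import math

def focal_length(n: float, r1: float, r2: float = math.inf) -> float:
    """Lensmaker's equation for a thin lens: 1/f = (n - 1) * (1/R1 - 1/R2).
    r2 = infinity models a plano-convex lens with a flat back surface."""
    curvature = (1.0 / r1) - (0.0 if math.isinf(r2) else 1.0 / r2)
    return 1.0 / ((n - 1.0) * curvature)

# Assumed values: a hydrogel is mostly water, so n ~ 1.35.
f_relaxed = focal_length(1.35, r1=0.010)      # gentle 10 mm front curvature
f_contracted = focal_length(1.35, r1=0.006)   # contraction steepens the surface
```

    In this toy example the light-driven contraction shortens the focal length from roughly 29 mm to about 17 mm, the same lever the eye's ciliary muscles pull on when they reshape the crystalline lens.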

    Camera-based artificial vision powers many technologies, including robots and medical devices. Yet, the optical components in these systems still rely mainly on rigid, electrically powered materials. This rigidity poses challenges for emerging technologies like soft robotics and biomedical devices, which require flexible, low-power, and self-sufficient systems. Our soft lens is particularly well-suited for these applications.

    Flexible, Adaptive Machines Inspired by Nature

    Soft robots, inspired by living organisms, are built from flexible materials and structures that make them more resilient and adaptable. This approach is enabling advances in surgical endoscopes, gentle robotic grippers for handling fragile objects, and robots capable of moving through environments inaccessible to rigid machines.

    Similar principles benefit biomedical tools, where tissuelike materials create softer, safer interfaces between machines and the human body. Such materials allow devices to move naturally with the body, improving safety and comfort. Examples include skinlike wearable sensors and hydrogel-coated implants.

    Image Credits: Corey Zheng/Georgia Institute of Technology.

    This research combines ideas from adjustable optics and soft “smart” materials. Although such materials are commonly used to create soft actuators—components that enable movement, like grippers or propellers—their use in optical systems has been more difficult to achieve.

    Most current soft lens designs rely on liquid-filled chambers or electronically powered actuators, which add complexity and restrict their use in fragile or wireless systems. Our light-responsive design provides a simpler, electronics-free solution.

    Advancing Performance Through Next-Generation Hydrogel Materials

    We plan to enhance our system’s performance by leveraging recent advances in hydrogel technology. Emerging studies have produced various stimuli-responsive hydrogels capable of faster and stronger contractions. By integrating these new materials, we aim to boost the functional performance of our photo-responsive hydrogel soft lens.

    We also seek to demonstrate its potential in innovative camera applications. In our current work, we created a proof-of-concept, electronics-free camera that combines our soft lens with a custom light-activated microfluidic chip. Our next step is to integrate this system into a soft robot, enabling vision without electronics. This would mark a major step forward in showcasing how our design can support new forms of soft visual sensing.


    Read the original article on: Robohub

    Read more: What is the Domain Name System? An Engineer Explains

  • Study Reveals People Forgive Machines Similarly to Humans


    When technology malfunctions—like a frozen computer at a crucial moment, a GPS guiding us into traffic, or a washing machine halting mid-cycle—our initial response is usually frustration. However, a recent study by two Israeli researchers suggests a surprising insight: we tend to extend forgiveness to machines in much the same way we do to other people.
    Image Credits: Unsplash/CC0 Public Domain


    Forgiveness in Human-Machine Interaction: Study by Holtzman and Nimrod

    In a recent publication in Frontiers in Computer Science, Inbal Holtzman and Prof. Galit Nimrod from Ben-Gurion University’s Department of Communication Studies explored this phenomenon for the first time. Their research aimed to uncover what forgiveness looks like in human-machine interactions and to understand why users choose to keep using a technology even after it has let them down.

    “We set out to examine whether emotions such as frustration or disappointment with technology could eventually lead to forgiveness, enabling people to continue using it without holding a grudge,” Holtzman explains.

    To explore this, the researchers conducted focus groups with 27 young adults—both students and professionals—who shared their experiences with technology failures. The discussions quickly shifted from technical issues to emotional responses. Participants used expressive language, saying things like, “The computer betrayed me,” “The app let me down,” or “The phone doesn’t understand me.” They were then asked how they responded to these situations and whether they were able to forgive the device.

    How Users Forgive Technology: Strategies and Acceptance

    The findings revealed several different forgiveness strategies. Some participants weighed the pros and cons—if the benefits of the technology outweighed the inconvenience of the malfunction, they were more likely to forgive and continue using it. Others placed the blame on humans, such as developers, engineers, or even themselves, which made it easier to excuse the failure.

    Some participants looked for ways to engage with the technology or the companies behind it—features like pop-up messages acknowledging issues or easy access to customer support were seen as gestures that encouraged forgiveness. Others simply accepted that technology, while imperfect, is an essential part of modern life.

    The study also revealed notable differences among users. Those who were more tech-savvy and comfortable with digital tools tended to be more forgiving, likely because they recognized the complexity of such systems and saw errors as normal. On the other hand, participants who were skeptical of smart devices from the beginning were less inclined to forgive and, in some cases, stopped using the technology altogether after it failed.

    Prof. Nimrod concluded that our interactions with machines have evolved beyond being purely functional. “We now relate to them more like companions—we feel let down, get frustrated, but we also forgive,” she said. “In many ways, our phones, apps, and devices have become part of our emotional and social worlds.”

    Designing Forgiveness: Practical Implications for Technology and User Trust

    The study also points to important real-world applications. If tech companies recognize that users seek more than just technical fixes—that they also want their frustration acknowledged—they might design systems that are more transparent or even express regret. For example, a screen that explains the error or a robot that says, “Sorry, something went wrong,” could help build trust and strengthen the user experience. As AI, apps, and robots become more integrated into daily life, understanding how people forgive technology could play a key role in shaping better human-machine relationships.

    “Maybe in the near future, we’ll grow accustomed to hearing not just ‘Update completed successfully,’ but also ‘Sorry, we made a mistake,’” says Holtzman. “The real question is whether we’ll be willing to accept that apology—and offer forgiveness.”


    Read the original article on: Tech Xplore

    Read more: DeepMind Unveils its First Thinking Robot AI

  • Machines Mimic Human Motions to Prevent Slipping


    To handle diverse real-world tasks, robots must securely grasp objects of various shapes, textures, and sizes without unintentionally dropping them. Traditional methods improve this by increasing the robotic hand’s grip strength to avoid slippage.
    Image Credits: Tech Xplore


    Researchers Develop Bio-Inspired Motion Control to Prevent Slippage in Robotic Hands

    Researchers from several universities and labs have proposed new methods to stop objects from slipping from robotic hands. Their technique adjusts the movement paths the hand follows during manipulation, rather than relying solely on grip force. The system, combining a robotic controller with bio-inspired trajectory modulation, was detailed in Nature Machine Intelligence.

    “The idea for this work was inspired by a familiar human experience,” said Amir Ghalamzan, senior author of the study, in an interview with Tech Xplore.

    Teaching Robots to Adjust Movements Like Humans to Protect Fragile Objects

    “When sensing a delicate object might slip, people adjust movements—slowing, tilting, or shifting—rather than just tightening their grip. In contrast, robots have traditionally relied on increasing grip strength, which can be ineffective and may even harm fragile items. Our goal was to explore ways to make robots respond more like humans in such situations,” explained Ghalamzan.

    The study aimed to create a controller that predicts slip and adjusts movements, using bio-inspired trajectory modulation with grip-force control for more dexterous manipulation.

    Image: Figure illustrating the predictive control architecture in humans.

    “Our method replicates the way humans rely on internal models to interact with their surroundings,” Ghalamzan said. Like the brain anticipating actions, the robot’s data-driven ‘world model’ predicts tactile feedback to detect and prevent slips in advance.

    The controller lets robots adjust speed, direction, and hand position in real time instead of just increasing grip strength. By securing objects through movement adjustments, this method can lower the risk of damaging delicate items. It also works when grip force can’t be changed, enabling more fluid, intelligent interactions.
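    The control loop can be pictured schematically (hypothetical function names and a toy slip model, not the authors' learned controller): predict slip for the intended motion, and if the prediction crosses a threshold, slow the trajectory rather than squeeze harder.

```python
def predicted_slip(speed: float, load: float) -> float:
    """Toy stand-in for the learned tactile forward model: predicted
    slip likelihood grows with commanded speed and object load."""
    return min(1.0, 0.1 * speed * load)

def modulate_speed(planned_speed: float, load: float,
                   threshold: float = 0.5, backoff: float = 0.8) -> float:
    """Scale the commanded speed down until predicted slip is acceptable,
    leaving grip force untouched."""
    speed = planned_speed
    while predicted_slip(speed, load) >= threshold and speed > 1e-3:
        speed *= backoff
    return speed

# A fast move with a heavy object is slowed until slip is predicted safe.
safe_speed = modulate_speed(planned_speed=2.0, load=4.0)
```

    A real system would run such predictions over full candidate trajectories inside a model-predictive loop, which is where the computational cost the researchers mention comes from.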

    Novel Motion-Based Slip Controller Enhances Grip-Force Control

    “Our research delivers two major innovations,” Ghalamzan explained. “First, we present a unique motion-based slip controller that complements grip-force control, useful when increasing grip isn’t possible.”

    “Second, we developed a predictive controller driven by a learned tactile forward model, or ‘world model,’ that enables robots to anticipate slip based on their intended actions.”

    The team applied the new controller to plan a robotic gripper’s movements and tested it in dynamic, unstructured settings. In several cases, it notably enhanced grasp stability, surpassing conventional controllers that rely solely on adjusting grip force.

    Ghalamzan noted that researchers have traditionally found embedding such a model within a predictive control loop too computationally intensive. “Our findings demonstrate that it is not only possible but also highly effective.”

    World Model Could Broaden Robots’ Real-World Capabilities

    This work could advance robotics by enabling safe physical and social interactions via a world model. Such capabilities could allow robots to handle diverse objects in real-world environments, from homes and manufacturing floors to healthcare facilities.

    “We are working to make our predictive controller faster and more efficient for use in more demanding real-time scenarios,” Ghalamzan added. “This involves exploring new architectures and algorithms to minimize computational load.”

    Future research will extend the system to handle more complex manipulation tasks, such as working with deformable items or objects requiring two-handed coordination. The team also plans to integrate computer vision, enabling trajectory planning that combines tactile and visual feedback.

    “Another key goal is to improve the transparency and verifiability of these learned models,” Ghalamzan said. “As robots become more intelligent and autonomous, it’s essential that humans can understand and trust their decision-making. Our goal is to develop predictive controllers that are powerful, safe, and explainable for real-world use.”


    Read the original article on: Tech Xplore

    Read more: Moonquakes Could Pose Serious Risks to Future Lunar Bases

  • Robots Can Now Grow and Self-Repair Using Parts from Other Machines


    Modern robots are limited by rigid, closed bodies that can’t grow, self-repair, or adapt. But scientists at Columbia University have now created robots that can physically “grow,” “heal,” and enhance themselves by absorbing material from their surroundings or other robots.
    Image Credits: TechCrunch


    A new study in Science Advances introduces “Robot Metabolism”—a process that lets robots take in and reuse materials from their environment or other machines.

    Robots That Think, Grow, and Self-Repair

    “True autonomy means robots must think and maintain themselves,” says Philippe Martin Wyder. “Like living beings, they grow, adapt, and repair using materials from their environment or other robots.”

    The researchers showcase this new approach using the Truss Link—a robotic magnetic rod inspired by the Geomag toy. Each Truss Link is a simple, bar-like unit with versatile magnetic connectors that can extend, retract, and attach to other modules at various angles, allowing them to form more complex structures.

    Self-Assembling Robots That Evolve and Improve Themselves

    The researchers demonstrated how individual Truss Links could self-assemble into flat, two-dimensional shapes that then transformed into 3D robots. These robots further enhanced themselves by incorporating additional modules, effectively “growing” into more advanced machines. In one case, a tetrahedron-shaped robot added an extra link to use as a walking stick, boosting its downhill speed by over 66.5%.
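    The modular growth described above can be caricatured in a few lines (all names here are illustrative, not the authors' software): each link is a bar with magnetic ends, and attaching links builds the connection graph from which 2D and then 3D shapes emerge.

```python
from dataclasses import dataclass, field

@dataclass
class TrussLink:
    """Illustrative bar-like module with magnetic ends (hypothetical model)."""
    name: str
    neighbors: list = field(default_factory=list)

    def attach(self, other: "TrussLink") -> None:
        """Magnetically join two links so they form part of one structure."""
        self.neighbors.append(other)
        other.neighbors.append(self)

# Three links closing into a triangle: the 2D precursor that can fold
# into the tetrahedral robots described above.
a, b, c = TrussLink("a"), TrussLink("b"), TrussLink("c")
a.attach(b); b.attach(c); c.attach(a)
```

    "Growing" then amounts to attaching further links to this graph, as when the tetrahedron robot recruited an extra link as a walking stick.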

    “Robot intelligence has advanced, but their bodies remain rigid and non-recyclable,” says Hod Lipson, co-author and Columbia professor. He also leads the Creative Machines Lab where the research was conducted.

    “In contrast, biological organisms are inherently adaptable—they grow, heal, and evolve. This flexibility comes from biology’s modular design, where components like amino acids can be reused across different lifeforms. To truly progress, robots must adopt a similar strategy—learning to utilize and repurpose parts from other machines. This emerging concept is what we call ‘machine metabolism.’”

    A Vision of Self-Sustaining Robotic Ecosystems Inspired by Nature

    Researchers envision robots in self-sustaining ecosystems, growing and adapting like nature’s modular systems, leading to resilient, self-improving machines.

    “Robot Metabolism bridges digital intelligence and the physical world, letting AI evolve mentally and physically,” explains Wyder. At first, this capability will serve specialized roles, such as in disaster response or space missions. Eventually, AI could build physical machines as easily as it edits your emails.

    Lipson offers a word of caution: “The idea of self-replicating robots can sound like something out of dystopian science fiction. As robots become more common—from cars to factories—the question is: who will maintain them? We can’t count on humans to do it all. Ultimately, robots will need to learn to sustain and repair themselves.”


    Read the original article on: TechCrunch

    Read more: Malaysia Will Require Official Approval To Trade AI Chips Made In The United States

  • Apple’s Pixar-Style Lamp-Bot Showcases the Friendly Side of Machines


    Apple’s lamp-bot can depict joy and sadness, and even do a little dance.
    Image Credits: Apple

    Luxo Jr. has been a charming presence in every Pixar film since 1995, when the animated desk lamp first hopped onto the screen, playfully stomping on the studio’s logo in the opening credits. This iconic character has now inspired Apple researchers to explore ways to make robots more expressive and improve human-machine interactions—and the result is undeniably endearing.

    A trio of researchers from Apple’s Machine Learning Research division showcased how a robotic desk lamp, capable of movement and gestures, can create a more engaging experience compared to a purely functional design. Their study, accompanied by a detailed video, presents the robot performing six tasks in both “Expressive” and “Functional” modes for direct comparison.

    The video is available on Apple’s website and in an accompanying X post. As shown, the robotic lamp features a camera, projector, and speaker alongside its LED light.

    Expressive vs. Functional

    In ‘Expressive’ mode, the robot demonstrates lifelike behaviors, such as glancing out the window before giving a weather update, gently nudging a glass toward the researcher as a hydration reminder, and even dancing along when music plays. In contrast, the ‘Functional’ mode lamp performs tasks with only the necessary movements, focusing purely on efficiency.

    “Our findings indicate that expression-driven movements significantly enhance user engagement and perceived robot qualities. This effect is especially pronounced in social-oriented tasks,” the researchers stated. That sentiment rings true—when the robot lowered its head in disappointment after being told it couldn’t join a hike, I couldn’t help but ask out loud, “Why not?”

    The robot lamp responds to the researcher with ‘lifelike’ gestures of its own.
    Image Credits: Apple

    Apple’s Robotics Ambitions Align with Bloomberg Report

    Beyond highlighting Apple’s machine learning expertise, this project supports an August 2024 Bloomberg report by journalist Mark Gurman, which claimed Apple was developing a robot with an articulating arm and an iPad-like display.

    Designed to assist with smart home controls, video calls, and home security monitoring, the device was reportedly expected to launch between 2026 and 2027, with a projected price of around $1,000.

    Apple’s interest in robotics is further evident in a research paper it published last month, outlining a framework for generating natural and expressive gestures in humanoid robots—such as giving a thumbs-up to a student solving a math problem on a chalkboard.

    Whether these technologies will make their way into consumer products remains to be seen. Apple previously scrapped its self-driving car project, and according to Gurman, reassigned that program’s lead to this screen-equipped robot initiative. But if Apple is truly working on charming, expressive robots like this, it’s a vision worth getting excited about.


    Read the original article on: New Atlas

    Read more: Robotic Dogs Handle Bomb Detection, Neutralization, and Disposal