Tag: Unveils

  • Walk Me: Toyota Unveils a Chair that Walks Like a Robot


    Toyota is showcasing its four-legged “Walk Me” robotic chair, designed for independent mobility, at the Japan Mobility Show 2025 in Tokyo from October 30 to November 9. For now, it remains a concept design rather than a market-ready product.
    Image Credits: Toyota


    Design and Mechanical Structure of the “Walk Me” Robot Chair

    The “Walk Me” pairs a rear-opening, fabric-covered seat with a spherical base that houses the electronics and the actuators for its four flexible, jointed legs, each likely offering at least four degrees of freedom.

    The robot’s legs can lift, bend, and move independently to create a walking motion. When not in use, they fold beneath the chair to form a stable base that does not require power. In this state, the piece simply appears as a vibrant, stylish item of furniture.
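
    As a rough illustration of how such a statically stable walk can be sequenced, here is a toy sketch in Python that steps the four legs one at a time so the other three always support the seat. The leg names, phases, and timings are entirely hypothetical; Toyota has not published details of the Walk Me controller.

    ```python
    # Toy sketch of a statically stable "crawl" gait for a four-legged chair.
    # Leg names, phases, and timings are hypothetical, not Toyota's design.
    import itertools
    import time

    LEGS = ["front_left", "rear_right", "front_right", "rear_left"]  # crawl order

    def move_leg(leg: str, phase: str) -> None:
        # Placeholder for commanding a leg's joint actuators.
        print(f"{leg}: {phase}")

    def crawl_gait(steps: int, step_time: float = 0.3) -> None:
        """Advance one leg at a time so three feet always stay planted."""
        for leg in itertools.islice(itertools.cycle(LEGS), steps):
            for phase in ("lift", "swing_forward", "plant"):
                move_leg(leg, phase)
                time.sleep(step_time / 3)
            move_leg(leg, "shift_body")  # move the seat over the new stance

    if __name__ == "__main__":
        crawl_gait(steps=8)
    ```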

    The chair is operated with a small joystick or buttons built into two side handles that double as handholds. Its motion isn’t always smooth, especially on stairs, but it can walk, climb, and handle uneven surfaces on its own.

    Remote Control and Assisted Mobility Features

    It can be remotely controlled and may have partial autonomy, allowing users to call it to their location for transport. This makes the robotic chair especially useful for people with limited mobility who struggle to walk.

    The “Walk Me” is mainly designed for indoor use and may not be suitable for long distances, though Toyota has not confirmed this. The company also shows it guiding a user to a vehicle, lifting and tilting slightly to assist with getting in.


    Read the original article on: Heise

    Read more: Too Lazy to Brush? A Tiny Oral-care Robot Could do it for you

  • Chinese Firm Unveils Highly Agile Life-sized Robotic Hand


    Wuji Tech has introduced a breakthrough in robotics: a highly agile robotic hand crafted for humanoid applications. The technology aims to broaden the capabilities of robots across multiple fields, from personal assistance to research and development.
    Image Credits: Elhombre
    • Wuji Hand delivers enhanced dexterity for humanoid applications.
    • The project aims to improve precision in human-machine interactions.
    • The company is focusing on integration with next-generation robots.


    The hand mimics human hand movements, enabling tasks from simple grasping to highly precise manipulation. This development reflects the growing trend of creating versatile robots designed for seamless interaction in everyday environments.

    Human-Like Dexterity for Precision Tasks

    Wuji Tech engineered its robotic hand to mimic human hand functionality, with joints that enable individual finger movements and a wider range of motion. This design enhances its suitability for tasks requiring fine motor skills, such as handling tools, delicate objects, and collaborating with humans.

    The company notes that the hand is compatible with various humanoid platforms, enabling integration with existing systems. The goal is to create a standardized model adaptable for both research and commercial applications, without reliance on a single robotic framework.

    Lightweight Design Boosts Energy Efficiency and Autonomy

    Another key feature is the use of lightweight materials, which reduce energy consumption during operation—a crucial factor for extending robot autonomy in both workplace and home settings.

    • Offers 20 degrees of freedom.
    • Features a biomimetic five-finger structure, each finger with four degrees of freedom.
    • Weighs under 600 grams.
    • Generates up to 15N of force at the fingertips.
    • Durable: withstands hundreds of thousands of cycles, reaching up to 1 million in tests.
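
    As a compact summary of the numbers above, the minimal sketch below encodes the published spec sheet as a data structure and checks that five fingers with four degrees of freedom each account for the stated 20 total. The class and field names are our own, not Wuji Tech’s.

    ```python
    # Minimal sketch encoding Wuji Hand's published specs as a data structure.
    # Class and field names are illustrative, not from Wuji Tech.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class HandSpec:
        fingers: int
        dof_per_finger: int
        max_mass_grams: float
        fingertip_force_newtons: float
        tested_cycles: int

        @property
        def total_dof(self) -> int:
            return self.fingers * self.dof_per_finger

    WUJI_HAND = HandSpec(
        fingers=5,
        dof_per_finger=4,
        max_mass_grams=600,       # spec: weighs under 600 g
        fingertip_force_newtons=15,
        tested_cycles=1_000_000,  # spec: up to 1 million cycles in tests
    )

    assert WUJI_HAND.total_dof == 20  # consistent with the stated 20 DOF
    ```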

    Versatile Applications Across Healthcare and Industry

    The technology is designed for applications across multiple sectors. In healthcare, the Wuji Hand can assist with surgical instrument handling and support patient rehabilitation. In industrial settings, its precision makes it well suited to tasks ranging from assembling delicate components to managing sensitive logistics operations.

    In the service sector, Wuji Tech envisions its use in customer-focused robots, performing roles from reception duties to support tasks in commercial environments. In research, universities and innovation centers could leverage the technology to develop new experiments in human-robot interaction.

    The company also highlights educational applications, giving students hands-on experience with a system that realistically mimics human hand movements, thereby accelerating learning in robotics, engineering, and applied sciences.

    Strengthening Wuji Tech’s Presence in Humanoid Robotics

    The launch of the robotic hand underscores Wuji Tech’s commitment to strengthening its position in the humanoid robotics market. The company stated that the technology will go through additional testing and development to enhance its durability and reliability.

    While no commercial release date has been set, Wuji Tech plans to provide prototypes to strategic partners in the coming months to collect feedback and refine the design for specific applications.

    Additionally, the company emphasized ongoing research aimed at integrating the robotic hand with artificial intelligence. This combination could enable humanoid robots not only to execute precise movements but also to make context-aware decisions, advancing automation in complex tasks.


    Read the original article on: Elhombre

    Read more: New AR Technology Can Transform Any Surface into a Keyboard

  • DeepMind Unveils its First Thinking Robot AI


    Generative AI can also produce robot actions—the core idea behind DeepMind’s Gemini Robotics project. The team has unveiled two new models that work together to let robots “think” before they act. Simulated reasoning has improved language models, and that advance may soon reach robotics.
    Image Credits: Google


    DeepMind argues that generative AI is critical for robots because it enables broad, flexible functionality. Unlike today’s robots, which must be painstakingly trained for narrow tasks, generative systems could handle entirely new environments without reprogramming. DeepMind’s Carolina Parada noted that most robots are custom-built and take months to set up for a single task. Gemini Robotics instead uses a two-model approach: one for reasoning and one for execution.

    Gemini Robotics 1.5 vs. Gemini Robotics-ER 1.5

    The two models are called Gemini Robotics 1.5 and Gemini Robotics-ER 1.5. Gemini Robotics 1.5 is a vision-language-action (VLA) model that interprets visual and text input to produce robotic actions. Gemini Robotics-ER 1.5, where “ER” stands for embodied reasoning, is a vision-language model (VLM) that processes the same types of input but outputs step-by-step plans for completing complex tasks.

    Gemini Robotics-ER 1.5 is the first robotics AI with simulated reasoning, scoring highly on tests for decision-making in physical environments. However, it doesn’t perform actions itself—that role is handled by Gemini Robotics 1.5.

    For example, if you asked a robot to separate laundry into whites and colors, Gemini Robotics-ER 1.5 would analyze the request along with images of the clothing pile. It can also use external tools like Google Search to collect additional information. Based on this, the ER model produces natural language instructions—step-by-step directions the robot should follow to carry out the task.
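
    Conceptually, the division of labor looks like the loop sketched below: the ER model turns a request plus camera images into a list of steps, and the action model executes each one. Every interface here is invented for illustration; DeepMind has not published code for this pipeline.

    ```python
    # Hypothetical sketch of the two-model split DeepMind describes: an
    # embodied-reasoning (ER) planner that emits natural-language steps, and a
    # vision-language-action (VLA) model that executes them. All interfaces
    # here are invented for illustration.

    def er_plan(request: str, images: list) -> list[str]:
        # Stand-in for Gemini Robotics-ER 1.5: reason over the scene (optionally
        # consulting tools such as Google Search) and return step-by-step text.
        return [
            "Pick up the red shirt and place it in the colors basket.",
            "Pick up the white sock and place it in the whites basket.",
        ]

    def vla_act(step: str, images: list) -> None:
        # Stand-in for Gemini Robotics 1.5: translate one instruction into
        # motor commands, reasoning about how best to execute it.
        print(f"executing: {step}")

    def run_task(request: str, get_images) -> None:
        for step in er_plan(request, get_images()):
            vla_act(step, get_images())  # re-observe the scene at each step

    run_task("Separate the laundry into whites and colors", get_images=lambda: [])
    ```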

    Turning Instructions into Actions

    Gemini Robotics 1.5, the action model, takes the step-by-step instructions from the ER model and translates them into robot movements, using visual input for guidance. It also runs its own reasoning process to decide how to carry out each step. As DeepMind’s Kanishka Rao explained, humans rely on intuitive thought to complete tasks, but robots lack that intuition, so a key breakthrough in Gemini Robotics 1.5’s VLA is its ability to “think before it acts.”

    DeepMind built both new robotics AIs on the Gemini foundation models and fine-tuned them with data for physical interaction. This design enables robots to handle more complex, multi-stage tasks, effectively giving them agent-like capabilities.

    To test this system, DeepMind has deployed it on machines like the two-armed Aloha 2 and the humanoid Apollo. Unlike earlier approaches that required custom models for each robot, Gemini Robotics 1.5 can generalize across different embodiments—for example, transferring skills from Aloha 2’s grippers to Apollo’s more dexterous hands without special adjustments.

    That said, practical household robots are still a distant goal. For now, only trusted testers can access Gemini Robotics 1.5, the model that controls physical machines. The ER model, however, is already available in Google AI Studio, giving developers the ability to generate robotic instructions for real-world experiments.
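
    As a minimal sketch of what that developer access looks like, the snippet below asks the ER model for a plan through the google-genai Python SDK. The model identifier is an assumption and may differ from what AI Studio actually exposes.

    ```python
    # Minimal sketch: requesting a plan from the ER model via the google-genai
    # Python SDK (pip install google-genai). The model ID is an assumption and
    # may differ from the identifier Google AI Studio exposes.
    from google import genai

    client = genai.Client(api_key="YOUR_API_KEY")

    response = client.models.generate_content(
        model="gemini-robotics-er-1.5-preview",  # assumed identifier
        contents=(
            "You control a two-armed robot at a laundry table. Produce "
            "numbered, step-by-step instructions to separate the pile "
            "into whites and colors."
        ),
    )
    print(response.text)  # natural-language steps for an action model to execute
    ```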


    Read the original article on: Ars Technica

    Read more: Bird-Like Robot with Novel Wings Achieves Self-Takeoff and Slow Flight

  • Meta Unveils the Smart Glasses of Your Dreams


    Meta introduced several products at yesterday’s Connect event, but the spotlight was on two: the Ray-Ban Meta Display glasses and a surprise accessory called the Neural Band. The glasses mark the official debut of the long-rumored “Hypernova” project—Meta’s attempt at mainstream, easy-to-use smart glasses with a built-in display.
    Meta’s long-awaited smart glasses finally bring the goods, but may take a backseat to a far less expected reveal. Image Credits: Meta/Ray-Ban


    More precisely, a single display. The Display glasses project a 600×600-pixel, 5,000-nit image into the right lens only, positioned near the lower edge of your field of view.

    Smartphone Features First, AR Second

    The glasses are technically designed for augmented reality, though that wasn’t strongly highlighted during the debut. Most of the features resembled “a smartphone on your face,” such as reading messages without pulling out your device. The most compelling demo, in my view, was the live transcription and translation tool, which overlays subtitles onto the real world. The glasses also include a 12-megapixel camera that records up to 1440p video at 30 frames per second.

    While the right arm houses a button and touch controls, Meta also unveiled a more innovative way to interact with them.

    The Meta Neural Band relies on surface electromyography (sEMG) to pick up tiny muscle signals in the wrist, translating them into hand and finger movements. This allows users to operate the glasses without touching the frames and provides a far more precise level of control.

    Gesture Controls at Your Wrist

    Early demos—better described as “wrists-on” than hands-on—highlighted gestures such as “clicking” (tapping the index finger against the thumb), “swiping” (running the thumb along the index finger), and “zooming” (pinching in the air). These are interactions that would typically require a smartphone.
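
    To make that interaction model concrete, here is a toy sketch of the kind of gesture-to-command dispatch such a band implies. The hard part, classifying raw sEMG samples into gestures, is stubbed out, and nothing here reflects Meta’s actual implementation.

    ```python
    # Toy sketch of dispatching wristband gestures to glasses commands. The
    # sEMG classifier is stubbed out; gesture names follow the demo
    # descriptions, but nothing here reflects Meta's implementation.
    from typing import Iterator

    def classify_semg(window: bytes) -> str:
        # Stand-in for the real work: decoding muscle activations in a short
        # window of surface-EMG samples into a discrete gesture label.
        return "click"

    ACTIONS = {
        "click": "select highlighted item",  # index finger taps thumb
        "swipe": "scroll list",              # thumb slides along index finger
        "zoom": "zoom in or out",            # pinch in the air
    }

    def dispatch(stream: Iterator[bytes]) -> None:
        for window in stream:
            gesture = classify_semg(window)
            print(gesture, "->", ACTIONS.get(gesture, "ignore"))

    dispatch(iter([b"\x00" * 64]))  # one fake 64-byte sample window
    ```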

    Meta also touched on updates like fresh designs for the Ray-Ban Meta (Gen 2) glasses, broader AI enhancements, and new roadmaps for its software platforms—but it was the glasses and the Neural Band that clearly dominated the spotlight.

    Meta’s sEMG wristband could revolutionize all user input, not just for glasses. Image Credits: Meta

    The battery life, however, poses a real concern. It’s rated at just six hours, which, for someone who relies on prescription lenses and has to wear the glasses all day, falls far short, even with the included charging case. Meta notes that the Display glasses support prescriptions from -4.00 to +4.00, but without hot-swappable batteries, the practicality is questionable. As it stands, this first-generation model seems best suited to people with good vision or those willing to wear contacts.

    There’s long been speculation over which company would be first to integrate a true display into glasses. For a time, Google seemed poised to take the lead—but in the end, Meta crossed the finish line. The real question now is whether the Display glasses’ fairly straightforward design will leave less of a mark than the far more novel Neural Band.

    The Ray-Ban Meta Display glasses launch on September 30 for $799, available through Best Buy, LensCrafters, Sunglass Hut, and Ray-Ban retail stores.


    Read the original article on: ExtremeTech

    Read more: Cocoa Flavanols Reduce Age-Related Heart Inflammation in Older Adults