Author: Marcílio Santos

  • The First Laptop in the World Without a Screen, Powered by Augmented Reality

    One challenge of working on the go is maintaining privacy—whether it’s teachers grading in a café, designers trying to avoid industrial espionage on a late-night flight, or anyone trying to watch non-PG content on public transport without drawing attention. Anyone who’s ever felt uneasy about someone peeking at their screen will be intrigued by a new “screenless” laptop featuring a 100-inch virtual display visible only to the user.
    Image Credits: laptopmag

    Pioneering AR Computing with a Playful Tech Culture

    Founded in 2020, Sightful has offices in Tel Aviv, San Francisco, New York, and Taipei. Israeli CEO and co-founder Tamir Berliner, who previously worked on Magic Leap’s augmented reality, leads the company alongside COO and co-founder Tomer Kahan. The duo presents themselves with a playful tech persona, even highlighting on their website that their “team chat is 60% GIFs, 20% GenAI images, 20% actual words.”

    Despite their somewhat cheesy “you-don’t-have-to-be-crazy-to-work-here” image, the duo is earning praise from major outlets like PCWorld, Wired, and Future, with reviewers calling their product a “game-changer” and the “future of computing,” even outperforming Meta’s devices and Apple’s Vision Pro.

    A Virtual Screen That Wraps Around You

    Known as the Spacetop, the system combines hardware and software to create a massive virtual screen that wraps around the user via custom augmented-reality glasses. In a style reminiscent of Minority Report, users can interact with the air-screen—pinning windows, resizing objects, and running most Windows and web apps, as well as the AOSP (open-source Android) OS. The hardware requires an Intel Core Ultra 7 or 9 processor with Meteor Lake architecture or newer.

    Image Credits: Sightful

    Some reviewers were surprised by the Spacetop’s price: $899 (about €788) plus a $200 yearly software subscription. Prescription lenses are optional, costing $50 for single-vision or $150 for progressive lenses, covering prescriptions from +6.00 to -9.00 diopters.

    While not cheap, the device’s focus on corporate productivity makes its private, invisible screen a standout feature. Other perks include portability—the 106-gram headset adds virtually no extra weight for travel—and compact dimensions of 146 x 175 x 44 mm when unfolded.

    Image Credits: Sightful

    Reviewers also praise the Spacetop’s top-loaded display, which keeps the bottom third of the “screen” clear, allowing users to stay aware of their surroundings. This design helps reduce the motion sickness often associated with AR headsets and prevents accidents like kicking a pet or tripping over objects.

    The device also includes a travel mode that enhances motion tracking and a cursor that follows and predicts the user’s eye movements, eliminating the need to carry a mouse while on the go.
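
    The “predicts” part of that cursor can be pictured with a small sketch. The snippet below shows one common approach (exponential smoothing of the gaze point plus a one-step extrapolation); it is an illustrative guess at the general technique, not Sightful’s actual implementation.

    ```python
    ALPHA = 0.6  # smoothing factor: higher values track the eye more aggressively

    def predict_cursor(gaze_samples, alpha=ALPHA):
        """Exponentially smooth raw gaze points, then extrapolate one step
        ahead so the cursor appears to anticipate the eye's motion."""
        sx, sy = gaze_samples[0]
        prev = (sx, sy)
        for x, y in gaze_samples[1:]:
            prev = (sx, sy)
            sx = alpha * x + (1 - alpha) * sx
            sy = alpha * y + (1 - alpha) * sy
        # Continue the last smoothed step one interval into the future.
        return (2 * sx - prev[0], 2 * sy - prev[1])

    print(predict_cursor([(0, 0), (10, 2), (22, 4), (35, 6)]))  # ~(38.9, 6.6)
    ```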

    Additionally, the ergonomic design can ease neck strain caused by looking down at traditional laptops, while the distraction of watching other people’s screens remains someone else’s problem.


    Read the original article on: Traveltomorrow

    Read more: Japan’s Innovative Floating Home Design for Earthquake Safety

  • Adobe’s Digital Dress Impresses with Shifting Patterns and Colors

    For anyone who dislikes repeating outfits, Adobe has an answer. In 2023, the company introduced Project Primrose, a digital dress that always looks new. With constantly changing colors and patterns, it blends wearable technology with wearable art.
    Image Credits: GOOGLE

    Project Primrose premiered at Adobe MAX 2023, modeled by Christine Dierk, one of its lead developers. Dierk was a UC Berkeley graduate student focused on wearable technology when Adobe recognized her expertise. She started as an intern and later joined the research team that designed and prototyped the dress.

    A Blend of Sewing and Technology

    Creating the digital dress relied on both Dierk’s sewing expertise and technical skills. In total, 1,182 petals—each sparkling like a sequin—were meticulously hand-sewn onto the garment. Additionally, the 74 driver boards that powered the dress were also attached by hand.

    Dierk explained, “The petals were a geometry challenge. The dress required wider petals in broader areas and narrower ones where it tapered, so we used 16 different sizes to maintain a consistent look. We then mapped each petal in Illustrator, and attaching them felt like a paint-by-number project.”

    When Dierk took the stage and changed her dress, the Adobe MAX audience erupted. The garment shifted from silver to dark gray and back, displaying moving patterns that seemed to follow her every turn.

    After Adobe MAX, videos of Project Primrose spread quickly online. Seeing herself all over TikTok and Instagram felt strange to Dierk, but it highlighted the impact of their work.

    Fashion as a Dynamic, Interactive Art Form

    She added, “I hope people take away that fashion doesn’t have to be static—it can be dynamic and interactive. I want it to inspire those in technology, fashion design, and anyone who enjoys creating.”

    As 2024 unfolded, Project Primrose kept evolving and even appeared on the runway at New York Fashion Week through a collaboration with Christian Cowan. Although there haven’t been updates in 2025, the digital dress remains a viral phenomenon that reshaped how the world imagines the intersection of fashion and technology.


    Read the original article on: Mymodernmet

    Read more: Japan Launches Trials of Artificial Blood that Works with all Blood Types

  • NASA’s Perseverance Rover Finishes its First Autonomous, AI-Directed Drive

    NASA’s Perseverance rover has been exploring Mars for almost five years, yet the agency continues to push its capabilities. Recently, NASA announced that Perseverance completed its first drive planned entirely by artificial intelligence.
    The Perseverance Rover taking a selfie on a rock named Cheyava Falls. | Image Credits: NASA/JPL-Caltech/MSSS

    For the demo, NASA used vision-language models (VLMs) to set rover waypoints, a task normally done by humans. The demo ran from Dec. 8 to 10 and was led by the NASA Jet Propulsion Laboratory (JPL) in Southern California.

    “This demonstration highlights how much our capabilities have progressed and expands how we can explore other worlds,” said NASA Administrator Jared Isaacman. “Autonomous technologies like this help missions operate more efficiently, navigate challenging terrain, and boost science returns as the distance from Earth increases. It’s an excellent example of teams applying new technology carefully and responsibly in real operations.”

    The VLMs examined data from JPL’s surface mission dataset, using the same images and information that human planners depend on to create waypoints—specific locations where the rover receives new instructions.

    The project was managed by JPL’s Rover Operations Center (ROC) in partnership with Anthropic, utilizing the company’s Claude AI models.

    NASA Puts AI in the Driver’s Seat

    Mars lies about 140 million miles (225 million km) from Earth on average, creating a significant communication delay that makes real-time remote control impossible.
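
    The delay follows directly from the distance; a one-line calculation shows why joystick-style remote driving is impossible:

    ```python
    AVG_DISTANCE_M = 225e9         # average Earth-Mars distance (~140 million miles)
    LIGHT_SPEED_M_S = 299_792_458  # speed of light in a vacuum

    one_way_s = AVG_DISTANCE_M / LIGHT_SPEED_M_S
    print(f"one-way signal delay: {one_way_s / 60:.1f} minutes")  # ~12.5 minutes
    ```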

    For the past 28 years, human “drivers” have planned and executed mission routes. They analyze terrain and rover data, then map out paths using waypoints.

    A key constraint is that waypoints must be no more than 330 ft (100 m) apart to prevent hazards. Once finalized, the drivers send the route via NASA’s Deep Space Network, and the rover executes it.

    Now, with Perseverance, NASA is taking a new approach. Generative AI studied HiRISE orbital images and terrain slope data from digital elevation models.

    The AI identified key terrain features—such as bedrock, outcrops, hazardous boulder fields, and sand ripples—and generated a continuous path complete with waypoints.
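
    To make the spacing rule concrete, here is a minimal Python sketch (illustrative only, not JPL’s planning code) that subdivides a planned path so no two consecutive waypoints exceed the 100 m limit:

    ```python
    import math

    MAX_SPACING_M = 100.0  # waypoints must be no more than ~330 ft (100 m) apart

    def densify_path(path, max_spacing=MAX_SPACING_M):
        """Insert intermediate waypoints so that no two consecutive
        waypoints exceed the maximum allowed spacing."""
        waypoints = [path[0]]
        for (x0, y0), (x1, y1) in zip(path, path[1:]):
            dist = math.hypot(x1 - x0, y1 - y0)
            n = max(1, math.ceil(dist / max_spacing))  # legs for this segment
            for i in range(1, n + 1):
                t = i / n
                waypoints.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
        return waypoints

    # A 210 m straight-line drive gets split into three ~70 m legs.
    print(densify_path([(0.0, 0.0), (210.0, 0.0)]))
    ```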

    To ensure the AI’s instructions worked seamlessly with the rover’s flight software, engineers tested the drive commands using JPL’s digital twin, which verified over 500,000 telemetry variables before NASA transmitted the commands to Mars.
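
    As a rough illustration of that kind of pre-transmission check (the variable names and limits below are hypothetical, not JPL’s), a bounds test over telemetry values might look like:

    ```python
    def verify_telemetry(telemetry, limits):
        """Return the (name, value) pairs that fall outside their allowed
        range; an empty list means the plan passes this check."""
        violations = []
        for name, value in telemetry.items():
            lo, hi = limits.get(name, (float("-inf"), float("inf")))
            if not (lo <= value <= hi):
                violations.append((name, value))
        return violations

    # Hypothetical values; the real check covered over 500,000 variables.
    limits = {"wheel_current_a": (0.0, 25.0), "tilt_deg": (0.0, 30.0)}
    print(verify_telemetry({"wheel_current_a": 12.3, "tilt_deg": 34.0}, limits))
    ```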

    On Dec. 8, using generative AI–planned waypoints, Perseverance drove 689 ft (210 m), followed by 807 ft (246 m) two days later.

    “The core elements of generative AI show great promise for streamlining off-planet autonomous navigation—perception (identifying rocks and ripples), localization (knowing our position), and planning and control (choosing and following the safest path),” said Vandi Verma, a JPL space roboticist on the Perseverance team.

    “We’re moving toward a future where generative AI and other smart tools will enable surface rovers to cover kilometer-scale drives, reduce operator workload, and highlight interesting features for scientists by analyzing vast amounts of rover imagery,” she added.

    What Lies Ahead for Perseverance?

    This orbital image depicts the AI-planned (in magenta) and actual (orange) routes the Perseverance Mars rover took during its Dec. 10, 2025, drive at Jezero Crater. The drive was the second of two demonstrations incorporating generative AI into rover route planning. | Image Credits: NASA/JPL-Caltech

    NASA hopes the technology tested with Perseverance can benefit a wide range of applications.

    “Imagine intelligent systems not just on Earth, but also deployed on our rovers, helicopters, drones, and other surface assets, trained with the combined expertise of NASA engineers, scientists, and astronauts,” said Matt Wallace, manager of JPL’s Exploration Systems Office. “This technology is key to building the systems for a permanent Moon presence and U.S. missions to Mars and beyond.”

    Since landing, Perseverance has been collecting rock samples for NASA’s Mars Sample Return campaign. However, the timeline for the campaign—originally set for a 2027 launch—is now uncertain.

    In May 2025, the Trump administration proposed canceling the MSR program in NASA’s 2026 budget. Last month, Congress approved a budget that does not fund the MSR, effectively ending the mission.

    The MSR campaign was a collaborative effort between NASA and the European Space Agency (ESA), and it remains uncertain how—or if—the ESA will continue the project without NASA’s involvement.


    Read the original article on: The Robot Report

    Read more: ISS Crew Safely Returns to Earth after a Medical Evacuation

  • A Floating Umbrella Trails the User as they Walk in the Rain

    Walking in the rain with an umbrella is rarely hands-free. One hand holds the umbrella, the other juggles a bag or phone—and a gust of wind can spoil it all. That’s why a floating, hands-free umbrella is so appealing.
    YouTuber John Xu beneath his autonomous flying umbrella, which hovers above him and shelters him from rain. | Image Credits: John Xu/I Build Stuff

    That’s the premise behind a flying umbrella created by YouTuber and maker John Xu from the I Build Stuff channel. He debuted a drone-powered umbrella in 2024, and it was undeniably impressive. Still, viewers quickly noticed a major drawback: it had to be controlled with a handheld remote. The feedback was straightforward—technically impressive, but not very practical.

    In the original 2024 version, Xu’s flying umbrella required manual control, limiting its practicality despite its spectacle. | Image Credits: John Xu/I Build Stuff

    Xu took that feedback seriously and spent the next two years developing a flying umbrella that could actually track and follow him. The end result is genuinely striking.

    A Sci-Fi Proof of Concept Takes Flight

    His original version was boldly experimental. It used a custom-built, X-shaped quadcopter to lift and maneuver the umbrella, creating a distinctly sci-fi vision of rain protection from above. The design was both ingenious and a little ridiculous—but it proved its point. As a proof of concept, it demonstrated that overhead, hands-free rain coverage was possible.

    The flaw, however, was hard to ignore: the umbrella required manual control. Instead of freeing the user’s hands, it demanded both of them and introduced yet another gadget to operate. YouTube viewers quickly called this out, repeatedly sharing the same request: “Now make it follow you.”

    That feedback became the foundation for Xu’s redesign, which he began later in 2024. His goal was to make the umbrella fully autonomous, though reaching that point involved several missteps. GPS tracking turned out to be too imprecise, with position errors of several meters. On top of that, his decision to make both the umbrella and its drone core foldable introduced significant mechanical challenges.

    The turning point came with the use of a time-of-flight camera, enabling the umbrella to lock onto and follow a user directly—even in low-light conditions. The system wasn’t flawless; it didn’t remain perfectly centered overhead at all times. Still, it functioned well enough to fundamentally change the project. What began as a quirky experiment evolved into something genuinely practical, and viewers on YouTube noticed the difference.
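
    A follow-me loop of this kind can be sketched in a few lines. The snippet below assumes the time-of-flight camera reports the user’s offset from the frame center and their distance; it is a generic proportional controller under those assumptions, not Xu’s actual code:

    ```python
    KP_XY = 0.8          # gain for horizontal re-centering
    KP_Z = 0.5           # gain for holding the hover offset
    TARGET_DIST_M = 1.2  # desired distance above the user's head

    def follow_step(offset_x, offset_y, distance_m):
        """Compute velocity commands (vx, vy, vz) from one ToF measurement.

        offset_x, offset_y: user's normalized offset from frame center (-1..1)
        distance_m:         measured camera-to-user distance
        """
        vx = -KP_XY * offset_x                    # drift to re-center laterally
        vy = -KP_XY * offset_y                    # drift to re-center fore/aft
        vz = KP_Z * (distance_m - TARGET_DIST_M)  # climb/descend to hold offset
        return vx, vy, vz

    print(follow_step(0.25, -0.1, 1.5))  # -> roughly (-0.2, 0.08, 0.15)
    ```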

    Big Questions Hover Over a Bold Idea

    Naturally, a flying umbrella also brings its own set of concerns. Strong winds, heavy rain, short battery life, and noisy spinning rotors all raise serious questions about practicality and safety.

    YouTuber John Xu’s project collaborator Henson, a Stanford computer science student, tests the autonomous “follow-me” umbrella as it tracks his movements. | Image Credits: John Xu/I Build Stuff

    Commenters were quick to raise these issues as well, questioning whether a device like this could ever be safe or socially acceptable. Xu didn’t dispute those concerns. His goal wasn’t to replace traditional umbrellas anytime soon, but to explore the idea as a personal, experimental drone. When it finally worked, the result was striking: hands-free rain protection with steady overhead coverage.

    A Glimpse at a More Adaptive, Autonomous Future

    The importance of this fully autonomous, hands-free rain protection project isn’t that flying umbrellas are headed for mass production. Rather, it points to a larger shift toward autonomous technologies designed to adapt to people, instead of forcing people to adapt to them.

    In that sense, the umbrella is a lighthearted, experimental glimpse of what’s possible as sensing and autonomy continue to advance. It may never replace a classic umbrella, but it shows how a bit of imagination can turn even the most ordinary objects into something unexpected.


    Read the original article on: Newatlas

    Read more: Chinese Humanoid Robot First to Connect with an Orbiting Satellite

  • Chinese Humanoid Robot First to Connect with an Orbiting Satellite

    A China-developed humanoid robot has made history by directly connecting to a low Earth orbit satellite without relying on traditional ground networks. Known as Embodied Tien Kung, it became the first humanoid robot to autonomously carry out this type of satellite communication.
    Image Credits: X-Humanoid/reproduced from social media

    The achievement was unveiled by X-Humanoid on Friday the 23rd at the 3rd Beijing Conference on Promoting the High-Quality Development of the Commercial Space Industry, an event highlighting recent progress in China’s space sector.

    Robot Achieves Real-Time Satellite Communication

    During the test, the robot linked to a new GalaxySpace internet satellite fitted with an electronically scanned array antenna and a flat-panel system integrated into the satellite architecture. The connection enabled stable, real-time transmission of images and data, even in the absence of ground-based infrastructure.

    Organizers said the demonstration marked China’s first multi-terminal, multi-link connection using a low-orbit satellite with this setup. Alongside the humanoid robot, smartphones and computers also successfully accessed the network during the test.

    To demonstrate a real-world application, researchers assigned Embodied Tien Kung a task: retrieve a symbolic project completion certificate placed inside an autonomous vehicle. As part of the experiment, the unmanned car drove itself to the newly opened Avenue of Rockets.

    When the satellite passed overhead, the robot independently determined the optimal time to communicate, ran system checks, and established a direct connection with the low-orbit satellite. Once connected, it approached the vehicle, collected the certificate, and delivered it to another building.

    Humanoid Sends Real-Time Motion and Visual Data to Control Center

    During the mission, the humanoid converted its movements, joint data, and images from its front camera into digital information and transmitted them to the satellite. The system relayed this data to the control center almost in real time, enabling operators to track the operation both from the robot’s viewpoint and through external monitoring tools.
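
    Conceptually, each transmitted sample just bundles a timestamp, the joint states, and a reference to the matching camera frame. The sketch below shows one hypothetical packet format; X-Humanoid’s actual protocol has not been published:

    ```python
    import json
    import time

    def make_telemetry_packet(joint_angles_deg, frame_id):
        """Serialize one robot state sample for uplink (illustrative only)."""
        return json.dumps({
            "timestamp": time.time(),        # when the sample was taken
            "frame_id": frame_id,            # index of the matching camera frame
            "joints_deg": joint_angles_deg,  # one angle per actuated joint
        }).encode("utf-8")

    packet = make_telemetry_packet([12.5, -3.0, 47.2], frame_id=1042)
    print(len(packet), "bytes queued for the satellite link")
    ```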

    The trial highlighted the potential of satellite-connected humanoid robots to carry out physical tasks in areas lacking conventional internet access. Such capabilities could prove vital in remote locations, disaster zones, and other high-risk environments.

    Satellite-Linked Robots Expand Remote Operations

    By overcoming geographic constraints, the technology opens new possibilities for technical inspections, emergency response, field exploration, and mining. In hazardous situations, robots could take on critical roles, improving both safety and efficiency.

    This milestone builds on earlier accomplishments by Embodied Tien Kung. In February 2025, the robot drew attention after climbing 134 outdoor steps at Haizi Wall Park in Beijing, becoming the first humanoid to master such a demanding outdoor challenge.


    Read the original article on: engenhariae

    Read more: Scientists use a Spinach Leaf to Make an Artificial Heart

  • The Robot Speaks with Correctly Coordinated Lip Motions

    People direct nearly half of their attention to their conversation partner’s lip movements. In contrast, robots typically have only simplified “caricature” lips and mouths that don’t move in sync with the sounds they produce via their speakers.
    Image Credits: Jane Nisselson/Columbia Engineering

    Yuhang Hu and his team at Columbia University saw this as a major limitation, describing facial expression as the “missing link” in robotics.

    Bringing Robotic Faces to Life

    To address it, they developed a robot that can, for the first time, learn realistic lip movements for tasks like speaking and singing. In demonstrations, the robot successfully articulates words in multiple languages and even performs a song from an AI-generated debut album called Hello World.

    The robot learns through observation rather than pre-programmed rules. Initially, it practiced using its 26 facial motors by watching itself in a mirror. Then it learned to mimic human lip movements by analyzing hours of YouTube videos. Like other AI systems, its performance improves with more training.

    “When lip-syncing is combined with conversational AI, such as ChatGPT or Gemini, it deepens the connection a robot can form with humans,” Hu explained. “The more the robot observes human interactions, the better it becomes at replicating subtle facial expressions, allowing for richer emotional engagement.”

    Creating realistic lip movements in robots is difficult for two main reasons. First, it requires specialized hardware with flexible facial “skin” and many tiny motors that operate quickly, silently, and precisely. Second, the patterns of lip motion are highly complex, dictated by the sequence of vocal sounds and phonemes.

    Humans have about 30 facial and oral muscles beneath the skin that naturally coordinate with the vocal cords and lips. In fact, producing full speech engages 70 to 100 muscles. Robotic faces are typically rigid, with pre-programmed lip movements that look artificial and awkward.

    Teaching a Robot to Learn Facial Expressions Through Self-Observation

    Hu tackled these challenges by designing a flexible, highly articulated robot face with 26 motors. The robot first learned how its own face moved by observing itself in a mirror. Much like a child experimenting with facial expressions, it generated thousands of random movements. Gradually, it learned to control its motors to create specific expressions—an approach the team calls the “vision-action” language model.

    Once the robot mastered this basic control, it was trained by watching videos of people speaking and singing. This allowed its AI to learn how human lips move in relation to various sounds. Combining these two learning processes, the robot became capable of translating audio directly into realistic lip movements.
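
    As a toy numerical sketch of that two-stage idea (with simple linear models standing in for the team’s neural networks), the robot first learns a command-to-face mapping from random “babbling,” then inverts it to hit target lip shapes:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Stage 1: motor babbling in front of a "mirror". Issue random motor
    # commands, record the observed lip landmarks, and fit a linear
    # approximation of command -> face shape.
    N_MOTORS, N_LANDMARKS, N_SAMPLES = 26, 8, 2000
    commands = rng.uniform(-1, 1, (N_SAMPLES, N_MOTORS))
    true_map = rng.normal(size=(N_MOTORS, N_LANDMARKS))  # unknown "physics"
    landmarks = (commands @ true_map
                 + 0.01 * rng.normal(size=(N_SAMPLES, N_LANDMARKS)))
    learned_map, *_ = np.linalg.lstsq(commands, landmarks, rcond=None)

    # Stage 2: invert the model. Given target lip landmarks (e.g. extracted
    # from video of a person speaking), solve for motor commands that best
    # reproduce them.
    target = rng.normal(size=N_LANDMARKS)
    motor_cmd, *_ = np.linalg.lstsq(learned_map.T, target, rcond=None)
    print("residual:", np.linalg.norm(motor_cmd @ learned_map - target))
    ```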

    The researchers admit the robot’s lip-syncing is not yet perfect. “We encountered challenges with strong sounds like ‘B’ and with sounds that require pursed lips, like ‘W’,” said Professor Hod Lipson, team coordinator. “However, these skills are expected to improve over time. This technology holds great potential, but we must advance cautiously to maximize benefits while minimizing risks.”


    Read the original article on: Inovacao Tecnologica

    Read more: Scientists Developed a Robotic Hand that Detaches and Walks on its Own

  • BepiColombo Mio and GEOTAIL Observe Similar Magnetosphere Wave Frequencies

    Right: Mercury Magnetospheric Orbiter Mio of the BepiColombo mission; Left: Earth’s GEOTAIL satellite. The illustration highlights comparative studies of planetary magnetospheres. Image Credit: Mercury image: NASA / Johns Hopkins University Applied Physics Laboratory / Carnegie Institution of Washington, BepiColombo spacecraft image: ESA, Earth image: NASA

    An international team from Kanazawa University (Japan), Tohoku University (Japan), LPP (France), and collaborators has shown that chorus emissions—natural electromagnetic waves well-known in Earth’s magnetosphere—also appear in Mercury’s magnetosphere, displaying similar chirping frequency patterns.

    BepiColombo’s Mio recorded audible plasma waves during six Mercury flybys from 2021 to 2025. Comparison with decades of GEOTAIL data showed matching instantaneous frequency changes.

    This offers the first solid evidence of strong electron activity at Mercury, enhancing our understanding of auroral processes throughout the solar system.

    The Importance of Chorus Emissions

    Chorus emissions are electromagnetic waves produced when electrons interact with plasma waves within a magnetosphere. On Earth, they are essential in shaping and depleting radiation belts.

    These waves feature rising and falling tones in the audible frequency range, earning the nickname “birdsong” because they sound like chirping birds when picked up by radio receivers and converted to audio.

    Since the energy of the electrons involved is linked to the wave frequency, studying chorus emissions is vital for predicting space weather and safeguarding satellites from radiation.
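
    The link runs through the local magnetic field: chorus typically appears at a fraction (roughly 0.1 to 0.8) of the electron cyclotron frequency f_ce = qB/(2πm). A quick calculation with rough surface field strengths (out in the magnetosphere, where chorus is actually observed, B and hence f_ce are far lower) shows why these waves land in or near the audible range:

    ```python
    import math

    E_CHARGE = 1.602e-19  # electron charge, C
    E_MASS = 9.109e-31    # electron mass, kg

    def gyrofrequency_hz(b_tesla):
        """Electron cyclotron frequency f_ce = qB / (2*pi*m)."""
        return E_CHARGE * b_tesla / (2 * math.pi * E_MASS)

    # Order-of-magnitude surface fields: Earth ~31,000 nT, Mercury ~1% of that.
    for planet, b_nt in [("Earth", 31_000), ("Mercury", 300)]:
        f_ce = gyrofrequency_hz(b_nt * 1e-9)
        print(f"{planet}: f_ce ~ {f_ce:,.0f} Hz, "
              f"chorus band roughly {0.1 * f_ce:,.0f}-{0.8 * f_ce:,.0f} Hz")
    ```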

    Importance of GEOTAIL and Mercury Measurements

    Launched in 1992 by Japan and the United States, the GEOTAIL satellite studied Earth’s magnetotail for 30 years, yielding critical insights into chorus generation, distribution, and frequency characteristics.

    Mercury, with a magnetic field roughly one‑hundredth of Earth’s, remained largely unexplored in this regard. BepiColombo’s Mio detected audible plasma waves, suggesting chorus emissions and cold electrons near Mercury.

    This success stemmed from a targeted effort to apply decades of Earth-based magnetosphere research to Mercury. Mio’s Plasma Wave Investigation instrument was specifically designed to test theoretical predictions of chorus emissions in Mercury’s weak magnetic environment.

    Decades of GEOTAIL data served as a crucial reference for comparison. Positioned in the distant magnetotail around 10 Earth radii away, GEOTAIL experienced conditions similar to Mercury’s much smaller magnetosphere. Plasma wave measurements from Mercury closely aligned with GEOTAIL’s chorus patterns, confirming:

    Frequency variation: swift rising and falling tones, reflecting nonlinear interactions between electrons and waves.
    Spatial distribution: focused in the dawnside region, where energetic electrons predominantly move.

    Impact on Planetary Science and Future Studies

    These results reveal that chorus generation operates similarly across planetary magnetospheres. They also confirm cold electrons near Mercury and pave the way for Mio’s 2027 orbital studies.

    On Earth, chorus emissions drive hazardous radiation belt electrons; applying this knowledge to Mercury improves space weather forecasting and spacecraft radiation protection.

    Despite Mercury’s weak magnetic field, variable-frequency chorus emissions show efficient electron acceleration occurs. Mio will enter orbit around Mercury in late 2026 to study the spatial distribution, frequency behavior, and origins of cold electrons in detail.

    This breakthrough paves the way for comparative studies of planets like Mars, Jupiter, and Saturn. Exploring how auroral phenomena occur on Mercury and beyond, in addition to Earth, will greatly enhance our understanding of planetary space environments and the universal behavior of plasma processes.


    Read the original article on: Phys.Org

    Read more: NASA Aims to Deploy a Nuclear Reactor on the Moon and This Is the Reason

  • A New Artificial Skin Aims to Give Humanoid Robots the Sensation of Pain

    For years, humanoid robots have been built to be strong, precise, and durable. They rely on cameras for vision, sensors to gauge force, and highly accurate systems to carry out tasks. What they’ve long lacked is the ability to sense and respond to their own bodies. That gap is now starting to close thanks to a breakthrough by researchers from universities in Shanghai and Hong Kong.
    Image Credits: © Astrid Eckert/TUM

    The team has created a flexible robotic skin that can detect touch, impact, and physical damage, effectively acting as an artificial nervous system. This development enables robots to identify potentially harmful situations, serving a role similar to how humans experience pain or discomfort.

    Image Credits: tmeier1964

    Unlike conventional sensors that focus on specific spots, this new skin envelops the robot’s entire body, making the arms, legs, and torso act as a single continuous sensor.

    The system relies on flexible, pressure-responsive materials that can detect small changes caused by impacts, deformation, or wear. Rather than depending only on cameras or motor force readings, the robot gains a direct awareness of what is happening to its own body.

    This heightened sensitivity enables quicker and smarter reactions to unexpected events, which is especially important for robots working close to humans.
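
    In software terms, whole-body sensing reduces to scanning a grid of pressure readings every control cycle and reacting when any cell crosses a safety threshold. A minimal sketch follows (the grid size, units, and threshold are invented for illustration; the published system’s details differ):

    ```python
    import numpy as np

    IMPACT_THRESHOLD = 5.0  # assumed pressure units; illustrative only

    def check_skin(pressure_grid):
        """Return (row, col, value) for every taxel above the threshold."""
        hits = np.argwhere(pressure_grid > IMPACT_THRESHOLD)
        return [(int(r), int(c), float(pressure_grid[r, c])) for r, c in hits]

    frame = np.zeros((16, 16))  # one 16x16 patch of taxels on a limb
    frame[4, 7] = 9.3           # simulated impact
    for r, c, p in check_skin(frame):
        print(f"impact at taxel ({r},{c}): {p} -> trigger protective stop")
    ```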

    Practical Benefits in Everyday Scenarios

    The advantages are easy to imagine in everyday situations. For example, if a robot is carrying heavy furniture and an object drops on its foot, a traditional robot might keep moving, unaware of the damage, increasing the risk of falling or further harm.

    With the new skin, the impact would be sensed instantly. The robot could stop, adjust its position, or activate safety measures to reduce danger to itself and to nearby people.

    Such responsiveness is essential in settings like homes, hospitals, factories, and logistics hubs, where mechanical failures can result in serious accidents.

    Another key advantage is the ability to detect minor, nearly invisible damage. Tiny cracks or deformations in the outer layer can let dust or moisture seep in, gradually harming internal components.

    Early Detection and Modular Design for Easy Maintenance

    The new robotic skin can spot these issues early, before they escalate. It also features a modular design, letting users replace damaged sections with simple “patches” instead of swapping the entire skin.

    This approach lowers maintenance costs, extends the robot’s operational life, and makes humanoid robots more practical for long-term, real-world use.

    Image Credits: koshinuke_mcfly

    While the research is currently centered on humanoid robots, the team notes that the technology has much broader potential. Advanced prosthetics, for instance, could gain from responsive surfaces that deliver tactile feedback to users.

    Other possible applications include protective gear, rescue tools, and medical devices. In high-risk situations, the ability to sense excessive pressure, heat, or impact can be critical for preventing injuries or system failures.

    The researchers stress that the aim is not to give robots human-like emotions. The concept of “pain” in this context is purely functional, not a conscious or subjective sensation.

    Enhancing Safety and Reliability Around Humans

    The ultimate goal is to develop safer, more dependable machines that can operate alongside people in a predictable manner. By detecting risks and damage early, robots can respond proactively, reducing accidents and building trust in these technologies.

    As humanoid robots move beyond the lab and into everyday environments, innovations like artificial skin may play a crucial role—not in humanizing machines, but in making them more physically aware and better adapted to the human world.


    Read the original article on: Gizmodo

    Read more: A Supercomputer Builds one of the Most Lifelike Virtual Brains ever Created

  • China Unveils a Humanoid Robot with Smooth, Human-like Balance

    Chinese startup Matrix Robotics has officially introduced MATRIX-3, its third-generation humanoid robot, representing a significant advance in physical AI.
    MATRIX-3 moves humanoid robots from pre-set tasks to adapting to and understanding the real world, ready for everyday life.

    The platform is a complete from-scratch overhaul of algorithms, hardware, and applications, moving humanoid robots beyond rigid task performance toward flexible, real-world interaction.

    Built to be safe, autonomous, and highly adaptable, MATRIX-3 integrates biomimetic perception, precise manipulation, natural human-like motion, and a new cognitive core that supports zero-shot learning.

    The company says the robot is designed to operate beyond factories, extending into commercial, healthcare, and household environments.

    Advancing Adaptive, Human-Like AI

    MATRIX-3 is framed as a significant step forward in physical artificial intelligence. Designed as a safe, autonomous, and adaptable platform, it handles complex, human-like tasks in real-world conditions.

    “Our vision with MATRIX-3 is to bring machine intelligence into human environments in the most natural and secure way possible,” said Allen Zhang, CEO of Matrix Robotics, in a statement.

    MATRIX-3 features a biomimetic interface with “skin” and touch, covered in flexible 3D fabric that houses an underlying sensor network. This design absorbs physical contact and monitors impact forces in real time, enhancing safety during close interactions with people.

    Advanced Visual–Tactile Perception for Precise and Safe Manipulation

    A multimodal perception system combines high-sensitivity fingertip sensors with advanced vision, creating a visual–tactile loop that lets MATRIX-3 safely handle fragile and flexible objects.

    MATRIX-3 also marks a major advance in mobility and manipulation. Its dexterous 27-DOF hand mimics human anatomy with lightweight, cable-driven actuation for fast, precise motion. This allows the robot to handle everyday tools, operate delicate equipment, and manipulate soft materials like fabrics.

    Whole-body movement uses a natural gait generated by a motion control model trained on human motion-capture data. Built-in linear actuators deliver high power density with minimal noise, allowing for stable, agile, and well-coordinated full-body motion.

    Matrix’s intelligence division built a new cognitive core that underpins these abilities. Its neural network enables zero-shot learning, letting MATRIX-3 perform new tasks from natural-language instructions without task-specific training.

    Using universal intelligent manipulation, the robot can autonomously plan its grasps, modulate force in real time, and navigate around obstacles through smooth hand–eye coordination.
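
    The force-modulation idea can be illustrated with a simple closed loop: tighten the grip until the fingertip sensors report the target force, easing off on overshoot. The sketch below is a generic proportional controller under assumed interfaces, not Matrix Robotics’ implementation:

    ```python
    def modulate_grip(target_force_n, read_fingertip_force, set_motor_torque,
                      kp=0.4, steps=50):
        """Drive fingertip force toward target_force_n by accumulating
        proportional corrections to the motor torque."""
        torque = 0.0
        for _ in range(steps):
            error = target_force_n - read_fingertip_force()
            torque = max(0.0, torque + kp * error)  # never a negative grip
            set_motor_torque(torque)
        return torque

    # Tiny simulated fingertip: sensed force rises with applied torque.
    state = {"torque": 0.0}
    read = lambda: 2.0 * state["torque"]
    write = lambda t: state.update(torque=t)
    print("final torque:", modulate_grip(1.0, read, write))  # converges to 0.5
    ```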

    Real-World Performance Yet to Be Verified

    So far, videos show MATRIX-3’s capabilities, but researchers have not yet verified its real-world performance; consistently replicating its hand dexterity would mark a major robotics breakthrough.

    Matrix Robotics has launched an early access program for select industry partners, with pilot deployments of MATRIX-3 expected to begin in mid-2026.


    Read the original article on: Interestingengineering

    Read more: Rainbow Around Nearby Dead Star Puzzles Scientists

  • Scientists Develop the First Animal-Free Model of Artificial Brain Tissue

    For the first time, scientists have successfully grown functional brain tissue in the lab without using any animal-derived materials or natural biological coatings.
    Image Credits: guiadafarmacia

    Unlike the well-known organoids—often called mini-brains—which are simplified organs created from living cells, these lab-grown structures are designed to closely mimic the natural brain environment.

    The Challenge of Animal-Based Coatings in Neural Tissue Engineering

    A major challenge in neural tissue engineering has been the dependence on animal-derived coatings, such as laminin, which help cells attach and grow. These poorly defined materials hinder reproducibility: one researcher may achieve good results, but others often fail to replicate them.

    At the same time, researchers still rely heavily on rodent brains, whose genetic and physiological differences from humans limit the applicability of their findings.

    Prince Okoro and his team at the University of California, Riverside (USA) have developed a novel scaffold using the widely available, chemically inert polymer polyethylene glycol (PEG).

    Although cells typically do not stick to PEG, attachment is essential for their growth. Okoro found a way to make PEG biologically active by shaping it into a porous, textured, and interconnected structure that replicates the intricate environment of the brain.


    Read the original article on: Guiadafarmacia

    Read more: Scientists Develop a Shape-Shifting Material Activated by a String Pull