Tag: virtual

  • A Supercomputer Builds One of the Most Lifelike Virtual Brains Ever Created

    Understanding how the brain functions is challenging, since living brains are difficult to directly examine. To address this, scientists have developed an advanced simulation of a mouse brain, one of the most detailed models ever created.
    Image Credits: K H FUNG/SCIENCE PHOTO LIBRARY

    The project was led by researchers from the Allen Institute in the US and the University of Electro-Communications in Japan, and it could help scientists study neurological disorders like Alzheimer’s in more depth.

    A Complete Mouse Cortex Model With Human-Relevant Insights

    The model represents an entire mouse cortex. Though smaller and simpler than the human brain, it shares key neural similarities, making it a valuable research tool.

    The scale of the simulation is striking: the virtual brain comprises 9 million neurons, 26 billion synapses, and 86 brain regions, and running it demands quadrillions of calculations per second.

    By comparison, an actual mouse brain holds around 70 million neurons packed into a structure roughly the size of an almond.

    “This demonstrates that the possibilities are now within reach,” says computational neuroscientist Anton Arkhipov of the Allen Institute. “With sufficient computing power, we can successfully run these types of brain simulations.”

    He adds that the achievement represents a major technical breakthrough, proving researchers can build far larger models accurately and at scale.

    The simulation enables researchers to track the activity of individual neurons. Image Credits: Kuriyama et al., 2025

    The detailed simulation gives scientists a way to study how cognition, consciousness, and disease unfold in the brain. It functions as a dynamic, three-dimensional map, revealing individual neurons as they fire and form connections.

    Researchers say the model could be used to explore how seizures propagate or how brain waves influence attention, all without relying on repeated or invasive brain scans.

    Fugaku Supercomputer Powers the Brain Simulation

    The massive computational demands were met by Japan’s Fugaku supercomputer, which combined existing cellular data and brain maps to construct the model. In addition, the team created new software to optimize the processing of neural activity and reduce unnecessary calculations.
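
    The article does not describe how the team’s software trims unnecessary calculations. One common technique in large-scale neural simulation is event-driven updating, where only neurons that actually spiked in a given step propagate input to their targets. The sketch below is a minimal, purely illustrative example of that general idea, with toy parameters chosen for the example; it is an assumption about the technique, not the team’s actual code.

    ```python
    import numpy as np

    # Toy sketch of event-driven spike propagation (an assumed technique,
    # not the team's software): only neurons that fired this step
    # contribute input, so most synapse updates are skipped.
    rng = np.random.default_rng(0)
    n_neurons = 1_000
    weights = rng.normal(0.0, 0.1, (n_neurons, n_neurons))  # toy connectivity

    potential = np.zeros(n_neurons)
    threshold = 1.0

    for step in range(100):
        fired = np.flatnonzero(potential >= threshold)  # usually a small set
        potential[fired] = 0.0                          # reset spiking neurons
        if fired.size:
            # Propagate input only from neurons that spiked, instead of
            # multiplying the full weight matrix every step.
            potential += weights[fired].sum(axis=0)
        potential += rng.normal(0.01, 0.05, n_neurons)  # background drive
    ```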

    “Fugaku aids research in fields like astronomy, weather, and drug development, tackling major societal challenges,” says computer scientist Tadashi Yamazaki. “In this project, we applied Fugaku to simulate neural circuits.”

    The brain plays a vital role in overall physical and mental well-being, as well as healthy aging. Studying virtual and mini brain models is crucial for understanding brain function and decline.

    Early Discoveries From the Virtual Brain Model

    The research team is already using the model to uncover insights into brain wave synchronization and the interaction between the mouse brain’s two hemispheres.

    While the achievement represents a remarkable advance in both computing and biological modeling, the researchers are aiming even higher. Their ultimate objective is to construct a complete virtual model of the human brain.

    “Our goal is to build complete brain models, including human, using our institute’s biological data,” says Arkhipov. “We are now transitioning from simulating individual brain regions to modeling the entire mouse brain.”

    The findings were presented at the SC25 supercomputing conference and are available online.


    Read the original article on: ScienceAlert

    Read more: China Unveils a Humanoid Robot with Smooth, Human-like Balance

  • New Haptic Technology Brings the Sensation of Touch to Virtual Reality Experiences

    USC researchers have created a wearable system that allows for more natural and emotionally rich interactions in shared virtual environments, expanding opportunities in remote work, education, healthcare, and more.
    Image Credits: Premankur Banerjee

    Restoring the Power of Touch in a Digital World

    Touch is essential to human communication and connection, helping build trust, regulate stress, and form emotional bonds from infancy through adulthood. Yet in today’s digital world, where many interactions happen through screens, physical contact is often absent.

    To address this, researchers at the USC Viterbi School of Engineering have created a wearable haptic system that allows users to share and feel physical gestures—like handshakes, pats, and squeezes—in virtual reality, even across long distances. They detailed their work in a paper published on the arXiv preprint server.

    Wearable Devices Bring Realistic Touch to Virtual Interaction

    The system features gloves and sleeves equipped with small vibration motors that mimic pressure and motion, enabling users to engage with both virtual objects and each other through realistic touch feedback.

    A user study, also presented at the IEEE World Haptics Conference, found that participants felt more connected and engaged when they could physically feel virtual gestures.

    “Even as people spend more time socializing online, we’re seeing increased rates of depression, anxiety, and what’s known as ‘touch starvation,’” said Heather Culbertson, associate professor at USC Viterbi and senior author of the study. “Virtual interactions are here to stay—but we need to make them better mirror the emotional benefits of real-life experiences.”

    Image Credits: Premankur Banerjee

    The system allows up to 16 users to connect at once, each represented by a full-body 3D avatar that mimics their real-world movements in a shared virtual space. Unlike video calls, users can move freely, interact with each other, and engage with virtual objects—such as passing items or collaborating on tasks.

    “This project came from a simple but powerful human need—to feel close to those we miss,” said Premankur Banerjee, a PhD student in Heather Culbertson’s Haptics Robotics and Virtual Interaction Lab and the study’s lead author.

    Making Long-Distance Communication Feel Close

    “After spending over five years away from my own family, this work became personal. It’s not just about creating a sense of presence, but about bringing back the feeling of physical closeness in long-distance communication,” he said.

    To recreate touch, users wear gloves and armbands with vibration motors that simulate motion and pressure, allowing them to feel gestures and interactions in VR.
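
    The paper’s control scheme isn’t spelled out here, but the idea of motors mimicking pressure and motion can be pictured as a mapping from a gesture and its intensity to per-motor vibration levels. The sketch below is a hypothetical illustration: the motor count, gesture profiles, and function name are all assumptions for the example, not the USC system’s actual API.

    ```python
    # Hypothetical sketch (not the USC system's actual code): map a virtual
    # gesture and its pressure onto vibration amplitudes for an assumed
    # 8-motor sleeve. All profiles and values are illustrative guesses.

    # Assumed spatial profiles: which motors each gesture drives, and how hard.
    GESTURE_PROFILES = {
        "pat":       [0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0],  # brief, localized
        "squeeze":   [0.6, 0.8, 1.0, 1.0, 1.0, 1.0, 0.8, 0.6],  # broad, enveloping
        "handshake": [1.0, 0.9, 0.6, 0.3, 0.0, 0.0, 0.0, 0.0],  # focused near the hand
    }

    def motor_amplitudes(gesture: str, pressure: float) -> list[float]:
        """Scale a gesture's spatial profile by normalized pressure in [0, 1]."""
        pressure = max(0.0, min(1.0, pressure))
        return [round(level * pressure, 2) for level in GESTURE_PROFILES[gesture]]

    print(motor_amplitudes("squeeze", 0.75))
    # [0.45, 0.6, 0.75, 0.75, 0.75, 0.75, 0.6, 0.45]
    ```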

    Tests showed that participants experienced a stronger sense of connection and presence with tactile feedback. The study also examined how gesture speed and vibration type affect emotional and sensory perception, offering insights for designing more immersive touch-based experiences.

    Merging Science and Emotion in Communication Technology

    “Building this kind of technology demands collaboration across disciplines,” said Culbertson. “Our team integrates computer science, engineering, neuroscience, psychology, and social science to develop tools that are not just technically effective, but also enable emotionally rich social interaction.”

    The global move toward online communication—accelerated by the COVID-19 pandemic—has offered great convenience but also led to unintended effects. Despite being more digitally connected than ever, many people, especially younger generations, continue to struggle with loneliness, anxiety, and depression.

    “Platforms like Zoom and FaceTime help us stay visually and verbally connected, but they lack the physical interaction that humans naturally need,” said Culbertson.

    She emphasized that while this technology can’t replace real-life contact, it can meaningfully enhance social interaction when being together in person isn’t possible.

    Enhancing Care, Collaboration, and Closeness Across Distances

    In hospitals and long-term care settings, the system could help patients and loved ones share comforting physical gestures across distances. In remote work or learning environments, it enables more immersive, collaborative engagement. For families and friends separated by travel or deployment, it helps restore a deeper sense of closeness.

    “Touch is essential to human well-being. While technology can’t fully replicate it, bringing tactile experiences into virtual spaces is an important step toward more emotionally connected digital communication,” Culbertson said.


    Read the original article on: Tech Xplore

    Read more: Humanoid Robots Symbolize China’s Ambitions in AI

  • Spacetop Offers a Vast Virtual Workspace Right in Front of Your Eyes

    “Spacetop for Windows is the perfect marriage of AI and AR, built for productivity and privacy”
    Sightful

    In 2023, tech startup Sightful introduced an innovative system that combined hardware and software to project a massive virtual screen in front of the user’s eyes. Now, the company has decided to phase out the hardware component and adapt its platform to work with AI-powered laptops.

    Advances in AI Make Dedicated Hardware Obsolete

    According to Sightful, the rapid progress in computers with integrated neural processors has made dedicated hardware unnecessary. This shift has allowed the company to focus entirely on software development.

    As a result, the newly released Spacetop for Windows is compatible with laptops built on an architecture that combines NPU, CPU, and GPU – such as the Microsoft Surface Laptop for Business, Lenovo Yoga Slim, HP EliteBook, and Acer Swift Go 14. This is not an exhaustive list, and the number of compatible devices is expected to grow rapidly. Canalys estimates that by 2027, 60% of PCs will be AI-capable.

    A huge multi-window virtual workspace in front of your eyes
    Sightful

    As with the original version, users will still need to wear augmented reality glasses to view the 100-inch virtual display. The new $899 bundle includes the lightweight (83 g) Xreal Air 2 Ultra glasses and a one-year Spacetop subscription (after which the service will cost $200 per year). Optical lens inserts are also available: single-vision lenses for $50 and progressive lenses for $150.

    A Seamless Virtual Workspace Experience

    To use the system, users must either own or purchase a modern laptop that supports spatial computing. In return, they gain access to an expansive virtual workspace capable of handling multiple windows at once. The platform delivers a natural operating system experience and uses intuitive keyboard shortcuts that allow users to move windows or push them deeper into the virtual space.

    The software is compatible with everyday productivity apps for both personal and business use on Windows. Sightful also highlights that the system delivers bright and sharp visuals both indoors and outdoors, while offering private, immersive sessions shielded from prying eyes. A Travel Mode allows users to save their virtual workspace for use on the go.

    “Whether you’re on the plane or train, Spacetop for Windows provides a 100-inch private workspace to get things done”
    Sightful

    A Milestone for Sightful and AR Workspaces

    “We created AI PCs to enable innovations like Spacetop – and Spacetop fully unlocks the potential of this architecture,” said Tamir Berliner, co-founder and CEO of Sightful. “For four years, as we developed Spacetop, users and businesses constantly asked us, ‘When will it come to Windows?’ Today, we’re finally delivering on that request. The rise of AI laptops not only accelerated our mission to scale through software, but also unlocked incredible new possibilities for what an AR workspace can achieve when combined with the power of AI.”

    Spacetop for Windows is now available, with the bundle including AR glasses and a 12-month subscription for $899. For the first time, the system is also launching outside the United States, beginning in Germany in partnership with Deutsche Telekom. Check out the video below for more.


    Read the original article on: New Atlas

    Read more: 5K Light Field Display Brings 3D Objects to Life for Multiple Viewers

  • e-Taste Sprays Virtual Food Flavors Directly Into Your Mouth

    A new device called e-Taste could soon let you taste your video game food
    Depositphotos

    Virtual reality excels at engaging sight and sound, while developers continue making progress with touch. Even smell is beginning to play a role—for better or worse. That leaves one final sense: taste. Whether or not people truly want to experience flavors in virtual worlds, a new device is now tackling this uncharted territory.

    Taste is highly personal—not just because it involves putting something in your mouth, but also because it varies dramatically from person to person and even bite to bite. What starts as a delightful flavor in the first mouthful can become overwhelming by the end of a meal.

    Recreating the complex chemistry between food and the tongue digitally presents a major challenge. However, researchers at Ohio State University are taking a bold new approach with a device called e-Taste. Instead of relying on electrical or thermal stimulation—previous methods that attempted to trick the brain into tasting different flavors—this device pumps actual flavored chemicals directly into the mouth.

    e-Taste Device Breaks Down Flavors into Chemical Components for Realistic Tasting Experience

    Diagrams illustrating possible uses of the tech in a cooking game (F); a test setup of the e-Taste digital cup (G); a chart demonstrating how different tastes combine to create different foods (H); and how often people successfully identified the virtual flavors (I).
    Chen et al., Science Advances 2025

    The process begins by breaking down the five basic tastes into their respective chemical components. These include glucose for sweetness, salt for saltiness, citric acid for sourness, magnesium chloride for bitterness, and glutamate for umami. The e-Taste device houses these chemicals in separate capsules, releasing them in precise combinations and concentrations to mimic different foods. For instance, fruit juice might consist of two parts sweet and three parts sour, while roast chicken could blend two parts umami with one part salty.
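
    Based only on the ratios given above, the mixing step can be pictured as a small table of “parts” per tastant that gets normalized into dispense volumes. In the sketch below, the tastant chemicals are the ones named in the article and the recipe numbers follow its fruit-juice and roast-chicken examples, while the function, total dose, and units are illustrative assumptions, not the device’s actual firmware.

    ```python
    # Illustrative sketch of the mixing logic described above. Tastant
    # chemicals and ratios come from the article; the function, total
    # dose, and units are assumptions made for this example.
    TASTANTS = {
        "sweet":  "glucose",
        "salty":  "sodium chloride",
        "sour":   "citric acid",
        "bitter": "magnesium chloride",
        "umami":  "glutamate",
    }

    # "Parts" per taste, per the examples in the article.
    RECIPES = {
        "fruit juice":   {"sweet": 2, "sour": 3},
        "roast chicken": {"umami": 2, "salty": 1},
    }

    def mix(food: str, total_volume_ul: float = 50.0) -> dict[str, float]:
        """Split an assumed total dispensed volume across tastants by their parts."""
        recipe = RECIPES[food]
        total_parts = sum(recipe.values())
        return {
            TASTANTS[taste]: round(total_volume_ul * parts / total_parts, 1)
            for taste, parts in recipe.items()
        }

    print(mix("fruit juice"))    # {'glucose': 20.0, 'citric acid': 30.0}
    print(mix("roast chicken"))  # {'glutamate': 33.3, 'sodium chloride': 16.7}
    ```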

    When a virtual meal is triggered, e-Taste mixes the appropriate formula and delivers a few drops directly onto the tongue. The researchers even demonstrated that flavors could be released remotely through an online connection—a breakthrough with both exciting and potentially unsettling implications.

    Early tests have yielded mixed results. Participants attempted to identify five different foods based solely on taste. While they successfully recognized virtual lemonade and cake, they struggled to distinguish between fried egg, fish soup, and coffee.

    Despite these challenges, the concept holds promise. The research team plans to expand e-Taste with additional chemical compounds for more realistic flavor experiences. One day, users might even be able to sample the umami-rich mushrooms that Mario has been munching on for decades.


    Read the original article on: New Atlas

    Read more: Why Your Smart Home Needs Big Data