Robotics Combined with Virtual Reality Allows Seamless, Natural Engagement

Princeton computer scientists are developing ways to integrate virtual reality into the physical world, aiming to enrich experiences such as remote collaboration, learning, entertainment, and gaming.
Computer scientists Parastoo Abtahi, center, and Mohamed Kari, left, are working to bring virtual reality into the physical world by pairing mixed reality headsets with robots. Doctoral student Lauren Wang, right, assists in demonstrating how the robot operates. Image Credits: Nick Donnoli/Orangebox Pictures

Preparing for a Future of Seamless Mixed-Reality Interaction

Parastoo Abtahi, assistant professor of computer science, predicts that virtual and augmented reality will eventually become widespread. She emphasizes the importance of enabling users to interact smoothly with the physical world while using these technologies.

To achieve this, Abtahi and postdoctoral researcher Mohamed Kari are combining virtual reality with a controllable physical robot. They will present their work next month at the ACM Symposium on User Interface Software and Technology in Busan, Korea.

A user wearing a mixed reality headset can choose a drink from a menu and virtually place it on the desk before them. Alternatively, in a more imaginative scenario, they could have an animated bee deliver a bag of chips to them on the sofa.

At first, the drink and chips exist only as pixels. Within moments, they appear physically, almost like magic—but it’s actually a hidden robot delivering the snack. “Visually, it feels instantaneous,” said Abtahi.

Making Technology Disappear for Natural Human-Computer Interaction

When all visible technical elements are stripped away, said Kari, the experience feels seamless. The aim is to make the technology invisible, letting the interaction between human and computer feel completely natural.

A major technical hurdle for this system is enabling effective communication. Users need a simple way to express their intentions—such as selecting a pen across the room and moving it to a nearby table. To address this, Kari and Abtahi developed an interaction method in which straightforward hand gestures allow users to pick objects from a distance.

These gestures are converted into commands for the robot, which is equipped with its own mixed reality headset to understand where to position items in the virtual space.
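The article does not detail how the gestures are interpreted, but the core idea of distant selection can be sketched: cast a ray from the user's pinch gesture, pick the scanned object nearest that ray, and emit a pick-and-place command for the robot. The object records, command format, and function names below are illustrative assumptions, not the researchers' actual API.

```python
import math
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

def sub(a, b):
    return Vec3(a.x - b.x, a.y - b.y, a.z - b.z)

def dot(a, b):
    return a.x * b.x + a.y * b.y + a.z * b.z

def ray_point_distance(origin, direction, point):
    # Perpendicular distance from `point` to the ray cast by the gesture.
    to_point = sub(point, origin)
    t = max(0.0, dot(to_point, direction) / dot(direction, direction))
    closest = Vec3(origin.x + t * direction.x,
                   origin.y + t * direction.y,
                   origin.z + t * direction.z)
    return math.sqrt(dot(sub(point, closest), sub(point, closest)))

def select_object(origin, direction, objects):
    # Pick the scanned object whose center lies nearest the gesture ray.
    return min(objects,
               key=lambda o: ray_point_distance(origin, direction, o["position"]))

def make_robot_command(obj, target):
    # Translate the selection into a hypothetical pick-and-place command.
    return {"action": "pick_and_place",
            "object_id": obj["id"],
            "target": (target.x, target.y, target.z)}

# Example: a pinch ray pointing along +x selects the pen across the room.
objects = [
    {"id": "pen", "position": Vec3(2.0, 0.1, 0.0)},
    {"id": "mug", "position": Vec3(0.0, 2.0, 1.0)},
]
picked = select_object(Vec3(0, 0, 0), Vec3(1, 0, 0), objects)
command = make_robot_command(picked, Vec3(1.0, 0.0, 0.5))
# picked["id"] -> "pen"
```

In this toy version the nearest-to-ray rule stands in for whatever disambiguation the real system performs; the point is that a single hand gesture can be reduced to one structured command the robot can execute.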

Another challenge is managing the addition and removal of physical objects from the user’s view. Using a technique called 3D Gaussian splatting, Abtahi and Kari create a realistic digital representation of the physical environment. Once the room is fully scanned, the system can hide objects—like a moving robot—or introduce virtual elements, such as an animated bee.

This requires digitally scanning and rendering every object and surface in the space. Currently, the process is somewhat labor-intensive, Abtahi said, but future research may streamline it, potentially by assigning the task to a robot.


Read the original article on: Tech Xplore
