
Meet the robotic dog that remembers like an elephant and reacts with the instincts of a seasoned first responder.
Developed by Texas A&M engineering students, this AI robotic dog can see, remember, and reason. Designed for chaotic environments, the robot could revolutionize search-and-rescue and disaster response.
The project was led by Sandun Vitharana, a master’s student in engineering technology, and Sanjaya Mallikarachchi, a doctoral student in interdisciplinary engineering. Together, they developed a robotic dog that retains memory of where it has been and what it has observed, responds to voice commands, and uses AI and camera data to plan paths and recognize objects.
How The Robot’s Memory System Functions
A roboticist might characterize it as a ground-based robot equipped with a memory-centered navigation system driven by a multimodal large language model (MLLM). The system uses visual data to guide navigation, combining imaging, reasoning, and path optimization for both strategic planning and real-time response.
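To make the idea of memory-centered navigation concrete, here is a minimal Python sketch. It is not the team's actual code: the `MemoryNavigator` class, the `describe_frame` stub (standing in for a call to the multimodal model), and all names are illustrative assumptions. The point is the pattern: each camera frame is summarized, stored with the pose where it was taken, and later queried by keyword to recall relevant locations.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryEntry:
    location: tuple   # (x, y) pose where the frame was captured
    description: str  # model-generated summary of what was seen there

def describe_frame(frame):
    # Placeholder for the multimodal model's scene description;
    # in this sketch the "frame" is already a text label.
    return frame

@dataclass
class MemoryNavigator:
    entries: list = field(default_factory=list)

    def observe(self, location, frame):
        # Summarize the current view and file it under the current pose.
        self.entries.append(MemoryEntry(location, describe_frame(frame)))

    def recall(self, keyword):
        # Return every remembered location whose description matches,
        # e.g. "door" -> all poses where a door was observed.
        return [e.location for e in self.entries if keyword in e.description]
```

In a real system the memory would also feed the path planner, but even this toy version shows how stored visual summaries turn raw wandering into queryable experience.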
Robot navigation has progressed from basic landmark-based approaches to advanced computational systems that fuse data from multiple sensors. Still, operating autonomously in unpredictable, unstructured settings—such as disaster zones or remote locations—remains a major challenge, where adaptability and efficiency are essential.
Although robot dogs and navigation systems powered by large language models exist separately, combining a custom multimodal large language model with a visual, memory-based navigation system in a general-purpose, modular framework is a novel approach.
“Some academic and commercial platforms have incorporated language or vision models into robotics,” Vitharana said. “But no approach has used MLLM-driven memory navigation in the structured way we propose with custom pseudocode guiding decisions.”
Creation and Possible Uses
Mallikarachchi and Vitharana began by exploring how an MLLM could interpret a robot camera’s visual data. Supported by the National Science Foundation, they combined this with voice commands to create an intuitive system blending vision, memory, and language.

Similar to humans, the robot blends reactive and deliberative behaviors: it can react swiftly to avoid obstacles while also performing high-level planning, using the custom MLLM to assess its surroundings and determine the best path forward.
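That two-layer split can be sketched in a few lines of Python. This is a generic hybrid-control pattern, not the project's implementation: the function names, the `safety_margin` threshold, and the trivial rule standing in for the MLLM's reasoning are all assumptions for illustration.

```python
def reactive_layer(min_obstacle_distance, planned_cmd, safety_margin=0.5):
    # Fast reflex: override the plan the moment something is too close.
    if min_obstacle_distance < safety_margin:
        return "stop"
    return planned_cmd

def deliberative_layer(goal, scene_summary):
    # Slow reasoning: in the real robot the MLLM interprets the scene;
    # a simple substring check stands in for that here.
    return "advance" if goal in scene_summary else "explore"

def control_step(goal, scene_summary, min_obstacle_distance):
    # Deliberation proposes a command; the reactive layer gets final say.
    plan = deliberative_layer(goal, scene_summary)
    return reactive_layer(min_obstacle_distance, plan)
```

The key design choice is priority: the reflex runs last, so no amount of high-level reasoning can drive the robot into an obstacle.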
“Looking ahead, this type of control architecture is likely to become a standard for human-like robots,” Mallikarachchi noted.
Its memory-driven system enables the robot to remember and reuse previously traveled routes, improving navigation efficiency by minimizing redundant exploration. This capability is especially valuable in search-and-rescue operations, particularly in unmapped regions or areas where GPS is unavailable.
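One simple way to picture route reuse is a graph of previously traversed segments that the robot can search before exploring anew. The sketch below, with its hypothetical `RouteMemory` class and breadth-first search, is an assumption about how such a mechanism could work, not the team's published method.

```python
from collections import defaultdict, deque

class RouteMemory:
    """Remembers traversed waypoint-to-waypoint segments for later reuse."""

    def __init__(self):
        self.graph = defaultdict(set)

    def record_traversal(self, a, b):
        # Every successfully traveled segment becomes a reusable edge.
        self.graph[a].add(b)
        self.graph[b].add(a)

    def known_route(self, start, goal):
        # Breadth-first search over remembered edges: returns a path
        # built entirely from proven segments, or None if none exists.
        queue = deque([[start]])
        visited = {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for nxt in self.graph[path[-1]]:
                if nxt not in visited:
                    visited.add(nxt)
                    queue.append(path + [nxt])
        return None
```

A planner that checks `known_route` first only falls back to fresh exploration when memory has nothing to offer, which is exactly the efficiency gain described above.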
Expanding Applications Beyond Emergency Response
The potential uses of the robot extend far beyond emergency response. Hospitals, warehouses, and other large facilities could employ it to enhance operational efficiency. Its sophisticated navigation system could also aid people with visual impairments, explore minefields, or conduct reconnaissance in dangerous environments.
Dr. Isuru Godage, assistant professor in the Department of Engineering Technology and Industrial Distribution, provided guidance for the project.
“The heart of our vision is deploying MLLM at the edge, giving our robotic dog immediate, high-level situational awareness and a form of emotional intelligence that was previously unattainable,” Godage said. “This enables the system to bridge the gap between humans and machines seamlessly. Our aim is to make this technology not just a tool, but a truly empathetic partner, creating the most advanced, first-responder-ready system for any unmapped environment.”
Nuralem Abizov, Amanzhol Bektemessov, and Aidos Ibrayev from Kazakhstan’s International Engineering and Technological University contributed to developing the ROS2 infrastructure for the project. HG Chamika Wijayagrahi from Coventry University in the UK assisted with map design and the analysis of experimental results.
Vitharana and Mallikarachchi showcased the robot and its capabilities at the 2025 22nd International Conference on Ubiquitous Robots (UR), and their research was published in the conference proceedings.
Read the original article on: Tech Xplore
Read more: Robotic Dogs Handle Bomb Detection, Neutralization, and Disposal