Google Is Integrating Gemini into Android Auto for In-Car Use

Google announced during its Android Show—held ahead of the 2025 I/O developer conference—that Gemini, its generative AI, will be rolled out to all vehicles compatible with Android Auto in the coming months.
According to the company, incorporating Gemini into Android Auto, and later this year into vehicles running Google Built-In, will enhance the driving experience by making it “more productive — and fun,” as highlighted in a blog post.
“This marks what we believe will be one of the most significant changes to in-car technology in a very long time,” said Patrick Brady, Google’s VP of Android for Cars, in a media briefing prior to the event.
Gemini will appear in Android Auto in two key ways.
Gemini Enhances In-Car Voice Assistance with Natural Language Capabilities for More Intuitive Interactions
First, it will serve as a significantly more capable voice assistant. Drivers—or passengers, since voice recognition isn’t tied to the phone owner—will be able to ask Gemini to perform tasks like sending messages or playing music. While these functions were already available through Google Assistant, Gemini’s natural language processing means users can interact more conversationally, without needing to use rigid voice commands.
Gemini will also be able to “remember” user preferences—such as a contact’s preferred language for text messages—and automatically handle translations. Additionally, Google says Gemini will be equipped to perform one of the most popular in-car tech tasks: finding top-rated restaurants along a user’s route. According to Brady, it can also sift through Google’s listings and reviews to answer more specific queries, like locating taco spots with vegan options.
“Gemini Live” Enables Ongoing, Dynamic Conversations on a Variety of Topics While Driving
Another major feature is “Gemini Live,” which keeps the AI assistant continuously active and ready for full conversations on a wide range of topics. Brady explained that users could chat with Gemini about everything from spring break travel ideas and kid-friendly recipe suggestions to discussions on Roman history.
If all of this sounds potentially distracting, Brady insists otherwise. He argued that Gemini’s natural language capabilities will actually simplify how users interact with Android Auto, allowing them to complete tasks more easily and with less mental effort—ultimately “reducing cognitive load.”
That’s a bold assertion, especially at a time when many drivers are pushing for a return to physical buttons and knobs over touchscreen-heavy car interfaces—a shift some automakers are already beginning to embrace.
Gemini to Launch with Cloud Support, with Plans for Onboard Processing to Boost Performance and Reliability
There are still many details to iron out. At launch, Gemini will rely on cloud processing to function in both Android Auto and vehicles with Google Built-In. However, Brady noted that Google is collaborating with car manufacturers to integrate more onboard computing power. This would allow Gemini to run locally (at the edge), which could improve both performance and reliability—key concerns in vehicles that frequently switch between cell towers.
Today’s vehicles produce vast amounts of data through sensors and, in some cases, internal and external cameras. When asked whether Gemini might eventually tap into this multimodal data, Brady said there’s “nothing to announce” yet, but confirmed the company is actively exploring the idea.
“We definitely believe that as cars gain more cameras, there will be some very compelling use cases in the future,” he said.
Gemini for Android Auto and Google Built-In will roll out to all regions that currently support Google’s generative AI and will be available in over 40 languages.
Read the original article on: TechCrunch