Google Plans To Expand AI Fraud Detection And Security In India

Google has introduced its Safety Charter in India to strengthen AI-driven tools for detecting fraud and tackling scams in its largest market outside the U.S.
Digital fraud in India has been rising sharply. According to government data, scams involving the UPI instant payment system jumped 85% last year, totaling nearly ₹11 billion ($127 million). The country has also seen a surge in digital arrest scams, in which imposters pose as authorities on video calls to extort money, along with a rise in predatory loan apps.
To help combat these issues, Google has also opened a new security engineering center in India—its fourth worldwide after Dublin, Munich, and Malaga. First announced at last year’s Google for India summit, the center will work with local partners—including government, academia, students, and SMEs—to develop solutions addressing cybersecurity, privacy, safety, and AI challenges, said Google VP of security engineering Heather Adkins in an interview with TechCrunch.
Strengthening Cybercrime Awareness: Google Partners with India’s I4C
Google has announced a partnership with the Ministry of Home Affairs’ Indian Cyber Crime Coordination Centre (I4C) to boost public awareness around cybercrimes, according to a blog post. This initiative expands on its previous efforts, including the launch of DigiKavach in 2023—an online fraud detection program designed to curb the impact of malicious financial and predatory loan apps.
Speaking to TechCrunch, Google’s Adkins outlined the company’s focus areas in India through its GSec (Google Safety Engineering Center): tackling online scams and improving digital safety for individuals, enhancing cybersecurity for enterprises, government bodies, and critical infrastructure, and advancing responsible AI development.
“These three priorities will shape our safety charter for India,” Adkins said. “We plan to leverage our engineering talent locally to address issues affecting Indian users directly.”
On a global scale, Google continues to use AI to fight online fraud by taking down millions of ads and fraudulent ad accounts. The company now plans to ramp up its use of AI in India to better address the country’s growing digital fraud problem.
Google Messages, a default app on many Android phones, features AI-driven Scam Detection that helps shield users from over 500 million suspicious messages each month. In a similar effort, Google launched a pilot of Play Protect in India last year, which the company says has prevented nearly 60 million attempts to install high-risk apps, blocking more than 220,000 unique apps across 13 million devices. Additionally, Google Pay—one of the leading UPI-based payment platforms in India—issued 41 million alerts for transactions flagged as potentially fraudulent.
Adkins, a founding member of Google’s security team with more than 23 years at the company, addressed a range of topics in an interview with TechCrunch:
Regarding the Improper Use of AI Tools
Adkins highlighted growing concerns about both the use and misuse of AI by bad actors.
“We’re closely monitoring AI developments. So far, tools like large language models—such as Gemini—have primarily been used to boost productivity. However, we’ve also seen them enhance phishing attacks. For instance, attackers can use translation features to craft more convincing scams, especially when targeting people who speak a different language, and they’re increasingly using deepfakes, images, and videos,” she explained.
To address this, Adkins said Google is rigorously testing its AI models to ensure they recognize and avoid harmful actions.
“This applies not just to potentially dangerous content the models might generate, but also to any harmful behavior they could carry out,” she said.
Google is developing guardrails like the Secure AI Framework to prevent misuse of its Gemini models. Looking ahead, the company believes it’s crucial to create new safety standards that govern how multiple AI agents interact, to reduce the risk of future abuse by hackers.
“The industry is advancing at a rapid pace, releasing protocols almost in real time,” Adkins noted. “It’s reminiscent of the early days of the internet, where safety considerations often came after the technology was already out there.”
Rather than relying solely on its own frameworks to prevent the misuse of generative AI by malicious actors, Google is collaborating with researchers and developers across the broader community.
Regarding Surveillance Vendors
In addition to the risks posed by hackers misusing generative AI, Adkins identified commercial surveillance vendors as a major threat. These include spyware producers like the NSO Group—known for its Pegasus spyware—as well as smaller firms selling surveillance technologies.
“These companies are emerging across the globe, creating and distributing hacking platforms,” Adkins explained. “You might spend $20 or $200,000 depending on how advanced the tool is, and it allows you to launch attacks at scale without needing any personal expertise.”
Some of these vendors even sell their tools to monitor individuals in countries like India. Beyond being a target for surveillance tech, India faces unique challenges due to its vast size and population. According to Adkins, the country is grappling with issues such as AI-powered deepfakes, voice cloning scams, and digital arrest frauds—essentially traditional scams reimagined for the digital age.
“The pace at which threat actors are evolving is striking,” Adkins said. “I find cybersecurity in this region especially fascinating because it often foreshadows trends we’ll see globally.”
Regarding Multi-factor Authentication (MFA)
Google has long advocated for users to adopt stronger authentication methods beyond traditional passwords to safeguard their online accounts. The company previously enabled multi-factor authentication (MFA) by default for all accounts and continues to promote the use of hardware security keys, which Adkins noted Google employees themselves use on their laptops. The industry is also increasingly embracing the idea of “passwordless” authentication, though the term can vary in meaning.
However, getting users in a market as large and diverse as India to move away from passwords remains a challenge due to the country’s wide-ranging demographics and economic conditions.
“We’ve known for a long time that passwords alone aren’t secure. Multi-factor authentication has been a significant improvement,” Adkins said, noting that SMS-based verification is likely the preferred MFA method among Indian users.
Read the original article on: TechCrunch