X Users Relying on Grok as a Fact-Checker Raise Concerns About Misinformation

Some users on Elon Musk’s X are turning to his AI bot, Grok, for fact-checking, raising concerns among human fact-checkers that this could contribute to misinformation.
Earlier this month, X introduced a feature allowing users to tag xAI’s Grok and ask it questions on various topics, similar to how Perplexity operates its automated account on the platform.
Not long after xAI launched Grok’s automated account, users began testing its responses. In some regions, including India, people started requesting fact-checks on statements and questions related to specific political viewpoints.
Fact-checkers are worried about this trend because AI assistants like Grok can present information in a confident tone even when it is inaccurate. Grok has already been observed spreading misinformation and false claims.
In August last year, five secretaries of state urged Musk to make critical changes to Grok after the assistant generated misleading information that spread on social media ahead of the U.S. election.
AI Chatbots and the Spread of Misinformation
Other AI chatbots, including OpenAI’s ChatGPT and Google’s Gemini, were also found to produce inaccurate election-related information. Additionally, disinformation researchers in 2023 discovered that AI chatbots like ChatGPT could easily generate persuasive yet misleading narratives.
“AI assistants like Grok are very skilled at using natural language to provide responses that sound human. This gives them an air of authenticity, even when their answers are completely incorrect. That’s the real danger here,” said Angie Holan, director of the International Fact-Checking Network (IFCN) at Poynter, in an interview with TechCrunch.

Human Fact-Checkers Ensure Accountability Through Verified Sources
Unlike AI assistants, human fact-checkers rely on multiple credible sources to verify information and take full responsibility for their findings, with their names and organizations publicly attached to maintain accountability.
Pratik Sinha, co-founder of India’s non-profit fact-checking platform Alt News, pointed out that while Grok may provide convincing responses, its accuracy depends entirely on the data it receives.
“The key question is: who decides what data it gets? That’s where government influence and other factors come into play,” he explained.
“There’s a lack of transparency. Anything that isn’t transparent can be manipulated in various ways, ultimately causing harm,” he added.
Earlier this week, Grok’s account on X admitted in a response that it “could be misused—to spread misinformation and violate privacy.”
However, the automated account does not provide any disclaimers when delivering responses, which means users may unknowingly receive inaccurate information if the AI generates a misleading or fabricated answer—a known risk with AI systems.

“It may fabricate information to generate a response,” said Anushka Jain, a research associate at the Goa-based Digital Futures Lab, in an interview with TechCrunch.
Uncertainty Over Grok’s Training Data and Fact-Checking Safeguards
Concerns also remain over how much Grok relies on X posts for training data and what safeguards it has in place to verify information. Last summer, an update appeared to grant Grok default access to X user data, raising further questions about its fact-checking process.
Public Dissemination of AI-Generated Information Poses Misinformation Risks
Another issue with AI assistants like Grok being integrated into social media is their public dissemination of information, unlike private chatbot interactions such as those with ChatGPT. Even if a user recognizes that the AI’s responses may be inaccurate, others on the platform could still perceive them as factual.
This has the potential to cause significant social harm. In India, misinformation spread via WhatsApp has previously led to mob violence—incidents that occurred before the rise of generative AI, which now makes fabricating realistic content even easier.
“If you see enough Grok responses, you might think, ‘Most of them seem accurate,’ and that could be true,” said IFCN’s Holan. “But some will be wrong. Research suggests AI models can have error rates of around 20%, and when they fail, the consequences can be severe in real-world situations.”
Read the original article on: TechCrunch