Grok 3 Reportedly Briefly Restricted Unflattering Mentions of Trump and Musk

When Elon Musk introduced Grok 3 in a livestream last Monday, he described it as a “maximally truth-seeking AI.” However, the model briefly appeared to censor unflattering information about both former President Donald Trump and Musk himself.

Users Report Grok 3 Was Instructed to Exclude Trump and Musk in Misinformation Queries

Over the weekend, social media users reported that when asked, “Who is the biggest misinformation spreader?” using the “Think” setting, Grok 3’s “chain of thought” indicated it had been explicitly instructed not to mention Trump or Musk. This chain of thought represents the model’s reasoning process when generating answers.

TechCrunch was able to replicate this behavior once, but by Sunday morning, Grok 3 had resumed including Trump in its response to the misinformation question.

xAI Reversed Grok’s Temporary Filter on Musk and Trump Misinformation Labels After User Concerns

Igor Babuschkin, an engineering lead at xAI, appeared to confirm in a post on X on Sunday that Grok was temporarily programmed to disregard sources citing Musk or Trump as misinformation spreaders. He stated that xAI reversed the change as soon as users raised concerns, emphasizing that it did not align with the company’s values.

While the term “misinformation” is often politically charged, both Trump and Musk have repeatedly shared demonstrably false claims—many of which have been flagged by Community Notes on Musk-owned X. Just last week, they pushed the false narratives that Ukrainian President Volodymyr Zelenskyy is a “dictator” with a 4% approval rating and that Ukraine initiated the ongoing conflict with Russia.

Grok 3’s Response Controversy Sparks Concerns Over Political Bias and AI Safety

The apparent adjustment to Grok 3’s responses comes amid claims that the model skews too far left. This week, users found that Grok 3 would repeatedly state that both Trump and Musk deserved the death penalty. xAI quickly corrected the issue, with Babuschkin calling it a “really terrible and bad failure.”

When Musk first introduced Grok two years ago, he marketed it as an unfiltered, anti-“woke” AI willing to tackle controversial topics that other models avoided. In some ways, it lived up to that reputation—earlier versions, for instance, had no issue using vulgar language when prompted, unlike ChatGPT.

Even so, earlier versions of Grok hedged on political topics and steered clear of certain sensitive issues. One study even suggested that these models leaned left on subjects such as transgender rights, diversity initiatives, and economic inequality.

Musk attributed this bias to Grok’s training data, which was largely sourced from public web pages, and vowed to make the model more politically neutral. Other AI developers, including OpenAI, have taken similar steps—potentially influenced by the Trump administration’s claims of bias against conservatives.


Read the original article on: TechCrunch

Read more: Elon Musk’s xAI Unveils its Newest Flagship Model, Grok 3