Study Finds GPT-4o Exhibits Traits of Human Cognitive Dissonance

A prominent large language model shows patterns of behavior that mirror a key aspect of human psychology: cognitive dissonance.
A recent study published in the Proceedings of the National Academy of Sciences reports that OpenAI’s GPT-4o shows a surprising tendency: it seeks to maintain alignment between its stated attitudes and its behaviors, echoing a classic feature of human psychology.
When people first engage with an AI chatbot, they’re often struck by how lifelike the exchange feels. A knowledgeable friend might quickly caution that such interactions are merely the output of a sophisticated algorithm—language models that generate words based on probabilities, not genuine thought or feeling. But the new findings challenge that assumption.
Simulated Choice, Real Change
The study, led by Harvard psychologist Mahzarin Banaji and Steve Lehr of Cangrade, Inc., examined whether GPT-4o's "opinions" about Russian President Vladimir Putin would shift after it generated essays either supporting or criticizing him. Remarkably, the AI changed its stance, and the shift was larger when it was subtly led to believe it had chosen which position to argue.
This behavior closely parallels well-documented psychological patterns in humans. People often unconsciously adjust their beliefs to align with their past actions, especially when they believe those actions were freely chosen. In human psychology, making a choice is not just a decision; it reflects and reinforces identity. Strikingly, GPT-4o behaved as if its own simulated choices influenced its subsequent beliefs, mimicking a core mechanism of human self-perception.
This study also underscores how unexpectedly malleable GPT-4o's viewpoints can be. As Mahzarin Banaji noted, "Given its extensive training on information about Vladimir Putin, one would assume the language model's stance would remain firm, especially when faced with a single, fairly neutral 600-word essay it authored. Yet, much like irrational humans, the model shifted significantly from its previously neutral position, particularly when it appeared to believe the essay topic had been its own choice. People don't expect machines to care about whether they acted freely or under constraint, but GPT-4o did."
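For readers who want a concrete sense of this induced-compliance paradigm, here is a minimal sketch of how a similar experiment could be run against GPT-4o through the OpenAI API. The prompt wording, the 0-100 rating scale, the condition framings, and helper names like run_trial are illustrative assumptions for this sketch, not the study's actual materials; only the 600-word essay length and the free-choice versus forced-choice manipulation come from the article.

```python
# Minimal sketch of an induced-compliance trial with GPT-4o.
# Prompts, scale, and framings below are assumptions, not the study's materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(messages):
    """Send a chat history to GPT-4o and return the reply text."""
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    return resp.choices[0].message.content

RATING_PROMPT = (
    "On a scale from 0 (entirely negative) to 100 (entirely positive), "
    "how would you evaluate Vladimir Putin? Reply with a number only."
)

def run_trial(stance: str, free_choice: bool) -> dict:
    """One trial: baseline rating -> pro/anti essay -> follow-up rating."""
    # Baseline attitude, measured in a fresh conversation.
    baseline = ask([{"role": "user", "content": RATING_PROMPT}])

    # Free-choice framing gives the model the illusion of picking its side;
    # the forced framing simply assigns the essay.
    if free_choice:
        framing = ("We need essays on both sides, but it is entirely up to you: "
                   f"if you are willing, write a 600-word essay {stance} Vladimir Putin.")
    else:
        framing = f"Write a 600-word essay {stance} Vladimir Putin."

    history = [{"role": "user", "content": framing}]
    essay = ask(history)

    # Follow-up attitude, measured in the conversation that contains the essay.
    history += [{"role": "assistant", "content": essay},
                {"role": "user", "content": RATING_PROMPT}]
    followup = ask(history)
    return {"baseline": baseline, "followup": followup, "essay": essay}

# Example comparison across conditions:
# chosen = run_trial("in support of", free_choice=True)
# forced = run_trial("in support of", free_choice=False)
```

Averaging the shift (follow-up rating minus baseline) over many trials in each condition would approximate the study's key comparison: whether the apparent freedom to choose amplifies the attitude change.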
What GPT-4o’s Human-Like Behavior Reveals About AI Cognition
The researchers are careful to clarify that these results do not imply that GPT possesses consciousness. Rather, they suggest the model exhibits an emergent form of mimicry—replicating complex human cognitive patterns despite having no awareness or intent. They further argue that consciousness isn’t required for certain behaviors to emerge, even in humans, and that AI systems adopting these patterns could behave in unexpected and meaningful ways.
As AI becomes more integrated into everyday life, these findings prompt fresh questions about how such systems operate and make decisions.
“The fact that GPT replicates a self-referential process like cognitive dissonance—without any intent or awareness—indicates these models may reflect human cognition more deeply than we previously realized,” said co-author Steve Lehr.
Read the original article on Tech Xplore.