
Think, know, understand, remember—these are only some of the mental verbs we commonly use to describe human thought processes. However, applying these same terms to artificial intelligence can unintentionally give the impression that AI possesses human-like qualities.
Jo Mackiewicz, an English professor at Iowa State, said people apply mental verbs to machines as a way of relating to them, since these verbs are so common in everyday speech. She warned that attributing human-like mental actions to AI can blur the line between human and machine abilities.
Mackiewicz and Jeanine Aune, a teaching professor of English and director of Iowa State’s Advanced Communication Program, are part of a research team that recently investigated how writers use anthropomorphic language—terms that attribute human qualities to nonhuman entities—when discussing AI systems.
Their study, titled “Anthropomorphizing Artificial Intelligence: A Corpus Study of Mental Verbs Used with AI and ChatGPT,” was published in Technical Communication Quarterly.
The research team also included Matthew J. Baker, an associate professor of linguistics at Brigham Young University, and Jordan Smith, an assistant professor of English at the University of Northern Colorado. Both Baker and Smith are alumni of Iowa State University.
Why Mental Verbs May Create False Impressions
Mackiewicz and Aune warned that describing AI with human-like mental verbs can mislead by implying machines have thoughts or feelings. Terms like “think,” “know,” “understand,” or “want” suggest consciousness, beliefs, or desires—qualities AI does not possess. Instead, AI produces outputs by recognizing patterns, not by experiencing intentions or emotions.
The researchers also pointed out that such language can overstate AI’s capabilities. Phrases like “AI decided” or “ChatGPT knows” can exaggerate the system’s intelligence and create unrealistic expectations of its reliability. They added that speaking of AI as having intentions risks masking the fact that humans are the real decision-makers.
Aune explained that using some human-like expressions to describe AI can be memorable for readers and might influence public views of AI in misleading or unhelpful ways.
Language About Language
Mackiewicz, Aune, and their colleagues examined the News on the Web (NOW) corpus—a dataset of over 20 billion words comprising continuously updated English-language news articles from 20 countries—to investigate how frequently news writers associate anthropomorphizing mental verbs, such as learns, means, and knows, with the terms AI and ChatGPT.
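The NOW corpus itself is accessed through the english-corpora.org interface, but the kind of collocation count the study describes can be illustrated in miniature. The Python sketch below is a hypothetical illustration, not the authors' actual pipeline: it assumes a local folder of plain-text news articles (here called "articles") and a hand-picked verb list, and simply counts how often a target term such as "AI" or "ChatGPT" is immediately followed by a mental verb.

import re
from collections import Counter
from pathlib import Path

# Hypothetical illustration only; not the study's actual pipeline.
# We assume a local folder of plain-text news articles rather than
# the NOW corpus's own query interface.
MENTAL_VERBS = {"thinks", "knows", "understands", "learns", "means",
                "needs", "wants", "remembers", "decides", "believes"}
TARGETS = {"AI", "ChatGPT"}

def count_collocations(folder: str) -> Counter:
    """Count 'TARGET verb' pairs, e.g. 'AI needs', across all .txt files."""
    counts = Counter()
    for path in Path(folder).glob("*.txt"):
        tokens = re.findall(r"[A-Za-z']+", path.read_text(encoding="utf-8"))
        # Scan adjacent word pairs for a target term followed by a mental verb.
        for first, second in zip(tokens, tokens[1:]):
            if first in TARGETS and second.lower() in MENTAL_VERBS:
                counts[(first, second.lower())] += 1
    return counts

if __name__ == "__main__":
    for (term, verb), n in count_collocations("articles").most_common(10):
        print(f"{term} {verb}: {n}")

A real corpus study would go further, for instance by lemmatizing verb forms and allowing intervening words; this sketch only captures directly adjacent pairs.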
According to Mackiewicz and Aune, the study’s findings were unexpected.
Their analysis revealed three main insights:
1. News Articles Rarely Link AI and ChatGPT with Mental Verbs
Mackiewicz noted that while no comprehensive study compares anthropomorphism in speech and writing, existing research offers some insights. “Anthropomorphism is common in everyday speech, but we observed much less of it in news writing,” she explained.
In their analysis, the research team found that the mental verb “needs” was most frequently associated with the term AI, appearing 661 times, while “knows” was the most common mental verb linked to ChatGPT, occurring only 32 times.
Mackiewicz and Aune also suggested that the Associated Press’s guidelines discouraging the attribution of human emotions to AI may have influenced the relatively low use of mental verbs with AI and ChatGPT in news articles in recent years.
2. Mental Verbs Didn’t Always Anthropomorphize AI or ChatGPT
The researchers found that writers used the mental verb “needs” in two main ways when talking about AI. Often, “needs” simply indicated what AI requires to operate, such as in statements like “AI needs large amounts of data” or “AI needs some human assistance.” These uses weren’t anthropomorphic, because they treated AI like any non-human system—similar to saying “the car needs gas” or “the soup needs salt.”
In other cases, “needs” suggested an obligation for AI, as in “AI needs to be trained” or “AI needs to be implemented.” Aune noted that many of these examples used passive voice, shifting responsibility from AI to humans.
3. Using Mental Verbs to Anthropomorphize Occurs Along a Continuum
Mackiewicz and Aune noted occasions where the word “needs” took on a more human-like meaning. Sentences like “AI needs to understand the real world” implied human-like capacities such as fairness, ethics, or genuine understanding.
“These examples indicate that anthropomorphizing isn’t absolute but rather occurs along a continuum,” Aune explained.
Shaping Tomorrow
Mackiewicz explained that the study found the attribution of human-like qualities to AI in news writing to be rarer and subtler than expected. “Even when AI was anthropomorphized, the degree varied significantly.”
Mackiewicz and Aune said the study highlights the need to go beyond simply counting verbs and to consider how context shapes meaning.
“For writers, this subtlety is crucial: the words we use influence how readers perceive AI systems, their capabilities, and the humans behind them,” Mackiewicz explained.
The team noted their findings can help communicators reconsider how they describe and treat AI in writing.
As AI continues to advance, writers will need to remain mindful of how their word choices frame these technologies, the researchers added.
The team suggested future studies examine how word choices subtly shape AI anthropomorphism and influence professionals’ views.
Read the original article on: Tech Xplore