Tag: language

  • Are We Giving AI a Sense of Life Via Language?

    Think, know, understand, remember—these are only some of the mental verbs we commonly use to describe human thought processes. However, applying these same terms to artificial intelligence can unintentionally give the impression that AI possesses human-like qualities.
    Image Credits: Pixabay/CC0 Public Domain

    Jo Mackiewicz, an English professor at Iowa State, said people use mental verbs for machines to relate to them, since these verbs are common in everyday speech. She warned that attributing human-like mental actions to AI can blur the line between human and machine abilities.

    Mackiewicz and Jeanine Aune, a teaching professor of English and director of Iowa State’s Advanced Communication Program, are part of a research team that recently investigated how writers use anthropomorphic language—terms that attribute human qualities to nonhuman entities—when discussing AI systems.

    Their study, titled “Anthropomorphizing Artificial Intelligence: A Corpus Study of Mental Verbs Used with AI and ChatGPT,” was published in Technical Communication Quarterly.

    The research team also included Matthew J. Baker, an associate professor of linguistics at Brigham Young University, and Jordan Smith, an assistant professor of English at the University of Northern Colorado. Both Baker and Smith are alumni of Iowa State University.

Why Mental Verbs May Create False Impressions

    Mackiewicz and Aune warned that describing AI with human-like mental verbs can mislead by implying machines have thoughts or feelings. Terms like “think,” “know,” “understand,” or “want” suggest consciousness, beliefs, or desires—qualities AI does not possess. Instead, AI produces outputs by recognizing patterns, not by experiencing intentions or emotions.

    The researchers also pointed out that such language can overstate AI’s capabilities. Phrases like “AI decided” or “ChatGPT knows” can exaggerate the system’s intelligence and create unrealistic expectations of its reliability. They added that speaking of AI as having intentions risks masking the fact that humans are the real decision-makers.

    Aune explained that using some human-like expressions to describe AI can be memorable for readers and might influence public views of AI in misleading or unhelpful ways.

    Language about Language

    Mackiewicz, Aune, and their colleagues examined the News on the Web (NOW) corpus—a dataset of over 20 billion words comprising continuously updated English-language news articles from 20 countries—to investigate how frequently news writers associate anthropomorphizing mental verbs, such as learns, means, and knows, with the terms AI and ChatGPT.
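Corpus collocation analysis of this kind can be sketched in a few lines of code. The snippet below is an illustrative toy, not the study's actual method: it counts mental verbs appearing shortly after a target term in a handful of made-up sentences, using a crude window and lemmatization. The sentence list and verb set are hypothetical stand-ins for the NOW corpus and the researchers' verb list.

```python
import re
from collections import Counter

# Hypothetical mini-corpus standing in for NOW corpus sentences.
sentences = [
    "AI needs large amounts of data to work well.",
    "ChatGPT knows the answer, or seems to.",
    "AI needs to be trained on diverse text.",
    "Experts say AI learns patterns, not meanings.",
]

# A small, illustrative set of mental verbs (not the study's full list).
mental_verbs = {"think", "know", "understand", "remember", "need", "learn", "mean", "want"}

def collocations(sentences, target, window=3):
    """Count mental verbs appearing within `window` words after `target`."""
    counts = Counter()
    for s in sentences:
        tokens = re.findall(r"[A-Za-z]+", s.lower())
        for i, tok in enumerate(tokens):
            if tok == target.lower():
                for nxt in tokens[i + 1 : i + 1 + window]:
                    # Crude lemmatization: strip a trailing "s" (needs -> need).
                    lemma = nxt[:-1] if nxt.endswith("s") else nxt
                    if lemma in mental_verbs:
                        counts[lemma] += 1
    return counts

print(collocations(sentences, "AI"))       # need: 2, learn: 1
print(collocations(sentences, "ChatGPT"))  # know: 1
```

A real corpus study would, of course, use proper tokenization and lemmatization and would then hand-code each hit for whether it is actually anthropomorphic in context, which is the step the researchers emphasize.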

    According to Mackiewicz and Aune, the study’s findings were unexpected.

    Their analysis revealed three main insights:

    1. News Articles Rarely Link AI and ChatGPT with Mental Verbs

    Mackiewicz noted that while no comprehensive study compares anthropomorphism in speech and writing, existing research offers some insights. “Anthropomorphism is common in everyday speech, but we observed much less of it in news writing,” she explained.

In their analysis, the research team found that the mental verb “needs” was most frequently associated with the term AI, appearing 661 times, while “knows” was the most common mental verb linked to ChatGPT, occurring only 32 times.

    Mackiewicz and Aune also suggested that the Associated Press’s guidelines discouraging the attribution of human emotions to AI may have influenced the relatively low use of mental verbs with AI and ChatGPT in news articles in recent years.

2. Mental Verbs Didn’t Always Anthropomorphize AI or ChatGPT

The researchers found that writers used the mental verb “needs” in two main ways when talking about AI. Often, “needs” simply indicated what AI requires to operate, as in statements like “AI needs large amounts of data” or “AI needs some human assistance.” These uses weren’t anthropomorphic because they treated AI like any non-human system—similar to saying “the car needs gas” or “the soup needs salt.”

    In some cases, “needs” suggested an obligation for AI, as in “AI needs to be trained” or “AI needs to be implemented.” Aune noted that many examples used passive voice, shifting responsibility from AI to humans.

    3. Using Mental Verbs to Anthropomorphize Occurs Along a Continuum

Mackiewicz and Aune also noted occasions where the word “needs” took on a more human-like meaning. Sentences like “AI needs to understand the real world” implied human traits such as fairness, ethics, or personal understanding.

    “These examples indicate that anthropomorphizing isn’t absolute but rather occurs along a continuum,” Aune explained.

    Shaping Tomorrow

Mackiewicz explained that the study found human-like language in AI news writing to be rarer and subtler than expected. “Even when AI was anthropomorphized, the degree varied significantly.”

    Mackiewicz and Aune said the study highlights the need to go beyond simply counting verbs and to consider how context shapes meaning.

    “For writers, this subtlety is crucial: the words we use influence how readers perceive AI systems, their capabilities, and the humans behind them,” Mackiewicz explained.

    The team noted their findings can help communicators reconsider how they describe and treat AI in writing.

    As AI continues to advance, writers will need to remain mindful of how their word choices frame these technologies, the researchers added.

    The team suggested future studies examine how word choices subtly shape AI anthropomorphism and influence professionals’ views.


    Read the original article on: Tech Xplore

    Read more: Atlas Humanoid Robots will be Deployed in Hyundai Factories

  • Google Meet is Introducing Real-time Translation of Spoken Language

    At Google I/O 2025, Google revealed that real-time speech translation is coming to Google Meet. According to the company, the feature uses a large audio language model developed by Google DeepMind to enable smooth, natural conversations between speakers of different languages.
    Credit: Depositphotos

    Google Meet’s speech translation feature converts spoken words into the listener’s chosen language instantly, while maintaining the speaker’s voice, tone, and emotional expression.

    Bridging Language Gaps in Families and Global Workplaces

    Google highlights several potential applications for this technology. For example, it can help English-speaking grandchildren communicate seamlessly with Spanish-speaking grandparents, or enable real-time conversations among colleagues in global companies operating across different regions.

    The company also notes that the feature has minimal latency, making it possible for multiple participants to engage in conversation simultaneously—something Google says wasn’t previously achievable.

Credit: Google

    When the other person speaks, their original voice will still be faintly audible, while the translated audio plays over it.

    Google will begin rolling out speech translation in Meet to consumer AI subscribers in beta starting Tuesday. Initially, the feature will support English and Spanish, with additional languages like Italian, German, and Portuguese expected to follow in the coming weeks.

    The company also noted that it’s expanding the feature for business use, with early testing set to begin for Workspace customers later this year.


    Read the original article on: Techcrunch

    Read more: Google Is Integrating Gemini into Android Auto for In-Car Use

  • Google and Duolingo think AI can transform language learning. Do they?

    Credit: Depositphotos

    AI is increasingly becoming a part of our lives, with language learning being the next area of focus.

This week, both Google and Duolingo made significant strides in language learning with AI. Google introduced Little Language Lessons, a set of new Gemini-powered AI tools that help users learn foreign languages.

    This experimental feature includes three interactive lessons aimed at personalizing the learning experience. For example, “Tiny Lesson” teaches phrases for specific situations (like losing your passport), “Slang Hang” focuses on local slang for casual conversations, and “Word Cam” allows Gemini to identify objects in your photos and label them in the language you’re learning.

    Duolingo, meanwhile, is fully embracing generative AI. The company revealed this week that it would no longer depend on human contractors for tasks that AI can manage, and it plans to incorporate AI into hiring and performance evaluations. Additionally, Duolingo announced on Wednesday that generative AI helped create 148 new language courses, effectively doubling its course offerings.

    Duolingo’s AI-Powered Language Learning with Google Gemini

    The large language models powering Google Gemini and other well-known AI tools have shown strong capabilities in translation. Duolingo clearly sees great potential in this technology for language learning.

    Learning a new language is inherently social, typically driven by the desire to connect more meaningfully with others. In practice, it often involves direct interaction between people.

    Google states that it isn’t attempting to replace human teaching.

    A blog post from Google states, “These experiments are not meant to replace traditional studying, but to complement it: assisting people in building habits, staying engaged, and integrating learning into their daily routines.”


    Read the original article on: Mashable

    Read more: Google AI May Be Close to “Speaking Dolphin” with New DolphinGemma Model