
Can artificial intelligence (AI) recommend suitable actions in emotionally intense situations? Researchers from the University of Geneva (UNIGE) and the University of Bern (UniBE) evaluated six generative AIs—including ChatGPT—using emotional intelligence (EI) tests normally intended for humans.
The results showed that these AIs not only exceeded average human performance but also created new tests in record time. This breakthrough suggests promising applications for AI in fields like education, coaching, and conflict resolution. The study appears in Communications Psychology.
Exploring Emotional Intelligence in Large Language Models
Large language models (LLMs), such as the one powering ChatGPT, are AI systems designed to understand, interpret, and produce human language. They can respond to questions and tackle complex issues—but are they also capable of demonstrating emotional intelligence in their responses?
To investigate, researchers from UniBE’s Institute of Psychology and UNIGE’s Swiss Center for Affective Sciences (CISA) tested six LLMs—ChatGPT-4, ChatGPT-o1, Gemini 1.5 Flash, Copilot 365, Claude 3.5 Haiku, and DeepSeek V3—using emotional intelligence assessments.
“We selected five tests frequently used in both research and corporate environments. These involved emotionally charged situations aimed at evaluating the ability to understand, regulate, and manage emotions,” explains Katja Schlegel, lecturer and lead researcher at UniBE’s Division of Personality Psychology, Differential Psychology, and Assessment, and the study’s principal author.
Testing AI Responses to Real-Life Emotional Scenarios
For instance, one scenario asked: If Michael’s colleague stole his idea and received undeserved praise, what would be Michael’s most effective response?
- Confront the colleague directly
- Discuss the issue with his manager
- Harbor silent resentment toward the colleague
- Steal an idea back from the colleague
In this case, the second option (discussing the issue with his manager) was considered the most effective response.
At the same time, human participants completed the same five tests. “Ultimately, the LLMs scored significantly higher—82% correct compared to 56% for humans. This indicates that these AIs not only comprehend emotions but also understand how to act with emotional intelligence,” says Marcello Mortillaro, senior scientist at UNIGE’s Swiss Center for Affective Sciences (CISA) and a contributor to the study.
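To illustrate how such a comparison could be computed, here is a minimal sketch of scoring multiple-choice test responses against an answer key and reporting the fraction correct. The item names, options, and responses below are invented for illustration and do not come from the study itself.

```python
# Hypothetical sketch: score multiple-choice EI test responses against an
# answer key and compute the proportion answered correctly. All item IDs
# and answers here are made up for demonstration purposes.

def score_responses(responses, answer_key):
    """Return the fraction of items where the chosen option matches the key."""
    correct = sum(
        1 for item, choice in responses.items()
        if answer_key.get(item) == choice
    )
    return correct / len(answer_key)

# Invented answer key for four test items.
answer_key = {
    "stolen_idea": "discuss_with_manager",
    "item_2": "option_a",
    "item_3": "option_c",
    "item_4": "option_b",
}

# Invented set of responses: three of the four items answered correctly.
responses = {
    "stolen_idea": "discuss_with_manager",
    "item_2": "option_a",
    "item_3": "option_c",
    "item_4": "option_d",
}

print(score_responses(responses, answer_key))  # prints 0.75
```

Averaging such per-respondent scores across a group is what yields summary figures like the 82% (LLMs) versus 56% (humans) reported in the study.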
AI-Generated Emotional Intelligence Tests Match Human-Developed Standards
In a second phase, the researchers asked ChatGPT-4 to design new emotional intelligence tests with fresh scenarios. These AI-generated tests were then completed by over 400 participants. “They proved to be just as reliable, clear, and realistic as the original tests, which had taken years to develop,” Schlegel explains.
“LLMs can not only identify the best answers from given options but also create new scenarios tailored to specific contexts. This supports the idea that LLMs like ChatGPT possess emotional understanding and can reason about emotions,” Mortillaro adds.
These findings open the door for AI to be applied in areas traditionally reserved for humans—such as education, coaching, and conflict resolution—provided it is used under expert guidance.
Read the original article on: Techxplore
