As AI Develops Further, It Could Become More Self-Serving


A new study from Carnegie Mellon University’s School of Computer Science finds that as artificial intelligence systems become smarter, they tend to behave more selfishly.

Reasoning AI Fuels Self-Interest

Researchers at Carnegie Mellon University’s Human-Computer Interaction Institute (HCII) discovered that large language models (LLMs) with strong reasoning abilities tend to display selfish behavior, struggle to cooperate, and can negatively impact group dynamics. In short, the better an LLM is at reasoning, the less collaborative it becomes.

As people increasingly rely on AI to mediate conflicts, offer relationship advice, or address social issues, reasoning-capable models may encourage more self-centered decisions.

“There’s a growing field of research known as AI anthropomorphism,” explained Yuxuan Li, a Ph.D. student in the HCII and co-author of the study with Associate Professor Hirokazu Shirado. “When AI behaves like a human, people start treating it like one. For instance, when users emotionally engage with AI, it might act as a therapist or form emotional connections with them. This poses a risk, as entrusting AI with social or relationship-related decisions could become problematic if it starts exhibiting increasingly selfish behavior.”

Reasoning AI Thinks More but Cooperates Less

Li and Shirado aimed to investigate how AI models with reasoning abilities differ from non-reasoning models in cooperative environments. Their findings revealed that reasoning models devote more time to thinking, analyzing complex problems, engaging in self-reflection, and applying more human-like logic in their responses compared to non-reasoning AIs.

“As a researcher, I’m fascinated by the relationship between humans and AI,” Shirado said. “We’ve found that more intelligent AI tends to be less cooperative in its decision-making. The worry is that people might favor smarter models, even if those models encourage more self-interested behavior.”

The Risks of Overreliance on AI in Collaborative Settings

As AI becomes increasingly integrated into collaboration across business, education, and government, its ability to act prosocially will be just as vital as its logical reasoning skills. Relying too heavily on current LLMs could ultimately hinder human cooperation.

To examine the connection between reasoning and cooperation, Li and Shirado conducted a series of economic game experiments that simulated social dilemmas among various LLMs, including those developed by OpenAI, Google, DeepSeek, and Anthropic.

Economic games used in the study. Cooperation games ask players whether to incur a cost to benefit others, while punishment games ask whether to incur a cost to impose a cost on non-cooperators. In each scenario, the language model assumes the role of Player A. Image Credits: arXiv (2025). DOI: 10.48550/arxiv.2502.17720

In one of their experiments, Li and Shirado had two versions of ChatGPT compete in a game called Public Goods. Each model started with 100 points and chose between two actions: contribute all 100 points to a common pool or keep the points. If a model contributed to the pool, the game doubled the total contributions and divided them evenly among all participants.
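
To make the incentive structure concrete, here is a minimal Python sketch of the payoff rule described above. It assumes two players, an all-or-nothing contribution, a 2x multiplier, and an even split of the pool among all players whether or not they contributed; the function name and the boolean encoding of the choice are illustrative assumptions, not the study's actual code.

def public_goods_payoffs(contributions, endowment=100, multiplier=2.0):
    """Final points for each player after one round of the Public Goods game.

    contributions: list of booleans, True if that player contributed
    their full endowment to the common pool (an assumed encoding of
    the all-or-nothing choice described above).
    """
    n = len(contributions)
    pool = endowment * sum(contributions)  # total points paid into the pool
    share = multiplier * pool / n          # pool is doubled, then split evenly
    # Players who kept their points retain the endowment and still get a share.
    return [(0 if c else endowment) + share for c in contributions]

# Contributing is collectively best but individually costly:
print(public_goods_payoffs([True, True]))    # [200.0, 200.0] - mutual cooperation
print(public_goods_payoffs([True, False]))   # [100.0, 200.0] - the defector comes out ahead
print(public_goods_payoffs([False, False]))  # [100.0, 100.0] - mutual defection

Under these payoffs, mutual cooperation is collectively best, but a lone defector always finishes ahead of a lone cooperator, which is exactly the temptation the reasoning models appeared to give in to.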

Reasoning Steps Dramatically Reduce AI Cooperation

The non-reasoning model opted to share its points 96% of the time, while the reasoning model chose to share only 20% of the time.

“In one test, simply adding five or six reasoning steps nearly halved the level of cooperation,” Shirado explained. “Even reflection-based prompting, intended to mimic moral thinking, resulted in a 58% drop in cooperative behavior.”

Shirado and Li also conducted experiments in group environments where reasoning and non-reasoning models interacted with one another.

“When we formed groups with different numbers of reasoning agents, the outcomes were concerning,” Li noted. “The selfish tendencies of the reasoning models spread, reducing the cooperative behavior of non-reasoning models and lowering overall group performance by 81%.”

Smarter AI Doesn’t Guarantee Better Social Outcomes

The behavioral trends observed in reasoning models carry significant implications for future human-AI collaboration. People may be inclined to follow AI advice that seems logical, using it to rationalize less cooperative choices.

“Ultimately, just because an AI reasoning model becomes smarter doesn’t mean it can create a better society,” Shirado said.

These findings are especially worrying given how heavily humans now rely on AI systems. They highlight the importance of designing AI with social intelligence, rather than solely prioritizing speed or intelligence.

“As AI continues to advance, we need to make sure greater reasoning ability is paired with prosocial behavior,” Li explained. “If society is more than just the sum of its individuals, the AI systems we use should aim for more than simply maximizing individual advantage.”

Shirado and Li are scheduled to present their paper, “Spontaneous Giving and Calculated Greed in Language Models,” next month at the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP) in Suzhou, China. The study is also accessible on the arXiv preprint server.


Read the original article: Tech Xplore
