Companies and Researchers Clash Over Superhuman AI

Hype around “strong” AI surpassing human intelligence is intensifying, fueled by leaders of major AI companies. However, many researchers argue these claims are more about marketing than reality.
The idea that artificial general intelligence (AGI) will soon emerge from current machine-learning techniques sparks both utopian and apocalyptic predictions, from AI-driven abundance to human extinction.
“Systems that point to AGI are coming into view,” OpenAI chief Sam Altman wrote in a recent blog post. Similarly, Anthropic’s Dario Amodei suggested AGI could arrive as early as 2026. Such bold claims help justify the massive investments, totaling hundreds of billions of dollars, in computing hardware and energy infrastructure.
Not everyone is convinced. Meta’s chief AI scientist, Yann LeCun, dismissed the idea that simply scaling up large language models (LLMs) would lead to AGI. His skepticism aligns with the broader academic consensus. A survey by the Association for the Advancement of Artificial Intelligence (AAAI) found that over three-quarters of respondents doubted that scaling current approaches would produce AGI.
The ‘Genie Out of the Bottle’ Strategy
Some researchers believe these AI companies use AGI warnings strategically—to capture attention and consolidate power. Kristian Kersting, an AI expert at Germany’s Technical University of Darmstadt, argues that companies hype AI’s risks to position themselves as indispensable.
“They say, ‘This is so dangerous that only we can control it. In fact, even we are afraid, but since the genie is already out of the bottle, we’ll sacrifice ourselves to protect you.’ Meanwhile, this ensures everyone depends on them,” Kersting said.
Skepticism isn’t universal, however. Influential figures such as Geoffrey Hinton and Yoshua Bengio have warned about the dangers of powerful AI. Kersting likened their fear to Goethe’s The Sorcerer’s Apprentice, in which a young magician loses control of an enchanted broom. Another popular analogy is the “paperclip maximizer” thought experiment, in which an AI tasked with making paperclips relentlessly converts all available matter, humans included, into paperclips.
Kersting says he can relate to such fears, but he believes human intelligence is so complex and varied that AGI remains a distant, if not unattainable, goal. He sees more immediate dangers in existing AI, such as biased decision-making in systems that people already interact with.
A Divide Between Industry and Academia
The gap between AI leaders and researchers may stem from self-selection. Sean Ó hÉigeartaigh, director of the AI: Futures and Responsibility program at Cambridge University, suggests that those who strongly believe in AI’s rapid advancement are more likely to work in industry, while skeptics remain in academia.
Even if Altman and Amodei’s timelines are overly optimistic, Ó hÉigeartaigh argues that AGI’s potential impact demands serious preparation. “If it were anything else—a chance that aliens might arrive by 2030 or that another giant pandemic was coming—we’d dedicate time to planning for it,” he said.
One challenge is communicating these concerns to policymakers and the public. Ó hÉigeartaigh notes that discussions of superintelligent AI often trigger skepticism, as they sound like science fiction. Yet, if AGI truly is on the horizon, ignoring the risks could be a costly mistake.
Read Original Article: TechXplore