Tag: Superhuman AI

  • Companies And Researchers Clash Over Superhuman AI

More than three-quarters of respondents to a survey by the US-based Association for the Advancement of Artificial Intelligence agreed that ‘scaling up’ LLMs was unlikely to produce artificial general intelligence.

    Hype around “strong” AI surpassing human intelligence is intensifying, fueled by leaders of major AI companies. However, many researchers argue these claims are more about marketing than reality.

    The idea that artificial general intelligence (AGI) will soon emerge from current machine-learning techniques sparks both utopian and apocalyptic predictions, from AI-driven abundance to human extinction.

    “Systems that point to AGI are coming into view,” OpenAI chief Sam Altman wrote in a recent blog post. Similarly, Anthropic’s Dario Amodei suggested AGI could arrive as early as 2026. These bold claims help justify the massive investments—totaling hundreds of billions of dollars—into computing hardware and energy infrastructure.

    Not everyone is convinced. Meta’s chief AI scientist, Yann LeCun, dismissed the idea that simply scaling up large language models (LLMs) would lead to AGI. His skepticism aligns with the broader academic consensus. A survey by the Association for the Advancement of Artificial Intelligence (AAAI) found that over three-quarters of respondents doubted that scaling current approaches would produce AGI.

    The ‘Genie Out of the Bottle’ Strategy

    Some researchers believe these AI companies use AGI warnings strategically—to capture attention and consolidate power. Kristian Kersting, an AI expert at Germany’s Technical University of Darmstadt, argues that companies hype AI’s risks to position themselves as indispensable.

    “They say, ‘This is so dangerous that only we can control it. In fact, even we are afraid, but since the genie is already out of the bottle, we’ll sacrifice ourselves to protect you.’ Meanwhile, this ensures everyone depends on them,” Kersting said.

    Skepticism isn’t universal. Influential figures like Geoffrey Hinton and Yoshua Bengio have voiced concerns about powerful AI. Kersting likened the situation to Goethe’s The Sorcerer’s Apprentice, where a young magician loses control of an enchanted broom. Another popular analogy is the “paperclip maximizer” thought experiment, where an AI tasked with making paperclips relentlessly consumes all available matter—including humans—to achieve its goal.

    While some researchers understand these concerns, Kersting believes human intelligence is so complex and diverse that AGI remains a distant—if not impossible—goal. Instead, he sees more immediate dangers in existing AI systems, such as biased decision-making that impacts real-world interactions.

    A Divide Between Industry and Academia

    The gap between AI leaders and researchers may stem from self-selection. Sean Ó hÉigeartaigh, director of the AI: Futures and Responsibility program at Cambridge University, suggests that those who strongly believe in AI’s rapid advancement are more likely to work in industry, while skeptics remain in academia.

    Even if Altman and Amodei’s timelines are overly optimistic, Ó hÉigeartaigh argues that AGI’s potential impact demands serious preparation. “If it were anything else—a chance that aliens might arrive by 2030 or that another giant pandemic was coming—we’d dedicate time to planning for it,” he said.

    One challenge is communicating these concerns to policymakers and the public. Ó hÉigeartaigh notes that discussions of superintelligent AI often trigger skepticism, as they sound like science fiction. Yet, if AGI truly is on the horizon, ignoring the risks could be a costly mistake.


    Read Original Article: TechXplore

    Read More: Twin’s Debut AI Agent Assists Qonto Customers with Invoice Retrieval

  • Companies and Researchers Clash Over the Rise of Superhuman AI

    Executives at major AI companies are fueling hype that advanced AI will soon surpass human intelligence, but many researchers view these claims as mere marketing tactics.

    The idea that human-level or superior intelligence—often termed artificial general intelligence (AGI)—could emerge from current machine-learning methods fuels speculation about a future ranging from limitless prosperity to human extinction.

    AGI on the Horizon?

    “Systems approaching AGI are coming into view,” OpenAI CEO Sam Altman wrote in a blog post last month. Similarly, Anthropic’s Dario Amodei suggested AGI “could arrive as early as 2026.” Such forecasts help justify the massive investments—totaling hundreds of billions of dollars—being funneled into computing infrastructure and energy resources.

    However, not everyone is convinced.

    Meta’s chief AI scientist, Yann LeCun, told AFP last month that simply scaling up large language models (LLMs) like those powering ChatGPT and Claude will not lead to human-level AI.

    Experts Question the Path to AGI

    His skepticism aligns with broader academic opinion. A recent survey by the U.S.-based Association for the Advancement of Artificial Intelligence (AAAI) found that over three-quarters of respondents believe AGI is unlikely to result from merely expanding current approaches.

    Some academics argue that companies’ claims—often accompanied by warnings about AGI’s risks to humanity—are a tactic designed to attract attention.

    Businesses “have made these big investments, and they have to pay off,” said Kristian Kersting, a leading AI researcher at the Technical University of Darmstadt and AAAI member.

    “They claim, ‘This is so dangerous that only I can control it. In fact, even I am afraid, but the genie is already out of the bottle—so I will take on the burden for you.’ In the end, that makes people dependent on them.”

    Despite widespread skepticism among researchers, some prominent figures, including Nobel laureate Geoffrey Hinton and 2018 Turing Award recipient Yoshua Bengio, have warned about the dangers of advanced AI.

    The Perils of Unchecked AI

    “It’s like Goethe’s The Sorcerer’s Apprentice—you unleash something you can no longer control,” Kersting said, referencing a poem where an apprentice loses command of an enchanted broom. A modern equivalent is the paperclip maximizer thought experiment: an AI programmed to produce paperclips could pursue its goal so relentlessly that it would convert all matter in the universe into paperclips or machines to create them—eliminating humans who might try to shut it down.

    While not inherently “evil,” such an AI would lack proper alignment with human values and objectives.

    Kersting acknowledges these concerns but believes human intelligence is so diverse and sophisticated that it will take a long time—if ever—for AI to match it. He is more worried about present-day risks, such as AI-driven discrimination in human interactions.

    The apparent divide between academics and AI industry leaders may simply stem from differences in career outlook, suggested Sean Ó hÉigeartaigh, director of the AI: Futures and Responsibility program at Cambridge University.

    “If you strongly believe in the power of current AI techniques, you’re more likely to join a company investing heavily in making them a reality,” he explained.

    Even if Altman and Amodei are overly optimistic about AGI’s timeline and it arrives much later, Ó hÉigeartaigh argues the issue still deserves serious attention. “If AGI happens, it would be the most significant event in history,” he said.

    “If this were about something else—say, the possibility of alien contact by 2030 or another major pandemic—we would dedicate time to preparing for it.”

    A key challenge, however, is effectively communicating these concerns to policymakers and the public.

    Discussions of super-intelligent AI often trigger skepticism. “It creates an almost immune reaction—it just sounds like science fiction,” Ó hÉigeartaigh noted.


    Read the original article on: TechXplore

    Read more: People in Japan Respect Robots and AI More Than People in Western Societies