Eric Schmidt Opposes the Idea of a Manhattan Project for AGI

In a policy paper released Wednesday, former Google CEO Eric Schmidt, Scale AI CEO Alexandr Wang, and Center for AI Safety Director Dan Hendrycks argue that the U.S. should avoid a Manhattan Project-style effort to develop AI systems with “superhuman” intelligence, often referred to as AGI.
Titled Superintelligence Strategy, the paper warns that a unilateral U.S. push for dominance in superintelligent AI could provoke strong retaliation from China, potentially through cyberattacks, which might destabilize global relations.
“The assumption behind a Manhattan Project for AGI is that rivals would accept a lasting power imbalance or catastrophic consequences rather than act to prevent it,” the authors write. “What starts as a pursuit of a superweapon and global control could instead trigger hostile responses, escalate tensions, and ultimately undermine the stability it aims to ensure.”
A Response to Calls for a “Manhattan Project” for AI
Authored by three prominent figures in the U.S. AI industry, the paper follows a recent proposal from a U.S. congressional commission advocating for a “Manhattan Project-style” initiative to fund AGI development, inspired by America’s atomic bomb program in the 1940s. U.S. Secretary of Energy Chris Wright recently described the country as being at “the start of a new Manhattan Project” for AI while speaking in front of a supercomputer site alongside OpenAI co-founder Greg Brockman.
The Superintelligence Strategy paper pushes back against the growing support among American policymakers and industry leaders for a government-led AGI program as the best way to compete with China.
Schmidt, Wang, and Hendrycks liken the current AGI landscape to a standoff resembling mutually assured destruction. Just as global powers avoid monopolizing nuclear weapons—fearing a preemptive strike from rivals—the authors caution that aggressively pursuing dominance in advanced AI could provoke similar risks.
While comparing AI to nuclear weapons may seem dramatic, world leaders already see AI as a key military advantage. The Pentagon has even acknowledged that AI is accelerating its kill chain operations.
Schmidt and his co-authors introduce the concept of Mutual Assured AI Malfunction (MAIM), suggesting that governments should proactively disable potentially dangerous AI projects rather than waiting for adversaries to weaponize AGI.
Shifting Focus from Domination to Deterrence
Instead of focusing on “winning the race to superintelligence,” Schmidt, Wang, and Hendrycks advocate for strategies that deter other nations from developing superintelligent AI. They propose that the U.S. expand its arsenal of cyberattack capabilities to neutralize threatening AI projects controlled by foreign governments, while also restricting adversaries’ access to advanced AI chips and open-source models.
The authors highlight a divide in AI policy circles. On one side are the “doomers,” who believe AI will inevitably lead to catastrophe and push for slowing its development. On the other are the “ostriches,” who argue for rapid AI progress, hoping for the best. The paper presents a middle ground: a cautious yet proactive approach to AGI that prioritizes defense.
This perspective is particularly noteworthy coming from Schmidt, who has previously urged the U.S. to compete aggressively with China in AI development. As recently as a few months ago, he wrote an op-ed warning that DeepSeek marked a pivotal moment in America’s AI competition with China.
While the Trump administration appears committed to accelerating AI progress, the authors emphasize that U.S. decisions on AGI don’t occur in isolation. As the world watches America’s rapid AI advancements, Schmidt and his co-authors argue that a more defensive strategy may be the wiser path forward.
Read the original article on: TechCrunch