The Case for Halting AI Development Before It's Too Late
As tech giants race toward superintelligent AI, one researcher argues we still have time to pump the brakes on humanity's most consequential technology.
The narrative around artificial intelligence has become frustratingly deterministic. "AI is here to stay," we're told, as if the trajectory toward superintelligent machines is written in stone rather than code. But this fatalistic thinking mirrors the dangerous complacency that accompanied globalization's march through American manufacturing, and we're seeing similar warning signs today.

David Krueger, an AI professor at the University of Montreal and founder of the nonprofit Evitable, is sounding an urgent alarm. Unlike the chorus of voices declaring AI's inevitability, Krueger argues we still have agency in shaping, or stopping, this technological revolution before it reshapes us beyond recognition.

The stakes couldn't be higher. Major AI companies aren't just building better chatbots; they're explicitly pursuing what OpenAI calls "superintelligent" AI through "recursive self-improvement": using AI to create progressively smarter AI systems until they surpass human intelligence entirely. It's a technological gamble with civilization itself as the ante.

"We're not just discussing chatbots," Krueger emphasizes. "AI companies are spending trillions of dollars trying to build artificial intelligence and robots that can do everything humans can faster, cheaper and without any human oversight." The economic disruption is already visible: tens of thousands of jobs eliminated, recent graduates struggling in AI-exposed fields, and tech CEOs openly predicting widespread human replacement within years.

In 2023, Krueger initiated the Center for AI Safety's Statement on AI Risk, joined by hundreds of researchers warning that AI development could lead to human extinction. While that sounds like science fiction, the technical community's growing alarm suggests the risks are all too real.

But here's the crucial insight: unlike natural disasters or demographic shifts, AI development isn't inevitable. It's an enormous engineering project requiring deliberate choices, massive resources and, critically, very specific hardware.

Krueger proposes targeting AI development's Achilles heel: the concentrated supply chain for advanced AI chips. Taiwan's TSMC and the Netherlands' ASML are virtually irreplaceable in producing the cutting-edge processors that power AI systems. These chips, as Krueger puts it, are the "weapons-grade plutonium" of superintelligence.

The precedent exists. Nations have successfully cooperated to ban human cloning and limit nuclear proliferation. An international agreement restricting advanced AI chip production could be even more verifiable and enforceable, given the technology's complexity and the supply chain's concentration.

Promising signs of resistance are emerging. Senator Bernie Sanders has proposed banning data center construction in the US, while Florida Governor Ron DeSantis is protecting communities' rights to block these facilities. From Arizona to Wisconsin, local communities are rejecting data centers, with moratoriums spreading across more than a dozen states.

As America leads in AI development, it holds unique diplomatic leverage. Intriguingly, there are indications that China's leadership doesn't share Silicon Valley's obsession with superintelligence, potentially providing common ground for international cooperation.

The window for action is narrowing but still open. Unlike previous technological revolutions, which caught society off guard, this one is visible before it arrives. The question isn't whether we can stop it, but whether we have the collective wisdom and political will to choose a different path.

For a species that has split the atom and mapped the genome, surely we're capable of the far simpler task of not building our own replacement. The future of human agency itself may depend on exercising that agency while we still can.