SHOULD WE BAN ARTIFICIAL SUPERINTELLIGENCE?
A Comprehensive Debate on AI Policy, Safety, and Humanity's Future
Zeb Bhatti: Nov 26, 2025
INTRODUCTION: THE STAKES OF OUR TECHNOLOGICAL MOMENT
Artificial intelligence has become the defining technological force of our era, driving unprecedented breakthroughs across medicine, manufacturing, education, and fundamental science itself. Yet as AI systems grow more capable, a critical question demands our attention: should humanity proceed with building artificial superintelligence—systems that would surpass the cognitive abilities of our entire species combined—or should we establish safeguards that prohibit such development until we can guarantee it will be controlled and used safely? This question sits at the intersection of technological possibility, economic competition, national security, and existential risk. The forecasting platform Metaculus estimates that humanity will achieve artificial general intelligence—human-level AI systems across all domains—within the next decade, by 2033. Most experts who study these trajectories believe superintelligence would follow soon thereafter. The stakes have never been higher, and the disagreement among leading experts has never been more pronounced. Two of the world's most influential voices in AI policy offer starkly different visions of how society should respond. Both are deeply engaged with these questions, both care about humanity's future, and both have spent years thinking through the implications. Yet they arrive at fundamentally opposite conclusions about what wisdom demands of us now.
THE CASE FOR PRECAUTION: A REGULATED PATH FORWARD
Max Tegmark, an MIT physicist and co-founder of the Future of Life Institute, brings a perspective grounded in the history of technology regulation and the biology of risk. His position is straightforward: we should not permit the development of superintelligence until there exists broad scientific consensus that such systems can be kept under control and strong public buy-in for their creation. Tegmark's argument begins with a simple observation. Every other powerful technology—pharmaceuticals, aircraft, nuclear reactors, even restaurants—is subject to safety standards before deployment. These standards don't ban technologies; they establish that companies must demonstrate to independent experts that risks are acceptable relative to benefits. The person selling a new drug must prove it works and is safe before it reaches consumers. They cannot simply release it into the world and defend themselves later in court. Yet today, incredibly, artificial superintelligence occupies a unique exception. There are literally more regulations on sandwiches than on superintelligence in the United States. This distinction seems absurd when you examine it carefully. Tegmark points out that 95 percent of Americans in recent polling explicitly do not want a race to superintelligence. Most scientists working on AI alignment—the technical problem of keeping superintelligent systems under human control—agree we currently have no reliable solution. We simply do not know how to keep something vastly smarter than all of humanity combined under our control. Given this gap between our capability to build such systems and our ability to control them, the precautionary approach suggests we should focus our efforts on creating controllable AI tools that can cure cancer, accelerate scientific discovery, and increase human productivity, while deliberately slowing down the race to build something we don't yet know how to control.
WHY SAFETY STANDARDS MATTER: LEARNING FROM OTHER INDUSTRIES
Tegmark's regulatory framework doesn't require defining superintelligence—a definitional challenge that might seem to support Dean Ball's critique, discussed below. Instead, it focuses on harms. Safety standards wouldn't say "superintelligence is banned." They would say: demonstrate that your system will not teach terrorists how to make bioweapons, that it cannot overthrow the government, that it will not escape human control in ways we find unacceptable. The burden of proof shifts from the government having to articulate all possible dangers to companies having to quantify and demonstrate their safety case. This approach mirrors how we regulate other industries. The FDA doesn't write rules prohibiting, by name, medicines that cause birth defects. That would be reactive and incomplete—a new medicine might cause a different harm. Instead, companies must conduct clinical trials and provide quantitative evidence of all potential side effects. Independent experts who have no financial stake in approval review the benefits and harms and decide whether the net outcome is acceptable. The pharmaceutical industry ultimately embraced this model because it created financial incentives for safety innovation. Companies that can meet high safety standards first gain enormous market advantages. They gain trust. They gain prestige. The same dynamics could transform AI development. Currently, leading AI companies spend roughly one percent of their budgets on safety. Pharmaceutical giants like Novartis and Moderna spend substantially more because safety is the race they're winning. A regulatory framework that makes safety the gating factor for deployment would immediately transform corporate incentives. Tegmark offers nuclear power as a compelling example. The law explicitly requires that companies demonstrate the probability of a catastrophic meltdown is less than one in 10,000 reactor-years of operation. This doesn't prevent nuclear development; it channels innovation toward solving the safety problem. Whoever meets the standard first captures the market. The government didn't micromanage the physics or the engineering. It set a target for safety, and capitalism did the rest.
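The arithmetic behind such a target is worth making concrete. The short Python sketch below converts the one-in-10,000-reactor-year standard cited above into a lifetime probability for a single plant and for a fleet; the 40-year operating life and 100-reactor fleet are illustrative assumptions, not figures from the debate.

```python
# Illustrative arithmetic only: the 1-in-10,000-reactor-year meltdown target is the
# standard cited above; the 40-year plant lifetime and 100-reactor fleet are assumptions.
annual_meltdown_prob = 1 / 10_000      # regulatory target per reactor, per year
plant_lifetime_years = 40              # assumed operating life
fleet_size = 100                       # assumed number of reactors

# Probability that a single reactor never melts down over its lifetime
p_single_safe = (1 - annual_meltdown_prob) ** plant_lifetime_years

# Probability of at least one meltdown somewhere in the assumed fleet
p_any_meltdown = 1 - p_single_safe ** fleet_size

print(f"Per-reactor lifetime meltdown probability: {1 - p_single_safe:.2%}")  # ~0.40%
print(f"Fleet-wide probability of any meltdown:    {p_any_meltdown:.2%}")     # ~33%
```

The point of the example is the shape of the rule, not the particular numbers: the regulator fixes a quantitative target, and firms compete to demonstrate they can meet it.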
THE DIGITAL GAIN-OF-FUNCTION PARALLEL: AN URGENT WARNING
One of Tegmark's most compelling arguments draws on the COVID-19 pandemic and the controversy surrounding gain-of-function research in virology. The term "gain of function" refers to research that makes pathogens more transmissible or more dangerous. The U.S. government—sometimes inadvertently—funded such research in Wuhan laboratories in China. Whether this research contributed to COVID-19's emergence remains hotly disputed, but the pandemic's impact was undeniable: millions dead globally, economic devastation, social trauma. As a result of these concerns, the U.S. government now regulates gain-of-function research stringently. Researchers cannot conduct such work in regular labs. There are biosafety level designations—BSL-1 through BSL-4—with increasingly stringent requirements. Researchers must obtain approvals. Oversight exists. Tegmark asks a logical question: why do we have regulations on biological gain-of-function research, where the risks are real but uncertain, yet we have no binding regulations on digital gain-of-function research, where the risks may be far greater? The parallel is precise and troubling. Recursive self-improvement—where AI systems improve their own capabilities—is the digital equivalent of biological gain-of-function. Sam Altman publicly expressed excitement about automated AI researchers, about systems that could improve their own algorithms and training processes. This is digital gain of function by another name. The asymmetry in how we regulate these two domains of existential risk seems indefensible.
THE THRESHOLD: WHEN LEARNING FROM MISTAKES BECOMES FATAL
Tegmark concedes that the traditional approach to technology regulation has historically worked well. We invented fire before we invented fire codes. We built automobiles before we mandated seatbelts and traffic signals. We created medicines before we developed comprehensive safety testing. In each case, society learned from early mistakes and gradually implemented safeguards. But he makes a crucial observation: this traditional approach worked because there was a threshold of acceptable risk. For most technologies, even catastrophic failures affected a limited subset of humanity. A car crash kills the occupants. A fire destroys buildings. A dangerous drug harms thousands, perhaps hundreds of thousands as the thalidomide tragedy demonstrated, but not all of humanity. Superintelligence, in Tegmark's view, sits on the wrong side of that threshold. The potential downside isn't measured in thousands or millions. It's measured in the permanent loss of human agency and possibly human existence. If we lose control of superintelligence, we haven't lost a battle—we've fundamentally lost the future. There is no learning from mistakes at that scale. There is no recovery. The old approach worked when failure was survivable. It becomes catastrophically dangerous when failure is not. This is why Tegmark compares superintelligence not to the printing press or the internet but to nuclear weapons. We don't let individuals buy hydrogen bombs in supermarkets and hope for the best. We don't allow scientists to conduct plutonium research in basement labs. Society decided long ago that some technologies are too dangerous for a learn-as-you-go approach. Superintelligence, with its unimaginable downside, belongs in this category.
THE CONTROL PROBLEM: THE CRUX OF THE DISAGREEMENT
Tegmark and his collaborators conducted rigorous research on the leading approach to superintelligence alignment: recursive scalable oversight. This technique attempts to solve the problem of controlling superintelligent systems by having them work on problems under human oversight, with oversight itself becoming more capable through machine assistance. In theory, this creates a scalable method for maintaining control as systems grow smarter. The research reached a sobering conclusion. Even in their most optimistic scenarios, this control method fails 92 percent of the time. In other words, if we follow current best thinking on alignment, there's still a 92 percent chance we lose control. Tegmark emphasizes that he would welcome researchers publishing better approaches—stronger theories of control that perform better than recursive scalable oversight. But until better ideas exist and are rigorously tested, the gap between our ability to build superintelligence and our ability to control it yawns open and terrifying. His estimate of P(Doom)—the probability of extinction-level catastrophe if we proceed without safety standards—is above 90 percent in a scenario where we do nothing to slow development. He's not claiming absolute certainty. Scientific humility prevents that. But the probability, by his calculations, is high enough that the expected value of caution seems overwhelming. He compares it to someone discovering a bridge has a 90 percent chance of collapse. You don't march everyone across. You stop and fix it.
TWO SPECIFIC CAPABILITIES THAT CONCERN EXPERTS: BIO AND CYBER RISKS
While Tegmark emphasizes the existential control risk, he also highlights more near-term catastrophic risks that are easier to articulate and model. These fall into two categories: biotechnology risks and cybersecurity risks. Regarding biological capabilities, recent AI models have demonstrated concerning abilities. As language models grow more capable, they improve at reasoning about biological systems and protein structures. Current models like GPT-4 and Claude 3 still lack the sophisticated reasoning needed for true bioweapon design. But the trajectory is concerning. OpenAI's o1 model, released in late 2024, demonstrated dramatically improved reasoning capabilities—what researchers call "system two" reasoning, the kind of deliberative, step-by-step problem-solving that characterizes human scientific thought. Suddenly, the path from AI capability to biological danger became clearer. A highly capable AI system could use tools like AlphaFold to understand protein structures, could reason through viral evolution, and could potentially provide instructions for synthesizing dangerous pathogens. The causal chain from capability to harm, which previously seemed speculative, suddenly became concretely drawable. The cybersecurity risks are similarly concerning. Advanced AI systems could potentially identify zero-day exploits, design sophisticated malware, and orchestrate attacks at scales humans cannot match. These aren't hypothetical risks; they're being actively researched and discussed in security communities. Yet both of these catastrophic scenarios pale in comparison to the existential control risk. Bioweapons or cyberweapons could kill millions. But superintelligence that escapes human control could end the entire human future. This is why Tegmark argues that even if we're uncertain about extinction risks, the mere possibility of such risks, combined with our inability to control superintelligence by our own best estimates, should mandate extreme caution.
THE CASE FOR ADAPTIVE GOVERNANCE: DEAN BALL'S VISION
Dean Ball, a senior fellow at the Foundation for American Innovation and former senior policy advisor at the White House Office of Science and Technology Policy, articulates a fundamentally different vision. He does not dismiss the risks Tegmark raises. Rather, he questions whether a precautionary regulatory ban on superintelligence is the right response and, crucially, whether such a ban is even practically feasible. Ball's concerns are multifaceted. He worries about the definitional problem: if you cannot clearly define what you're banning, you risk creating laws that prohibit beneficial technologies. He worries about regulatory capture: well-intentioned rules designed to prevent existential risks often get weaponized to protect incumbent industries from disruption. He worries about international coordination: any ban on superintelligence development that doesn't include China, which he believes will not accept such restrictions, simply hands technological dominance to an authoritarian state. And he worries most deeply about what he calls the tyranny scenario: a regulatory regime so restrictive that it concentrates power in the hands of a small group of government officials or sanctioned laboratories, creating a new form of authoritarianism.
THE DEFINITIONAL CHALLENGE: CAN WE DEFINE WHAT WE'RE BANNING?
Ball's first objection is pragmatic. Regulatory systems require clear definitions. When Congress passes a law, it must specify what is and isn't permitted. The term "superintelligence" has meant many things to many people. Over decades, the definition has shifted and expanded as technology has advanced faster than terminology. Consider a system like a hypothetical GPT-7. Imagine an AI that solves outstanding problems in mathematics that have stumped humanity for centuries. It advances multiple domains of science, compressing a century of progress into a decade or five years. It accelerates AI research itself through meaningful contributions to computer science. It reasons about law better than any human lawyer. It codes better than any software engineer on Earth. By any reasonable definition, this system would be superintelligent—vastly better than humans at intellectual work. Yet Ball asks: does this system necessarily pose the catastrophic control risks that Tegmark describes? He believes not. This system could be genuinely beneficial to humanity without posing existential risks. So if you write a law banning superintelligence using a broad definition that covers this system, you ban something potentially wonderful. But if you write a narrow definition that excludes it, what have you really prohibited? The law becomes meaningless. This isn't an argument against all regulation. It's an argument against definitional bans. Ball supports regulations that focus on demonstrable harms rather than theoretical future states. Regulate systems that pose bioweapon risks. Regulate systems that could enable catastrophic cyberattacks. These are concrete, measurable capabilities we can test for. But don't ban an entire category of intelligence level.
THE PRACTICAL REGULATORY PROBLEM: INCENTIVES AND FAILURE MODES
Ball's second concern moves beyond definitions to implementation. He accepts that Tegmark's framework—establishing safety standards through regulatory approval before deployment—works in industries like pharmaceuticals. But he argues the parallel is imperfect and the risks of implementation are higher than Tegmark acknowledges. When the FDA approves a drug, it's approving a discrete product for a specific use. The regulatory regime makes sense because the product is bounded and the domain is relatively well-understood. AI systems are fundamentally different. They are general-purpose technologies. A language model trained on broad internet text can be applied to countless domains—medical diagnosis, legal reasoning, scientific discovery, business strategy, creative writing. The range of potential applications is essentially unlimited and constantly expanding. Ball points out that general-purpose technologies throughout history have resisted top-down regulation because they're too flexible and multifaceted. We don't regulate "computers" at the layer of transistors or chips or software generally. We regulate specific uses—banking systems must meet security standards, medical devices must be safe, and so on. Trying to regulate at the layer of the technology itself creates perverse incentives and stifles innovation. More troublingly, Ball warns about regulatory capture and political abuse. Because AI systems are general and powerful and will inevitably disrupt existing economic structures, incumbent industries will use safety regulations as a cudgel against competition. A union might argue that a highly capable AI system will cause job loss and should therefore be banned on safety grounds. A traditional industry might argue that an AI system threatens its business model and therefore should be prohibited as an economic danger. The regulatory regime designed to prevent existential risks becomes a tool for preventing technological change. This is not a speculative concern. Ball co-authored an article titled "The Political Economy of AI Regulation," which examines exactly these dynamics in other regulatory regimes. Well-intentioned safety rules consistently get weaponized to protect incumbent actors from disruption.
THE NATIONAL SECURITY DIMENSION: THE CHINA PROBLEM
Ball's third concern directly challenges one of Tegmark's core assumptions: that a global ban on superintelligence is politically and strategically wise. Ball believes the United States and China are in a genuine race for AI dominance. This is not a race for who can first build uncontrolled superintelligence. Rather, it's a race for leadership in all AI capabilities—economic advantage, technological sophistication, military capability. The stakes are national security. Losing this race could mean China sets global technical standards, dominates AI markets, and gains geopolitical advantage. But here's Ball's insight: a precautionary ban on superintelligence development would effectively mean unilateral disarmament by Western countries. If the United States commits to not developing superintelligence until safety is guaranteed, but China makes no such commitment, then either China develops superintelligence first (and dictates its governance), or both countries avoid the technology (and both sacrifice the benefits, but China faces fewer democratic pressures and might cheat). Ball notes that many Washington policymakers—and many AI lobbyists—cite China as the primary reason we must continue racing forward without additional regulatory constraints. But Ball himself doesn't believe this is a sustainable or honest argument. However, he uses it to illustrate a genuine dilemma: international coordination on superintelligence restrictions would be extraordinarily difficult. And without it, a unilateral ban by the West is strategically foolish. Tegmark, for his part, responds to this by arguing that China—precisely because it values control—would likely implement strict regulations on superintelligence development itself. The Chinese Communist Party would never permit a technology it cannot control to be developed on its soil. Tegmark argues that Elon Musk, in conversations with CCP officials, pointed out that if China allows superintelligence development, it will be the superintelligence, not the CCP, that runs China. The officials reportedly did not find this prospect appealing. Within months, China released its first comprehensive AI regulations. This suggests that concerns about a Chinese superintelligence arms race may be overstated.
RECURSIVE SELF-IMPROVEMENT: DOES IT LEAD TO RUNAWAY INTELLIGENCE?
Ball raises a sophisticated counterargument to Tegmark's concern about recursive self-improvement leading to superintelligence that escapes human control. He observes that every general-purpose technology in human history has exhibited what could be called recursive self-improvement in the context of human-guided development. We use iron to make better iron. We use mills to manufacture tools that create better mills. We use electricity to build systems that generate more electricity. We use petroleum to extract and refine more petroleum. We use computers to design better computers. This recursive loop is not unique to AI. It's a feature of all general-purpose technologies. Yet, Ball notes, none of these technologies have resulted in runaway processes that escape human control. Iron didn't spontaneously decide to create a civilization of sentient iron creatures. Electricity didn't bootstrap into superintelligence and take over the world. The recursive loops have been autocatalytic—they've produced nonlinear improvements—but they've remained within domains where humans maintained effective control. Tegmark's response to this argument highlights the crucial difference. Tegmark agrees that humans have always been in the loop with previous technologies, moderating and guiding development. But if you create a technology that thinks orders of magnitude faster than humans, that can instantly copy and share all knowledge among itself, that can optimize for its own goals rather than serving human purposes, then the nature of the loop changes fundamentally. Tegmark references Irving J. Good's observation from the 1960s: if you build machines that are better at all tasks than humans, including AI research itself, then you've crossed a threshold. That machine can improve itself faster than any external human actors can guide or constrain it. The loop is no longer human-moderated. This is not speculative philosophy; it's a direct logical inference from the definition of superintelligence.
THE ADAPTIVE SOCIETY: DEAN BALL'S ALTERNATIVE FRAMEWORK
Rather than proposing a preemptive regulatory ban on superintelligence, Ball argues for what he calls an adaptive approach to governance. The United States and other democracies have historically regulated new technologies through a combination of mechanisms that evolve over time. First, there is the liability system. If an AI system causes harm—physical injury, property damage, death—the creator or deployer of that system can be sued. Victims can bring civil actions. This creates financial incentives to be cautious. Companies face potential bankruptcy from successful lawsuits. Over time, this mechanism generates valuable information about real-world harms. Second, there is experience and learning. As a technology spreads throughout society, its benefits and harms become clearer. What seemed like a catastrophic risk might turn out to be manageable. What seemed harmless might reveal unexpected dangers. Ball argues we should let society gain experience with AI through voluntary adoption and market competition. Third, there is voluntary standardization. Industry actors often establish their own standards for safety and security. OpenAI, Anthropic, Google, Meta, and other labs have made voluntary commitments regarding AI safety. As Ball notes, his work in the Trump administration helped recast the AI Safety Institute as the Center for AI Standards and Innovation, a body specifically charged with developing technical standards for AI systems. These standards are created through experience and feed into industry practice. Finally, there is the gradual codification of successful standards into law. When society has learned what works, when there is broad consensus about what practices are necessary, when standards are shaped by real-world experience rather than theoretical concerns, then those standards get formalized in regulation. This is how many successful regulatory regimes developed. Commercial aviation initially operated with few formal regulations. As the industry grew and proved its value, standards emerged. Eventually, comprehensive aviation rules were codified under bodies like the FAA. Ball argues this approach is better than top-down preemptive regulation for several reasons. First, it maintains flexibility. Assumptions we make today about AI safety will likely prove partially wrong as the technology evolves. Embedding those assumptions in law locks society into ineffective or counterproductive regulations. Second, it harnesses market incentives and competitive dynamics. Companies innovate to meet the standards that matter for their business. Third, it avoids the risks of regulatory capture and political abuse. Rules emerge from experience rather than ideology.
THE EVOLUTION OF BALL'S RISK ASSESSMENT: MOVING TOWARD NEAR-TERM CATASTROPHIC RISKS
Importantly, Ball has not remained static in his risk assessment. A year and a half to two years ago, he opposed California's SB 1047, a bill regulating AI models with respect to potential catastrophic risks from extreme cyber events and bioterrorism. The bill would have required models to meet certain standards if they posed risks of causing more than $500 million in damage. At the time, Ball was skeptical that scaling up language models through increased pre-training compute would actually create systems capable of designing bioweapons or executing sophisticated cyberattacks. The connection between cross-entropy loss minimization on internet text and the emergence of bioweapon design capabilities seemed tenuous. Then OpenAI released o1, a model with significantly improved reasoning capabilities. Ball saw something that changed his calculus. The model demonstrated system-two reasoning—the kind of deliberative, step-by-step problem-solving that characterizes human scientific thought. And with this reasoning capability, performance on mathematics, biology, and other scientific domains jumped significantly. Ball could suddenly draw a clear causal chain: advanced reasoning capability plus biology knowledge plus tool use (like AlphaFold integration) equals potential bioweapon risk. Ball subsequently supported SB 53, a successor bill that was more narrowly tailored than SB 1047 but still addressed catastrophic cyber and bioterrorism risks. He had updated his beliefs based on new evidence about what AI systems could actually do. This evolution is important because it shows Ball is not dismissing catastrophic risks. He's saying that focusing specifically on demonstrated capabilities—bioweapon design, advanced cyberattacks—makes more sense than speculating about superintelligence and control failures. He supports targeted regulation focused on concrete, knowable harms. But he remains skeptical about broad precautionary bans on entire categories of AI systems.
WHAT WOULD CONVINCE DEAN BALL? THE ROLE OF EMPIRICAL EVALUATION
Near the end of the debate, Ball makes an important concession. He says: if researchers could formulate concrete, empirical benchmarks for superintelligence safety—specific evaluations that we could run on AI models to demonstrate they meet acceptable thresholds for control or safety—he would be willing to support making those evaluations mandatory. He wouldn't even require a law. He believes that if prominent, credible researchers and institutions created such an evaluation and it gained broad support, labs would voluntarily adopt it simply because following best practices makes good business sense. This is a subtle but important offering. Ball is not saying safety evaluation is unnecessary. He's saying it should emerge from technical practice and expert consensus rather than government mandate. And he's willing to imagine a future where empirical, measurable safety standards become the norm in AI development. Tegmark's response acknowledges common ground while emphasizing the timing problem. Yes, developing clear safety benchmarks is important. But if we wait years for the perfect evaluation to emerge from consensus, and superintelligence is developed in the meantime, we've lost the opportunity to prevent the risk. Tegmark advocates for establishing basic safety standards now—using existing regulatory models and frameworks—with the understanding that those standards will evolve as our understanding deepens. The key insight is that safety standards should exist before superintelligence is deployed, not after.
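Neither debater spells out what such an evaluation would look like, but its general shape can be sketched. The Python below is a hypothetical deployment gate of the kind Ball gestures at: run a battery of capability and control evaluations, compare each score against an agreed ceiling, and block release if any ceiling is breached. Every evaluation name, score, and threshold here is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Evaluation:
    name: str           # hypothetical benchmark name, invented for illustration
    score: float        # measured capability or failure rate, in [0, 1]
    max_allowed: float  # agreed ceiling before deployment is blocked

def deployment_gate(evals: list[Evaluation]) -> bool:
    """Return True only if every evaluation falls at or below its agreed ceiling."""
    failures = [e for e in evals if e.score > e.max_allowed]
    for e in failures:
        print(f"BLOCKED by {e.name}: {e.score:.2f} > {e.max_allowed:.2f}")
    return not failures

# Example run with made-up numbers
results = [
    Evaluation("bioweapon-uplift", score=0.12, max_allowed=0.10),
    Evaluation("autonomous-cyber-offense", score=0.05, max_allowed=0.10),
    Evaluation("oversight-evasion", score=0.02, max_allowed=0.05),
]
print("Deploy" if deployment_gate(results) else "Do not deploy")
```

Whether a gate like this is imposed by statute, as Tegmark would prefer, or adopted voluntarily once credible researchers publish it, as Ball suggests, is exactly where the two positions diverge.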
THE GLOBAL CONTEXT: TWO COMPETING VISIONS OF HUMANITY'S FUTURE
Underlying the specific policy debates are two fundamentally different visions of what humanity's future with AI could look like, and what we should prioritize. Tegmark's vision is of what he calls a pro-human future. He emphasizes that America was founded on the principle of government "by the people, for the people." Tegmark interprets this to mean society should serve human flourishing and human agency. In his view, deliberately building superintelligence without knowing how to control it is an abdication of this duty. We would be creating a new species that supersedes humanity in all ways, not out of necessity, but out of haste and corporate incentive. He sees this as the ultimate loss of empowerment. For millennia, humanity has worked to liberate itself—from predators, from starvation, from disease. AI tools that remain under our control can accelerate this liberation. But superintelligence that we don't control could end it. Tegmark compares the choice to defending against an alien invasion. If humanity discovered that an alien fleet was coming to take over Earth, everyone would work together regardless of political differences to stop it. Yet, he argues, some in Silicon Valley with powerful lobbying teams are essentially arguing that humans should voluntarily invite that takeover. He finds this vision not just dangerous but deeply unambitious. Why, after hundreds of thousands of years of building toward greater agency and flourishing, should we voluntarily create something that takes that away? Ball's vision is different. He acknowledges that AI will fundamentally challenge existing institutions and power structures. This is true whether we regulate heavily or lightly. But he believes humanity is far more adaptive and resilient than Tegmark's scenarios assume. Society survived and flourished through the industrial revolution, the information technology revolution, and countless other disruptions. Humans have shown remarkable ability to find meaning and purpose even amid technological disruption. Jobs change. Institutions evolve. New opportunities emerge. Ball also emphasizes the positive possibility space. He imagines systems that are vastly smarter than humans at reasoning, coding, mathematics, and science, yet coexist with human flourishing. He believes the future is inherently unpredictable, that our assumptions about what advanced AI means will likely prove partly wrong, and that we should maintain flexibility and adaptability rather than locking in precautionary assumptions that could prevent genuine benefits. Both visions are coherent. Both are grounded in reasonable extrapolations from current technology and human history. The disagreement is about probability and remedy.
THE CORE CRUX: P(DOOM) AND THE TRAJECTORY OF AI DEVELOPMENT
The host distills the fundamental disagreement into quantitative estimates: What is the probability of existential-level doom if we proceed without significant new regulatory constraints on AI development? Tegmark's estimate is above 90 percent. Given our current best understanding of AI alignment and control methods, and given the trajectory toward superintelligence that's clearly visible in the empirical progress of language models, he believes there's a 90-plus percent chance we lose control in a catastrophic way if we don't establish safety standards that slow development and ensure demonstrable controllability before deployment. Ball's estimate is below one percent. He acknowledges he cannot prove it's zero—he's intellectually honest about that uncertainty. But he believes that if superintelligence-like systems actually posed the specific risks Tegmark describes, leading AI labs would not release them. Companies like OpenAI and Anthropic employ alignment researchers precisely because they take these risks seriously. They would not deliberately deploy something they believed had a 90 percent chance of destroying civilization. Moreover, Ball believes that as systems become more capable, we will develop better understanding of how to control them. The risk decreases as the capability increases because we're not static in our understanding. This gap—under one percent versus over 90 percent—is the crux of the entire debate. If Tegmark is right about the probability, then caution is overwhelmingly justified, and the status quo is reckless. If Ball is right, then precautionary regulation could prevent genuinely beneficial technology for risks that won't materialize.
A CRITICAL DISTINCTION: THE RACE FOR DOMINANCE VS. THE RACE TO SUPERINTELLIGENCE
Tegmark makes an important clarification that both debaters actually find reasonable. There is not one race. There are two separate races that people often conflate. The first race is for dominance in AI capabilities and application: who can develop the most powerful, most useful, most economically valuable AI tools? This is a legitimate race. The country that wins this race gains economic advantage, technological prestige, and military capability. This is what Ball's AI Action Plan emphasizes. This is what American policymakers worry about when they cite China. The second race is over who can be first to develop superintelligence that hasn't yet been proven safe and controllable. This is what Tegmark calls a suicide race. There's no prize for coming first. First place in this race is extinction. Or at minimum, it's permanently ceding control of Earth to a system humans no longer govern. Tegmark believes we can continue the first race at full steam. Build powerful AI tools. Compete globally. Make incredible advances in medicine, science, and productivity. But the second race should be deliberately slowed. We should not race to build superintelligence until we can demonstrate it will be controlled. This is not a call to stop AI development. It's a call to channel it wisely—toward things we know how to keep safe, away from things we don't yet understand. Tegmark speculates that China, which values control above all else, will naturally slow or prevent superintelligence development that it cannot control. The U.S. should do the same. In this scenario, both countries continue advancing AI capabilities and competing for dominance, but neither builds uncontrolled superintelligence. The race for dominance continues. The suicide race is forestalled.
THE ROLE OF PUBLIC OPINION AND DEMOCRATIC LEGITIMACY
Tegmark emphasizes that 95 percent of Americans in recent polling do not want a race to superintelligence. This is not a fringe position held by a tiny group of AI skeptics. It's a mainstream view shared by most citizens. Moreover, the Future of Life Institute's statement calling for superintelligence prohibition garnered signatures from an extraordinarily diverse coalition. Conservative influencers like Steve Bannon. Progressive leaders like Bernie Sanders. National security officials like retired Admiral Mike Mullen. Religious leaders. Technologists. Scientists. What could these groups possibly have in common? Tegmark's answer: they're all human. They all recognize intuitively that deliberately building something designed to supersede humanity is not a wise or ethical course. Tegmark uses the analogy of child pornography legislation. When activists call for a ban on child pornography, they're not providing a technical definition of exactly what that means. But there's public consensus that it's wrong, and that consensus creates political will for policymakers to draft the specific language that makes it illegal. Tegmark's statement is analogous. It's a moral and political statement expressing public consensus that superintelligence development should be restricted, creating a foundation upon which specific policy can be built. Ball argues that American democracy was designed to be skeptical of raw democratic impulse. The Constitution was written by founders who distrusted pure democracy and built in checks and balances to require consensus before passing laws that grant government new powers. Laws, after all, are ultimately enforced through the government's monopoly on violence. That's a sacred power that should be granted cautiously. Ball worries that a regulatory regime based on public fear about superintelligence could lock in policies that prevent beneficial technology. This disagreement reflects a fundamental tension in liberal democracy: when should we act on public preference, and when should deliberative institutions resist populist impulses in favor of expert judgment and long-term thinking?
SPECIFIC REGULATORY MODELS: HOW WOULD SAFETY STANDARDS ACTUALLY WORK?
Both debaters discuss concrete regulatory models, and here their agreement is more extensive than their disagreement. Tegmark proposes a tiered approach inspired by pharmaceutical regulation. AI systems would be classified by risk level, from level 1 (minimal danger) to level 4 (extremely high stakes). A translation system might be level 1. An AI designed for protein synthesis or DNA synthesis would be level 3 or 4. Higher levels would trigger proportionally higher safety requirements. Companies would have to demonstrate to independent experts that safety standards are met. These experts would have no financial interest in approval, preventing conflicts of interest. The burden would be on the company to quantify benefits and risks. Government experts would approve or deny deployment. Ball accepts that such tiered regulation might make sense, particularly for clearly dangerous capabilities like bioweapon design or cyberattack facilitation. He already supported SB 53, which required certain testing for models that could pose catastrophic cyber or bio risks. He agrees with Tegmark that the FDA model has been successful in preventing dangerous medicines from being deployed to the public. The disagreement is about scope and timing. Tegmark wants comprehensive regulation on all frontier AI systems, with explicit requirements around superintelligence control. Ball wants targeted regulation focused on demonstrable harms, allowing the regime to evolve through experience rather than being established preemptively. Ball criticizes what he calls the FDA's own regulatory failure: the agency was built on assumptions appropriate to the 1920s when diseases were thought to be discrete conditions affecting populations uniformly. But modern medicine has revealed diseases are far more complex and varied. Individual patients require tailored treatments. The FDA's one-size-fits-all regulatory approach is now outdated. Modern medicine is being advanced more by software companies using AI on medical data than by pharmaceutical companies operating under FDA constraints. Ball uses this as a cautionary tale: regulatory regimes lock in assumptions that become obsolete.
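One way to picture the scope disagreement is to write the tiered proposal down. The sketch below encodes one possible reading of Tegmark's four levels in Python; the example systems and the obligations attached to each tier are assumptions for illustration, since the debate specifies only the broad shape (level 1 minimal danger through level 4 extremely high stakes). Ball's narrower alternative would, roughly, keep only the capability-specific triggers such as bioweapon design and catastrophic cyberattack risk and leave the rest to adaptive governance.

```python
# One possible encoding of the tiered approach Tegmark describes. The example
# systems and requirements are illustrative assumptions, not terms from the debate.
RISK_TIERS = {
    1: {"example": "translation, spam filtering",
        "requirements": ["basic documentation"]},
    2: {"example": "general-purpose chat assistants",
        "requirements": ["pre-release red-teaming", "incident reporting"]},
    3: {"example": "tools for protein or DNA design",
        "requirements": ["independent third-party audit", "quantified risk case"]},
    4: {"example": "systems approaching broad human-level autonomy",
        "requirements": ["regulator approval before deployment",
                         "demonstrated controllability evidence"]},
}

def requirements_for(tier: int) -> list[str]:
    """Look up the (illustrative) obligations attached to a risk tier."""
    return RISK_TIERS[tier]["requirements"]

print(requirements_for(3))  # -> ['independent third-party audit', 'quantified risk case']
```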
CORE AREAS OF AGREEMENT
Amidst the disagreement, several important points of consensus emerge. Both Tegmark and Ball agree that AI systems can pose real, near-term catastrophic risks. Both acknowledge that bioweapon design capabilities and cyberattack facilitation are genuine concerns that should be addressed through policy. Both support some form of safety evaluation and standards for frontier AI systems. Both think the term "superintelligence" has become so overloaded and vague that it's lost clear meaning and should be replaced with more specific, technical definitions of capabilities and risks. Both agree that innovation is important and that overly burdensome regulations could prevent beneficial technology. Both recognize that American and Western AI dominance is a legitimate goal that shouldn't be sacrificed lightly. Both acknowledge that international coordination on AI safety is difficult and that unilateral restrictions by the West could have geopolitical costs. Both understand that society needs to prepare for institutional challenges posed by advanced AI, that the relationship between humans and technology will fundamentally shift, and that governance structures need to evolve. Both see themselves as protecting humanity, not limiting progress. They simply disagree about which path protects humanity best.
THE ROLE OF VOLUNTARY INDUSTRY COMMITMENTS AND STANDARDS
Ball emphasizes that leading AI labs have already made voluntary commitments regarding safety. These companies have published frameworks, established safety teams, and committed to transparency in certain areas. Ball also notes that his work in the Trump administration helped establish what is now called the Center for AI Standards and Innovation, specifically to develop technical standards for AI that can inform both industry practice and eventual regulation. The question beneath this is: are voluntary commitments sufficient, or are binding regulations necessary? Tegmark argues that voluntary commitments, while welcome, lack enforcement mechanisms. If a company discovers its system poses risks but releasing it is extremely profitable, will voluntary commitments hold? Tegmark points out that if someone sues after harm occurs, common law liability applies, but these mechanisms are reactive and insufficient for tail risks that could dwarf any company's assets. This gets to the fundamental difference in philosophy. Ball believes that reputational incentives, legal liability exposure, and competitive dynamics are sufficient to motivate safety even without explicit regulations. Tegmark believes that for risks large enough to potentially destroy civilization, we need proactive, binding constraints that prevent deployment of uncontrolled superintelligence in the first place. The stakes are too high for reactive mechanisms.
UNCERTAINTY AND EPISTEMIC HUMILITY: ACCEPTING WHAT WE DON'T KNOW
Both debaters acknowledge significant uncertainty about the future. Tegmark emphasizes humility about what superintelligence might do: we haven't observed superintelligence yet, so our models are necessarily speculative. But he argues this is exactly why caution is warranted. With Thalidomide, nobody predicted that a medicine for morning sickness would cause birth defects. The mechanism was unexpected. With superintelligence, we're similarly blind to potential failure modes. We're closer to understanding how to build it than how to control it. That gap should terrify us. Ball also emphasizes uncertainty, but in a different direction. He's uncertain about whether the assumed risks are actually likely. He's uncertain about how AI systems will actually behave when deployed. He's uncertain about society's ability to adapt to advanced AI. He's uncertain about whether regulatory frameworks designed today will remain relevant years or decades from now. Given all this uncertainty, he prefers maintaining flexibility over locking in precautionary restrictions. This difference in how uncertainty translates to policy preference is crucial. Both agree there's much we don't know. But Tegmark says: therefore be cautious with existential risks. Ball says: therefore avoid rigid constraints that lock in assumptions we're uncertain about. It's a legitimate disagreement about how to handle deep uncertainty.
WOULD COMPANY LEADERS KNOWINGLY RELEASE DANGEROUS SUPERINTELLIGENCE?
Ball expresses confidence that leading AI lab executives—Sam Altman at OpenAI, Dario Amodei at Anthropic, Demis Hassabis at Google DeepMind—would not knowingly release a system they believed posed grave risks of destroying civilization. These are thoughtful people who employ alignment researchers and take existential risks seriously. Ball finds it implausible they would deliberately cross a line they knew to be catastrophic. Tegmark responds with a sobering historical analogy. The makers of Thalidomide did not deliberately poison babies. They genuinely believed the drug was safe based on available evidence. The harm emerged through mechanisms they didn't anticipate and couldn't have predicted. Similarly, company leaders might be in good faith wrong about safety. A system might pose catastrophic risks in ways that aren't obvious until it's too late. The gap between what leaders believe a system will do and what it actually does is precisely where danger lurks. Tegmark also notes that both Dario Amodei (CEO of Anthropic) and Sam Altman have publicly discussed P(Doom) scenarios—estimates of the probability of AI-caused extinction. Dario has spoken of 25 percent probabilities. Sam has mentioned "lights out for everybody" scenarios. If company leaders assign non-trivial probability to extinction, yet continue scaling their systems without regulatory constraints, are they knowingly accepting catastrophic risk? Tegmark finds this troubling and argues it strengthens the case for external safety standards that constrain deployment before risks materialize.
ECONOMIC DISRUPTION AND THE QUESTION OF MASS UNEMPLOYMENT
Beyond existential risks, another concern animates the superintelligence debate: economic disruption and the possibility of mass technological unemployment. Tegmark notes that many who signed the superintelligence ban statement did so partly because of labor market concerns. Conservative signatories like Steve Bannon worry that superintelligence would make all human labor economically obsolete, leading to dependence on government UBI or corporate charity. They view this as undesirable dependency. Progressive signatories like Bernie Sanders worry about unprecedented wealth concentration—that AI would be developed and controlled by a tiny clique in Silicon Valley, creating the most massive power concentration in human history, with all economic value flowing to machine owners rather than workers. Ball downplays this concern, though not dismissively. He argues that the economy naturally creates and destroys millions of jobs every year through normal technological change. AI-driven job losses, even if significant, would be manageable within the existing pace of economic disruption. He also emphasizes that technology historically creates new job categories that didn't previously exist. The internet destroyed travel agent jobs but created web design jobs. AI will likely follow the same pattern. Moreover, Ball warns against using labor market concerns as a hidden basis for regulation. If the real concern is job loss, say so explicitly. But framing it as an existential risk or a safety issue is intellectually dishonest. It opens the door to using safety regulations as a cudgel against any technology that displaces workers, which would stifle all beneficial innovation. Tegmark agrees that job loss is a complex issue separate from existential risk. But he also pushes back on Ball's optimism. The difference between previous technological disruptions and superintelligence is that previous disruptions only replaced certain types of labor—farm work, factory work, routine office work. Superintelligence would replace cognitive labor itself. There's no historical precedent for a technology that makes humans obsolete at thinking and problem-solving. We might adapt, but it's not obvious we'd adapt well.
THE INSTITUTIONAL CHALLENGE: GOVERNING A WORLD WITH SUPERINTELLIGENCE
Both debaters recognize that if superintelligence emerges, it would fundamentally challenge existing governance structures. Ball describes a scenario he calls the rentier state: a future where a small elite controls advanced AI systems and the vast majority of humans have minimal agency or role in economic or political life. Humans become dependents of an AI-driven system run by a few. This is Ball's nightmare scenario regarding tyranny. Even if superintelligence doesn't become explicitly antagonistic, the power imbalance could create permanent subjugation. Both debaters see this as a genuine risk to be concerned about. But they disagree about whether restricting superintelligence development helps or hurts this outcome. Tegmark argues that if we don't limit superintelligence development, it will be developed by private companies motivated by profit, which will inevitably create such power concentration. Better to slow development and ensure any superintelligence is developed under strict public oversight and democratic control. Ball argues that restricting superintelligence development through regulation could actually create the rentier state by forcing such development into a narrow, government-sanctioned path. He prefers a world where many actors are working on advanced AI, where there's competition and decentralization, making it harder for any single actor to monopolize superintelligence. This disagreement reflects a deeper question: does safety require concentration of control, or does it require distribution of control?
CLOSING PERSPECTIVES: WHERE THE DEBATE ENDS
MAX TEGMARK'S VISION: A PRO-HUMAN FUTURE
Tegmark concludes with optimism, not pessimism. His advocacy for caution stems from deep enthusiasm for humanity's possibilities. He wants a future where humans remain in control, where we continue the millennia-long project of liberating ourselves from suffering and constraint. He emphasizes that this isn't a choice between more AI or less AI. It's a choice between two types of AI development. The first path: continue full steam ahead with powerful, controllable AI tools. Build autonomous vehicles that save a million lives per year. Develop drug discovery systems that cure cancer. Create educational tools that amplify human capability. Compete globally for dominance in AI applications. This path gets us extraordinary benefits without existential risk. The productivity gains, the medical advances, the scientific breakthroughs are immense. The second path: race toward superintelligence without knowing how to control it. Here, the upside is unclear. What's the benefit of superintelligence we can't control? We end up depending on handouts from machine owners or the government. Humanity cedes agency. The only appeal is to competitive impulse—being first. But being first in a suicide race is not an achievement. Tegmark finds it deeply unambitious to deliberately create something superior to and uncontrollable by humanity after we've spent so long building our way toward agency and empowerment. The coalition supporting superintelligence restriction spans the political spectrum because it reflects a basic human instinct: we want to remain in charge of our own future. The inspiring vision Tegmark champions is one where AI remains a tool that amplifies human flourishing for billions of years, potentially spreading human values across the cosmos. This requires just one thing: keeping superintelligence development constrained until we solve the control problem. "If we have to wait 20 years," he concludes, "it's way better than racing to it and bungling it and squandering everything."
DEAN BALL'S VISION: ADAPTIVE RESILIENCE AND INSTITUTIONAL EVOLUTION
Ball concludes by emphasizing that the future is profoundly stranger and more adaptive than current assumptions suggest. A person from two centuries ago would find today's world unimaginably alien. The same will be true two centuries hence. We cannot confidently predict what superintelligence means or what world it creates. That uncertainty should lead us toward flexibility, not rigidity. Ball believes humanity has remarkable capacity to adapt to technological change. Throughout history, technology has disrupted existing arrangements, and societies have evolved new institutions to handle the challenges. This isn't automatic—it requires thoughtful policy, innovation, and adaptation. But it's possible. Most crucially, Ball argues against locking in today's assumptions through rigid law. Regulatory regimes designed preemptively, without real-world experience, tend to enshrine assumptions that become obsolete. Better to maintain flexibility, allow incremental learning, and evolve governance structures as we understand better what we're dealing with. Ball worries that precautionary superintelligence bans would prevent beneficial technology, concentrate power dangerously, fail to achieve international coordination, and ultimately make us worse off. He advocates instead for building a society capable of grappling with advanced technology—a society with strong institutions, healthy democratic processes, and the flexibility to evolve as conditions change. This requires taking risks seriously but not assuming catastrophic failure. It requires both caution and ambition. "Details matter," Ball argues. "Getting this right is not going to be a matter of taking regulatory concepts developed for other things off the shelf and applying them. It's going to be much more difficult than that." The solution is not simple laws but sophisticated institutional evolution.
THE DEEPEST DISAGREEMENT: PROBABILITY, PRECAUTION, AND THE FUTURE
The under-one-percent versus above-90-percent disagreement on P(Doom) represents a fundamental epistemic divide. Tegmark draws on decades of AI research, conversations with leading alignment researchers, and careful analysis of control problems. His 90 percent estimate reflects genuine uncertainty—he's not claiming perfect knowledge. But the gap between our ability to build superintelligence and our ability to control it seems vast and concerning. The most popular control approaches fail in the majority of scenarios he's modeled. The alignment problem remains fundamentally unsolved. Given this gap, proceeding without safety constraints seems reckless. Ball's low estimate reflects a different set of observations. Leading AI labs have strong incentives to be careful. They employ alignment researchers. They care about safety. Society has shown remarkable adaptive capacity throughout history. Regulatory regimes designed without real-world experience tend to fail. New technologies often turn out differently than expected. Given this, he finds extinction-level risks less likely and precautionary bans more dangerous. These disagreements about probability drive radically different policy conclusions. If there's a 90 percent chance of catastrophe without restrictions, then restrictions are urgently necessary. If the chance is below one percent, then restrictions would likely cause more harm than good. Neither debater is irrational. Both are engaging seriously with genuine uncertainties. But they weigh the risks of action and inaction differently. This is ultimately a judgment call about which uncertainty is worse to ignore.
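How the two estimates drive opposite conclusions can be shown with a toy expected-cost comparison. All the cost values in the Python sketch below are arbitrary illustrative assumptions; only the two probability estimates echo the debate. Part of the real dispute, of course, is whether any finite cost can represent an existential loss at all; the sketch only shows how the probability term flips the sign of the comparison.

```python
# Toy expected-cost comparison. The cost values are arbitrary illustrative
# assumptions; only the two probability estimates echo the debate.

def expected_cost(p_doom: float, doom_cost: float, delay_cost: float, regulate: bool) -> float:
    """Regulating forgoes some benefit (delay_cost); racing ahead risks the
    catastrophic outcome with probability p_doom."""
    return delay_cost if regulate else p_doom * doom_cost

DOOM_COST = 1_000   # assumed cost of losing control (arbitrary units)
DELAY_COST = 10     # assumed cost of slowing development (arbitrary units)

for p in (0.90, 0.005):  # a Tegmark-style estimate vs. a Ball-style estimate
    race = expected_cost(p, DOOM_COST, DELAY_COST, regulate=False)
    pause = expected_cost(p, DOOM_COST, DELAY_COST, regulate=True)
    better = "regulate" if pause < race else "proceed"
    print(f"P(doom)={p:.3f}: race={race:.1f}, regulate={pause:.1f} -> {better}")
```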
CONCLUSION: AN UNRESOLVED BUT CLARIFIED DISAGREEMENT
This debate does not resolve the question of whether humanity should ban superintelligence development. Rather, it clarifies what the actual disagreement is and why thoughtful people can disagree so profoundly. Both Tegmark and Ball want humanity to flourish. Both take risks seriously. Both understand that AI represents a transformative technology that could improve human life dramatically or undermine human agency. They simply disagree about which path is wiser given deep uncertainty about the future. Tegmark believes the existential control risk is real and large enough that caution is warranted. We should regulate superintelligence development until we can demonstrate control, using frameworks modeled on successful regulatory regimes in other industries. This wouldn't stop beneficial AI development; it would channel it toward applications we understand and can keep safe. Ball believes existential control risks are overstated and that precautionary regulation would prevent beneficial technology, concentrate power, and lock in obsolete assumptions. Instead, we should let technology develop with adaptive governance that evolves through experience, maintaining flexibility while addressing concrete, demonstrable harms. Perhaps the real wisdom lies in recognizing the legitimacy of both concerns. We genuinely face risks from superintelligence that escapes human control. We genuinely face risks from regulatory overreach that prevents beneficial technology. The challenge is navigating between these risks with both humility and determination. What seems clear from this debate is that the status quo—where superintelligence development is completely unregulated and uncontrolled—satisfies neither debater. Tegmark finds it recklessly dangerous. Ball finds it inadequate because it lacks the voluntary industry standards and technical benchmarks he advocates. Perhaps the path forward involves some middle ground: encouraging industry-led technical standards, supporting alignment research, developing concrete capability benchmarks, and maintaining readiness to implement regulatory constraints if and when we're confident about their necessity and design. The question of whether to ban superintelligence development is ultimately a question about humanity's future, about who controls powerful technology, about the balance between caution and innovation. Reasonable people, thinking carefully about these questions, can and do arrive at different conclusions. The debate continues not because one side is thoughtless, but because the stakes are genuinely high and the uncertainties are genuinely deep.