AI, Extinction, and the Struggle for Control

At the turn of the millennium, governments and corporations worldwide invested billions to avert a risk whose exact contours were unclear but whose consequences were potentially catastrophic: the Y2K bug. Engineers and executives could not be certain whether the date change would cause planes to crash, power grids to fail, or financial systems to collapse. Yet the possibility of systemic failure, however unlikely, triggered one of the most extensive global mitigation efforts in modern history.

Today, artificial intelligence presents a similar risk profile: high in consequence, difficult to model, and increasingly entangled with critical infrastructure. Yet unlike the Y2K moment, our collective response is not one of unified mobilisation but of acceleration. The warnings are louder, the stakes are higher, and the will to act remains fragmented. The question is no longer whether the risk is real, but why we are failing to respond in proportion.

How Serious Risks Are Ordinarily Assessed

“In the calculus of risk, it’s not just the probability of the storm, but the severity of the flood that follows.” – Orellium

In risk governance, the magnitude of potential harm is weighed alongside its probability. Aviation, nuclear power, and finance all maintain strict protocols for even low-probability catastrophic risks. Civil aviation grounds entire fleets for far less than a one per cent failure rate. Nuclear plants shut down if there is even a small risk of meltdown. Banks undergo regular stress testing to prevent tail-risk collapses. These sectors treat mitigation as essential, recognising that inaction can be expensive and delay can be nearly as detrimental as denial.
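
To make that calculus concrete, the sketch below works through the expected-loss arithmetic such regimes rely on. The figures are entirely hypothetical, chosen to illustrate the logic rather than drawn from any cited estimate:

```python
# Expected-loss comparison for a low-probability, high-severity risk.
# All figures are hypothetical and serve only to illustrate the logic.

def expected_loss(probability: float, severity: float) -> float:
    """Expected loss = probability of the event times the harm if it occurs."""
    return probability * severity

p_failure = 0.01                    # assumed 1% chance of systemic failure
severity = 50_000_000_000_000       # assumed $50tn harm if it occurs
mitigation_cost = 100_000_000_000   # assumed $100bn certain cost of prevention

exposure = expected_loss(p_failure, severity)
print(f"Expected loss without mitigation: ${exposure:,.0f}")   # $500bn
print(f"Certain cost of mitigation:       ${mitigation_cost:,.0f}")

# Mitigation is rational whenever the expected loss exceeds its cost: here
# a mere 1% probability implies a $500bn exposure, five times the cost of
# acting. The same arithmetic is why fleets are grounded and banks are
# stress-tested for tail risks far below certainty.
```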

The contrast with AI is stark. Comparable or greater estimated probabilities of catastrophic harm are met not with coordinated guardrails but with competitive acceleration, often spurred by geopolitical rivalry.

A Rising Chorus

The idea that artificial intelligence could wipe out humanity is no longer the domain of dystopian fiction or fringe thinkers. In May 2023, the Center for AI Safety released an open statement signed by more than 300 AI researchers and executives, stating: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Signatories included Geoffrey Hinton, Yoshua Bengio, and the CEOs of OpenAI, Google DeepMind, and Anthropic.

Two months earlier, the Future of Life Institute had called for a six-month pause on the development of systems more powerful than GPT-4, warning that safety protocols were not keeping pace. That letter, signed by Elon Musk, Steve Wozniak, and others, was among the first mainstream calls to slow AI’s advance.

A More Subtle Doom

While some foresee a sudden catastrophe, others predict a slower erosion of human agency. Researchers from the Objectives Institute and Mila have described a process of “gradual disempowerment” in which AI progressively assumes economic and political control. Katja Grace of AI Impacts, speaking to The Times, asked readers to “imagine a scenario where all the humans are basically living on dump sites.” Nate Soares, the president of the Machine Intelligence Research Institute and a former engineer at Google and Microsoft, compares the situation to a car speeding towards a cliff at 100 mph, estimating that the likelihood of humanity’s demise as a result of AI development is “at least 95%.”

There are early signs that this process is already underway. A Financial Times report from August 2025 details how companies such as Microsoft, Intel, and BT have begun citing AI as the reason for significant workforce reductions. Entry-level and mid-tier white-collar roles are disappearing even as profits grow. The World Economic Forum projects that 92 million jobs could be displaced globally by 2030, even as new roles are created elsewhere. Ford’s chief executive, Jim Farley, has warned that as many as half of white-collar jobs in the United States could be eliminated. Independent layoff trackers also show a clear uptick in announcements citing AI as a factor, reinforcing that this is not simply a forecast but a present trend.

The threat is no longer theoretical: AI is already an active force reshaping labour markets, with minimal public debate and weak safety nets.

The Thin Margin of Stability

Societies often appear stable until they are not. History shows that order can rest on a narrow ledge, with economic shocks, even temporary ones, tipping communities into unrest. Research into past crises demonstrates that when basic economic thresholds are breached, the probability of unrest rises sharply, particularly in regions with weak safety nets. Automation has been a slower-moving force but has already reshaped income distribution and increased political polarisation in exposed regions.

Generative AI compresses these dynamics into a shorter timeframe. Employers are now open about their expectation that fewer human roles will be needed in the near future. Job cuts linked to AI are no longer speculative; they are part of corporate strategy. This is not a prediction of collapse but a reminder that stability depends on cushioning shocks and ensuring gains are fairly shared. The margin for error is thin, and the current trajectory is fast.

What Comes Next

The situation is complicated by geopolitical competition. The Trump administration has adopted an accelerationist, deregulatory stance, rescinding prior oversight measures and promoting rapid federal adoption of frontier models. This approach is likely to pressure other governments to follow suit for fear of losing strategic advantage. In a race where regulation is viewed as a handicap, slowing development becomes improbable. In Nate Soares’ metaphor, we are not just driving towards the cliff without braking – we are pressing harder on the accelerator.

This makes the case for urgent action unavoidable. If the trajectory cannot be slowed, the only responsible course is to act now to embed safeguards, mitigate foreseeable harms, and prepare for destabilising shocks before they arrive.

Responding in Proportion

These realities demand a response proportionate to the urgency outlined above. At the societal level, this would mean treating frontier AI as a catastrophic risk within national risk registers, requiring independent safety cases before deployment, mandating rigorous evaluation and incident reporting, and coordinating internationally to prevent a regulatory race to the bottom. It would also involve building labour market buffers, reskilling funds, and social safety nets to mitigate job displacement. Without such measures, the economic and social destabilisation already visible in early AI-linked layoffs could accelerate.

For the legal sector, proportionate action mirrors this broader imperative. Following regulator guidance from the ICO, SRA, Law Society, and ABA on AI risk management, data protection, and professional duties is not optional; it is central to preserving client trust and legal integrity. Firms should maintain a comprehensive AI risk register and governance framework, implement human-in-the-loop checks for all AI-assisted outputs, and conduct thorough vendor due diligence with model audit rights. They should also disclose to clients where AI outputs are used and provide mandatory training for all legal staff on AI competencies, bias detection, and ethical safeguards. The goal is to embed resilience now, not in the aftermath of a crisis.
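
By way of illustration, the sketch below shows what a single entry in such an AI risk register might capture. The schema, the 1–5 scoring scale, and the example risk are our own assumptions, not a format prescribed by the ICO, SRA, Law Society, or ABA:

```python
# A minimal AI risk register entry for a law firm. The schema and the
# 1-5 likelihood/severity scale are illustrative assumptions, not a
# regulator-prescribed format.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIRiskEntry:
    risk: str                        # description of the risk
    likelihood: int                  # 1 (rare) to 5 (almost certain)
    severity: int                    # 1 (negligible) to 5 (catastrophic)
    owner: str                       # accountable person or role
    mitigations: list[str] = field(default_factory=list)
    human_in_the_loop: bool = True   # is every AI-assisted output reviewed?
    next_review: date | None = None

    @property
    def score(self) -> int:
        """Simple likelihood-times-severity rating, from 1 to 25."""
        return self.likelihood * self.severity

register = [
    AIRiskEntry(
        risk="Hallucinated case citations in AI-assisted drafting",
        likelihood=3,
        severity=4,
        owner="Head of Knowledge",
        mitigations=[
            "Mandatory human verification of every citation before filing",
            "Quarterly review of vendor accuracy benchmarks",
        ],
        next_review=date(2026, 1, 31),
    ),
]

# Surface the highest-rated risks first for governance review.
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(f"[{entry.score:>2}] {entry.risk} (owner: {entry.owner})")
```

Even a structure this simple forces the questions that matter: who owns the risk, what controls exist, and when it will next be reviewed.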

Conclusion

As advisers who spend much of our time designing and implementing AI programmes and roadmaps, we see first-hand both the transformative potential of the technology and the scale of its impact. We remain strong proponents of AI’s benefits, yet we are increasingly struck by how little appreciation there is for the risks of continued unplanned and accelerated deployment.

This perspective is not driven by pessimism, but by an understanding of how quickly stability can fray. The combination of geopolitical competition, fragile economic margins, and a technology with the capacity to displace millions in a short span creates a risk profile that in any other domain would demand urgent mitigation. None of the measures outlined at the societal level appear likely to be implemented in the current political climate. The stance adopted by the Trump administration, and the probable consequence of acceleration as other nations follow suit, make a coordinated slowdown improbable. That reality should focus the mind, not lull it. If we cannot slow the pace, we must at least be clear-eyed about the risks and unflinching in confronting the consequences. The time to understand and prepare for these risks is now, while the window for meaningful action, however narrow, remains open. Waiting for perfect certainty will not save us; proportionate action taken early is the only rational choice. The choice is between acting with foresight today and facing the cost of inaction tomorrow.
