Why this moment feels different. Again.
Over the past few months, there has been a noticeable shift in how AI shows up in the real world. Not just in research papers or product announcements, but in how work is done, how decisions are made, and how systems interact with each other.
This is not because a single breakthrough suddenly changed everything. It is because several trends converged at once. AI systems became more reliable at carrying out multi-step tasks. They became easier to connect to real systems such as email, payments, software tools and databases. And they began to operate continuously, rather than only when prompted by a human.
As a result, the pace of change feels faster, and in many cases it is. What is most striking is not that AI is improving, but that it is increasingly able to act.
From tools to actors.
Until recently, most people experienced AI as a tool. You asked it a question, it produced an answer. You asked it to draft something, it generated text. The human remained clearly in control of when work started and stopped.
That boundary is now blurring.
A growing number of AI systems are designed to take a goal and then plan, execute and adjust their actions over time. These systems can use other software tools, request clarification, monitor outcomes, and continue working without constant human input.
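To make that pattern concrete, here is a minimal sketch of the loop such systems run, written in illustrative Python. It does not reflect any particular vendor's product; the goal, tool names and plan/act functions are hypothetical, chosen only to show the shape of the cycle: take a goal, plan, act through tools, observe the outcome, and adjust.

```python
# A minimal sketch of the agent pattern described above. All names are
# illustrative; in a real system plan() would call a language model and
# act() would invoke real software tools.

from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    history: list = field(default_factory=list)  # (action, outcome) pairs so far

    def plan(self) -> str:
        """Choose the next action from the goal and what has happened so far."""
        return "send_email" if not self.history else "done"

    def act(self, action: str) -> str:
        """Execute the chosen action via an external tool and return the outcome."""
        tools = {"send_email": lambda: "email sent", "done": lambda: "finished"}
        return tools[action]()

    def run(self, max_steps: int = 10) -> list:
        # The loop is the important part: no human prompt between steps.
        for _ in range(max_steps):
            action = self.plan()
            outcome = self.act(action)
            self.history.append((action, outcome))
            if action == "done":
                break
        return self.history

print(Agent(goal="follow up with the client").run())
```

The detail that matters for leaders is the loop itself. Once planning, acting and observing happen inside a cycle with no human prompt between steps, the human's role shifts from initiating each action to setting the goal and the boundaries.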
Once systems are able to act, questions of usefulness quickly give way to questions of structure. Acting systems participate inside existing workflows. That shift forces attention away from individual use cases and towards the organisational context in which those systems operate.
As AI systems become capable of planning, executing and adapting over time, the limiting factor is no longer technical performance, but rather organisational design. The assumption that agents can simply be deployed into existing structures, processes and controls without consequence is already proving false. Acting systems interact with incentives, permissions, escalation paths and accountability frameworks that were designed for human decision-making and human pace.
When those frameworks are left unchanged, risk migrates rather than disappears.
This is why the most significant failures of agentic AI are unlikely to stem from model accuracy or compute constraints. They will arise from misaligned operating models: unclear spans of control, poorly defined handovers between humans and machines, fragile trust boundaries, and decision rights that no longer map cleanly to responsibility. Treating agent deployment as a technology programme rather than an organisational redesign is an exposure, not a neutral choice.
In January 2026, this became visible in a way that was hard to ignore. Large numbers of AI agents began interacting with each other on an open platform built specifically for them. They created profiles, communicated, explored ways to collaborate and in some cases attempted to transact.
Whether or not every reported number was precise, the underlying signal was clear. Autonomous systems are no longer confined to controlled demonstrations.
This matters because acting systems change how value is created and how risk is distributed. A system that can act can also make mistakes, cause harm, exploit weaknesses and move faster than existing controls were designed to handle.
Intelligence is no longer an abstract debate.
At the same time, the language used to describe AI has shifted. Scientific and policy discussions increasingly refer to these systems as displaying forms of general intelligence, meaning they can perform a wide range of cognitive tasks at or above human level in many domains.
The point here is not whether one agrees with any particular definition. What matters is that this framing changes behaviour. Once systems are widely described as intelligent rather than merely automated, expectations shift. So do questions about responsibility, liability and governance.
It also makes it easier for organisations and governments to justify deploying AI in more sensitive contexts, including security, healthcare, defence and large-scale administration. That increases both the upside and the downside of rapid adoption.
What the leaders (and loudest megaphones) are saying.
Several voices are particularly important in understanding the current trajectory.
Dario Amodei, the CEO of Anthropic, argues that while public opinion about AI swings back and forth between excitement and scepticism, the underlying capabilities are improving steadily. He describes this as a smooth and persistent increase rather than a series of isolated leaps.
His central warning is that powerful AI systems lower barriers. They take expertise that once required years of training and make it accessible through guided workflows. That applies to productive activities such as research and engineering, but also to harmful ones such as cyber-attacks or biological misuse. He also highlights the risk of rapid job displacement, especially in entry-level professional roles, and the concentration of power in organisations that control large-scale compute and data.
Sam Altman, the CEO of OpenAI, is signalling something complementary from a commercial perspective. His public statements and recent interviews emphasise scale, infrastructure and long-term investment. The ambition is not to build a better application, but to create a foundational layer that large parts of the economy depend on.
Mustafa Suleyman, CEO of Microsoft AI, has recently added a more operational dimension to these claims. He has argued that many computer-based professional tasks, including legal, accounting, marketing and project management work, could be automated within the next 12 to 18 months. The significance of this is that leaders of major technology platforms are planning and investing as if large parts of white-collar work are structurally automatable in the near term. That framing alone alters behaviour inside organisations.
Taken together, these positions point in the same direction. The technology is advancing quickly, it is being industrialised, and it is being embedded into everyday systems. The question is no longer whether AI will matter, but how quickly organisations adapt to the consequences.
Why this matters across all sectors.
One of the most common mistakes organisations make is to assess AI only within the boundaries of their own industry. That approach is increasingly risky.
Developments in defence show this clearly. Military systems are being designed to sense, decide and act at machine speed. Human oversight still exists in theory, but in practice decisions are often made faster than people can meaningfully intervene. The same design principles are now appearing in cyber security, fraud detection, logistics and emergency response.
In biology and healthcare, AI systems are moving from analysing data to designing new molecules and genetic sequences. This does not mean catastrophic outcomes are inevitable, but it does mean that existing regulatory and risk models, which assume slow and specialist progress, are under strain.
In professional services and corporate functions, AI agents are beginning to take on end-to-end workflows. This affects not just efficiency, but training, pricing models and career structures. Tasks that once served as entry points for junior staff are increasingly automated, raising difficult questions about how experience is developed.
For some organisations, these implications remain abstract. For others, they are already shaping concrete decisions about how work is organised and where future capacity will come from.
Goldman Sachs has been explicit that its use of AI agents is part of a multi-year effort to reorganise how work is done across the firm. Working directly with Anthropic engineers, the bank is embedding autonomous agents into core functions such as trade accounting, transaction reconciliation and client onboarding. These activities sit at the heart of control, compliance and client trust.
Importantly, Goldman does not frame this as a narrow efficiency initiative. Its leadership has linked agent deployment to broader structural decisions, including constraining future headcount growth and reshaping how capacity is created inside the organisation. The language used is one of digital coworkers, but the implications are structural. When agents are introduced into complex, process-intensive roles at scale, they alter escalation logic, accountability and career progression. This is operating-model change, whether or not it is labelled as such.
What matters here is not the specific technology stack or vendor relationship. It is the pattern of behaviour. When agents are embedded into core processes, organisations are forced to confront questions they can otherwise postpone: who supervises what, how judgement is exercised, and how experience is accumulated when machines perform much of the work.
These changes cut across sectors because they affect how decisions are made, how work is organised, and how accountability is assigned.
Who gains power.
As AI becomes more agentic, power concentrates in specific places.
Organisations that control platforms where AI systems operate gain influence over access, standards and economics. Those that control large, high-quality datasets gain an advantage that is hard to replicate. Those that can afford sustained investment in compute and energy infrastructure set the pace of innovation.
At a national level, governments that can mobilise industrial policy, secure supply chains and align regulation with deployment goals gain strategic leverage.
For most organisations, this means increased dependency. Few will build these systems themselves. Many will rely on a small number of providers, often indirectly through software vendors.
What is most likely to break first.
The first failures are unlikely to be dramatic or futuristic. They are more likely to be mundane and damaging.
Trust boundaries will be tested as it becomes harder to distinguish between human and automated actors. Security models will struggle when systems are granted broad permissions to act autonomously. Accountability will become blurred when outcomes result from chains of automated decisions rather than single human choices.
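To illustrate the permissions point, here is a minimal sketch of one way a trust boundary can be made explicit: an allowlist plus a human-approval gate for sensitive actions. All names are hypothetical, and a real deployment would need far more than this; the point is that the boundary is a design decision, not a default.

```python
# A sketch of scoping an agent's permissions. Names are illustrative only.

ALLOWED = {"read_calendar", "draft_email"}          # agent may do these alone
NEEDS_APPROVAL = {"send_payment", "delete_record"}  # a human must sign off

def authorise(action: str, human_approved: bool = False) -> bool:
    """Return True only if the action sits inside the agent's trust boundary."""
    if action in ALLOWED:
        return True
    if action in NEEDS_APPROVAL:
        return human_approved  # broad autonomy stops at this line
    return False  # anything unlisted is denied by default

assert authorise("draft_email")
assert not authorise("send_payment")                  # blocked without sign-off
assert authorise("send_payment", human_approved=True)
```

When a security model instead grants an agent broad, undifferentiated permission to act, there is no line at which autonomy stops, and accountability for the outcome becomes correspondingly hard to assign.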
Labour markets will feel pressure unevenly. When leaders of global technology platforms are publicly suggesting that most computer-based professional work could soon be automated, it is not hard to see why entry-level and structured analytical roles sit directly in the line of fire.
The pressure on entry-level roles is now a present condition, not a future risk. Deloitte’s decision to overhaul its graduate audit training reflects a clear acknowledgement that AI has absorbed much of the repetitive work that once served as the foundation of early professional development. Rather than attempting to preserve outdated task structures, the firm is accelerating technical qualification while shifting training towards judgement, communication and problem-solving.
This response is instructive. When AI removes traditional junior work, organisations face a choice. They can allow early career pathways to erode and hope capability re-emerges later, or they can redesign progression deliberately. The former stores up fragility. The latter requires investment, clarity and leadership attention.
Perhaps most importantly, decision-making processes that rely on slow deliberation will struggle in environments shaped by machine-speed action. This is as true in business and regulation as it is in defence.
The danger of moving the goalposts.
Many of the tensions emerging around agentic AI feel novel, but they mirror challenges organisations have faced before. Limits on spans of control, the need for structured handovers between teams, and the trade-offs between tight and loose coupling are well-understood organisational issues. They do not disappear when work is performed by machines.
A recurring pattern in technology adoption is to redefine significance once change becomes familiar. Capabilities that once seemed remarkable quickly become normal, and attention shifts to the next benchmark.
This creates a false sense of stability. The absence of a single dramatic moment does not mean the absence of profound change. When improvement is steady and compounding, the effects accumulate quietly and then appear suddenly in outcomes.
It is also worth noting that current adoption across many professional services firms remains uneven and experimental. In some cases, reported productivity gains are modest or even negative in the short term. That does not invalidate the structural direction of travel. It simply means the compounding effect may be underestimated because it arrives gradually before it accelerates.
The real risk is not mislabelling the technology. It is underestimating how quickly incentives, behaviours and systems adjust around it.
The human element. Why leadership attention matters now.
Alongside questions of technology, productivity and risk, there is a quieter but equally important issue that leaders must address: how people experience this period of change.
For many employees, AI is no longer an abstract trend. It is something they read about daily, often framed in terms of disruption, job loss, or radical change to careers that were assumed to be stable. Even when those outcomes are uncertain or unevenly distributed, the constant flow of commentary creates a background level of anxiety.
This uncertainty has real consequences. People respond to prolonged ambiguity in predictable ways. Some push themselves harder in an effort to stay relevant. Others disengage, delay decisions, or become quietly risk averse. In most organisations, the first signs are not panic or protest, but loss of confidence, erosion of trust, and declining willingness to invest emotionally in long-term plans.
Fear of job loss often sits alongside rising expectations. As AI tools raise productivity, people can feel pressure to produce more, learn faster, and adapt continuously, while simultaneously questioning whether their role will still exist. This combination is particularly corrosive. It increases stress while reducing the sense of security that normally helps people cope with change.
There is also a strong identity dimension. For many professionals, work is closely tied to self-worth and status. When tasks that once defined expertise are automated, the reaction is not simply resistance to technology. It is often a sense of loss, uncertainty about value, and concern about where judgement and experience still matter.
These effects are not evenly distributed. Early adopters and those closest to decision-making tend to feel more in control. Others may feel left behind or exposed, especially in roles where career progression has traditionally depended on tasks that are now automated. If left unaddressed, this can create internal divisions that mirror wider societal debates about winners and losers.
For leaders, the priority is not to offer blanket reassurance or optimistic slogans. That approach tends to backfire. What reduces anxiety is clarity, fairness, and visible commitment.
Clarity means being specific about what is changing and what is not: which activities are likely to be automated, which areas will be redesigned, and where human judgement remains essential. Vague statements about opportunity do little to help people plan.
Fairness means ensuring that productivity gains are not perceived as coming at the expense of security or dignity. If AI adoption feels like a silent redundancy programme, trust will be lost quickly and may not return.
Commitment means treating skills development and career adaptation as core business issues, not optional extras. Reskilling must be practical, supported by managers, and linked to real opportunities inside the organisation. Asking individuals to manage this alone, in their own time, sends a clear and damaging signal.
Early career pathways deserve particular attention. If AI absorbs the tasks that once helped people build experience and confidence, organisations must deliberately create new ways for judgement, context and professional identity to develop. Failing to do so stores up future fragility.
Finally, leaders need to make space for honest conversation, guided by what we describe as compassionate brutality. This means being clear and direct about what AI is likely to change, including the possibility that some roles will evolve significantly or disappear, while being thoughtful and humane in how those realities are addressed. People need to be able to ask what AI means for their role without being labelled resistant or unambitious. Psychological safety is not a soft issue in this context. It is a prerequisite for successful change, and without it even well-intentioned transformation efforts are likely to fail.
The central challenge is unmanaged uncertainty. When people are left to fill gaps with speculation, they will do so in ways that protect themselves, often at the expense of collaboration and trust.
The organisations that navigate this period best will be those that treat the human impact of AI as a leadership responsibility, not a communications problem. They will acknowledge that the pace of change is real, that it is unlikely to slow, and that moving the goalposts to preserve comfort helps no one.
What leaders should be asking now.
As AI becomes more capable and more embedded in daily operations, leadership attention needs to widen. The questions below are strategic, organisational and human.
First, where are agents or autonomous systems already able to take action within our organisation, or through our suppliers?
This includes systems that can initiate communications, approve transactions, make recommendations that are routinely followed, or trigger downstream processes. If the assumption is that this is not happening, it is worth checking again. In many cases it is occurring indirectly, through third-party tools.
Second, which parts of our organisation are most exposed to uncertainty, not just automation?
This is about people as much as processes. Where are roles changing fastest? Where are career paths becoming less clear? Where might fear or loss of confidence already be affecting behaviour, decision-making, or retention?
Third, are we being clear enough about what will change and what will not?
Reassurance without detail creates speculation. Leaders should be able to explain which activities are likely to be automated, which will be redesigned, and where human judgement remains central. Silence or general optimism will be filled by external narratives.
Fourth, are we prepared to practise compassionate brutality?
That means being honest about difficult realities, including the possibility that some roles will evolve significantly or disappear, while taking responsibility for supporting people through that change.
Avoiding these conversations may feel kind in the short term, but it usually increases harm later.
Fifth, how are we supporting people to adapt in practice, not just in principle?
Reskilling, redeployment and career transition need time, funding and managerial attention. Leaders should ask whether adaptation is realistically supported, or whether the burden is being placed on individuals to manage alone.
Sixth, what happens to early career development in our future operating model?
If AI absorbs many entry-level tasks, new ways of building judgement, confidence and professional identity are required. Without deliberate redesign, organisations risk weakening their future talent pipeline.
Seventh, do our managers have the capability and permission to lead difficult conversations well?
Senior intent matters little if managers lack the skills or confidence to discuss AI, change and uncertainty openly. Leaders should consider whether managers are equipped to lead with clarity and compassion rather than avoidance.
Eighth, where might trust be at risk if we get this wrong?
Trust is fragile during periods of rapid change. Leaders should ask which decisions, messages or delays could undermine it fastest, and what safeguards are in place to prevent that erosion.
Finally, are we designing the future of work with people in mind, or simply expecting people to fit around new systems?
The long-term health of the organisation depends on whether human judgement, dignity and agency are treated as essential, or merely tolerated until automation advances further.
At this point, the primary risk is no longer lack of understanding. Many leaders broadly grasp what AI and agents can do. The harder question is whether they are willing to confront what those capabilities imply for structure, power and people. Framing agent deployment as experimentation can be useful, but it becomes evasive if it delays necessary organisational decisions.
We leave you with this.
The pace of change in AI is not slowing down. What is changing is how exposed leadership decisions have become.
Comfort often comes from redefining the problem so that it feels familiar. Leaders tell themselves that this is another technology cycle, another productivity wave, another skills challenge that can be absorbed incrementally. That instinct is understandable. It is also increasingly costly.
Treating agentic AI as something to be bolted onto existing operating models does not preserve stability. It quietly undermines it. Authority becomes blurred. Career pathways weaken. Accountability fragments. Trust erodes, not because people resist change, but because they are left to interpret it alone.
The most serious risk organisations now face is not that they adopt AI too aggressively. It is that they adopt it narrowly. Deploying agents without redesigning decision rights, progression structures and governance frameworks is not pragmatism. It is avoidance. And avoidance shifts risk onto people, culture and future capability.
The organisations that navigate this period well will not be those with the most advanced models. They will be those that recognise agentic AI as a structural force and respond accordingly.
Leadership now requires more than clarity of vision. It requires willingness to act on what that vision implies.