When Autonomy Dies

AI, Control, and the Restructuring of Professional Work

The Financial Times (FT) recently published a piece titled “Professionals are losing control of their work.” Drawing on data from the Skills and Employment Survey, it revealed that the proportion of workers reporting high levels of task discretion had fallen from 62 percent in 1992 to just 34 percent in 2024. The loss of autonomy, once confined to routine and administrative jobs, is now being observed in professional roles across sectors.

The article pointed to digital systems that structure, direct, and monitor work as the primary drivers of this shift. It also suggested that generative AI might, in some cases, expand autonomy. However, the broader pattern it described is one of contraction rather than expansion.

The risk is not only that AI will displace jobs. It is that it will redefine the nature of professional work itself – reducing judgement, constraining discretion, and embedding systems of control. This restructuring is already under way.

Recent comments from Anthropic CEO Dario Amodei underscore the scale of this shift. In an interview with Axios, Amodei warned that AI could eliminate up to half of all entry-level white-collar roles within five years, potentially pushing unemployment to 10–20%. He noted that leaders are “sugar-coating” the shock to come, and that workers are largely unaware of how rapidly these changes may unfold.

Real-world examples support his view. In May 2025, Microsoft announced it would lay off 6,000 employees (around 3% of its global workforce) as part of a broader strategy to reallocate resources to AI. In the same month, Walmart announced plans to cut more than 1,500 managerial jobs, citing the unprecedented pace of technological evolution. These are not isolated moves; they are indicators of a wider, systemic pivot.

The Practical Reality of Autonomy Loss

What does this shift look like in practice?

  • A lawyer may find that research is no longer a strategic activity, but one routed through pre-approved AI tools that determine source material and structure.
  • A doctor may be required to follow diagnostic protocols generated by predictive algorithms, with deviations discouraged by risk frameworks.
  • A consultant may be asked to align recommendations to automated benchmarking data, even when client context suggests a different course.

In each case, the professional remains present, but the space for agency has narrowed. The work becomes less about interpretation and more about administration.

The Economic Logic of Automation

Much of this transformation is driven by predictable incentives. Gen AI offers speed, cost-efficiency, and measurable output. These attributes are naturally attractive to organisations operating under pressure from shareholders, clients, or regulators.

As a result, the prevailing approach to AI deployment has been primarily operational, not developmental. Efficiency is prioritised over judgement, standardisation over discretion.

This is not new. The implementation of enterprise systems in previous decades (think ERP, CRM, and workflow platforms) followed a similar trajectory. Promising insight and integration, they often delivered rigidity and oversight. Gen AI, with its broader scope and greater potential for substitution, amplifies this trend.

The Disruption and Its Interpretation

Another recent FT article, “The great AI jobs disruption is under way,” discussed how roles across the tech sector are already being reshaped. Developer roles are declining in number. Customer support teams are being reduced. Simultaneously, demand for AI-focused roles is rising.

This is commonly interpreted as a phase of labour market adaptation: older roles give way to new ones, and workers reskill accordingly.

However, such interpretations often overlook the more fundamental restructuring beneath the surface. The concern is not simply whether jobs exist, but how much control workers retain within them. A data analyst who is reskilled to supervise automated pipelines may be employed, but may exercise far less judgement than before.

Amodei points to the same trend: a collapse of traditional career ladders. Junior developers, paralegals, and first-year associates are among those most at risk. These were once formative roles where judgement was built through practice. In their absence, we risk producing professionals who have oversight responsibilities but no meaningful pathway to expertise.

Beyond Autonomy: A Broader Impact on Employment

The loss of autonomy is one piece of a wider shift. What we are witnessing is a full-scale recalibration of work: who does it, how it is valued, and whether there is room for human growth within the machine.

As firms restructure around AI, the impacts go beyond discretion. They strike at access, progression, and stability. Roles that once served as entry points are being automated before alternatives are properly established. Microsoft and Walmart are only the most recent bellwethers in a growing list of organisations making large-scale changes in the name of AI-readiness.

The implications for economic inequality, intergenerational employment, and institutional capability are only beginning to surface. The time for conceptual debate has passed.

The Limits of the Reskilling Narrative

Reskilling is a valid response to technological change. In some domains, it is clearly producing benefits. Developers using GitHub Copilot report greater productivity and reduced routine coding. Analysts benefit from faster data preparation and visualisation.

But these outcomes are not guaranteed. In many cases, reskilling leads to roles focused on oversight, exception handling, or interface management. The professional becomes a steward of the system rather than its shaper.

Moreover, the framing of reskilling can obscure more difficult questions. It assumes that new roles will offer the same degree of engagement and value as those they replace. It also assumes that organisations are designing systems with meaningful human input in mind. Often, they are not.

Systemic Drivers of the Current Approach

While short-term financial incentives are a major driver, they are not the only factor. Competitive dynamics play a role, especially in sectors where early AI adoption is seen as a strategic advantage. Regulatory pressures can also reinforce automation, particularly when compliance is tightly linked to data-driven processes. Even client expectations may contribute, especially when framed around speed or consistency.

These forces are not inherently negative. But together, they encourage a model of AI deployment that favours control, predictability, and replicability, often at the expense of professional discretion.

Organisational Risks

The consequences of removing autonomy and ignoring employment disruption are both cultural and strategic.

Innovation becomes more difficult when professionals have less room to experiment or deviate. Organisational adaptability declines when decision-making is concentrated in systems that are difficult to adjust. Over time, firms risk becoming efficient but inflexible, capable of scale but not of redefinition.

This has implications for long-term competitiveness. Professional capability is not simply a cost centre. It is a source of resilience.

What Responsible Leadership Looks Like

The challenge for leadership is not to resist AI adoption, but to shape it deliberately.

Preserving professional value requires practical design choices. These include:

  • Involving professionals in the development and implementation of AI systems, rather than imposing tools unilaterally
  • Establishing clear protocols for human override, particularly in areas where context or judgement is critical
  • Avoiding performance frameworks that reinforce mechanistic behaviours or punish deviation
  • Protecting zones of work where exploration and discretion are essential, even if they are less immediately efficient
  • Responding transparently to forecasts such as Amodei’s with structured workforce assessments, public communication, and systems that preserve human judgement alongside machine efficiency
  • Being compassionately brutal, guiding the business and its people through the transition

These steps do not inhibit innovation. They enable it, and they ensure that AI complements rather than displaces the core value of professional labour.

Conclusion

Autonomy is not the only casualty in the age of AI. The broader fabric of professional work, including access, learning, progression, and stability, is under strain.

Leaders face a strategic choice. AI can be used to enhance human capability or to strip it down to functional oversight. One path supports long-term resilience. The other risks walking off a societal and corporate cliff-edge.

This is a matter of governance, incentives, and intent. As Amodei warns, the cost of inaction is systemic disruption at scale.
