AI Sovereignty, the Australian Approach and Strategic Choices for Governments


Artificial intelligence is likely to be the most important economic force over the next two to five years. Countries that secure reliable access to high-quality AI, and build the right foundations early, will gain a major advantage. Those that take the wrong path may find themselves exposed to political pressure or locked into technology that limits their future options.

Australia’s recent announcements with OpenAI have been presented as steps towards national AI sovereignty. We examine how far that claim holds up, consider whether a US-dominated approach is still the safest route, explore a new and important risk created by the Trump administration, and set out guidance for countries deciding how to position themselves.

What Australia Has Announced

Australia’s package contains four key elements.

  1. The first is a NEXTDC-led AI campus at Eastern Creek in Sydney, with total investment expected to be around seven billion dollars. OpenAI intends to be a foundation customer of the facility. If built as described, the site would become one of the largest AI compute hubs in the region.
  2. The second element is the broader OpenAI for Australia initiative. This includes workforce training for more than one million people, a support programme for start-ups and the establishment of a local OpenAI outpost in Sydney.
  3. The third is the National AI Plan, which emphasises sovereignty, capability building and safe development.
  4. The fourth is GovAI, a government platform that allows public sector staff to use AI systems hosted on Australian soil, rather than relying on foreign servers.

Together, these moves give Australia rapid access to advanced AI tools and improved control over where government data is held.

How This Relates to Sovereignty

Sovereignty in AI has several layers. Australia’s progress varies across them.

Infrastructure sovereignty

Australia is clearly moving forward here. Locally built data centres and onshore hosting provide the government with stronger control over where and how its systems operate. This is a meaningful improvement on relying entirely on overseas cloud regions.

Data sovereignty

GovAI helps keep government data inside Australia. This reduces exposure to foreign commercial platforms. However, the underlying cloud environment is still provided by American firms, which means some foreign legal obligations continue to apply. This limits how far data sovereignty can realistically extend unless the underlying suppliers change.

Model and technology sovereignty

This remains Australia’s weakest area. The most important AI systems used in the public sector will be controlled and updated by OpenAI and other US companies. Australia does not control how these models are trained, what data they use or how they evolve. Local efforts to build home-grown models exist, but they are modest in scale.

Regulatory sovereignty

Australia has chosen a light-touch approach to regulation in order to attract investment and encourage adoption. This brings benefits but reduces the government’s leverage over major providers. It is a trade-off between speed and long-term independence.

In summary, Australia is gaining physical control over infrastructure and improving protection of government data but remains dependent on foreign companies for the core capabilities that matter most.

The US Route Versus the Open-Source Alternative

Most countries face a strategic choice.

A US-dominated approach offers fast access to world-leading AI systems, strong commercial ecosystems, and alignment with long-standing allies. The cost is reliance on a small group of foreign companies and the political and legal pressures that shape them.

An open-source approach offers control, transparency, and lower cost. Countries can run systems privately, adapt them to local needs, and avoid foreign legal exposure. Chinese open-source models have reached high quality, though their use raises significant concerns for many democracies around security, political influence, and alignment of values.

Western open-source systems provide a more comfortable middle ground, though they are not yet funded or developed at the scale of the largest American providers.

Australia has chosen to prioritise the US route for now, with only limited investment in open-source or domestic models.

A New and More Serious Category of Risk

Until very recently, most governments treated the United States as a dependable and predictable technology partner. Its institutions were seen as stable, its regulatory environment as rules-based, and its technology companies as commercially driven rather than politically directed. That assumption no longer holds.

The current Trump administration has introduced a new and far more unpredictable dimension to political risk. AI has been placed at the centre of national industrial strategy, and several major AI companies have aligned themselves closely with this agenda. The result is that the distinction between commercial providers and the political priorities of the White House has become much thinner.

What is different now is the style and intent of US foreign and economic policy. Senior commentators have described it as operating less like a strategic doctrine and more like a personal extortion system. Decisions that once followed a clear policy logic can now be shaped by personal interests, commercial entanglements or vendettas. The administration has already shown a willingness to intervene directly in private companies, reward those who fall into line and punish those who do not. It has also blurred the line between public authority and private gain in ways that would once have been unthinkable in a mature democracy.

For countries that depend heavily on US technology providers, this creates a type of exposure that previously existed only in relation to China. Although the United States still has stronger courts and democratic institutions, the behaviour of the current administration means the risk profile has changed. Dependency on a single US vendor now carries strategic vulnerabilities that are not far removed from the risks associated with relying on Chinese platforms. These include the possibility that access to technology, model behaviour, pricing or export permissions could be influenced by political pressures entirely unrelated to the interests of the importing country.

The old assumption that US technology carries minimal political risk is now outdated. Governments and corporates must assess reliance on American AI providers with the same seriousness they apply to any other major geopolitical dependency. In short, we are no longer in the world where alignment with US technology could be taken for granted as a safe and neutral choice. The risk landscape has changed, and it must be assessed with clear eyes.

Assessment of the Australian Approach

Australia has achieved three important outcomes.

It has secured access to high quality AI quickly. It has laid the foundations for strong local infrastructure. And it has created a safer environment for government data through onshore hosting.

However, it remains heavily dependent on a small number of US companies for the most critical technology. It has not yet built a serious domestic or open-source pillar. Its light regulatory stance may help attract investment, but it also reduces long-term leverage.

Australia has moved decisively on infrastructure sovereignty but has much more to do on model and technology sovereignty.

A further consideration, flowing from the risks described earlier, is that Australia’s dependence on US providers now carries a political dimension that did not previously exist. The current US administration has shown a willingness to use regulatory and executive powers in ways that are personal, unpredictable and closely tied to commercial interests. Since Australia’s AI capability rests heavily on US companies, shifts in the political environment in Washington could have direct consequences for Australia’s access to critical technology. This is a new form of exposure and should be recognised as part of any assessment of long-term sovereignty.

Guidance for Governments Still Choosing Their Path

Several principles now appear essential.

Aim for variety rather than dependence. Critical national systems should not rest entirely on one foreign supplier.

Make open models part of the national strategy. Whether developed at home or with trusted partners, open-source models provide genuine independence and adaptability.

Separate sensitive and non-sensitive uses. The most critical functions of the state should rely on technology that your country can fully control. Less sensitive applications can use foreign systems where appropriate.

Strengthen domestic capability. Build your own research, evaluation and safety expertise so that risks can be assessed independently of vendor claims.

Align your strategy with your reality. If the aim is close alignment with the United States, say so clearly. If the goal is actual sovereignty, ensure that investment and architecture support that aim.

Conclusion

AI is no longer just a technology choice. It is becoming a foundation of national power, economic competitiveness and political independence. Sovereignty in this context is not about isolation. It is about having real options and the ability to act if global politics shift.

Countries that combine strong international partnerships with serious domestic capability and meaningful open-source foundations will be far better placed to navigate the decade ahead than those that stake their AI sovereignty on a single foreign provider, regardless of how friendly that partner appears today.
