AI’s future can’t be ‘Made in America’ alone
Himanshu Tyagi
The next five years of AI development will determine whether artificial intelligence supports human potential or becomes a tool of unprecedented control by a handful of nations.
Unfortunately, the race for AI supremacy is framed as an arms race: America versus China. This is a dangerous narrative. The idea that one nation can or should dominate AI development ignores both the global nature of the technology and the risks of concentrated power.
No country can build AI in isolation. America produced 61 notable AI models in 2023, yet nearly 70 per cent of leading AI researchers in the US were born or educated abroad. Two-thirds of American AI start-ups have at least one immigrant founder. Attempting to nationalise AI while restricting foreign talent is self-defeating.
The hardware supply chain shows even greater interdependence. While Nvidia designs AI chips, the Netherlands’ ASML produces the only extreme ultraviolet lithography machines needed to manufacture them.
Taiwan’s TSMC fabricates more than 90 per cent of advanced chips, and Japan supplies critical materials. Recent tariffs and export controls have disrupted this ecosystem, raising costs and slowing progress globally.
Data, AI’s essential fuel, is also international. Effective AI systems require diverse, global datasets to serve varied populations. Models trained on narrow data sources fail to capture cultural nuances and linguistic variations, limiting their utility and embedding harmful biases.
Perils of AI dominance
If one nation could monopolise AI, the consequences would be catastrophic. A US-dominated AI ecosystem would create an echo chamber that embeds Western perspectives into systems used globally. Today’s leading language models already reflect narrow viewpoints, limiting their value to diverse populations.
More concerning is how AI dominance fuels a zero-sum arms race. The US-China competition, with government backing for companies like OpenAI and DeepSeek, treats AI as a strategic resource comparable to nuclear capabilities, with each side hoping its favoured companies will control advanced AI and gain economic and political leverage.
Where would this leave us? Superpowers could dictate global access to AI tools and force smaller nations into dependent relationships reminiscent of cold war alignments. Countries in Africa and south-east Asia risk surrendering autonomy over their AI futures. Dominant powers could withhold AI tools from rivals or flood markets with systems prioritising their own interests.
But this doesn’t provide actual security. AI-powered autonomous weapons could trigger conflicts that escalate faster than human intervention. Dominant nations could deploy AI for global surveillance or economic coercion, and the whole dynamic breeds resentment and dependency worldwide.
Open-source’s role
Centralised AI control is inherently more fragile than distributed ownership. Anthropic’s abrupt termination of AI code editor Windsurf’s access to Claude chatbot models, without transparency or collaboration, shows how closed providers can unilaterally gatekeep critical resources. The global ChatGPT outage revealed our vulnerability. For some this was merely annoying, but imagine if AI-dependent healthcare, infrastructure, or emergency services had failed.
Building fair, resilient AI requires global collaboration through open-source development. When developers from Bangalore to Bogotá can build on shared foundations, we see greater innovation well beyond any single entity’s capabilities. History shows open-source projects thrive by tapping global talent pools, and AI is certainly no different.
The bottom line: centralised systems create single points of failure that are vulnerable to abuse, so we need infrastructure that doesn’t collapse when one nation pulls a lever or one company flips a switch. This is particularly true for wealth management firms and private banks increasingly relying on AI for portfolio optimisation, risk assessment and client services.
Toward digital internationalism
When asked whether America or China should lead AI, my answer is neither. We need digital internationalism: shared tools, standards and responsibilities. AI isn’t a resource to hoard or weapon to wield. It’s technology that will redefine how we live, work and solve global challenges.
The EU’s rights-based AI Act, China’s state-driven model and America’s market-led approach each offer insights, but none provides a complete solution. We need dialogue. India, Brazil, Nigeria and beyond have equal stakes in this technology’s future.
Global governance means establishing baseline standards for access and accountability that benefit everyone. Open-source frameworks, transparent development and collaborative oversight can ensure AI serves humanity, not just the privileged few.
America can still lead, but this leadership can’t come through hoarding power. It can lead by example, building systems that work with and for the world. For financial institutions managing global wealth, open-source development and collaboration are the pragmatic option, because AI that understands and serves diverse global markets will ultimately prove more valuable than systems constrained by national boundaries.
The direction we take for AI development must set aside previous concepts of dominance. We either move toward a fragmented, unstable future of competing AI fiefdoms, or we create an ecosystem where innovation flourishes no matter where a person is based. Our technological future, like our financial markets, thrives on openness.
Himanshu Tyagi is a professor at the Indian Institute of Science and a co-founder of Sentient