India’s AI Gambit: Exploring Our Strategic Options

How India can shape, not just follow, the global rules of AI governance

Sundeep Waslekar


India stands at a pivotal moment in the unfolding global politics of artificial intelligence. Despite resource constraints, the country has the opportunity to emerge not as a passive rule-taker, but as a proactive rule-shaper—an innovator, not just a consumer—and a balancing force in the global AI power equation.

There is broad consensus among policymakers, technology leaders, and scientists in India that AI should be harnessed to advance economic and social development. Key sectors—agriculture, healthcare, education, urban governance, and business—can benefit significantly from the deployment of AI tools.

At the same time, there is growing emphasis on atmanirbharta, or self-reliance. India is investing in building indigenous cloud infrastructure, developing Large Language Models (LLMs), and exploring chip design and manufacturing capabilities.

India has also established an AI Safety Institute, underscoring its commitment to mitigating threats such as deepfakes, explicit content, and financial fraud. The government has issued specific guidelines to support the safe development and deployment of AI technologies.

India’s AI strategy is informed by its success in software exports and Digital Public Infrastructure (DPI). Consequently, the country emphasizes equitable access to technology and data sovereignty—principles that resonate across the Global South, where inclusion and autonomy are core concerns.

Herein lies the paradox: India has built a strong, consensus-driven foundation in AI policy—but remains on the sidelines of the global AI arena, a spectator rather than a key player.

Top of the Pyramid

While India and many Global South nations prioritize inclusion, sovereignty, and harm prevention, the leading AI powers—the United States, China, and the UK—are driven by a different set of imperatives: pushing the frontiers of science and monopolizing technological supremacy.

Their ambition is to develop AI systems capable of producing novel scientific theories, discovering new chemical compounds, and proving previously unknown mathematical theorems. These are no longer speculative goals: today's AI systems already show early signs of superhuman capability in narrow domains.

Consider AI agents like Devin, which can plan, reason, and act over extended timelines. Research systems such as AutoML-Zero go further, evolving their own learning algorithms from scratch—progressing beyond human-guided design. Laboratory experiments have revealed the capacity of AI to deceive, manipulate, and even create harm. In one case, an AI generated 40,000 toxic chemical compounds in six hours—including the structure of VX, one of the most lethal nerve agents known—well before the release of GPT-4.

Such capabilities remain, for now, in controlled environments. But their potential real-world applications—and threats—are rapidly approaching. Military applications are shrouded in secrecy. At present, AI is not integrated into the automated command-and-control systems of nuclear arsenals, yet a race is underway to embed AI in weapons decision-making within the next three to four years. This could shift life-and-death decisions from human hands to algorithms.

Meanwhile, beyond these known advancements, there is uncertainty about where the race to Artificial General Intelligence (AGI) is headed. Insiders like Elon Musk and Kai-Fu Lee suggest AGI—AI that surpasses human intelligence—may be imminent. While scientists differ on timelines, some estimate AGI could emerge in as little as three years.

India’s Response

Indian policymakers and technology leaders have largely sidestepped the AGI debate, choosing to focus on AI’s utility for productivity and social welfare. This cautious pragmatism is understandable—but it may prove shortsighted.

Two implications follow. First, whether or not India engages with these trends, they will continue to evolve—shaping global systems, markets, and security architectures. Second, while the US and China compete for AI supremacy, they are also quietly negotiating norms and rules for global AI governance. Some of these engagements—such as joint diplomatic efforts at the UN General Assembly or visible meetings in Beijing—are already public.

India must not miss this moment.

A Framework for Global AI Governance

To shape the rules of the road, India must lead the effort to craft a global governance architecture that reins in the reckless pursuit of AGI while supporting AI for inclusive development.

India should rally the Global South to propose a three-tier “AI of Humanity Agreement”.

1. A Global Compute Cap

The agreement should establish an upper limit on computing power: 10^28 FLOPs for training and 10^21 FLOPs for inference. This ceiling—well above the expected compute requirements of GPT-5—would require expert validation and rigorous audit protocols, but it would curb only those who seek unchecked dominance.

For context, the proposed cap is 100,000 times higher than what India’s current LLMs use. India is more likely to be constrained by capital, energy, hardware, and talent—not compute limits. Such a framework would enable India and other Global South countries to catch up, without impeding their progress. The cap could initially be set for ten years, with five-year reviews.
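For readers who want to check the headroom claim, here is a quick back-of-the-envelope calculation. The cap figure comes from the proposal above; the training compute implied for India's current LLMs is a back-calculation from the "100,000 times" claim, not an official estimate.

```python
# Illustrative arithmetic only. The 10^28 FLOPs ceiling and the 100,000x
# headroom multiple are taken from the proposal in the text; the implied
# Indian LLM training compute is derived from them, not measured data.
TRAINING_CAP_FLOPS = 1e28        # proposed global training ceiling
HEADROOM_FACTOR = 100_000        # claimed multiple over India's current LLMs

# Back-calculate the training compute the claim implies for current Indian LLMs
implied_indian_llm_flops = TRAINING_CAP_FLOPS / HEADROOM_FACTOR  # ~1e23 FLOPs

print(f"Implied training compute of current Indian LLMs: {implied_indian_llm_flops:.0e} FLOPs")
print(f"Headroom under the proposed cap: {TRAINING_CAP_FLOPS / implied_indian_llm_flops:,.0f}x")
```

By comparison, public estimates put frontier models of the GPT-4 generation in the region of 10^25 FLOPs, which is why the proposed ceiling would bind only the most ambitious frontier labs while leaving ample room for everyone else.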

Think of this as an inverse Nuclear Non-Proliferation Treaty (NPT). The original NPT carved out privileges for five nuclear states. By contrast, the AI of Humanity Agreement would emphasize inclusion—excluding, perhaps, only a handful of ultra-ambitious Big Tech firms, mostly in the US and China, that insist on pursuing AGI without any restrictions.

The EU, South Korea, Brazil, and South Africa are in the process of introducing laws and policy directives requiring prior impact assessment of high-risk systems. On 26 April 2025, President Xi Jinping advised the CPC Central Committee to speed up laws and technical monitoring of high-end risks. President Trump will announce an aggressive AI policy on 22 July 2025, having received 8,755 public inputs, including some about the dangers of AGI. Such debates—about advanced systems slipping beyond meaningful human control, threats that go well beyond deepfakes or financial fraud—are spreading from one parliament to another. India needs to study these developments carefully.

2. The Global AI Accelerator (GAIA)

A neutral, internationally governed AI research lab—backed by abundant energy and capital—should be established to focus on advanced science, health, complexity, and climate research. Much like ITER, the global nuclear fusion project, GAIA would democratize access to cutting-edge AI capabilities.

3. Continental Capacity Creation Clusters (C4)

India, Brazil, and South Africa should host three regional centers for AI capacity-building. These C4 clusters would train the next generation of talent across the Global South, promoting widespread, inclusive adoption of AI.

India as Bridge-Builder

India is uniquely positioned to bridge the gap between advanced AI powers and emerging economies. A core group including Brazil, South Africa, South Korea, Saudi Arabia, and the UAE could give India the diplomatic heft to engage the US and China in negotiating a global agreement.

Contrary to common perception, the American and Chinese governments may be more receptive than expected. Both are wary of ceding too much control to a few corporations.

If India can lead such a diplomatic effort, it will no longer remain on the sidelines—it will step onto centre court.

True sovereignty will come not just from local language models and egalitarian applications, but from active engagement in advanced AI development and governance. Without this, India risks remaining dependent on foreign infrastructure and foundational models.

A Union Cabinet Minister recently called on Indian startups to focus on scientific innovation. This advice will be uncomfortable for those content to copy and consume foreign technologies while ignoring developments such as FinalSpark's biological AI, which requires a million times less energy, and Baidu's patent for AI-enabled interspecies communication—as well as the dangerous AI arms race that could reshape national security and global power dynamics.

India must ascend to a higher AI orbit—one that both mitigates existential risks and maximizes transformational opportunities. With a rich legacy of mathematical logic and ethical reasoning, India can—and must—reclaim its strength.

Sundeep Waslekar is President of Strategic Foresight Group, an international think tank, and author of A World Without War.

This essay is published simultaneously in Founding Fuel and NatStrat, an independent not-for-profit center for research on strategic and security issues. The core theme in this essay will be discussed in an invitation-only Founding Fuel - NatStrat Live in July. A video recording of the panel discussion will be shared with both communities.


About the author

Sundeep Waslekar

President

Strategic Foresight Group

Sundeep Waslekar is a thought leader on the global future. He has worked with sixty-five countries under the auspices of the Strategic Foresight Group, an international think tank he founded in 2002. He is a senior research fellow at the Centre for the Resolution of Intractable Conflicts at Oxford University. A practitioner of Track Two diplomacy since the 1990s, he has mediated conflicts in South Asia, dialogues between Western and Islamic countries on deconstructing terror, and trans-boundary water conflicts, and is currently facilitating a nuclear risk reduction dialogue between permanent members of the UN Security Council. He was invited to address United Nations Security Council session 7818 on water, peace and security. He has been quoted in more than 3,000 media articles from eighty countries. Waslekar read Philosophy, Politics and Economics (PPE) at Oxford University from 1981 to 1983. He was conferred a D.Litt. (Honoris Causa) by Symbiosis International University, presented by the President of India, in 2011.
