At the top and bottom of the AI debate

As AI marches ahead, two issues need attention: one is regulation. The other, equally critical, is ensuring that the poor can access and learn the best and latest technology

Sundeep Waslekar

[Bridging the digital gap: Inside “AI on Wheels”, an initiative by social entrepreneur Sachin Joshi, where a bus loaded with all kinds of tech tools goes to the slums of Nashik to educate children of casual labourers]

The debate about the future of Artificial Intelligence (AI) is heating up. Big and small conferences take place every week. Social media is abuzz with talk of the competition between OpenAI, a US company, and DeepSeek, a new Chinese startup. Developing countries are fighting for access to cutting-edge technologies.

The debate ignores the problems at the top end of the AI sector and at its bottom end. At the top end, the biggest problem is that AI might become more powerful than humans. This may happen by 2035, or even by 2030. If Artificial General Intelligence (AGI) is created, it may make us humans its slaves. It may destroy all life on earth. It may manipulate the cyber systems used in nuclear weapons governance. It may then launch a world war without the permission of anyone in the Kremlin or the White House.

At the bottom end, there is a risk that large swathes of humanity may never learn even basic technology. More than two billion people will be treated as inferior to us, the digitally literate. This will create a global caste system. Almost one-third of the people in the world do not have access to the internet. According to the United Nations, perhaps half of all school-going children are deprived of the internet.

Global political leaders meet in big conferences. It is common for them to issue statements about the “ethical use of AI” to address the top end of the problem and “inclusion” to address the bottom end. This is lip service. The leaders and their advisers are good at using very general language. They do not accept the existential risks to humanity. Nor do they recognise the risk of a new caste system that excludes people without computers and the internet.

[Demis Hassabis, Nobel Laureate and co-founder and CEO of DeepMind, has suggested a governance mechanism for AI. Photo by Arthur Petron - Own work, CC BY-SA 4.0, via Wikimedia.] 

Nobel Laureate Demis Hassabis is known for developing AI that can figure out the exact shapes of proteins in our bodies, work that can lead to the discovery of many new medicines. He has suggested the creation of an organisation like the International Atomic Energy Agency (IAEA) to govern AI. World leaders do not want to discuss it. Over the last two years, I have asked many experts from powerful countries whether we can start by creating a special department in the IAEA to govern the role of AI in nuclear weapons. They all said it was impossible. In December, I met Dr Mohamed ElBaradei, a Nobel Laureate and former Director General of the IAEA, who led the agency for over a decade. He said it was perfectly possible. The next morning, I put the same question to a senior official of the Government of Austria who has dealt with the IAEA for decades. He also said it was possible. We can begin with such a department in the IAEA. At the same time, the world can negotiate Hassabis’s idea of a global agency for AGI, not just for the role of AI in nuclear weapons.

As for the bottom end of the debate, I came across a remarkable solution last weekend. Sachin Joshi is a young educationist in Nashik, a semi-rural city in western India. Like many cities in Asia, the Middle East, Latin America and Africa, Nashik has a large population of poor children who do not have computers, internet access, or smartphones. They often do not even have electricity. They go to government schools that lack basic facilities. Many 10-year-old children cannot read and write at the level expected of five- or six-year-olds at the beginning of primary school. How can you teach AI tools to such children? Or should they be permanently deprived of the new technologies in the world?

Joshi has created a mobile classroom, a bus known as “AI on Wheels.” The bus carries computers loaded with many different AI tools. It has a recording studio, a robotics laboratory, tablets, laptops, screens, high-speed modems, and many other facilities powered by solar panels. It goes to the slums of Nashik and uses AI tools to help the children of casual labourers learn reading, writing and mathematics at an extremely fast pace. The teachers on the bus also show the children how to use AI tools for many creative activities. It is expected that the bus will bridge the digital gap in Nashik in the next few years. All of it is pro bono, and Joshi funded it himself. Last year, he was awarded Rs 51 lakh by the Maharashtra government for running the best school in the state; he used the entire amount to fund this initiative.

[Inside 'AI on Wheels' in Nashik]

One bus can change the lives of only a few hundred children in one city. It cannot change the world. We need hundreds of such mobile schools in India, and indeed across the Middle East, Asia, Africa, and Latin America. The poor have a right to the best and latest technology in the world. While we work to solve the problems at the bottom of the pyramid, we should not lose sight of what is happening at the top. We need to pursue the idea proposed by Hassabis of a global agency to prevent AI from taking over human civilisation.


About the author

Sundeep Waslekar

President

Strategic Foresight Group

Sundeep Waslekar is a thought leader on the global future. He has worked with sixty-five countries under the auspices of the Strategic Foresight Group, an international think tank he founded in 2002. He is a senior research fellow at the Centre for the Resolution of Intractable Conflicts at Oxford University. A practitioner of Track Two diplomacy since the 1990s, he has mediated in conflicts in South Asia, in disputes between Western and Islamic countries on deconstructing terror, and in trans-boundary water conflicts, and he is currently facilitating a nuclear risk reduction dialogue between permanent members of the UN Security Council. He was invited to address United Nations Security Council session 7818 on water, peace and security. He has been quoted in more than 3,000 media articles from eighty countries. Waslekar read Philosophy, Politics and Economics (PPE) at Oxford University from 1981 to 1983. He was conferred a D. Litt. (Honoris Causa) of Symbiosis International University by the President of India in 2011.
