Founding Fuel

AI in Health and Biology: The Good, Bad and Ugly

AI can detect disease and find medicines that no doctor can. It can also design biological weapons that do not exist on Earth

30 March 2025· 5 min read

TL;DR

Artificial Intelligence is fundamentally transforming health and biology, presenting unparalleled strategic opportunities for business leaders. From BlueDot's prescient prediction of global infectious disease outbreaks to AI's pivotal role in accelerating COVID-19 vaccine development and platforms like AlphaFold revolutionizing drug discovery, its impact on efficiency and innovation is profound. General-purpose AI, exemplified by ChatGPT's ability to diagnose rare conditions after conventional failures, further underscores its problem-solving prowess. This era demands a dual-focused approach. While businesses must strategically leverage AI's immense power for competitive advantage and societal good – fostering innovation, accelerating R&D, and driving global health equity – they must also rigorously prioritize robust ethical frameworks and proactive governance to mitigate its inherent risks, particularly its capacity for misuse.

Artificial Intelligence attracted headlines when GPT-4 was released in March 2023. But many important developments took place earlier that helped humanity deal with critical health problems.

Dr Kamran Khan, a professor at the University of Toronto, is the founder of BlueDot, a company that uses AI to predict outbreaks of infectious disease. Its platform tracks 100,000 pieces of information a day in 65 languages, from news websites to airline booking data. In December 2019, it detected a new virus in a market in Wuhan, China, and warned that it would spread beyond the country, advising clients in Canada to avoid routes to Wuhan. This was a month before the World Health Organization declared the Covid emergency. The AI behind BlueDot is known as 'narrow AI': a specialised system dedicated to a specific problem, not a general-purpose tool like ChatGPT.

Pfizer and Moderna, two pharmaceutical giants, used narrow AI to develop their Covid-19 vaccines. The platforms these companies relied on had been under development for decades. AI helped analyse protein structures and optimise candidate vaccines, making the new vaccines possible.

ChatGPT can be used to find solutions to protracted medical problems for which doctors have no answers. In 2019, Courtney, an American mother, found that her four-year-old son, Alex, had stopped growing. He was in constant pain and showed several abnormal tendencies. She consulted 17 doctors at some of America's most famous hospitals, but could not find a remedy for her son. Finally, a few months after GPT-4 was launched in 2023, she entered all of his detailed medical notes into the AI tool. ChatGPT suggested that Alex's symptoms might be consistent with tethered cord syndrome (TCS), a rare neurological disorder in which the spinal cord is abnormally attached to tissue in the spinal canal, restricting its movement and leading to nerve damage and pain. Once the correct diagnosis was made, she found a neurosurgeon who performed the surgery successfully. She now expects her son to live a normal life.

I am aware of how AI is being used to find medicines for cancer and Alzheimer's disease. Kit Gallagher, a postgraduate student at Oxford and a mathematician who has devoted his career to applying mathematics to biology, has found a new method of detecting cancer at an early stage using AI. Many other researchers at Oxford University are also exploring the use of AI to find a cure for cancer.

India and other countries across Asia, the Middle East, Africa, and Latin America bear a heavy burden of infectious and rare diseases. By working together, they can create AI-powered tools that identify diseases early and predict their spread, helping to save millions of lives. They can establish shared data centres and joint research facilities, and nurture the growth of hundreds of companies like Toronto's BlueDot.

Even as we use AI to build a healthy future for half of the world's people, we must be aware of its dangers if proper care is not taken. AlphaFold, a model developed by Google DeepMind under its CEO Demis Hassabis, predicted the structures of 200 million proteins, making it possible to develop medicines that treat diseases at an early stage. The company's more advanced model, AlphaFold 3, is even more capable of helping companies develop drugs.

AI can construct new drug molecules. Moreover, it can design chemicals that do not exist in nature, and it can make infections far deadlier. A rogue scientist could cause havoc by misusing AI, and some fear that advanced models could one day do this without human instruction. We should remember that while AI can help stop the spread of diseases, it can also create new, deadlier ones.

The biggest danger from AI is that it can create new pathogens and toxins. In 2022, a company in North Carolina in the United States instructed an AI model to generate dangerous toxic molecules. Within six hours, the AI produced 40,000 candidates, including the chemical structure of VX, one of the deadliest nerve agents ever made. It also designed dangerous compounds that had never before existed on the planet. The company was only performing a scientific experiment, and it published its findings in a scientific journal.

The experiment is now well known among scientists, and it is why many of them are scared of AI. They know that AI can detect disease and find medicines that no doctor can. They also know that it can design biological weapons that do not exist on Earth. If we take a narrow, competitive approach to AI in order to win a race against other countries, AI will unleash weapons that harm all countries. In early 2025, two of the top companies, OpenAI and Anthropic, strongly indicated that their new models might enable novices to create biological threats. The only alternative is to collaborate on the global stage: to harness AI to make a healthy planet, and to prevent its evolution into a monster that creates biological weapons.

(Important note: This article itself offers an example of how AI can be deceptive and therefore dangerous. I used an AI bot to translate the text above into Marathi for a Marathi newspaper. In translating, the AI shortened and amended every sentence in which I warned about AI as a potential source of dangerous pathogens and chemicals. It changed the tone of the article to create the impression that AI is entirely for the good of humanity and that we need not worry about its adverse implications.)


Sundeep Waslekar

President | Strategic Foresight Group

Sundeep Waslekar is a thought leader on the global future. He has worked with sixty-five countries under the auspices of the Strategic Foresight Group, an international think tank he founded in 2002. He is a senior research fellow at the Centre for the Resolution of Intractable Conflicts at Oxford University. A practitioner of Track Two diplomacy since the 1990s, he has mediated conflicts in South Asia, facilitated dialogues between Western and Islamic countries on deconstructing terror, worked on transboundary water conflicts, and is currently facilitating a nuclear risk reduction dialogue between permanent members of the UN Security Council. He was invited to address United Nations Security Council session 7818 on water, peace and security, and has been quoted in more than 3,000 media articles from eighty countries. Waslekar read Philosophy, Politics and Economics (PPE) at Oxford University from 1981 to 1983. He was conferred a D.Litt. (Honoris Causa) by Symbiosis International University, presented by the President of India in 2011.
