Regulatory sandboxes
Regulating tech is difficult: Regulation is hard because you have to balance the needs of different groups of people. Regulating technology today is especially hard because we don’t yet have a clear sense of how some of these technologies will play out in society, which raises the risk of unintended consequences.
The sandbox approach offers a way forward for fintech firms: The Reserve Bank of India, India’s banking regulator, has released its draft ‘Enabling Framework for Regulatory Sandbox’ for public comments. The idea is not new. Several countries have tried this approach: let fintech firms test their products and services in a controlled environment, monitor the impact on various stakeholders, and shape regulations based on the feedback. It’s the Build-Measure-Learn loop applied to regulation.
AI regulation might be taking the same Build-Measure-Learn path: Earlier this month the European Commission released a set of guidelines for artificial intelligence (AI) development and asked technology companies to pilot them until 2020. If Google is any indicator, technology companies are struggling to get AI right. The Wall Street Journal reports that the search giant, which disbanded an AI ethics council following protests about its composition, found it equally difficult to set up a panel in London to review its AI work in health care.
But whether regulation can keep pace with AI’s growth remains a big question: Companies are finding new ways to gather data; AI is being deployed in more and more industries; universities and training institutions, offline and online, are stepping on the gas to keep up with demand. Regulators, of course, see this, which could lead to an altogether different problem: overregulation. Regulating tech is difficult.
Manipulating genes, fixing brains and printing hearts
Gene editing continues to tread into dangerous territory: You cannot find fault with scientists making a tobacco plant 40% larger than usual, knowing similar feats could help fight global hunger. Using CRISPR to treat cancer, as these University of Pennsylvania scientists are doing, might even be laudable. However, splicing human genes into a monkey’s DNA, as a team of Chinese scientists (yes, again) has done, arguably falls outside ethical boundaries.
Attempts to fix brains are scaling up: In a pilot, doctors used implants to stimulate key areas of the brain in a patient with brain damage. Good. Now it gets interesting. A group of scientists, loosely associated with Elon Musk’s Neuralink and funded by the US Defense Advanced Research Projects Agency (DARPA), has found a way to rapidly implant electrical wiring into the brains of rats. Neuralink’s grand goal is to connect our brains to AI. And in a stunning, ethically ambiguous piece of research, a team of scientists managed to keep a pig’s brain alive for 10 hours after the animal’s death.
Doctors have 3D-printed a human heart: In Israel.
Human and AI bias
Human bias shows up in many ways: In a major survey, researchers found that the sex ratio is skewed against women in many countries. By one estimate, 65 million women were never enrolled on India’s electoral rolls. And this is just gender bias.
Technology often amplifies human bias: Technology can amplify what’s good in human beings and, as anyone who scrolls through Twitter or WhatsApp will know, what’s bad in them. AI bias is simply a reflection of the lack of diversity among those who work on AI. Technology Review points out: “Women account for only 18% of authors at leading AI conferences, 20% of AI professorships, and 15% and 10% of research staff at Facebook and Google, respectively. Racial diversity is even worse: black workers represent only 2.5% of Google’s entire workforce and 4% of Facebook’s and Microsoft’s.”
In the end it depends on us: In a different take, Benedict Evans writes, “... just as for cars, or aircraft, or databases, these systems can be both extremely powerful and extremely limited, and depend entirely on how they’re used by people, and on how well or badly intentioned and how educated or ignorant people are of how these systems work.
“Hence, it is completely false to say that ‘AI is maths, so it cannot be biased’. But it is equally false to say that ML [machine learning] is ‘inherently biased’. ML finds patterns in data—what patterns depends on the data, and the data is up to us, and what we do with it is up to us.”
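To make Evans’s point concrete, here is a minimal sketch in Python. It is purely illustrative: the synthetic ‘hiring’ data, the thresholds, and the variable names are our own assumptions, not from his piece or any real system. A model trained on labels that encode a historical bias reproduces that bias, because the pattern really is in the data.

```python
# Illustrative only: synthetic data showing how ML inherits bias from its labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical skill distributions...
group = rng.integers(0, 2, n)   # 0 = group A, 1 = group B (hypothetical)
skill = rng.normal(0.0, 1.0, n)

# ...but historically biased labels: group B needed a higher skill score
# to be hired (an assumed bias, made up for this illustration).
hired = (skill > np.where(group == 1, 0.5, 0.0)).astype(int)

model = LogisticRegression().fit(np.column_stack([group, skill]), hired)

# Two candidates with identical skill, differing only in group membership.
candidates = np.array([[0, 0.3], [1, 0.3]])
print(model.predict_proba(candidates)[:, 1])  # group B scores lower
```

The maths is not biased; the labels are. Generate the labels with a single threshold for both groups and the gap disappears, which is Evans’s point: what the model learns depends on the data, and the data is up to us.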
Bookmarks
- Jack Dorsey defends Twitter’s anti-abuse AI during a heated TED exchange - Fast Company
- In African villages, these phones become ultrasound scanners - The New York Times
- A cognitive scientist explains why humans are so susceptible to fake news and misinformation - Nieman Lab
[Image by Sabine Zierer from Pixabay]