When we ask machines to do our work

March 19, 2019: A roundup of news and perspective on disruptive technology. In this issue: AI’s impact on surgical skills, air crashes, electric vehicles and data for public good

N S Ramnath

[From maxpixel, CC0]

The unintended consequence of efficiency

Recently, a friend of mine, coming out of back-to-back meetings, picked up his phone to call home. It had run out of power. A colleague helpfully offered his own phone, and my friend suddenly realised he didn’t remember his wife’s number; he had never had to dial it. He eventually found a charger and called her from his own phone. Narrating the incident, he said his father knew hundreds of phone numbers by heart. He himself never developed that skill, and the only number he knew was his parents’ landline.

We often think of technology as enhancing human capabilities. Steve Jobs liked to describe the computer as a bicycle for the mind. Elon Musk says one of his companies, Neuralink, will give human beings a better shot at competing with AI. Those who are less optimistic about technology often talk about machines leaving humans behind. However, we don’t spend enough time thinking about how technology might actually be making us less capable, by taking away the incentives to develop our skills and use our brains.

In Axios, Steve LeVine reports on a study by social scientist Matthew Beane, who found that the new generation of doctors lacks crucial surgical skills because of the use of machines and AI. “In new research, Beane found that across high-skill occupations—in law enforcement, banking and more—the early age of applied AI and robots is leaving young professionals unprepared for their new jobs.”

In a 2017 essay, three business school professors explained the lessons from the crash of Air France flight AF447.

They wrote, “Imagine having to do some moderately complex arithmetic. Most of us could do this in our heads if we had to, but because we typically rely on technology like calculators and spreadsheets to do this, it might take us a while to call up the relevant mental processes and do it on our own. What if you were asked, without warning, to do this under stressful and time-critical conditions? The risk of error would be considerable.

“This was the challenge that the crew of AF447 faced. But they also had to deal with certain ‘automation surprises’, such as technology behaving in ways that they did not understand or expect.”

The recent 737 MAX crash has again turned the lens on how computers—which have played a big role in making planes safer—can also potentially make flying less safe.

As individuals, we are slowly learning how to deal with the risks of engaging with technology. We go on digital diets, keeping computers and other devices away from us. Some people make sure to schedule a few face-to-face meetings every week. Yuval Noah Harari, a guru for the age of AI, meditates two hours a day and takes long breaks every year, shutting himself off from the world outside.

Partly inspired by Harari, my former colleague Malini Goyal recently took a break for a 10-day Vipassana programme, and wrote about her experience.

What about organisations? Today, not only businesses, but also governments and social sector organisations are becoming more and more dependent on technology. What happens if the systems go down? Can human beings replace machines if something like that takes place?

Petrol bunks of the future

Soon, fears about running out of charge won’t be restricted just to phones.

Last year, in a report, McKinsey argued that “with EV [electric vehicle] prices declining and ranges expanding, charging could soon become the top barrier.” One of the big messages of the report: in the long run, cities have to develop public charging stations for EVs.

India seems to be setting its goals right. Last month, the government said it will focus on setting up charging stations to drive growth in electric vehicles.

As of now, it is left to private players to set them up. This tweet from Ather Energy co-founder Tarun Mehta points to a possible road ahead:

Why sharing data is a good thing

There is a bitter, loud fight going on between organisations that don’t take our privacy seriously (Facebook, for example) and privacy activists who don’t take their eyes off such organisations. In that noise, we tend to forget that data can also be used for the good of society. (It is one of the ideas that drives the data empowerment and protection architecture.)

In Nature, Hetan Shah, executive director of the Royal Statistical Society, offers two proposals to ensure that data is used for the public good rather than for purely private gain.

  • First, governments should pass legislation to allow national statistical offices anonymised access to large private-sector data sets under openly specified conditions. (A toy sketch of what anonymised access could look like follows this list.)
  • Second, technology companies should be pushed to revert the data they collect to a national charitable corporation after a specific number of years. The charity would then enable its use for the public good, much as the Human Genome Project’s data was made accessible to the scientific community thanks to John Sulston’s efforts.
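To make the first proposal concrete, here is a toy sketch, in Python, of what “anonymised access” could mean in practice: direct identifiers are replaced with salted pseudonyms and sensitive fields are coarsened before a data set is shared. This is my own illustration, not a specification from Shah’s proposals; the field names, salt, and coarsening rules are all hypothetical.

```python
# Toy sketch of "anonymised access" to a private-sector data set.
# Hypothetical illustration only, not part of Shah's proposals.

import hashlib

SALT = "secret-salt-held-by-data-holder"  # hypothetical; never shared

def pseudonymise(user_id: str) -> str:
    """Replace a direct identifier with an irreversible salted hash."""
    return hashlib.sha256((SALT + user_id).encode()).hexdigest()[:12]

def coarsen_age(age: int) -> str:
    """Generalise exact ages into 10-year bands to reduce re-identification risk."""
    lower = (age // 10) * 10
    return f"{lower}-{lower + 9}"

records = [
    {"user_id": "alice@example.com", "age": 34, "city": "Pune"},
    {"user_id": "bob@example.com", "age": 71, "city": "Delhi"},
]

# What the statistical office would receive: no raw identifiers or exact ages.
shared = [
    {"id": pseudonymise(r["user_id"]), "age_band": coarsen_age(r["age"]), "city": r["city"]}
    for r in records
]
print(shared)
```

The design choice is the usual one in such schemes: identifiers become irreversible for the receiving office, while aggregate statistics (age bands by city, for instance) remain computable.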

Earlier, we argued that India should take a lead in pushing AI as a public good. How we deal with data will be an essential part of that.

Points to ponder

As scientists and technologists work on artificial intelligence, one big question is how well AI systems can navigate the real world. The three extracts below—drawn from a brilliant profile of AlphaGo, a blog post on AI, and an interview with Marc Andreessen—surface some of the issues that human beings, after millions of years of evolution, have become better at managing than machines.

  • The first relates to how we deal with delayed feedback (Gut feeling? Heuristics? Philosophical concepts such as karma?); a short sketch after the extracts illustrates the idea.
  • The second is about the ability to abstract, and to learn from analogies. (Analogies are often seen as a way to explain, but not as a way to learn.)
  • The third is around the complex adaptive nature of the world, and the element of unintended consequences.

1. “AlphaGo takes thousands of years of human game-playing time to learn anything. Many AI thinkers suspect this solution is unsustainable for tasks that offer weaker rewards. DeepMind acknowledges the existence of such ambiguities. It has recently focused on StarCraft 2, a strategy computer game. Decisions taken early in the game have ramifications later on, which is closer to the sort of convoluted and delayed feedback that characterises many real-world tasks.” (DeepMind and Google: The Battle to Control Artificial Intelligence)

2. “It is difficult to imagine a more searing indictment of deep learning than the inability to learn by analogy. Essentially all cognitive development rests on learning and abstracting the principles underlying a set of concrete examples. The failure, thus far, of deep learning to do so reveals the emptiness behind the facade of intelligence presented by current AI systems.” (AI’s Big Challenge)

3. “And so…basically with any kind of creative endeavour, anything that we do in our world—and this is, you know, for products or for companies—they’re launching into technically, what’s called a…there’s actually a mathematical term, ‘complex adaptive system’. The world is a complex adaptive system. And it’s sort of inherently…it’s not a predictable system. It’s not a linear system. It doesn’t behave in ways that you can expect. And that, by definition, so they say, is ‘complex’ because it’s just, you know, many, many, many dimensions and variables, and then ‘adaptive’, like, it changes. Like, things change. The introduction of new product changes the system, and then the system recalibrates around the product.” (Hallucination vs. Vision, Selling Your Art in the Real World: Brian Koppelman Interviews Marc Andreessen)
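The delayed-feedback problem in the first extract is easier to see with a little code. Here is a minimal sketch in Python (my own illustration, not DeepMind’s code) of how reinforcement learning systems typically credit a reward that arrives only at the end of a game back to the decisions taken at the start, via a discount factor.

```python
# Illustrative sketch only: how reinforcement learning credits a delayed
# reward back to earlier decisions. Not DeepMind's code; the episode length
# and discount factor here are hypothetical.

GAMMA = 0.99  # discount factor: how much a future reward counts today

def discounted_returns(rewards, gamma=GAMMA):
    """Return G_t = r_t + gamma * G_(t+1) for every step t, computed backwards."""
    returns = [0.0] * len(rewards)
    future = 0.0
    for t in reversed(range(len(rewards))):
        future = rewards[t] + gamma * future
        returns[t] = future
    return returns

# A 100-step episode where the only reward (+1, a win) arrives at the very
# end, as in Go or StarCraft. Early moves get credit only through discounting.
rewards = [0.0] * 99 + [1.0]
returns = discounted_returns(rewards)
print(f"credit reaching the opening move: {returns[0]:.2f}")   # ~0.37
print(f"credit at the final step:         {returns[-1]:.2f}")  # 1.00
```

With a single reward at step 100 and a 0.99 discount, the opening move receives only about 0.37 of the credit; as episodes get longer and rewards weaker, that signal fades further, which is one reason tasks with delayed feedback are so much harder to learn than games with immediate scores.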

Note: Dear reader, if you think there is news or viewpoint that I should include in This Week in Disruptive Tech, please message me on Twitter @rmnth. My DMs are open.


About the author

N S Ramnath

Senior Editor

Founding Fuel

NS Ramnath is a member of the founding team & Lead - Newsroom Innovation at Founding Fuel, and co-author of the book, The Aadhaar Effect. His main interests lie in technology, business, society, and how they interact and influence each other. He writes a regular column on disruptive technologies, and takes regular stock of key news and perspectives from across the world. 

Ram, as everybody calls him, experiments with newer storytelling formats tailored for the smartphone and social media, and shares the outcomes with the team. These become part of a knowledge repository at Founding Fuel and are continuously used to implement and experiment with content formats across all platforms.

He is also involved with data analysis and visualisation at a startup, How India Lives.

Prior to Founding Fuel, Ramnath was with Forbes India and Economic Times as a business journalist. He has also written for The Hindu, Quartz and Scroll. He has degrees in economics and financial management from Sri Sathya Sai Institute of Higher Learning.

He tweets at @rmnth and spends his spare time reading on philosophy.
