[Photograph of Elon Musk by OnInnovation under Creative Commons]
Can driverless cars overtake humans?
Short answer: If we allow them
Recently Elon Musk, the CEO of Tesla Motors and SpaceX and chairman of SolarCity, used Twitter to reach out to software engineers to ramp up Tesla's Autopilot software team. "Should mention that I will be interviewing people personally and Autopilot reports directly to me. This is a super high priority," one of his tweets read. Autopilot is Tesla's self-driving feature. And it's a super high priority for a lot of executives in other companies too: at legacy automobile companies such as Audi, BMW and GM; at those attacking the market from the side, such as Google and Uber; and at tech and automobile companies joining hands, like Microsoft and Volvo.
Google's cars have done 1.2 million miles of autonomous driving (equivalent to 90 years of driving experience) without once getting a ticket. But a Google car was pulled over recently for driving too slowly, doing 24 mph in a 35 mph zone. From a public perspective, this might well go down as only being human, and a nudge towards greater acceptance.
But how soon will that be? Mark Fields, the CEO of Ford (which, incidentally, is using a fake city to get unlimited time to test-drive its cars), is probably among the most optimistic about self-driving cars. He says they will hit American roads in four years. Fields might be underestimating the regulatory hurdles; earlier, Musk said jumping them could take anywhere from one to five years.
Some resistance will come from those who are directly affected: cab drivers. Uber is already thinking about giving its drivers vocational training for other jobs. But it's possible that the technology, despite Google's 1.2 million miles, has not come to that bridge yet. Self-driving cars are yet to learn the hundreds of signals that human beings (car drivers, pedestrians, cyclists, motorcyclists) send, receive and interpret, signals that make driving both safe and accepted. It's true that technology is advancing at an exponential rate in these areas, but the question of perception remains.
As Fumihiko Ike, chairman of Honda, told the Japan Times recently: “Human intelligence has no equal for working out what is happening on the road, so I think fundamentally it won’t be easy to leave it to the machine except in very restricted conditions such as motorways or specific routes.” It's worth remembering that in technology, advances happen in small steps, and it can seem as if nothing is happening until we suddenly find ourselves in a different world.
Should we be worried about cryptocurrencies?
Short answer: Even if the threat isn’t big yet, yes.
Soon after the terrorist attacks in Paris, there were reports that the terrorists might have used encryption, easy and cheap to access these days, to communicate among themselves. It turned out they were using good old SMS. Now attention has turned to cryptocurrencies. Reuters reported that European Union countries plan to go after virtual currencies and anonymous online payments to curb the flow of funds to terrorist organisations.
These fears are hardly new. There are reasons why cryptocurrencies are used for shady activities such as buying drugs, gambling and terrorist funding. They can be transacted anonymously: there is no need to prove your identity to transfer funds. They are not restricted by national borders: a bitcoin is as valid in Europe as it is in America. They can be transferred as instantly as cash, and without the government tracking the transaction. And, of course, they are low cost and easy to use.
That doesn't, of course, mean ISIS and other terror organisations have been using cryptocurrencies extensively. Ghost Security Group, an antiterrorism hacker group, says ISIS does hold bitcoins, but most of its funds still come from traditional sources: oil sales, kidnapping and extortion. Nor does it mean cryptocurrencies will turn out to be a major source of their future funding. What it does highlight, however, is that there is always a trade-off between the comfort a new technology brings to the lives of millions and the security issues it opens up by offering the same comforts to those with bad intentions.
Should we be worried about Artificial Intelligence (AI)?
Short answer: Definitely
"When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success."
So said physicist J Robert Oppenheimer, speaking about the atomic bomb. He is quoted in a long New Yorker profile of Nick Bostrom, the Oxford professor who runs the Future of Humanity Institute and the author of Superintelligence: Paths, Dangers, Strategies. The institute studies large-scale risks to human civilisation and is one of the recipients of Elon Musk's funding aimed at understanding and preventing risks from AI. The title of the profile: The Doomsday Invention.
AI is progressing fast. Matthew Lai, a master's student at Imperial College London, recently created an AI program that taught itself to play chess at International Master level. The time it took: 72 hours.
An AI program developed by Japan's National Institute of Informatics has passed a college entrance exam, scoring above the national average, though not high enough for the country's top institution, the University of Tokyo. Its developers say it will get there by 2021.
To top it all, Humai, a Los Angeles-based AI firm, is aiming to resurrect human beings within the next 30 years. These words from its website could well be straight out of science fiction: "We’re using artificial intelligence and nanotechnology to store data of conversational styles, behavioural patterns, thought processes and information about how your body functions from the inside-out. This data will be coded into multiple sensor technologies, which will be built into an artificial body with the brain of a deceased human. Using cloning technology, we will restore the brain as it matures."
Realistically speaking, what are the chances that AI will turn out to be all that its fans and critics say it will be? The New Yorker profile cites a survey by Richard Sutton, a Canadian computer scientist: "There is a ten-per-cent chance that A.I. will never be achieved, but a twenty-five-per-cent chance that it will arrive by 2030. The median response in Bostrom’s poll gives a fifty-fifty chance that human-level A.I. would be attained by 2050." Within the lifetimes of many of us, that is.
How can one become a better surgeon?
Short answer: Use 3D models
Jack Nicklaus, the champion golfer, once said: “I never hit a shot, not even in practice, without having a very sharp in-focus picture of it in my head.” He is not the only sportsman to have spoken about using mental imagery to enhance performance. In fact, it applies not just to sport but to any activity that demands physical skill and sophistication. Surgery, for example. Some of the best surgeons practise an operation in their heads before they perform it on the operating table.
What if a surgeon didn't need to rely on imagination alone, but had a precise replica of the organ on which she would be operating? That's one of the things 3D printing makes possible.
Recently, doctors practised on a 3D-printed replica of a patient's brain vessels before going into the operating theatre to perform a complex brain surgery. The same was done for a facial transplant, which was not only complex but also something the surgeons had to get right the first time, and for a liver transplant on a 10-year-old in China. We are not very far from the day when other radical technologies in prosthetics and cybernetic implants combine with 3D printing to create a market for advanced biohacking.