AI: Beware human decline not rise of computers

AI experts talk to "Globes". Intel's Itay Yogev: It's not that computers will take over, but that people will rely on them too much.

When James Cameron directed the first movie in the Terminator series, the apocalyptic future it described seemed too distant to seriously weigh its likelihood or what could lead to it. According to the script, already in 1997, just 13 years after the script was written, the Skynet computer network achieves self-awareness, decides that the most severe danger to its existence comes from human beings, and initiates a nuclear world war to destroy them. Since not everyone dies, and some of the survivors fight back against it and its machines, the network decides to send a human-like robot into the past to murder the mother of the rebellion's leader and put an end to the human race's efforts to survive. This part of the plot takes place in 2029.

When Terminator was filmed 35 years ago, the digital world had not yet taken shape. Not only did few people realize in 1984 that something big was about to happen, but even fewer imagined how fast, comprehensive, and earthshaking the change would be. Unsurprisingly, the few who did talk about our possible digital future described great fears, as well as big hopes.

In the years since then, the world has changed beyond all recognition. Not only has every aspect of our lives become computerized, but artificial intelligence (AI) surrounds us on all sides. Computers can now compose hit songs, study our habits and guess what we will want to buy, manage our traffic light systems, drive cars and fly airplanes, find connections and extract insights from enormous databases, provide services to customers, and who knows what else - and mankind is worried. When people talk about AI, they still fear that robots will make them unnecessary, mentally or physically.

While no serious scientist thinks that frustrated robots are likely to go on a killing spree in the foreseeable future, a strong consensus exists in the research community that, after years of slow maturation, the coming decade will be the decade of artificial intelligence. The big leap forward is taking place because of a combination of factors: accumulated experience that has led to the development of new methods, computing power that has soared with the transition to cloud computing, and a flood of ideas for systems in which AI can be integrated and its benefits maximized - ideas that have drawn big money into the sector and accelerated development.

We spoke with three AI researchers who dared to make predictions about the principal trends we will see in this sphere in the next ten years, and who warned that we are on the threshold of a world-altering revolution. Alongside breakthroughs that will improve human lives in almost every area, the scientists are already warning that the rise of AI will lead to a massive loss of jobs, overreliance on computerized decision-making, and the creation of a new digital gap. The experience gained from previous major technological revolutions, they claim, requires far more serious preparation for the new situation than most of us believe to be necessary.

Prof. Isaac Ben-Israel, who heads the Blavatnik Interdisciplinary Cyber Studies Center at Tel Aviv University, and led the AI conference conducted there recently, provides a short history of the field. "Look, the term 'artificial intelligence' is rather unclear, so when I talk about it, I usually refer to the idea that instead of writing an algorithm that tells the computer how to do something, I let it learn the way you and I learn. We both know how to identify faces, but neither of us can explain exactly how we do it. Researchers took up this idea and decided to set up a network of neurons to imitate what goes on in our brain. Instead of telling the computer what to do, let it learn from experience. As people, we learn by observing. When you were a small child, your mother would point to a dog, say, and tell you, 'This is a dog,' and then show you another dog and tell you again, 'This is a dog,' and then point to a different animal that also had four legs, a tail, and fur, and tell you that it was a cat, and to something else, and tell you that it was Grandpa, not a dog or a cat, and so you learned what was a dog and what wasn't. For a machine to do such things, you have to let it go over large stores of information in which what is and is not a dog is defined, and learn that way.

"These ideas began to surface as early as the 1950s, but they couldn't be carried out until five years ago, because the computer power was inadequate. Supercomputers today are starting to approach the computational ability of the human mind in the number of neurons they shoot off per second, but the computer nevertheless has an advantage. Why? Because the computer takes all of its processing ability and aims it at the same computation, while our brain can't do that. While it is calculating, it also has to oversee the actions of the heart, make sure that no one will attack us, listen to ears, deliver information from the eyes, and so forth."

Ben-Israel says there is no doubt that the rise of AI will greatly alter the job market, but regards this as an opportunity, not just a cause for concern. "Look, when I was a kid, every third store was a shoemaker. I walked on the street this week and suddenly saw a shoemaker, and stood wonderingly in front of it, because I hadn't seen a shoemaker for 30 years. With the rise of AI, the question is not whether certain professions will vanish. They will certainly vanish. The question is what balance will emerge. In the past 200 years, professions disappeared and others appeared, and ultimately, all of our lives improved. I think that this will also happen with AI."

"Today, when they talk about AI, they do so from a narrow context," says Intel advanced analytics director Itay Yogev. "You restrict the machine's learning space in advance in order to make it easy for it, and cover up things that it doesn't know how to do with a lot of explicit knowledge. We call this narrow AI, and we're making good progress in it. For example, we're on the way to creating a machine capable of performing tasks like driving pretty well. The dream, on the other hand, is to achieve general purpose AI - a machine that when you teach it how to do one thing, it will be able to do what we are able to do - to copy knowledge from one area to another. This field is now being researched, and it has the potential to bring about a real breakthrough."

What are the challenges preventing us from realizing the AI vision that we know from books and films - machines and computers capable of doing everything that people can do? "The first challenge is what I call simple logic," Yogev says. "For example, there are things that people understand in an instant - that circumstances have changed, that things are suddenly in a different context. Machines don't really understand this. They need a lot of training in order to be able to make decisions. For example, if you pour hot water into a jar, any three-year-old child will understand the context and realize from the steam that he shouldn't touch the water. A machine that hasn't been trained on these specific circumstances will take a long time to learn how to behave in a context that wasn't foreseen in advance."

Yogev says that despite the growing understanding of the field and the fact that computing power is incomparably greater than in the past, there are several reasons why AI has not yet made its great quantum leap. "There are several problems at several levels, but one of them concerns the machines' learning model, which is currently based on the past. This leaves the machine's ability to draw conclusions on a vastly inferior scale compared with human beings. The second thing is that while machines learn only from information designed to teach them a specific thing, people are constantly learning from all of their senses. The third thing is that while the processing capability of computers is now ostensibly unlimited, it cannot be fully utilized because of the other two reasons.

"In computer vision, for example, there was a breakthrough several years ago, and machines can now do excellent work when they are asked to identify objects or distinguish between them, but in language, we really aren't there yet. When you reach the upper levels of language use - irony, cynicism, puns, and analogies - the computer has a problem. For example, people easily understand metaphors, but computers really aren't there yet. Will this change in the coming decade? Look, a great deal of money from both governments and commercial companies is being invested in this field, and there's a new approach in language processing that purports to achieve much better results, so maybe yes, but there's another area in which a breakthrough has so far proved difficult, in which I really believe. This is called reinforced learning, which I think is significant for the future. For example, if a child touches boiling water once, he learns to beware of it, right? So this mechanism imitates the human mechanism - sometimes we learn from experience, and sometimes we use the information that has been gained so far. In order to teach a robot to walk, we teach it through watching video clips that enable it to improve, and the same is true when you let a computer play chess. As of now, however, we're having trouble taking this sphere out of the realm of games and into the real world. Why? Because a game has a finite number of possibilities, while in real life, the number of possible scenarios is infinite, or at least much bigger. Mathematics works well for a game, but become impossible in the real world. But I believe that this field of learning by reinforcement is such that a breakthrough in it can really change the game."

"Globes": And when such a breakthrough occurs, should we beware?

Yogev: "In my opinion, the vision in which machines take over people is irrelevant for the near future. I'm more worried about mankind's decline than about the rise of machines. I'm afraid that we'll become too enslaved to the machine. The more you rely on it, you take the person out of the equation, and give the machine too much power. We'll have to look more critically at the question of what we allow machines to decide. I'm less bothered about optimization of AI in the consumer sphere in order to convince people to buy products, or that we will integrate artificial intelligence in the labor market in order to improve, but I'm definitely concerned about the integration of artificial intelligence in the civil and defense field. I think that everyone should ask him or herself which decisions he or she wants the machine to make, and which decisions a person should make. There has to be a profound public discussion about the type of regulation that should be passed, and the sooner, the better. For example, consider a situation in which someone offers to put all of the existing data on a computer, and the algorithm will decide what should be in the National List of Reimbursed Drugs. A lot of people will tell you that the computer isn't biased towards any side, and isn't exposed to pressure, but someone will feed the data into it, and someone will define the functions and the target. These are moral questions. Would you want a computer to make such decisions?"

Prof. Irad Ben-Gal, who heads the Laboratory of AI Machine Learning Business & Data Analytics (LAMBDA) at Tel Aviv University and initiated the Digital Living 2030 program in partnership with Stanford University, says that one of the main developments we will see in artificial intelligence in the coming years is the rise of what is called the nuclear model, which focuses on learning about and analyzing a single entity. "For example, think about an application that learns about and follows the way you drive - passing other cars, turns of the wheel, speed, number and force of braking - and can offer an insurance program suited to you, based on the risk profile of an individual trip," Ben-Gal says. "The premium will respond to changes in your driving habits from one trip to the next."
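
As a rough illustration of the per-trip pricing Ben-Gal describes, here is a minimal sketch; the feature names, weights, and pricing formula are invented for the example and do not reflect any actual insurer's model.

```python
# Hypothetical per-trip premium sketch: score a single trip's driving profile
# and adjust the price accordingly. Weights and base price are made up.

def trip_risk_score(trip: dict) -> float:
    """Combine a few driving features into a 0-1 risk score (illustrative weights)."""
    score = (
        0.4 * min(trip["hard_brakes"] / 10, 1.0) +
        0.3 * min(trip["overtakes"] / 15, 1.0) +
        0.3 * min(max(trip["avg_speed_kmh"] - 90, 0) / 40, 1.0)
    )
    return min(score, 1.0)

def trip_premium(trip: dict, base_price: float = 2.0) -> float:
    """Price a single trip: riskier driving pays up to twice the base price."""
    return round(base_price * (1 + trip_risk_score(trip)), 2)

calm_trip = {"hard_brakes": 1, "overtakes": 2, "avg_speed_kmh": 85}
risky_trip = {"hard_brakes": 8, "overtakes": 12, "avg_speed_kmh": 125}
print(trip_premium(calm_trip), trip_premium(risky_trip))  # calm trip costs less
```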

Another area mentioned by Ben-Gal as likely to be significant in the coming decade is explainable artificial intelligence (XAI). "This is a very strong trend now, and all of the large companies are interested in it. Once upon a time, you took a decision-tree model and saw with your own eyes how each parameter was weighted and influenced the final decision. In today's deep network models, it's very difficult to understand what influences the result, so there's a very big effort to build models on top of the existing ones that imitate them in a way that makes it possible to obtain explainable insights. For example, imagine a medical application that analyzes MRI images and can point to a specific patient as having a specific type of lung cancer with 83% confidence. The new models, however, will also be able to explain why the AI application reached this conclusion. In image processing, there are XAI models that literally paint the pixels in the picture that cause the application to make the recommendation you got. For example, you can then display the area in the lung image that caused the system to make its specific recommendation. Such an insight not only increases the human users' trust in the AI application, but also allows feedback from the human expert to the system."
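
One simple way to "paint the pixels" behind a prediction, in the spirit of the XAI models Ben-Gal mentions, is an occlusion test: cover one region of the image at a time and measure how much the model's score drops. The sketch below is illustrative only; `toy_score` is a stand-in function, not a real diagnostic model, and production XAI methods are considerably more sophisticated.

```python
# Occlusion-based saliency sketch: the regions whose masking most reduces the
# model's score are highlighted as the evidence behind the prediction.
import numpy as np

def toy_score(image: np.ndarray) -> float:
    """Pretend classifier: responds to bright pixels in the upper-left corner."""
    return float(image[:8, :8].mean())

rng = np.random.default_rng(0)
image = rng.random((16, 16))
image[:8, :8] += 1.0                    # the "suspicious region" in this toy image

baseline = toy_score(image)
saliency = np.zeros((2, 2))             # coarse 2x2 map over 8x8 patches
for i in range(2):
    for j in range(2):
        occluded = image.copy()
        occluded[i*8:(i+1)*8, j*8:(j+1)*8] = 0.0   # cover one patch
        saliency[i, j] = baseline - toy_score(occluded)

print(saliency)  # the largest drop marks the patch driving the prediction
```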

Ben-Gal says that systems weighing ethical and moral parameters will become more common in order to support the AI systems that will work unceasingly around us. "Researchers at MIT showed videos of autonomous driving to millions of people all over the world, presenting decision-making situations in driving in order to understand whether a shared human ethical criterion exists. They concluded that in a certain situation, people in the West act according to one criterion, while people in Asian countries act differently in the same situation, and people in Latin America and countries like Spain and Portugal have their own typical decision-making system."

This means that in the future, autonomous cars will also be ethically and morally adapted to the countries in which they transport people, and that specific code will have to be written for them that takes these preferences into account, so that they can imitate the decision-making of the human drivers they carry as closely as possible.

Ben-Gal says that he is not afraid of an apocalypse in which the machines fight against humans. He is very worried, however, about the possibility that the gap between people able to work with artificial intelligence systems and exploit them and those unable to do so will widen. "As I see it, this is the real and concrete danger in the artificial intelligence revolution. Every time humanity has advanced, gaps were created or widened between the people or countries able to adopt the changes and those that were unable to do so and were left behind. If there is no clear regulation governing the social use of knowledge, as there is for every other kind of infrastructure, the gaps that emerge will be very wide, and entire populations will suffer from this revolution, whether through loss of employment or through loss of income."

Published by Globes, Israel business news - en.globes.co.il - on December 18, 2019

© Copyright of Globes Publisher Itonut (1983) Ltd. 2019
