“AI is going to amplify human intelligence, not replace it”
If you had to introduce someone with almost no background in computer science or mathematics to your most important AI project at the moment, how would you do it?
There are several important projects in computer vision, language translation, natural language understanding, etc. But there are two basic research projects on which we are spending a lot of effort. The first is dialog systems, intelligent chatbots, and virtual assistants. The basic science and technology for smart virtual assistants doesn’t yet exist, and we are finding ways to allow machines to acquire complex background knowledge by reading text, so as to be able to reason with this knowledge. The second is something called “predictive learning,” which would allow machines to learn a kind of “common sense” by observation, the way humans and animals do.
What are the crucial breakthroughs that led to the AI hype we are going through right now?
It’s all due to the emergence of Deep Learning. Deep Learning is a set of techniques for training a computer to perform tasks such as detecting and recognizing objects in images, driving a car, recognizing speech, or translating languages. While the basic ideas of deep learning have been around since the late 1980s, they have become dominant over the last 5 years because of progress in methods, faster computers, and larger datasets on which to train them. A particular deep learning technique called convolutional neural networks – which I originally developed at AT&T Bell Laboratories in 1989 – has become a kind of universal tool for image recognition, self-driving cars, medical image analysis, text processing, and many other applications.
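The core idea behind convolutional networks – sliding one small set of shared weights over every position of an image – can be sketched in a few lines of Python. This is a toy illustration of the convolution operation itself, not LeCun’s original system:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (technically cross-correlation, as in most
    deep-learning libraries): the same small kernel is applied at every
    position of the image, so its weights are shared across locations."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector: it responds where brightness changes
# from one column to the next.
image = np.zeros((5, 5))
image[:, 3:] = 1.0                 # right half bright, left half dark
kernel = np.array([[-1.0, 1.0]])
response = conv2d(image, kernel)
print(response.shape)              # (5, 4)
```

In a convolutional network, the kernel values are not hand-designed like this edge detector; they are learned from data, with many kernels stacked in layers.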
Many people are afraid of the potential consequences of further AI progress. They are not sure how resilient their jobs and knowledge are. What is your view on that?
AI is going to amplify human intelligence, not replace it, the same way any tool amplifies our abilities. Now, technological progress has always had the effects of (1) increasing overall wealth, (2) creating new jobs, and (3) making some jobs obsolete. The emergence of AI will have that effect too. The problems societies will have to deal with are (1) the acceleration of technological progress, which forces an increasing number of people to retrain for new skills and new jobs, and (2) the fact that the wealth created by technological progress should be shared with all of society.
Should computer science be obligatory for every pupil today – maybe beginning in elementary school?
The process of reducing a complex task to a set of simple instructions, which is what programming is all about, is a skill that is very useful in many aspects of modern life, not just to professional computer scientists and programmers. So yes, it would be good if most high-school pupils knew the basics of computer programming by the time they graduate. There are tools that can be used to teach young children to program, such as the Scratch visual programming language. I’m not a specialist in education, but I would have loved to be able to play with something like this when I was a kid!
In the past, there have been periods of hope and so-called “AI winters” – is that kind of cycle still alive, and where are we now?
Since the 1960s, there have been waves of interest in various approaches to AI. In the 1960s, it was early neural network models capable of elementary learning. Then in the 1970s and 1980s, people lost interest in learning and focused on logic-based methods, with rules, reasoning, deduction – what we called “expert systems”. This had some success, but it turned out to be very difficult to build these systems. Then in the late 1980s and early 1990s, neural networks made a come-back. This is when convolutional nets and other deep learning techniques were invented. There were successful applications in handwriting recognition and a few other fields. But interest in neural nets waned in the late 1990s in favor of “simpler” machine learning techniques. Then around 2011-2012, neural nets made a huge come-back under the name of “deep learning”. The difference with previous waves of AI is that now there is a large number of very successful applications and a very large technology business around deep learning and AI. While the current hype that surrounds it will surely diminish, I don’t think we will see an “AI winter” like the ones we’ve seen in the past.
When you started your career in the 1980s, you concentrated on neural nets, an approach that had been rejected by the community in the 60s and then again in the 90s – why did you personally decide to go that way? And how has your life changed since then?
I have always believed in the idea that intelligent machines must be built through learning. That’s why I got interested in machine learning in the early 1980s, when I was still an engineering student and no one in the research community was working on machine learning. I dug up the literature from the 1960s and realized why it had been abandoned. Neural nets are composed of multiple layers of simple computing nodes that can be seen as extremely simplified models of neurons in the brain. But in the neural nets of the 1960s, only the last layer could be trained. The other layers had to be designed “by hand”. What we found in the late 1980s was a way to train all the layers in a multilayer network simultaneously. This is what we now call “deep learning”, because of the multiple layers.
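The difference between training only the last layer and training all layers can be seen on the classic XOR problem, which no single trainable layer can solve. This is a minimal NumPy sketch of a two-layer network where gradients flow back through every layer (the architecture and hyperparameters here are illustrative choices, not the historical code):

```python
import numpy as np

# XOR: not linearly separable, so a single trainable output layer fails;
# a two-layer net trained end-to-end with backpropagation succeeds.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.1

for step in range(5000):
    # forward pass through both layers
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # backward pass: gradients reach *every* layer, which is
    # exactly what the 1960s networks lacked
    dp = p - y                        # grad of cross-entropy loss
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = (dp @ W2.T) * (1 - h**2)     # chain rule through tanh
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(p.ravel(), 2))  # predictions should approach 0, 1, 1, 0
```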
Would you reveal in some detail, how the “Deep Learning Conspiracy” emerged?
Geoff Hinton, Yoshua Bengio and I always believed that multi-layer neural nets were the right thing and would eventually beat other methods for computer vision and speech recognition. Around 2003, I became a professor at NYU, and resumed my work on neural nets that I had put on the back burner since 1997. I had been a postdoctoral researcher in Geoff’s lab in 1987-1988 and Yoshua had worked in my lab at Bell Labs in the early 1990s. So we had a common philosophy. Geoff, Yoshua and I were determined to renew the interest of the community in these methods by showing that they worked very well. Geoff convinced the Canadian Institute for Advanced Research, a private foundation, to fund a kind of research network with workshops and collaborations where people with similar interests could meet and exchange ideas. Around 2007, our ideas started gathering interest in the mainstream research community. But things really took off around 2011-2013 when deep learning methods started beating more conventional techniques by huge margins for image and speech recognition.
What is the best idea from an AI-scientist in recent years?
I’m on record saying that Generative Adversarial Networks (GANs) are the best idea in machine learning in the last 10 years. While the early results with GANs are pretty amazing when they are used for predictive (or unsupervised) learning, there is a lot of research still to do to make them work reliably and to understand their underlying principles. But they are very promising. We have systems that predict what will happen in the next few frames of a video (very useful for self-driving cars), that can generate nice-looking images from a rough sketch, that can synthesize sounds, and that can colorize black-and-white images. These are cool demonstrations, but the main hope with GANs is that they will enable machines to learn how the world works by observation, like animals and humans. This will take years, perhaps decades.
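The adversarial training idea can be demonstrated on a toy problem: a generator learns to imitate a 1-D Gaussian by competing against a discriminator. Everything here (the affine generator, the logistic discriminator, the learning rates) is a simplified illustration of the GAN training loop, far from the image-scale systems discussed above:

```python
import numpy as np

rng = np.random.default_rng(1)

def real_batch(n):
    # "Real" data the generator never sees directly: N(3, 1) samples.
    return rng.normal(3.0, 1.0, n)

a, b = 1.0, 0.0        # generator: fake = a * noise + b
w, c = 0.0, 0.0        # discriminator: logistic regression on a scalar
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr, n = 0.01, 64

for step in range(3000):
    z = rng.normal(0.0, 1.0, n)
    x_fake = a * z + b
    x_real = real_batch(n)

    # discriminator step: push D(real) toward 1 and D(fake) toward 0
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    grad_w = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # generator step: push D(fake) toward 1 (non-saturating loss)
    z = rng.normal(0.0, 1.0, n)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    grad_out = -(1 - d_fake) * w      # d/dx_fake of -log D(fake)
    a -= lr * np.mean(grad_out * z)
    b -= lr * np.mean(grad_out)

print(round(b, 1))  # the generator's mean should drift toward 3.0
```

The two players never see an explicit target: the generator only improves because the discriminator keeps telling its samples apart from the real ones, which is the mechanism that is hoped to let machines learn about the world by observation.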