You’ve been referred to as the “godfather of neural networks.” Do you believe you’ll see true artificial intelligence in your lifetime?
It depends on what you mean by true artificial intelligence. If you mean autonomous agents with human-level abilities at perception, natural language, reasoning and motor control, probably not. However, it’s very hard to see more than about five years into the future, so I would not rule it out. Ten years ago, most people in AI would have been very confident that there was no hope of doing machine translation using neural nets that have to get all their linguistic knowledge from the raw training data. But that is now the approach that works best, and it has just halved the gap in quality between machine translations and human translations.
What is there to fear about the existence of true artificial intelligence?
I am not too worried about the popular fantasy that evil robots will take over the world. I am much more worried about what people like Hitler or Mussolini might do if they had armies of intelligent robots at their disposal. I think there is a pressing need for international agreements on militarization of this technology.
How do you foresee AI affecting labor and the economy? Does it help or hurt?
Mechanical diggers and automatic teller machines increased productivity by eliminating a lot of tedious jobs, and very few people think that they should not have been introduced. In a fair political system, technological advances that increase productivity would be welcomed by everyone because they would allow everyone to be better off. The technology is not the problem. The problem is a political system that doesn’t ensure the benefits accrue to everyone.
What’s the next big step for the deep learning movement?
At present, we are seeing unprecedented progress in solving tough problems that defied our best efforts for half a century. Speech recognition is now very good and rapidly getting better. The ability to recognize objects in images has taken huge strides forward and I think computers will soon be able to understand what is going on in videos. Neural networks have recently taken over for machine translation. Every week, deep neural nets succeed at new and commercially significant tasks. We have seen an amazing flowering of the basic deep learning techniques introduced 20 or more years ago. This flowering includes better types of neuron, better architectures, better ways of making the learning work in very deep nets and better ways of getting neural networks to focus on the relevant parts of the input. Deep learning is now attracting large numbers of very smart people and huge resources, and I see no reason why this flowering should not continue for many more years.
I think that a lot of effort will be focused on getting neural networks to really understand the content of a document. This may well involve developing new types of temporary memory, which is currently a hot topic.
One problem we still haven’t solved is getting neural nets to generalize well from small amounts of data, and I suspect that this may require radical changes in the types of neuron we use. Eventually, I think the lessons we learn by applying deep learning will give us much better insight into how real neurons learn tasks, and I anticipate that this insight will have a big impact on deep learning.
Geoffrey Hinton received his BA in experimental psychology from Cambridge in 1970 and his PhD in Artificial Intelligence from Edinburgh in 1978. He did postdoctoral work at the University of California San Diego and spent five years as a faculty member in Computer Science at Carnegie-Mellon. He then became a fellow of the Canadian Institute for Advanced Research and moved to the Department of Computer Science at the University of Toronto. He spent three years from 1998 until 2001 setting up the Gatsby Computational Neuroscience Unit at University College London and then returned to the University of Toronto where he is a University Professor. In 2013, he became a Distinguished Researcher at Google and he now works part-time at the University of Toronto and part-time at Google.
Geoffrey Hinton designs machine learning algorithms. He was one of the researchers who introduced the back-propagation algorithm that has been widely used for practical applications. In 2006 he published the first paper on deep belief nets, which initiated a resurgence of interest in neural networks. His students then made seminal advances in the application of deep neural networks to speech recognition, object classification, and drug design.