Artificial intelligence methods have been around for decades, but the pace of innovation has picked up significantly over the past few years. This is especially true in areas such as computer vision, language processing and speech recognition, where new approaches have greatly improved computers’ ability to learn — to really understand what they see, hear and read.
Over the years, Gigaom has covered many attempts to improve the way that computers respond to our voices, movements or other visual cues, and identify the words we type and the pictures we take. These technologies have changed, and certainly will continue to change, the way we interact with computers and consume the incredible amount of digital data we’re producing. The work being done in universities and corporate research labs right now to build self-learning vision, voice and language models will only make our experiences better.
Here are some timelines tracking Gigaom’s AI coverage over the years, specifically around deep learning research and applications, other types of learning systems and applications, and cognitive computing (really, just IBM Watson). The second timeline gathers discussions of advanced AI at our various conferences. Links to stories are below the images.
We will update them regularly as new product launches, research advances and industry news occur.
Computers that learn what they’re seeing, hearing and reading
For some more information on deep learning, check out these useful primers:
- The Gigaom guide to deep learning
- A presentation from Microsoft Research
- Yann LeCun’s Reddit “Ask Me Anything”
- A paper on (gasp!) some limitations of deep neural networks
- A Businessweek story on the cost of deep learning talent
- “Deep Learning in Neural Networks: An Overview” by Juergen Schmidhuber