An AI anthology: Tracking the rise of self-learning computers

Artificial intelligence methods have been around for decades, but the pace of innovation has picked up significantly over the past few years. This is especially true in areas such as computer vision, language processing and speech recognition, where new approaches have greatly improved computers’ ability to learn — to really understand what they see, hear and read.

Over the years, Gigaom has covered many attempts to improve the way that computers respond to our voices, movements or other visual cues, and identify the words we type and the pictures we take. These technologies have changed, and will certainly continue to change, the way we interact with computers and consume the incredible amount of digital data we produce. The work being done in universities and corporate research labs right now to build self-learning vision, voice and language models will only make our experiences better.

Here are some timelines tracking Gigaom’s AI coverage over the years, specifically around deep learning research and applications, other types of learning systems and applications, and cognitive computing (really, just IBM Watson). The third timeline gathers discussions of advanced AI at our various conferences. Links to stories are below the images.

We will update them regularly as new product launches, research advances and industry news occur.

Computers that learn what they’re seeing, hearing and reading

[Interactive timeline embed]

For some more information on deep learning, check out these useful primers:

Watson: IBM’s big bet on cognitive computing

[Interactive timeline embed]

Talking AI at Gigaom events

[Interactive timeline embed]

9 Responses to “An AI anthology: Tracking the rise of self-learning computers”

  1. Jack Decker

What?! You think this is a timeline of AI? You think “Watson” is a good starting point? Sorry, but you’re discounting events in the development of AI that make ALL of those you list pathetic in comparison.

    • Derrick Harris

      Thanks for the comment. The timelines are for Gigaom’s coverage of AI, not a history of AI. So, yes, there certainly are many developments that aren’t included.

However, there have been significant developments in the past few years, especially in the areas we highlighted, thanks in part to all of the web content now available to train models. Watson, for its part, has played a big role in bringing this type of self-learning AI into the mainstream consciousness.

  2. Arden Manning

Over the years, Gigaom’s coverage has been quite detailed and informative, but I believe that there is one aspect of AI that deserves further attention. You have talked about Natural Language Understanding (NLU) and machine learning, but what about Natural Language Generation (NLG)? This is the ability for software to write content. Machine-to-machine communication continues to accelerate, but NLG software allows machine-to-human communication, which is critical to enable collaboration between machines and people.

Full disclosure: I work for a company called Yseop, which makes next-generation NLG software. Our software writes in real time and in multiple languages, but it can also engage in dialogue to gather context and missing information. Lastly, the AI component of the software allows it to explain its reasoning process, answering the critical questions of “why” and “how”.

  3. Oneasasum

I imagine it must be pretty difficult reporting on new developments in AI while also not drawing criticism for leaving things out (e.g. competing products) or for failing to make it sound as uninspiring as some would like.

    It’s clear to me that there is a lot to be inspired about, however. Even the ordinarily critical bunch at /r/machinelearning (with more down-votes per post than most forums I’ve seen) sometimes get inspired:

    But more often, this:

    (See top comments.)

  4. thank you for the afternoon reading!

    i hope we have an #EthicalRenaissance soon! Imagine this psychotic reality being imprinted on the logic and the philosophies behind developing AI in a world where #ContainFukushima has already been forgotten and the nuclear industry has a new regulated price on humans at 1/2 the going rate.

    • =) it isn’t free, but, thanks for the 411. p.s. would never go to anything like this with just one woman on the panel. makes me VERY concerned about the future of “cognitive” computing. <3nikiV