
Summary:

Gigaom has written a lot about artificial intelligence over the years. Here are three timelines tracking the rise of deep learning and other learning systems, IBM Watson, and AI discussions at Gigaom conferences.


Artificial intelligence methods have been around for decades, but the pace of innovation has picked up significantly over the past few years. This is especially true in areas such as computer vision, language processing and speech recognition, where new approaches have greatly improved computers’ ability to learn — to really understand what they see, hear and read.

Over the years, Gigaom has covered many attempts to improve the way that computers respond to our voices, movements or other visual cues, and identify the words we type and the pictures we take. These technologies have changed, and will certainly continue to change, the way we interact with computers and consume the incredible amount of digital data we’re producing. The work being done in universities and corporate research labs right now to build self-learning vision, voice and language models will only make our experiences better.

Here are some timelines tracking Gigaom’s AI coverage over the years, specifically around deep learning research and applications, other types of learning systems and applications, and cognitive computing (really, just IBM Watson). The final timeline gathers discussions of advanced AI at our various conferences. Links to stories are below the images.

We will update them regularly as new product launches, research advances and industry news occur.

Computers that learn what they’re seeing, hearing and reading

For some more information on deep learning, check out these useful primers:

Watson: IBM’s big bet on cognitive computing

Talking AI at Gigaom events

  1. Steve Ardire Monday, June 2, 2014

    Yes indeed, Gigaom’s AI coverage has been very good and the pace is definitely quickening. In addition to your events, here’s a new one:

    Cognitive Computing Forum http://www.cognitivecomputingforum.com August 20 – 21 at St. Claire Hotel, San Jose, CA

    1. =) It isn’t free, but thanks for the 411. P.S. I would never go to anything like this with just one woman on the panel. It makes me VERY concerned about the future of “cognitive” computing. <3 nikiV

  2. Thank you for the afternoon reading!

    I hope we have an #EthicalRenaissance soon! Imagine this psychotic reality being imprinted on the logic and the philosophies behind developing AI, in a world where #ContainFukushima has already been forgotten and the nuclear industry has a new regulated price on humans at half the going rate.

  3. I imagine it must be pretty difficult reporting on new developments in AI, while also not drawing criticism for leaving things out (e.g. competing products) or failing to make it sound as uninspiring as some would like.

    It’s clear to me that there is a lot to be inspired about, however. Even the ordinarily critical bunch at /r/machinelearning (with more down-votes per post than most forums I’ve seen) sometimes get inspired:

    http://www.reddit.com/r/MachineLearning/comments/26irr0/paragraph_vector_a_step_up_from_word2vec/

    But more often, this:

    http://www.reddit.com/r/MachineLearning/comments/26m16j/the_flaw_lurking_in_every_deep_neural_net/

    (See top comments.)

  4. Arden Manning Tuesday, June 3, 2014

    Over the years, Gigaom’s coverage has been quite detailed and informative, but I believe there is one aspect of AI that deserves further attention. You have talked about Natural Language Understanding (NLU) and machine learning, but what about Natural Language Generation (NLG)? This is the ability for software to write content. Machine-to-machine communication continues to accelerate, but NLG software enables machine-to-human communication, which is critical for collaboration between machines and people.

    Full disclosure: I work for a company called Yseop, which makes next-generation NLG software. Our software writes in real time and in multiple languages, but it can also hold a dialog to gather context and missing information. Lastly, the AI component of the software allows it to explain its reasoning process, answering the critical questions of “why” and “how”.

  5. Lupinetine Syed Tuesday, June 3, 2014

    I know it’s just an error, but I was tickled to see that Skymind has been around since the 11th century.

    1. Derrick Harris Tuesday, June 3, 2014

      Ha, thanks for catching that.

  6. What?! You think this is a timeline of AI? You think “Watson” is a good starting point? Sorry, but you’re discounting events in the development of AI that make ALL of those you list pathetic in comparison.

    1. Derrick Harris Wednesday, June 4, 2014

      Thanks for the comment. The timelines are for Gigaom’s coverage of AI, not a history of AI. So, yes, there certainly are many developments that aren’t included.

      However, there have been significant developments in the past few years, especially in the areas we highlighted, thanks in part to all of the web content now available to train models. Watson, for its part, has played a big role in bringing this type of self-learning AI into the mainstream consciousness.
