6 Comments

Summary:

The company wants to build a system that sees and learns like the human brain. That could be decades out, but Silicon Valley is very interested.

It has been a big year for artificial intelligence. Google bought DeepMind in January for $400 million, and now a group of tech elites and venture capital firms has invested $40 million in Vicarious.

Venture capital firm Formation 8 led the round, the Wall Street Journal reported Friday. It was joined by Tesla and SpaceX CEO Elon Musk, Facebook CEO Mark Zuckerberg and actor Ashton Kutcher. Re/code reports that Box CEO Aaron Levie, incoming Y Combinator president Sam Altman, Braintree founder Bryan Johnson, Khosla Ventures, Good Ventures Foundation, Felicis Ventures, Initialized Capital, Open Field Capital, Zarco Investment Group, Metaplanet Holdings and Founders Fund were also involved. Vicarious received $15 million in a first round in 2012.

Last year, Vicarious announced that it had developed software that could crack CAPTCHAs with at least 90 percent accuracy. But that is only the beginning of what the startup plans to do with its AI, which is based on how the human brain functions. Its first product is a system that can understand the contents of photographs and videos similarly to how a human would, according to the Vicarious website.

It could be decades before Vicarious realizes its ambitious virtual brain. But companies like Facebook and Google are no doubt interested in seeing it happen as soon as possible.


  1. AI progress in the last 3 years is extremely impressive. Actually, even progress in the last 12 months is nothing short of amazing!

  2. Dan ‘Great Marketing Works’ Sodergren Friday, March 21, 2014

    Reblogged this on It's me – Dan Sodergren – talking about marketing… and commented:
    Worrying or amazing? Hmmm, all I will say is Skynet.

  3. It looks like Mark Zuckerberg didn’t pay too much attention to what Yann LeCun (who he recently hired) wrote about Vicarious a few months ago: https://plus.google.com/+YannLeCunPhD/posts/Qwj9EEkUJXY. Double betting?

  4. Paul Fred Frenger Saturday, March 22, 2014

    From experience: modeling mental processes based on actual brain activity leads to the same kind of errors and inaccuracies that mammalian/human brains make. There has to be some kind of “alien” filtering/recognition mechanisms operative to discover when this is happening and abort the process. Real neuronal-synaptic activity is hugely slow, wasteful of resources and error-prone. Future A.I. should concentrate on making better brains than our own.

    1. Paul Fred Frenger, I believe that is the ultimate goal, actually. I am thinking, in light of what you are saying, that perhaps they first need to show they can build one like the human brain, giving them something they can test and then measure a better one against. Does that make sense?

      1. Also, is it possible that an artificial human brain would, in and of itself, be better in the sense that it would not carry the experiences, biases, and theories that come with simply being alive in our societies? The algorithms would give it the high potential the human brain already comes with, yet without any of the so-called 'baggage' that humanity itself brings, making it better at weighing factors that even the best humans may overlook.
