Artificial intelligence startup Vicarious collects $40 million from tech elites

It has been a big year for artificial intelligence. Google bought DeepMind in January for $400 million, and now a group of tech elites and venture capital firms has awarded $40 million to Vicarious.

Venture capital firm Formation 8 led the round, the Wall Street Journal reported Friday. It was joined by Tesla and SpaceX CEO Elon Musk, Facebook CEO Mark Zuckerberg and actor Ashton Kutcher. Re/code reports that Box CEO Aaron Levie, incoming Y Combinator president Sam Altman, Braintree founder Bryan Johnson, Khosla Ventures, Good Ventures Foundation, Felicis Ventures, Initialized Capital, Open Field Capital, Zarco Investment Group, Metaplanet Holdings and Founders Fund were also involved. Vicarious received $15 million in a first round in 2012.

Last year, Vicarious announced that it had developed software that could crack CAPTCHAs with at least 90 percent accuracy. But that is only the beginning of what the startup plans to do with its AI, which is based on how the human brain functions. Its first product is a system that can understand the contents of photographs and videos similarly to how a human would, according to the Vicarious website.

It could be decades before Vicarious achieves its ambitious goal of a virtual brain. But companies like Facebook and Google are no doubt interested in seeing it realized as soon as possible.

6 Responses to “Artificial intelligence startup Vicarious collects $40 million from tech elites”

  1. Paul Fred Frenger

    From experience: modeling mental processes on actual brain activity leads to the same kinds of errors and inaccuracies that mammalian/human brains make. There has to be some kind of “alien” filtering/recognition mechanism operative to discover when this is happening and abort the process. Real neuronal-synaptic activity is hugely slow, wasteful of resources, and error-prone. Future A.I. should concentrate on making better brains than our own.

    • Paul Fred Frenger, I believe that is the ultimate goal, actually. I am thinking, in light of what you are saying, that perhaps they need to show that they can build one like the human brain, giving them something they can test and then measure a better one against. Does that make sense?

      • Also, is it possible that building a human-like brain artificially would, in and of itself, make it better, in the sense that it would not carry the human experiences, biases, and theories that come with simply being alive in our societies? The algorithms would give it the high potential the human brain already comes with, yet without any of the so-called “baggage” that humanity itself carries, making it better at weighing factors that even the best humans may overlook.