Vicarious gets $15M to search for the key to artificial intelligence


Vicarious, a startup trying to discover the rules that govern intelligence, has raised $15 million in a first round of funding from tech luminaries including Good Ventures, the fund created by Facebook co-founder Dustin Moskovitz, and Peter Thiel’s Founders Fund. The money isn’t meant to help commercialize its technology, however; it’s essentially R&D spending for a big tech undertaking.

Vicarious wants to build a series of algorithms that mimic the way the mammalian brain processes and applies information — in short, it wants to build software that will grant computers intelligence. The first concrete product the Union City, Calif.-based startup aims to build is a human-like object recognition system, but this is something that co-founder and CTO Dileep George estimates is three to four years away. Apparently the long time frame is just fine with investors, and it’s what makes Vicarious such an audacious bet.

CEO and co-founder D. Scott Phoenix explains that the company isn’t focused on commercialization anytime soon, as a means to preserve the research into building a truly robust set of intelligence algorithms, as opposed to an industry-specific algorithm that leads to limited artificial intelligence — some kind of idiot savant. “We will continue working on solving the core problem,” Phoenix says. “I think it has held back AI when others have tried and found something that works well in a particular domain and then they refine that. Then the tech gets more narrow over time.”

The human brain is computing’s Mt. Everest

Building computer hardware or software modeled on the human brain is the kind of big tech problem that Peter Thiel, a former PayPal executive and a partner at Founders Fund, has called on entrepreneurs to tackle. In this case he’s putting his money where his mouth is. And the brain as a computer is the Mt. Everest of computer science problems. Compared with CPUs or even newer forms of silicon brains, the biological brain is a far more efficient processor. From a Scientific American article comparing the human brain to IBM’s Watson AI project:

So a typical adult human brain runs on around 12 watts—a fifth of the power required by a standard 60 watt lightbulb. Compared with most other organs, the brain is greedy; pitted against man-made electronics, it is astoundingly efficient. IBM’s Watson, the supercomputer that defeated Jeopardy! champions, depends on ninety IBM Power 750 servers, each of which requires around one thousand watts.
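Taking the figures quoted above at face value (12 watts for the brain, 90 servers at roughly 1,000 watts each for Watson), a quick back-of-the-envelope calculation shows the efficiency gap is nearly four orders of magnitude:

```python
# Back-of-the-envelope comparison using the figures quoted above.
brain_watts = 12                  # typical adult human brain
watson_servers = 90               # IBM Power 750 servers behind Watson
watts_per_server = 1000           # approximate draw per server

watson_watts = watson_servers * watts_per_server  # 90,000 W total
ratio = watson_watts / brain_watts

print(f"Watson draws about {ratio:,.0f}x the power of a human brain")
# → Watson draws about 7,500x the power of a human brain
```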

Thus, in both hardware and software, the search for a silicon brain has absorbed researchers. “We want to help humanity thrive,” says Phoenix. “Human progress is limited by the number of people and their training to solve big problems, so by understanding the core algorithms that produce intelligence we can build computers that are 30 billion times faster and dramatically increase the rates of problem solving on behalf of humanity.”

To build a better AI, you don’t need to map the brain

There are countless research efforts seeking the same thing as Vicarious, but they are going about it in different ways. For example, both IBM and HP are trying to build a silicon version of the brain in order to create neural computers capable of processing information in different ways — more akin to how humans do it. IBM actually showed off the first chips capable of cognitive computing last year.

IBM’s Watson

IBM also has another effort at AI, although a much less literal one than the hardware efforts. Watson ingests loads of text on a certain topic and then applies algorithms that help it estimate the probability that a response is relevant when people ask questions of that material. IBM is building a new business model around offering Watson as a service to help in the medical and financial fields.
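The idea described above — score candidate passages by how likely they are to be relevant to a question — can be illustrated with a toy sketch. This is not Watson’s actual pipeline, which combines many specialized scorers; it’s a minimal word-overlap relevance score over a hypothetical mini-corpus:

```python
def relevance(question: str, passage: str) -> float:
    """Toy relevance score: fraction of question words found in the passage."""
    q_words = set(question.lower().split())
    p_words = set(passage.lower().split())
    return len(q_words & p_words) / len(q_words)

# Hypothetical mini-corpus standing in for "loads of text on a certain topic".
corpus = [
    "Watson is a question answering system built by IBM",
    "The human brain runs on about 12 watts of power",
]

question = "who built the Watson question answering system"

# Pick the passage with the highest relevance score.
best = max(corpus, key=lambda p: relevance(question, p))
print(best)
```

Real systems replace the naive word-overlap score with statistical models trained on large corpora, but the shape of the computation — rank candidates by estimated relevance — is the same.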

Google is also delving into research that ties into artificial intelligence and machine learning. A recent research paper on training a computer to “recognize” an image of a cat without outside supervision describes one type of AI. And while George of Vicarious explains that its research is different because it is broader and will be capable of learning from moving images as opposed to stills taken from videos, the core idea is related.

There are plenty of other companies attempting to offer at least the veneer of artificial intelligence, from Apple’s Siri technology to startups such as ai-one, which is building a software development kit to add AI to other apps. And plenty of other companies are using the fruits of cheaper access to lots of data to build programs and predictive models that look like intelligence.

But computers today rely on people to tell them what to do — that’s what programming is for. Giving them the ability to recognize patterns, and then relate those patterns to an understanding of how the world works, frees them from the constraints of programming. Of course, once they have that freedom it’s unclear what that means for computer science, programming and the current job market. It’s also unclear how far that freedom can really take a computer. Just giving it intelligence won’t mean it can “think” for itself.

Either way, Vicarious is a startup playing in a field with giants, with a big idea about changing the world.



Sounds a lot like what Numenta has been working on for a number of years.


All the comments here referring to artificial intelligence show the usual chauvinism and arrogance of the human race.

“Artificial intelligence” is the wrong term; “thinking machine” is much better. The ability to think implies the capability of induction, deduction, association, inter- and extrapolation, logical conclusions, pattern and situation recognition, autonomous action and consciousness.

The carbon-chauvinists claim these capabilities only for a biological brain.

But according to Occam’s razor, the cause of the above-mentioned capabilities is most likely a simple law of nature. The choice of “hardware” determines only the quantity and performance of the system, not the quality of thinking.

So a thinking machine is also viable with silicon.

Wayne Edfors

Will the artificial need to subscribe to the human mammalian need for social interaction through social media? AI in its truest sense is the combination of euphoric interaction and vicarious interaction. Does the AI experience fulfillment through the human, or does the human experience fulfillment through the AI? This will essentially and eventually render the social media model useless, leaving us to question whether the eventual replacement brain will be branded iVicarious.

John Streeter

If I made a simple statement — “A cat jumped over the toaster” — you would assume it was a domestic cat on the kitchen counter. But an AI would have to put that information together using neural network memory modules. One module would hold the cat family (Felidae), one would hold the appliance family (toaster, microwave, etc.), along with where you could find these items, to make a reasonable match. I certainly don’t want a lion on my kitchen counter.
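The commenter’s module idea can be sketched in a few lines. This is a purely illustrative assumption — hypothetical category tables plus a shared location context used to pick the plausible member of each category — not any real memory-module architecture:

```python
# Toy sketch of the "memory module" idea from the comment above:
# category modules plus a location context used to disambiguate "cat".
# All names and data here are illustrative assumptions.
felidae = {"domestic cat": "kitchen counter", "lion": "savanna"}
appliances = {"toaster": "kitchen counter", "microwave": "kitchen counter"}

def resolve(category: dict, context: str) -> str:
    """Pick the category member whose typical location matches the context."""
    for member, location in category.items():
        if location == context:
            return member
    return "unknown"

# "A cat jumped over the toaster": the toaster fixes the context to a kitchen,
# so the cat resolves to a domestic cat rather than a lion.
context = appliances["toaster"]
print(resolve(felidae, context))  # → domestic cat
```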

Brad Arnold

What is needed is “only” a computer software program that can write its own code and properly evaluate its progress. Such a virtual mind would not need to be commercialized, since its product would be marketable.

What I find interesting is the human psychology that denies (sometimes vehemently) that strong AI is even possible. I am encountering the same trying to spread the word about LENR (a clean, very cheap and superabundant energy technology).


All these people blathering about intelligence — what is it? Is it possible to build an intelligent system without self-awareness? Magpies are self-aware, as far as we can test it; there goes the mammalian claim. And by the way, grey parrots have been shown to have a theory of mind. What’s that? Can one have it without context? And how is context created — is it a pattern-recognition or a pattern-creation algorithm? These algorithms don’t come out of thin air. What about neuron spike-firing latency — is that just mother nature being lazy, or does it mean something? What about bipolar axons, AP axons, BAC?
Back to a higher level, why do different cultures treat number lines differently?

One doesn’t have to work at the neuron level if one can abstract from it, but one had better understand, and be able to account for, what neurons do within that abstraction.

Comments are closed.