One of the latest artificial intelligence systems from MIT is as smart as a 4-year-old


When kids eat glue, they’re exhibiting a lack of common sense. Computers equipped with artificial intelligence, it turns out, suffer from a similar problem.

While computers can tell you the chemical composition of glue, most can’t tell you that it is a gross choice for a snack. They lack the common sense that is ingrained in adult humans.

For the last decade, MIT researchers have been building a system called ConceptNet that equips computers with common-sense associations. It encodes, for example, that a person may desire a dessert such as cake, and that cake has the property of being sweet. The system is structured as a graph, with connections between related concepts and terms.
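The graph structure described above can be sketched as a set of labeled edges: nodes are concepts, edges are common-sense relations. The toy example below is only an illustration of that idea, not ConceptNet’s real data or API, though the relation names (`Desires`, `IsA`, `HasProperty`) are modeled on the labels ConceptNet uses:

```python
# Toy sketch of a ConceptNet-style common-sense graph.
# Each edge links two concepts with a labeled relation.
# This is illustrative only, not the actual ConceptNet dataset.
edges = [
    ("person", "Desires", "dessert"),
    ("cake", "IsA", "dessert"),
    ("cake", "HasProperty", "sweet"),
]

def related(concept):
    """Return (relation, other_concept) pairs touching `concept`."""
    out = []
    for a, rel, b in edges:
        if a == concept:
            out.append((rel, b))
        elif b == concept:
            out.append((rel + "~", a))  # "~" marks a reversed edge
    return out

print(related("cake"))  # [('IsA', 'dessert'), ('HasProperty', 'sweet')]
```

Answering a question like “what is cake like?” then reduces to walking the edges around the `cake` node, which is why this kind of system handles vocabulary and similarity questions more naturally than open-ended “why” questions.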

The University of Illinois-Chicago announced today that its researchers put ConceptNet to the test with an IQ assessment developed for young children. ConceptNet 4, the second-most recent iteration from MIT, earned a score equivalent to that of an average 4-year-old. It did well at vocabulary and at recognizing similarities, but poorly at answering “why” questions. A child would normally score similarly across all of the categories.

“All of us know a huge number of things. As babies, we crawled around and yanked on things and learned that things fall. We yanked on other things and learned that dogs and cats don’t appreciate having their tails pulled,” computer science professor and study lead Robert Sloan said in a release. “We’re still very far from programs with commonsense – AI that can answer comprehension questions with the skill of a child of 8.”



For fundamental AI that’s a pretty high level flowchart!
Hasn’t that Signe got a lovely smile!


Why do AI developers assume that at the core of a human brain lies a 1 and a 0? (i.e., desires food yes/no)

Computer programmers are trying to figure out the human brain faster than brain surgeons can, for the sake of AI

For some reason I think our brains are more complex than a computer programmer wants them to be

I would find it funny if MIT develops a computer with human-like AI that looks like… a human brain and is organic


What makes you think that agents created by researchers don’t use probabilistic logic? Why do you call scientists “programmers”?

Comments are closed.