Gigaom brings you our unique analysis and commentary on the present and future of AI.
The kind of AI that people are excited about right now is machine learning, where we take data about the past and study it. It turns out, though, that we tried a few other ways to make big advances in AI that didn't work out quite as well. One of these was expert systems. Expert systems seemed like a really good idea: just go to experts and get them to take all of their knowledge and instantiate it as a set of rules that you can apply to a situation.
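The rule-based approach described above can be sketched in a few lines. The rules and symptoms here are entirely hypothetical and simplified, meant only to show the shape of the idea: an expert's judgment hand-coded as explicit if-then rules.

```python
def diagnose(symptoms):
    """A toy 'expert system': hand-coded rules standing in for
    knowledge elicited from an expert. Illustrative only."""
    # Hypothetical rules -- a real system would have thousands of these,
    # and eliciting them from experts is exactly where the approach struggled.
    if "fever" in symptoms and "cough" in symptoms:
        return "possible flu"
    if "sneezing" in symptoms and "itchy eyes" in symptoms:
        return "possible allergy"
    return "unknown"

print(diagnose({"fever", "cough"}))      # -> possible flu
print(diagnose({"headache"}))            # -> unknown
```

The brittleness is visible even at this scale: any case the rule author didn't anticipate falls through to "unknown."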
Now, while I said that seems like a good approach, it didn't scale very well, because we learned something very interesting: generally speaking, experts don't know what they know. If you take a great doctor known for being able to diagnose some disease and give them a patient, they ask a series of questions and then make a diagnosis. But if you ask, "Why did you make that diagnosis?" generally speaking they don't know. They can't reduce it to a set of rules.
It's probably because they have so much experience and such a long history that their brains have arranged all of that information in such a nuanced way that they can't articulate it as a few simple rules. So while expert systems work well for routing mail through a mail room, or products across a factory floor, or anywhere with clear-cut decision trees, they don't scale to complex problems, the kinds we're having good luck with machine learning on today.