Gigaom AI Minute – June 12

Over the past few years, we’ve seen great advances in machine learning and artificial intelligence. Where have these advances come from? For the most part, they haven’t come from new techniques; the sorts of techniques we use to do AI today are things we’ve known about for a while. They certainly come from having more data; that’s unquestionable. Our ability to collect, store, and index data has never been greater, and we now have data sets on which to train artificial intelligence. That has been a big boon. In addition, we have faster computers now: thanks to Moore’s law, the speed of computers continues to improve.

But if you think about it, we’ve had fast computers for a while. We’ve had computers as fast as the machines we’re doing AI on today. And we’ve had big data sets before; they’ve just been confined to certain narrow domains. So why didn’t we see more advances in artificial intelligence in those areas, using those machines? I think the biggest change, and it’s often overlooked, is that the toolkits and ecosystems of artificial intelligence are now so well developed. Twenty or thirty years ago, if you were doing AI, you wrote everything yourself; you didn’t start with a set of libraries you could build on top of. Now we have incredibly rich ecosystems of tools designed to do machine learning. These are tools that an average practitioner can spin up for very little cost, apply to their own data, and get results from very quickly. This is often overlooked, but it is, I think, the real reason we’re getting such a wide range of successes: finally, our tools have caught up to our knowledge and ability.
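To make that point concrete, here is a minimal sketch of the kind of off-the-shelf workflow described above, using the open-source pandas and scikit-learn libraries. The file name your_data.csv and its "label" column are hypothetical placeholders for a practitioner’s own dataset, not something from the episode.

# A minimal sketch of applying an off-the-shelf toolkit to your own data.
# Assumes pandas and scikit-learn are installed; "your_data.csv" and its
# "label" column are hypothetical placeholders for a practitioner's dataset.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Load a tabular dataset with one column holding the target labels.
df = pd.read_csv("your_data.csv")
X = df.drop(columns=["label"])
y = df["label"]

# Hold out a test set so the reported accuracy is not just memorization.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train a standard classifier with library defaults -- no custom code needed.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))

A handful of lines like these, run against a modest CSV file, is roughly what "spinning up" machine learning looks like today, which is the contrast the episode draws with writing everything from scratch twenty or thirty years ago.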
