Gigaom AI Minute – June 7

The kind of AI that is advancing so much today, the kind that makes all the headlines, is, of course, machine learning. The assumptions behind machine learning are interesting: you can take a lot of data about the past, study it, and make predictions about the future. This, of course, requires that the future be like the past.

Now, an example like "this is what a cat looked like in the past, so we can project whether something in the future is a cat" seems reasonable enough. Cats probably don't change that much in how they look. But how far can that basic methodology be pushed? For instance, is language something where you can simply study the past and make a prediction about the future?

If a computer studied everything you have ever said, could it predict what you will say next? The answer is, kind of, yes, in narrow circumstances, like responding to a simple email. But to the extent that the computer gets this right, it isn't really thinking the way you are. It doesn't decide on that response because it has the life experience that leads up to that response. It decides on that response because that's what you've said before.
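That "predict from what you've said before" idea can be sketched in a few lines. This is a minimal toy illustration, not how modern systems actually work: a bigram model that, for each word in a made-up history of things you've said, records which word most often followed it. The corpus and function names here are assumptions for illustration only.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """For each word, count which words followed it in the past."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Predict the next word purely from past frequency; None if unseen."""
    counts = follows.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

# Hypothetical "everything you have ever said" corpus.
history = "thanks for the update thanks for the reminder see you soon"
model = train_bigram(history)
print(predict_next(model, "thanks"))  # prints "for"
```

The model has no idea what "thanks" means; it only knows that, in the past, "for" usually came next. That is the whole trick, and also the whole limitation.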

The result is sort of like a paint-by-numbers version of the Mona Lisa. It looks the same as the original, but nothing was actually created. There is no originality in it whatsoever; it simply looks original.

What do you think? How will machine learning handle change between the past and future states of a person, place, or thing?
