Podcast Episode

Gigaom AI Minute – May 14

In this episode, Byron talks about transfer learning.

Gigaom brings you our unique analysis and commentary on the present and future of AI.


It's all about transfer learning, it really is. I've talked about it before: it's the ability humans have to take knowledge from one domain and apply it to another. A computer needs to be trained on lots of pictures of cats, for instance, to identify cats. But a human does not--you could show a human a single photograph or a drawing and they can take it from there. Nobody really knows how humans are able to do this.
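On the machine side, the usual approximation of this ability is to reuse a model trained on one task as a frozen feature extractor for another, fitting only a small new piece on top. The sketch below shows the shape of that trick in plain numpy; everything in it is illustrative rather than from the episode (the random-projection "backbone" stands in for a network pretrained on a large source task, and the toy target task is made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# A frozen "pretrained" feature extractor. In a real setting these weights
# would come from training on a large source task (e.g. millions of cat
# photos); here a fixed random projection is a hypothetical stand-in.
d_in, d_feat = 10, 64
W_backbone = rng.normal(size=(d_in, d_feat)) / np.sqrt(d_in)

def features(x):
    return np.tanh(x @ W_backbone)

# Target task: a new binary label with only a handful of labelled
# examples -- the few-shot regime transfer learning is meant for.
w_true = rng.normal(size=d_in)
X_train = rng.normal(size=(30, d_in))
y_train = (X_train @ w_true > 0).astype(float)

# The "transfer" step: keep the backbone frozen and fit only a new
# linear head (ridge regression, so the fit has a closed form).
F = features(X_train)
head = np.linalg.solve(F.T @ F + 0.1 * np.eye(d_feat), F.T @ y_train)

# Evaluate on fresh data from the target task.
X_test = rng.normal(size=(1000, d_in))
y_test = (X_test @ w_true > 0).astype(float)
accuracy = ((features(X_test) @ head > 0.5) == y_test).mean()
```

The point of the sketch is the division of labor: the expensive representation is learned once and reused, and only the cheap final layer sees the new task's thirty examples, which is roughly the machine analogue of recognizing a cat from one drawing.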

Think about it this way. If I asked you what a real duck has in common with a rubber bathtub duck--how they are the same and how they are different--you could answer. And then I could ask you how a photograph of the Mona Lisa differs from the actual Mona Lisa, and again you could: are their textures different? Yes. Are their values different? Yes. Does the Mona Lisa herself look different in each one? And so forth.

What humans are able to do is--for any number of things, tens of thousands, hundreds of thousands, a million things--say which properties of something transfer over to something like it and which ones don't. And it isn't something we learn case by case. Somehow the human brain has this marvelous interconnectedness where we are able to effortlessly glide between like things and different things, and understand intuitively how they are alike and how they differ, without having to memorize each comparison individually. It's a huge mystery, and I personally think it's the key to cracking general intelligence.
