Gigaom AI Minute – July 4


The Turing Test, homophones, and language are all topics covered in today's AI Minute.

Transcript

You've probably heard of the Turing test, which Alan Turing put forth decades ago when he asked the question, "Can a machine think?" He proposed a test in which a person, sitting at a terminal typing questions and receiving answers, can't really tell whether they're talking to a human or to a machine. Turing said that if the computer can fool the human into thinking it's a person 30% or more of the time, then we have to say that machine is thinking.

The Turing test is often derided, and people question whether it really means anything, but it undoubtedly does. Language is a big part of what makes humans intelligent and able to communicate, and a computer being able to master it, to infer all the subtleties that come with language, would be a major milestone indeed. That being said, we're incredibly far away from having any system that can pass it.

The question I type into every AI that I come across is, "What's bigger, a nickel or the sun?" It's a tricky question. "Sun" is a homophone: it could be s-u-n or s-o-n. And a nickel is a metal as well as a five-cent piece. All of those ambiguities are such that I've never seen a system that understood the question, let alone could answer it correctly.

One final note: that 30% number of Turing's is pretty interesting. You might wonder why it isn't 50%, why the machine wouldn't need to convince you it's a person at least half the time. I think Turing felt that bar would be too high, and that the question isn't "can a machine think as well as a human?" but "can a machine think at all?" And the really interesting idea is that if a machine ever gets picked more than 50% of the time, the conclusion you have to draw is that it's better at seeming human than we are.

Interested in sponsoring one of our podcasts? Have a suggestion for a great guest? Please contact us and let us know.