Summary:

Next week, Jeopardy! champions will square off against IBM’s Watson supercomputer in a contest that could alter the way humans view their place in the world. Developing the complex algorithms behind Watson wasn’t easy, and IBM didn’t operate alone.

IBM Watson

Next week, Jeopardy! champions Ken Jennings and Brad Rutter will square off against IBM’s Watson supercomputer in a contest that could alter the way humans view their place in the world. Watson will challenge human beings’ superiority in knowledge and reasoning, something that wasn’t really on the line when IBM’s Deep Blue eked out a controversial victory against chess grandmaster Garry Kasparov in 1997. Not only can Watson likely determine the answer to randomly selected questions on the Jeopardy! board, but it can do so incredibly fast. However, developing the complex Question Answering (QA) algorithms necessary to carry out such determinations wasn’t easy, and IBM didn’t operate alone.

IBM announced eight universities Friday — Massachusetts Institute of Technology, University of Texas, University of Southern California, Rensselaer Polytechnic Institute, University at Albany (NY), University of Trento (Italy), University of Massachusetts, and Carnegie Mellon University — that have contributed to Watson thus far. Their efforts range from MIT’s work on START, an “online natural language question answering system … which has the ability to answer questions with high precision using information from semi-structured and structured information repositories,” to RPI’s work on “a visualization component to visually explain to external audiences the massively parallel analytics skills it takes for the Watson computing system to break down a question and formulate a rapid and accurate response to rival a human brain.” It’s all high technology, though, and it helps Watson figure out where to look for information, learn from previous questions and, ultimately, decide whether it’s confident enough to buzz in.
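To make that last idea concrete, here is a minimal sketch, in Python, of what confidence-gated buzzing might look like. Everything here (the evidence scores, the scoring function, the 0.5 threshold) is a hypothetical illustration, not IBM’s actual DeepQA logic.

```python
# Hypothetical sketch of confidence-gated question answering, loosely
# inspired by public descriptions of DeepQA. All scores, names and the
# threshold are illustrative assumptions, not IBM's actual values.

def score_candidates(candidates):
    """Combine per-source evidence scores into one normalized confidence
    per candidate answer (here, a simple normalized sum)."""
    totals = {answer: sum(evidence) for answer, evidence in candidates.items()}
    norm = sum(totals.values()) or 1.0
    return {answer: total / norm for answer, total in totals.items()}

def decide_to_buzz(candidates, threshold=0.5):
    """Buzz in only if the top candidate clears the confidence threshold."""
    confidences = score_candidates(candidates)
    best = max(confidences, key=confidences.get)
    if confidences[best] >= threshold:
        return best, confidences[best]
    return None, confidences[best]  # not confident enough: stay silent

# Toy example: evidence gathered from several (hypothetical) sources.
candidates = {
    "Who is Garry Kasparov?": [0.9, 0.8, 0.7],  # strong, agreeing evidence
    "Who is Anatoly Karpov?": [0.3, 0.2],       # weak evidence
}
answer, confidence = decide_to_buzz(candidates)
print(answer or "stay silent", f"(confidence {confidence:.2f})")
```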

I got a taste of Watson in August when I toured IBM’s Industry Solutions lab in Hawthorne, N.Y., and I have to say it was impressive. Without giving away any of the secrets behind Watson — or any of its potential weaknesses — DeepQA Principal Investigator David Ferrucci gave a sampling of just how deep Watson goes to determine possible answers and then pare them down to a final one. I wasn’t surprised when Watson came out of a January practice round ahead of its human competition. As someone who once drove to Los Angeles to try out for Jeopardy!, I appreciate how difficult it is for humans to make these types of judgments, and how difficult it must have been to program a computer to do the same.

The techie in me wants Watson to win so the world gets an understanding of what’s possible with algorithms, even beyond the customized experience of browsing Amazon.com, but the human in me wants to cling to that last thread of hope that human judgment can prevail against artificial intelligence. The realist in me knows that Watson will prevail, though, and AI guru Ray Kurzweil agrees. I guess we can all take solace in knowing that it takes humans like those who worked on Watson to write such complex software and build such complex systems — for now, at least.

Image courtesy of IBM.

3 Comments

  1. Ever seen Terminator 1, 2, 3, or Salvation? Pretty sure this is how it started. They called it TURK, not Watson, though…

  2. This is just another IBM PR stunt and doesn’t advance AI. It’s just a massive database and high-speed processing. The way Jeopardy “answers” are constructed makes it much easier for a computer to come up with the correct answer than a normal Q&A would.

    It would be much more impressive if Watson could answer this single surprise question: “When did you realize you were going to win?”

  3. Thank you, Mr. Ratcliff.

    1) Watson following up a wrong answer with the same wrong answer (even just once) shows that Watson wasn’t programmed with the capability of understanding or interpreting the rules of the game, let alone the I/O for genuine language comprehension. This means it’s not AI — it may not be Google, per se, but it’s an algorithm nonetheless and isn’t sentient or evolving in any way.

    2) Watson buzzes in once a certain probability threshold is met after the buzzer is activated for all contestants. Watson’s ability to buzz in is then reassessed every one-billionth of a second, or even fractions thereof, whereas a human must send that signal from one side of the brain to the other, and then down to his thumb. Ken and Brad may have invented a strategy for Day 2 of trying to ring in quickly and figuring out the answer after a more thorough glance. But since computers are merely a tool for us to do things (i.e., compute) more quickly, it should have been clear that no true A.I. breakthrough was necessary here, even 40 years ago, let alone now; this should have been the primary regulated, standardized, and most clarified point in this “man vs. machine” presentation. (A rough simulation of this timing gap follows below.)
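As a rough illustration of the timing gap this commenter describes, here is a small Python simulation. The latency figures are placeholder assumptions (a near-constant machine response against a typical human reaction-time range), not measured or published numbers.

```python
import random

# Placeholder latencies, not measured values: a near-constant machine
# response vs. a variable human perceive-decide-press loop.
WATSON_LATENCY_S = 0.010   # assumed machine button-press response time
HUMAN_MEAN_S = 0.200       # ballpark human reaction time
HUMAN_SD_S = 0.050

def buzz_race(trials=100_000):
    """Fraction of clues where the machine rings in before the human,
    with both reacting from the moment the buzzer system is armed."""
    wins = sum(
        WATSON_LATENCY_S < random.gauss(HUMAN_MEAN_S, HUMAN_SD_S)
        for _ in range(trials)
    )
    return wins / trials

print(f"Machine rings in first on {buzz_race():.1%} of clues")
```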

    3) Watson, as a computer, excels beyond most persons’ comprehension at narrowing near-infinite possibilities down to one best-case scenario through mathematical means. This is hard calculation done at harrowing speeds, which is what an uninhibited machine is best at. So it should come as no surprise to anyone aware of the complexities of statistics and/or game theory that Watson does not round up or down, or play liberal or conservative, at any stage of the game when it comes to a wager. This is called a regression algorithm: there is a finite number of scenarios, even if the first category/amount selected is a daily double, whereby a computer, very quickly with Watson’s resources, can determine a favorable outcome based on probabilities and then determine how much to bet. An odd number shouldn’t then be considered odd. The computer would be able to give you that amount not only to the cent but to the fraction of a cent; when Watson says “$347” and you wonder why not “$350”, it is simply programmed not to show you “$347.152375295…” (A toy version of this wagering calculation appears below.)
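Here is a toy Python version of the wagering calculation described above. The score, the probabilities, and the simple expected-value objective are all invented for illustration; Watson’s actual betting strategy is more sophisticated and not public in this detail.

```python
# Toy Daily Double wager chooser: evaluate every legal bet and keep the
# one that maximizes expected resulting score. All inputs are hypothetical.

def best_wager(current_score, p_correct, min_wager=5):
    """Return (bet, expected_score) maximizing expected resulting score."""
    max_wager = max(current_score, 1000)   # simplified Daily Double limit
    best_bet, best_ev = min_wager, float("-inf")
    for bet in range(min_wager, max_wager + 1):
        win = current_score + bet
        lose = current_score - bet
        ev = p_correct * win + (1 - p_correct) * lose
        if ev > best_ev:
            best_bet, best_ev = bet, ev
    return best_bet, best_ev

print(best_wager(current_score=5000, p_correct=0.75))  # confident: bet it all
print(best_wager(current_score=5000, p_correct=0.40))  # unsure: bet the minimum
```

In this linear toy the optimum is all-or-nothing, because expected value grows (or shrinks) steadily with the bet; a richer objective that scores each wager against opponents’ possible final totals is what produces precise, non-round amounts like the $347 mentioned above.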

    4) Watson is not a natural process, it’s not AI, and it’s not that much different from Google. It doesn’t glean any new information from previously unlinked sources, it doesn’t figure out anything that the programmers haven’t figured out for it, and as for health care or aeronautics or whatever IBM glamorizes for marketing purposes, it’s no more than what computers have been recognized as, albeit a superior one, for quite a while: a tool. And though, as one clue this interesting week alluded to with a familiar saying, a poor workman blames his shoddy tools, it can equally be said that a great workman relies a great deal on his great tools. Neither Watson nor any other machine can learn on its own, without being told how to learn.

    5) Watson is only a computer, just like Deep Blue when it beat reigning world chess champion Garry Kasparov, and even then only after it had first been defeated. It was reprogrammed between games of the rematch to allow certain sequences of the algorithm to be reevaluated and redone. The computer itself didn’t do any of this (how could it?); the programmers, in collaboration with other top grandmasters, instructed the computer on what to do. Yes, it’s an amazing feat to pit man against machine and have the machine come out on top, but if the feats of the machine can be attributed to man, why not just pit man against man? So what if a computer can beat a man at chess, or at Jeopardy, or at Family Feud? We told them how to do it!
