Our Robots, Ourselves
Daren C. Brabham / University of North Carolina at Chapel Hill
I really wanted Watson to lose, I really did. Everyone I knew wanted him to lose. Even Conan O’Brien and Andy Richter didn’t like him. But, perhaps unsurprisingly, Watson, IBM’s latest computer system able to understand questions asked of it in natural language, exposed the pitiful humanness of champions Ken Jennings and Brad Rutter by beating them handily in a two-game Jeopardy! match. While I am in awe of Watson’s ability to understand natural human language, I am disappointed that I had to see him demystify the simple magic of the Jeopardy! game and the genius of two of the game’s most successful past champions.
IBM used the game of Jeopardy! as the task environment for Watson while the system was developed over several years. What makes Jeopardy! so difficult for a computer to master is that the game is structured in reverse (contestants are given answers and must phrase their responses in the form of a question) and that Jeopardy! clues contain tricky phrases, wordplay, and essentially human fragments of knowledge. For example, on the Final Jeopardy clue of the first game, “Its largest airport is named for a World War II hero; its second largest, for a World War II battle,” Watson incorrectly answered “What is Toronto???” The correct answer, “What is Chicago,” was easy for Jennings and Rutter because they had likely traveled through, or been stuck at, Midway and O’Hare airports in the past. Watson had not.
IBM tried really hard to get us to like Watson, too. The two-game Jeopardy! match was spread out over three shows, with slick public relations spots from IBM spliced in between the segments of the game. In these segments, we were told by IBM researchers about how Watson was developed, which was interesting, but also in part why Watson was developed. The Watson program was partly a “grand challenge” for IBM and the computing industry, a challenge with goals so lofty that to succeed would propel both the state of computing and the appeal of IBM as a place to work for top young minds in computer science.
In these segments, IBM also told us to expect future applications of Watson’s technology in the domains of health and science. Though these future health applications are admirable and all but certain, the reality, as with any high-performance computing project, is that Watson’s primary function will be for defense. We did not hear anything about Watson’s defense applications, but they are certainly there, and we would do well to remain critical of these kinds of “grand challenge” technology programs in terms of their peaceful, humanitarian claims.
Watson symbolizes something bigger, though, in the exciting domain of artificial intelligence (AI) computing. For the moment, Watson has come to represent, in singular avatar form, the state of AI. But it is a misleading representation.
Today’s artificial intelligence, as Wired’s Steven Levy recently noted, “bears little resemblance to its initial conception.”2 We used to think the future would bring humanoid robots, companions in our image that could help us with tasks and amplify our abilities. But today’s artificially intelligent machines instead do small tasks very well, combing through massive amounts of data rapidly and streamlining specific portions of our industrial lives. These machines still “augment human intellect.”3 They just don’t look like us.
Or do they?
We are, in fact, playing a role in AI. Crowdsourcing systems like Amazon’s Mechanical Turk connect companies with individuals looking to make a bit of money performing simple tasks.4 At Mechanical Turk, “Requester” companies can use the site to coordinate a series of simple tasks they need accomplished by humans, tasks that computers cannot easily do, such as accurately tagging the content of images on the Internet for a search engine. Individuals in the Mechanical Turk community, known as “Turkers,” can then sign up to accomplish a series of these “human intelligence tasks” (HITs) for very small monetary rewards paid by the Requester. Mechanical Turk essentially coordinates large-scale collections of simple tasks requiring human intelligence, employing Turkers as cogs in a giant, artificially intelligent machine.
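For readers curious what this coordination looks like from the Requester’s side, the exchange can be sketched in a few lines of Python using the boto3 MTurk client. The task text, reward, and parameter values below are illustrative assumptions rather than any actual Requester’s listing:

```python
# A hypothetical Requester defines a HIT ("human intelligence task"):
# an image-tagging question, a small per-assignment reward, and limits
# on how long the task runs. All specific values here are invented for
# illustration.

# QuestionForm XML wrapping a simple free-text question for Turkers.
question_xml = """<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
  <Question>
    <QuestionIdentifier>tags</QuestionIdentifier>
    <QuestionContent><Text>List three tags describing this image.</Text></QuestionContent>
    <AnswerSpecification><FreeTextAnswer/></AnswerSpecification>
  </Question>
</QuestionForm>"""

hit_params = {
    "Title": "Tag the content of an image",
    "Description": "Provide three short tags describing an image.",
    "Keywords": "image, tagging, labels",
    "Reward": "0.05",                    # USD paid per completed assignment
    "MaxAssignments": 3,                 # how many Turkers answer the same HIT
    "LifetimeInSeconds": 86400,          # how long the HIT stays listed
    "AssignmentDurationInSeconds": 300,  # time a Turker has once accepted
    "Question": question_xml,
}

# With AWS credentials configured, the Requester would post the HIT:
# import boto3
# mturk = boto3.client("mturk", region_name="us-east-1")
# response = mturk.create_hit(**hit_params)
```

The pennies-per-task reward and the redundancy of asking several Turkers the same question are what let the platform aggregate many small human judgments into something machine-like.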
Amazon founder Jeff Bezos calls this process “artificial artificial intelligence,” since computers facilitate distributed human computation.5 We are all computers. And we are Watson, too. Machines do not make themselves, and they certainly do not make us. Humans make machines, and because “all artifacts have politics,”6 machines are both endowed with our values and flawed by our human limitations. Watson, as impressive as he was, did not know more than was loaded into him. He had volumes and volumes of knowledge stored on his hard drive, but it was human knowledge and no more.
Perhaps Watson harrows us with fear and wonder.7 We are at once awed by Watson’s/IBM’s/our performance and fearful that Watson makes us—the un-augmented, average humans—obsolete. Seeing a machine hijacking a long-running TV show that humanizes nerds makes us long for that kind of humanness. For the trivia buff, Jeopardy! is something to aspire to (and I recently took/bombed the Jeopardy! online test, a first step in the tryout process), and Watson’s domination made the show seem just a bit further away from my reach. I feel a bit more disconnected from Jeopardy! because a robot spoiled it. Maybe that is why I was rooting against Watson. Watson made losers of us all by beating the best humans we could throw at it.
But there is an upside to this. If we remember that Watson is merely an extension, an augmentation, of our own human intellect, we can all try to take a little bit of credit for his greatness. Or at least not be so shaken by it. We are all robots after all, and IBM’s recent parade of artificial intelligence is a reminder that we/the machines can do amazing things when we partner with technology to solve problems. Televised exhibitions of technology, like Watson on Jeopardy!, should serve to inspire us, not depress us. In these spectacles, I hope we see possible better futures and not just an avatar of swirling orbs on a computer screen or, worse, a threat to our existence. We should see in technological breakthroughs the opportunity to dream bigger than we already do, to imagine a world where solving massive social and environmental problems is within our grasp.
Image 1: Final Jeopardy with Watson.
Image 2: Rosie the robot.
1. If you don’t get this reference, you should go here, http://en.wikipedia.org/wiki/All_your_base_are_belong_to_us, and then here, http://www.youtube.com/watch?v=dQw4w9WgXcQ.
2. Steven Levy. “The AI Revolution is On.” Wired, 19(1), 27 December 2010, at para. 5. Available online at http://www.wired.com/magazine/2010/12/ff_ai_essay_airevolution
3. Douglas C. Engelbart. “Augmenting Human Intellect: A Conceptual Framework.” Washington, DC: Air Force Office of Scientific Research, October 1962. Available online at http://www.invisiblerevolution.net/engelbart/full_62_paper_augm_hum_int.html
4. Jeff Barr and Luis Felipe Cabrera. “AI Gets a Brain: New Technology Allows Software to Tap Real Human Intelligence.” ACM Queue, 4(4), 2006, pp. 24-29.
5. Jason Pontin. “Artificial Intelligence, with Help from the Humans.” New York Times, 25 March 2007, at para. 3. Available online at http://www.nytimes.com/2007/03/25/business/yourmoney/25Stream.html?_r=1
6. Langdon Winner. “Do Artifacts Have Politics?” Daedalus, 109(1), 1980, pp. 121-136.
7. In Act 1, Scene 1 of William Shakespeare’s Hamlet, Horatio remarks on seeing the king’s ghost, “it harrows me with fear and wonder.”
This was a particularly fascinating read, as I find human-technological convergence both terrifying and mesmerizing. Though it was over a month ago, I remember watching Watson’s performance with rapt attention, and while I, like you, found myself at least partially rooting for the AI to lose, I was not entirely of one mind about this. As you say at the end of your column, there is a greatness to these technological advancements, not only for what they can do, but because they stem from our own achievements.
You make a strong point that for all of the wonders of modern technology, AI especially, we as human innovators are the masters of knowledge, simply crafting highly sophisticated tools to aid us in areas already conquered with our facile human minds. (This is a fact I constantly remind myself of as I stave off fears of a decidedly Terminator- or Matrix-esque future.)
Yet, in reading this I was reminded of the caution of my grade-school teachers on the double-edged nature of calculators: useful they may be, but only so long as I still remember how to do sums on my own as well (otherwise I may someday find myself helpless when my calculator inevitably breaks). At the risk of invoking a cliché, such logic leads me to worry about a dystopic future not unlike the one imagined in Pixar’s WALL-E or in E. M. Forster’s short story, “The Machine Stops” (a great read!). For myself, it is not that I fear that we will be made “obsolete” by our technological creations (as you suggested), but that in our innovations we are too easily surrendering the burden of knowledge preservation to machines that are not perfect.