I’m sorry, Dave. I’m afraid I can’t do that.

The title of this post (which it turns out I constantly misquote) is from Stanley Kubrick’s film 2001: A Space Odyssey.  For those of you who’ve never seen this movie, it’s a delightful slapstick comedy set against the backdrop of a zany space station where every character is a hoot, people go for jogs on the ceiling and robots tell nursery rhymes.  Then you spend ten minutes watching weird colors before some old guy wakes up in a marble bathroom and a baby sits in the moon.

The terrifying thing is that absolutely nothing in that description is false.

I love Kubrick.  My jury is still very much out when it comes to this particular Kubrick film as a whole.  Scene by scene, however, I’m capable of forming opinions.

Thus my opening quote is stolen from a scene in 2001 that I consider utterly brilliant. In it, HAL, the supercomputer on board Space Station Acid Trip, has decided that the humans traveling with him, because they’re questioning HAL’s protocols, have become a danger to the mission and must be eliminated.  From the humans’ point of view, HAL has made a mistake, the first mistake HAL has ever made, and if their computer is capable of one error then it’s capable of more errors, and therefore cannot be relied upon fully and needs to be disconnected from controlling the space station.

The characters up until this point in the movie have had breezy, perfectly compliant interactions with HAL, casually asking him for reports and data and enjoying his voice interface with good-natured smiles.  After HAL’s error the humans continue on (with one brief attempt at secrecy) as if HAL can be ignored, oblivious to the fact that not only is HAL aware of their plans to disconnect him but that HAL is also very upset about this.  The tension built around this sequence amounts to a “Don’t go in the attic, the ax murderer is in there!” sort of thing, where the audience is more aware of HAL’s intentions than any of the characters on screen.  All of this comes to a head when Dave, after hopping out of the space station to check on something, asks HAL to open the doors so he can get back in.  HAL’s response? “I’m sorry, Dave. I’m afraid I can’t do that.”  A line delivered perfectly, causing the audience to cringe as Dave finally catches up with us and we are planted firmly inside his head, floating alone in the cold depths of space, suddenly faced with the fact that the door back inside will not be opening.

All of which has nothing to do with today’s post.  Today’s post is about the Turing test.  I stumbled onto this while doing research for a character this morning and proceeded to spend the next few hours playing and reading.

The Turing Test was proposed by Alan Turing in 1950 as a test to see whether machines could think, which is how I wound up also reading up on HAL and my favorite Kubrick scene from above.  More specifically, the test was Turing’s attempt to do away with the inevitable arguments that arise from a question like “Can machines think?” and come up with some sort of measurable test instead.  What he proposed was a setup where a machine and a human are placed in separate rooms.  A third party then interacts with each of them via typewritten messages and tries to tell which is which. If the machine can fool the third party into thinking it is human, then it has passed the Turing test.

There is a ton of debate as to the merits of such a test, and to be fair, Turing meant it more as a stimulant for philosophical questioning than as a real measure of artificial intelligence.  Most A.I. researchers attempt to build machines that can solve problems, such as scheduling, which they feel is a better measure of intelligence than the ability to fool a human into thinking that something is human. My favorite rebuttal to the Turing test comes from A.I. researchers Russell and Norvig, who:

…suggest an analogy with the history of flight: planes are tested by how well they fly, not by comparing them to birds. “Aeronautical engineering texts,” they write, “do not define the goal of their field as ‘making machines that fly so exactly like pigeons that they can fool other pigeons.’”

For me, though, the Turing test is a wonderful thing, because it spawned tons of attempts to create human-like automated chat programs on the Internet. Yes, there are places on the Internet where you can talk to robots.

I found the following:

jabberwacky.com – This one is fun because it is constantly learning.  Every person who chats with it contributes to its store of responses, and it can actually begin to learn about new topics and, given enough contributors, new languages.
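I haven’t seen Jabberwacky’s internals, so this is only a guess at the shape of the idea: remember what humans say in reply to each of your lines, and reuse those replies on future visitors. A toy sketch in Python (the class name, the fallback line, everything here is my own invention, not how the real site works):

```python
import random
from collections import defaultdict

class LearningBot:
    """Toy sketch of a bot that learns from everyone who chats with it:
    it remembers what humans said in reply to each of its lines, and
    reuses those human replies later."""

    FALLBACK = "Tell me more."

    def __init__(self):
        self.memory = defaultdict(list)  # bot line -> human replies seen
        self.last_line = None            # the last thing the bot said

    def respond(self, user_line):
        # Learn: the user's message is a real human reply to whatever
        # the bot said last, so file it away for future conversations.
        if self.last_line is not None:
            self.memory[self.last_line].append(user_line)
        # Answer with something a past human once said to this exact
        # line, if anyone ever has; otherwise fall back.
        replies = self.memory.get(user_line)
        self.last_line = random.choice(replies) if replies else self.FALLBACK
        return self.last_line
```

With enough visitors the memory fills in and the fallback gets rarer, which is presumably why the real thing gets more interesting the more people talk to it.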

A.L.I.C.E. and Jeeny AI – I’ll be honest, these two produced a very real emotional response in me. Which is to say that they kind of pissed me off.  Which is odd, them being computer programs and all. Their conversations tended to be one-sided, with the computer elbowing away what I was saying and proceeding with the conversation it wanted to have, not to mention the terrible syntax errors that kept popping up.  And even when you find common ground, the conversation usually slows to a halt.  You say something, the computer agrees with you, then you both kind of sit there nodding with nothing to say.  The terrifying thing is how well this approximates so very many of the conversations I have with real humans in the real world.  I don’t need some snooty robot pointing out to me how bad I am at small talk.

Cleverbot.com – This is an offshoot of Jabberwacky from above.  I couldn’t find an “About” page for this so I’m not sure, but it seems to streamline Jabberwacky and put it into a very slick-looking interface where the computer appears to be typing, something that adds a lot to the experience.  Actual conversations were hard to produce.  But then again, actual conversations are often hard to produce.  When I started playing with these things I approached them thinking I would test them by attempting to produce robust conversations about the arts; I figured I’d play to my strengths and talk books.  The results tended to be, as I’ve mentioned, a few decent responses that then grind to a halt as things get sort of confusing.  At first I wanted to dismiss these as not very good examples of conversation; then I realized that these are, in fact, perfect examples of conversation.  It’s just that the person you’re talking with is not exactly on the same wavelength as you.  Basically, the Internet has managed to replicate every cocktail party I’ve ever been to. Cleverbot is at least friendly about it in a semi-distracted way, unlike the truly irritating A.L.I.C.E. and Jeeny AI…again I should point out how weirdly easy it is to form actual emotional responses.

The real winner I’ve saved for last, though.

MegaHAL – Sadly this is also the only one I couldn’t get to work, so I couldn’t actually play with it, but it has real potential.  The interesting aspect of MegaHAL is that it builds randomness into its answers. Once the computer has analyzed your text and decided on a number of appropriate responses, it then chooses the one it measures as being the least expected.  The idea here is that randomness will make MegaHAL a more stimulating conversationalist, always saying something new and interesting.  Judging by the “Examples” page this produces some utterly crazy stuff, but at least MegaHAL is trying.  You don’t have to carry the conversation the whole time with this guy.
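Since I couldn’t get MegaHAL running, the following is only my guess at the mechanism: generate several plausible replies, score each by how unexpected it is under some language model, and keep the most surprising one. A toy version using a bigram word model (the model and the scoring are my own stand-ins, not MegaHAL’s actual machinery) might look like:

```python
import math
from collections import defaultdict

def train_bigrams(corpus):
    """Count how often each word follows each other word."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def surprise(reply, counts):
    """Score a reply by how unexpected it is under the bigram model:
    the summed negative log-probability of its word pairs.  Pairs the
    model has never seen get a fixed high surprise."""
    total = 0.0
    words = reply.lower().split()
    for a, b in zip(words, words[1:]):
        following = counts.get(a, {})
        n = sum(following.values())
        if n and following.get(b):
            total += -math.log(following[b] / n)
        else:
            total += 10.0  # unseen pair: maximally surprising
    return total

def pick_reply(candidates, counts):
    """MegaHAL-style selection: of several plausible replies,
    return the one the model finds least expected."""
    return max(candidates, key=lambda r: surprise(r, counts))
```

Picking the maximum-surprise candidate is what keeps this guy from sitting there nodding at you: the safest, most predictable reply is exactly the one that gets thrown out.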

One odd aspect of writing fiction is that I have to try and write dialogue. Actual dialogue.  I spend as much time as is socially acceptable doing nothing more than sitting around listening to other people talk, whether they know it or not.  The fun part comes in accepting the fact that actual spoken dialogue is a complete mockery of any and all formal language.  People sound nuts when they talk. Information gets transmitted, don’t get me wrong, but an actual written transcript of most conversations would just look freaking weird.  I always view it as my job to replicate this odd disregard for formalities when I’m writing dialogue.  Which is maybe why I tried so hard, and sadly failed, to chat with MegaHAL.  This thing understands my job.

I’ll finish up with this example sentence that MegaHAL produced during a previous exchange.


Now that’s someone I want to talk to at a cocktail party.