About Me

19 years old. Homeschooled, then went to a community college instead of high school. Currently at Hampshire College. http://www.facebook.com/NamelessWonderBand http://myspace.com/namelesswondermusic http://youtube.com/namelesswonderband http://twitter.com/NamelessWonder7 http://www.youtube.com/dervine7 http://ted.com/profiles/778985

Wednesday, February 16, 2011

The Turing Test

Hampshire College
Philosophy of Mind

The Turing Test involves the following procedure: a person, the interrogator, converses entirely through text with both a machine designed to imitate a human and an actual human. The interrogator’s job is to determine which is the human and which is the machine. If no interrogator can determine which is which, the machine is judged to be thinking. Is this judgment accurate? In this paper, I will argue that it is, because we can only judge thought based on behavior, and if the behavior of the machine is identical to a human’s but the machine is not thinking, we must doubt the existence of human thought.
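The structure of the procedure just described can be sketched in code. The sketch below is purely illustrative: the canned responders, the questions, and the interrogator’s guessing strategy are hypothetical stand-ins chosen only to show the shape of the game, not anything drawn from Turing’s paper.

```python
import random

def human(question):
    # Stand-in for the human participant's answers.
    return "I had toast for breakfast."

def machine(question):
    # Stand-in for the machine designed to imitate a human;
    # here it answers exactly as the human does.
    return "I had toast for breakfast."

def turing_test(interrogate, rounds=5):
    """Run one session of the imitation game.

    The interrogator questions two hidden participants (labeled
    'A' and 'B', randomly assigned) entirely through text, then
    names the one it believes is the machine. Returns True if
    the machine went undetected in this session."""
    a, b = random.sample([human, machine], 2)
    questions = ["What did you eat this morning?"] * rounds
    transcript = [(q, a(q), b(q)) for q in questions]
    guess = interrogate(transcript)  # 'A' or 'B'
    accused = a if guess == "A" else b
    return accused is not machine

def chance_interrogator(transcript):
    # When the answers are indistinguishable, the interrogator
    # can do no better than guess at random.
    return random.choice(["A", "B"])
```

Because the two responders here answer identically, any interrogator is reduced to guessing, and the machine escapes detection about half the time; on Turing’s criterion, a machine that could sustain that against every interrogator would be judged to be thinking.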

When discussing the Turing Test, it is important to make a distinction between what we can measure and the “inner nature”, so to speak, of that which we are measuring. The discussion then involves two questions: by what criteria are we to judge something as thinking, and is that thing, in fact, thinking?

With regard to the first question, let’s begin by considering the method that we do, in fact, employ, before moving on to the method we ought to employ. That is, how do we naturally determine that some particular thing in our environment is thinking? A first answer might be that we ascribe thought to other members of the human race, and nothing else. This will not do, however, as we often say that certain humans lack thought: uncontroversially if they are brain dead, for instance, and more controversially if they are mentally handicapped or a child. And although there are no non-controversial examples in our experience of non-humans thinking, we nevertheless have no problem imagining that non-human beings, such as extraterrestrials—beings with no genetic or physiological resemblance to us (or, in some cases, no physiology at all)—could think. It also seems that we often ascribe thought to non-human animals of which we do have experience, e.g., “the dog is thinking about where to hide its bone”. Perhaps in this latter case we use “thinking” as a figure of speech, a sort of shortcut for “acting as if they are thinking”. But why then do we not say that when we talk of other humans “thinking” we are using the same shortcut? After all, we do not perceive thoughts (except for our own)—we perceive their manifestations, the acting-as-ifs. For those who need to be convinced of this fact, consider the following thought experiment.

Suppose that someone has been rendered completely incapable of moving any part of his/her body, either through nerve damage or some sort of outside force. Furthermore, the parts of his/her nervous system that we believe are responsible for thought have been hidden from us in some way (perhaps encased in some material which is opaque to any sort of scan), so that we cannot determine whether they are active or damaged. (We could also suppose that there has been no damage to the nerves carrying signals to the brain, if this supposition is deemed necessary.) It is undeniable that this person could be thinking: victims of temporary paralysis can describe the experiences and thoughts they had while they were paralyzed. But it would be completely impossible for us to determine whether this person is, in fact, thinking.

So how do we naturally determine whether other things in our environment are thinking? The examples of brain-dead individuals and aliens make it apparent that this judgment is not ultimately made on the basis of belonging to a certain species, human. It may be initially made on that basis—we probably have an instinctual tendency to ascribe thought to any other human we meet and to ascribe a lack of thought to any non-human (for example, on encountering a brain-dead individual we might assume he/she is aware until we learn of the condition, or on meeting a sufficiently strange alien we might assume it is just a non-thinking creature until we learn more about it)—but it is not ultimately made on it. Instead, we make this judgment based on whether the thing behaves in a way that appears to indicate thought.[1] Furthermore, the thought experiment above makes it clear that this is the only way we can make this judgment; and, pragmatically, it is the way that we ought to make it. This means that we should judge a machine that passes the Turing Test to be thinking: it is behaving exactly like a human, humans think, and therefore it is behaving in the sort of way that indicates thought—and if from these facts we do not infer that the machine is thinking, then we are holding it to a different standard than the other things about which we make such judgments.

Now we move on to the second question. Does our judgment, made for pragmatic reasons, reflect actual reality? Is the machine, in fact, thinking?

It is useful now to define exactly what we mean by “thinking”. Turing’s definition[2] is that the sort of thing that thinks is the sort of thing that passes the Turing Test—a definition useful for him as a computer scientist interested in what computers can do, but not very useful for philosophers of mind, as it makes it tautological that a machine which passes the Turing Test is thinking. For our purposes, I propose the following definition: something is thinking when the part of it responsible for thought is manipulating models of the world (which are not physical models), and when it has a subjective, qualitative awareness of those models and of the manipulations it is performing on them. The first part of the definition comes from one of the things that distinguishes thought from non-thought: non-thought consists of observably deterministic responses to force and/or stimulus, and of solving problems purely by trial and error (randomly producing behavior until something works), while thought consists of considering the best course of behavior before doing anything. This consideration is made by the thinking thing modeling the situation, running through the possible solutions, and noting what those different solutions do within the model—observationally, this means that something that acts as if it thinks can look at a puzzle and then proceed to perform the solution quickly (relative to trial and error). This is how we, who are thinking things, behave, and it is a major part of the acting-as-if by which we judge that other things are thinking. However, the second part of the definition is important, as there are many things that manipulate models of the world yet may not be thinking: any computer would fall under this category.
The second part of the definition is more essential to our question, as we are making the distinction between what we can measure and the “inner nature” of that which we are measuring. Because of this, it is important to specify that the subjective awareness be qualitative, i.e., that there is something that “it is like”, so to speak, for the thing to be aware of what it’s doing—as computers can monitor and analyze their own internal processes and still not be considered to be thinking.

However, the fact that computers can do this might mean that there is something “it is like” for them and we just do not realize it. For this reason, I will not ask the general question of what sorts of computers could think, but the specific question of whether the sort of computer that can pass the Turing Test thinks; I am concerned with whether passing the Turing Test is sufficient for allowing us to infer thought, not whether it is necessary.[3]

We are now ready to answer the question: is the machine that passes the Turing Test thinking? We have shown that if we judge it the way we judge whether other things are thinking, we must judge it to be thinking. And based on the facts considered so far, I believe that our judgment would be accurate. The machine that passes the Turing Test is the machine that perfectly imitates human behavior. If it is not, in fact, thinking, then humans that pass the Turing Test would not have to be thinking either. Thinking would then have no explanatory force or necessary connection to behavior, and we would therefore have no reason to assume its existence. Furthermore, it could be argued that if the machine is not thinking, humans must not be thinking: we have two things behaving in an identical manner, and it would seem that any phenomenon produced by one must be produced by the other (even if the phenomenon looks different—the same program that plays music when run on my Mac would, if run on a mechanical computer, still perform the same computation, though it would produce nothing we would recognize as music). If the machine that passes the Turing Test is not thinking, then solipsism becomes a truly viable option: and this, I think, is a conclusion no reasonable person wants to accept.





[1] One who objects to this assertion might bring up the case of the paralyzed patient who, while not exhibiting any behavior, nevertheless exhibits certain brain activity from which we infer that he/she is thinking. However, the only reason we can make this inference is that that sort of activity is normally associated with thought-exhibiting behavior: if we had not observed such a correlation, we would not know what, if anything, the brain activity indicated. We can also imagine that a neurological examination (or whatever we would call the study of the part of an alien involved in cognition) of a thinking alien might reveal completely different sorts of activity. Therefore, my assertion can easily be extended: in cases where we cannot make judgments based on behavior, we can infer thought if the thing we are dealing with exhibits some observable phenomenon that is normally correlated with thought-exhibiting behavior among examples of its kind (provided examples of its kind engage in thought-exhibiting behavior).

[2] Alan Turing, “Computing Machinery and Intelligence”, p. 3

[3] Indeed, it is not necessary: our paralyzed patient from the earlier thought experiment would not pass it, although he/she is thinking. It is also possible that beings with a higher level of thought than our own would fail it: the sorts of things they might say could appear to us to be total gibberish.
