Searle's Chinese Box: The Chinese Room Argument and Artificial Intelligence by Larry Hauser


Chapter One:
THE PHILOSOPHICAL NEIGHBORHOOD

The Turing test enshrines the temptation to think that if something behaves as if it had certain mental processes, then it must actually have those mental processes. And this is part of the behaviorist's mistaken assumption that in order to be scientific, psychology must confine its study to externally observable behavior. Paradoxically, this residual behaviorism is tied to a residual dualism. .... The mind, they suppose, is something formal and abstract, not a part of the wet slimy stuff in our heads. ...unless one accepts the idea that the mind is completely independent of the brain or of any other physically specific system, one could not possibly hope to create minds just by designing programs. (Searle 1990a, p. 31)

1. Introduction

Questions of artificial intelligence are questions about other minds. There are ontological other minds questions about what, besides myself, has mental properties: here views range from solipsism (nothing else does) to panpsychism (everything does). There are epistemological other minds questions about whether and how we know. Such other minds questions relate closely to metaphysical mind-body issues about what minds or mental properties essentially are (if they are essentially anything) and how things' mental properties relate to their physical properties.{1} For instance, Descartes' identification of thought (mental processes and abilities) with abstract reasoning generally, and linguistic capability in particular, leads him, notoriously, to deny nonhuman animals really think or have any mental properties at all. Descartes' further identification of thought or reason with private phenomenological states or subjective conscious experiences -- which Searle seconds -- threatens the (I take it, absurd) conclusion that we don't really know if other humans genuinely have mental properties such as their behavior and conversation seem to bespeak.

Appearances of artificial intelligence bear crucially on this complex of issues. In the first place, the apparent mental capacities of existing computers, and the prospect that the intelligent-seeming capacities of such machines will increase dramatically in the near future, bear directly on the ontological other minds question. Other minds (even radically alien ones) are what intelligent machines seem to have. Secondarily -- their programming, presumably, being the source of their intelligent-seeming capacities -- other minds, it has even been thought, are what computer programs are. This has led some to think that our minds, indeed all minds, are programs: to the Turing machine functionalist doctrine that (a proper subset of) programs are what minds essentially are. On the other hand, if minds cannot be identified with programs, this may cast doubt on whether computers really have the mental properties their seemingly intelligent performances seem to indicate.

That minds cannot be identified with programs and that this, moreover, refutes claims of artificial intelligence on the part of existing or soon to be existing machines is what Searle's Chinese room thought experiment, I take it, tries to show. If thought experiments are aptly styled "intuition pumps" (Dennett 1980, p. 429), Searle's Chinese room "experiment" can also be said, in this connection, to pump intuitions contrary to Turing's famous imitation game "experiment" or "test" (Turing 1950). Searle states,

The [Chinese room] example shows that there could be two `systems', both of which pass the Turing test, but only one of which understands; and it is no argument against this point to say that since both pass the Turing test they must both understand, since this fails to meet the argument that the system in me that understands [English] has a great deal more than the system [Searle-in-the-room] that merely processes Chinese. (Searle 1980a, p. 419)

Even the form of Searle's article (Searle 1980a) copies the form of Turing's (1950) essay: a thought experiment is presented, its moral drawn, a series of objections are considered and answered.

Since Searle's Chinese room thought experiment is explicitly directed against Turing's imitation game "experiment" (the Turing test), I preface my discussion of Searle's "experiment" with a discussion of Turing's. Since Turing's imitation game "experiment" is a variation on a thought experiment described by Descartes (in Part Five of his Discourse on Method), I will preface my discussion of Turing's "experiment" with a discussion of Descartes. Finally, with Turing's test "apparatus" in place, I will use it to briefly survey the metaphysical and epistemological terrain just glimpsed.

2. Descartes: Minds and Other Minds

2.1 Mind, Body and Self-Knowledge: The Cogito Thought Experiment

In the background of Descartes' proposals for detecting other minds is a thought experiment he undertakes earlier, in Part Four of the Discourse on Method (Descartes 1637, p. 127), and (most famously) in Book I and Book II of his Meditations on First Philosophy.{2} This famous thought experiment, the cogito "experiment," purports to prove something about the nature of mind (that it is essentially immaterial, rational and conscious) and about a mind's knowledge of itself (that it is direct and incorrigible). It proceeds as follows. Reasoning that just as "Archimedes used to demand just one firm and immovable point in order to shift the entire earth; so I too can hope for great things [in the sciences] if I manage to find just one thing, however slight, that is certain and unshakable" (Descartes 1642, p. 16), Descartes proposes to doubt all that can possibly be doubted in order to "demolish everything completely and start again right from the foundations" (Descartes 1642, p. 12). Feigning to doubt everything "acquired ... through the senses" (Descartes 1642, p. 12) by imagining his seeming waking experience to be dreamt or hallucinated or even induced by "a malicious demon of the utmost power and cunning [who] has employed all his energies in order to deceive me" (Descartes 1642, p. 15), Descartes "discovers" first that he is able to completely doubt the evidence of his senses and all that such evidence seems to reveal about the nature and existence of the external world; but next he proceeds to find, he thinks, just the certain and unshakable point he seeks, on which to found the sciences, in his own self-awareness.

I will suppose then, that everything I see is spurious. I will believe that my memory tells me lies, and that none of the things that it reports ever happened. I have no senses. Body, shape, extension, movement and place are chimeras. So what remains true? .... I have convinced myself that there is absolutely nothing in the world, no sky, no earth, no minds, no bodies. Does it now follow that I too do not exist? No: if I convinced myself of something then I certainly existed. (Descartes 1642, pp. 16-17)

From this, Descartes famously concludes, "after considering everything very thoroughly, I must finally conclude that this proposition, I am, I exist, is necessarily true whenever it is put forward by me or conceived in my mind" (Descartes 1642, p. 17).

Yet this -- "I am, I exist" -- seems at first too fleeting and unstable a point from which to "move the world": it seems, at first, to provide too slim a basis on which to establish much (much less everything!) "in the sciences ... stable and likely to last" (Descartes 1642, p. 12). But further reflection on the cogito discovers more, Descartes urges, than this bare point of certainty that "I am, I exist." It discovers that from the supposition that "there is absolutely nothing in the world" it does not at all follow "that I too do not exist." Thus, Descartes maintains, it shows that (at least conceptually, and perhaps in fact) I am "not so bound up with a body and with the senses that I cannot exist without them" (Descartes 1642, p. 16): "thought ... alone is inseparable from me" (Descartes 1642, p. 18). Thus, Descartes understands the cogito experiment to show not only that I exist, necessarily, "whenever this proposition, I am, I exist is put forward by me or conceived in my mind"; he also takes the cogito experiment to reveal "what this `I' is, that now necessarily exists" (Descartes 1642, p. 17). It reveals that "I am ... in the strict sense only a thing that thinks; that is, I am a mind, or intelligence, or intellect, or reason" (Descartes 1642, p. 18).

But, "What is that [a mind or intelligence, or intellect or reason]?" Descartes answers, "A thing that doubts, understands, affirms, denies, is willing, is unwilling, and also imagines and has sensory perceptions" (Descartes 1642, p.19). Now, the narrow Archimedean point of certainty bids to expand into something more extensive:

This is a considerable list, if everything on it belongs to me. But does it? Is it not one and the same `I' who is now doubting almost everything, who nonetheless understands some things, who affirms that this one thing is true, denies everything else, desires to know more, is unwilling to be deceived, imagines many things even involuntarily, and is aware of many things which apparently come from the senses? (Descartes 1642, p. 19)

The thinking thing the cogito "experiment" uncovers, according to Descartes, is more than just a momentary assurance of the bare fact of my thinking existence. Descartes urges that what the "experiment" reveals is a unified, enduring, multifaceted conscious self: it reveals within this "I" it discovers -- or, perhaps, reveals this "I" to be -- an extensive field and temporally continuous stream of consciousness; a field and stream of consciousness, moreover, that is indubitably known (or can be indubitably known) introspectively or self-reflectively to itself.

Are not all these things [doubtings, understandings, affirmings, denyings, willings, imaginings, and sensings] just as true as the fact that I exist .... Which of them can be said to be separate from myself? The fact that it is I who am doubting and understanding and willing is so evident that I see no way of making it any clearer. But it is also the case that the `I' who imagines is the same `I'. For even if, as I have supposed, none of the objects of imagination are real, the power of imagination is something that really exists and is part of my thinking. Lastly, it is also the same `I' that has sensory perceptions, or is aware of bodily things as it were through the senses. For example, I am now seeing light, hearing noise, feeling heat. But I am asleep [this is all a dream, suppose], so all this is false. Yet I certainly seem to see, to hear, and to be warmed. This cannot be false; what is called `having a sensory perception' is strictly just this, and in this restricted sense of the term it is simply thinking. (Descartes 1642, p. 19)

In view of the immediate certainty of his awareness of his own conscious experiences as compared to the dubious inferential nature of belief in the existence of external things (inferred on the basis of these experiences), Descartes concludes, "I know plainly that I can achieve an easier and more evident perception of my own mind than of anything else" (Descartes 1642, pp. 22-23).

The cogito "experiment" which begins by dismissing beliefs about the external world sense perception seems to reveal as too dubious and uncertain to provide a basis for science concludes by "discovering" an indubitable internal "world" of conscious experiences; a "world" revealed not by sense, but introspectively, by the "natural light" of infallible intellectual self-apprehension.

2.2 Other Minds Problems

Whether the epistemological advantages of the subjective turn the cogito "experiment" is designed to induce are genuine or counterfeit, they are purchased at considerable epistemological cost: difficulties about our knowledge of other minds. Such difficulties stem from the alleged epistemological consequence of the cogito "experiment," that "I can achieve an easier and more evident perception ... than of anything else" only of my own consciousness, in conjunction with the alleged metaphysical consequence of the experiment, that consciousnesses are what minds essentially are. Given the privacy or subjectivity of experience, "marked by such facts as that I can feel my pains, and you can't" (Searle 1984a, p. 17), and the claim that such subjective feelings or experiences (modifications or modes of consciousness, as Descartes puts it) are what mental processes and states essentially are, what (if anything) warrants my belief that others besides myself have minds also?

The extreme form of this difficulty, concerning knowledge of other human minds, is foreshadowed at the conclusion of the cogito "experiment" in the Second Meditation. Descartes writes,

But then if I look out the window and see men crossing the square, as I just happen to have done, I normally say that I see the men themselves, just as I say that I see the wax. Yet do I see any more than hats and coats which could conceal automatons? I judge that they are men. And so something which I thought I was seeing with my eyes is in fact grasped solely by the faculty of judgment. (Descartes 1642, p. 21)

The trouble, as Pierre Bayle expresses it, is "the arguments of the Cartesians lead us to judge that other men are [unthinking] machines" (Bayle 1697, p. 231) also. This was by Bayle's reckoning (and continues, by the reckoning of many, to be) "perhaps the weakest side of Cartesianism" (Bayle 1697, p. 231). Yet, despite so strongly foreshadowing this at the end of Meditation Two, Descartes seems surprisingly unconcerned with this extreme form of the difficulty.

The other minds problem that Descartes does explicitly address is whether "brutes" (infrahuman animals) have minds. Much as Searle denies computers have any mental properties at all, Descartes notoriously denies that brutes have any. Objections to this are pressed by several of the commentators in the Objections and Replies (Descartes et al. 1642) to Descartes' Meditations. Pierre Gassendi, e.g., pointedly asks, "whether the sense perception which the brutes have does not also deserve to be called `thought', since it is not dissimilar to your own" (Descartes et al. 1642, p. 187). Descartes replies,

Your questions about the brutes are not appropriate in this context since the mind, when engaged in private meditation, can experience its own thinking but cannot have any experience to establish whether the brutes think or not; it must tackle this question later on, by an a posteriori investigation of their behavior. (Descartes et al. 1642, pp. 247-248)

Antoine Arnauld, in the same vein as Gassendi, objects that "it seems incredible that it can come about, without the assistance of any soul, that the light reflected from the body of a wolf into the eyes of a sheep should ... precipitate the sheep's flight" (Descartes et al. 1642, p. 144). Descartes replies, "As far as the souls of the brutes are concerned, this is not the place to examine the subject, and, short of giving an account of the whole of physics, I cannot add to the explanatory remarks I made in Part 5 of the Discourse on Method" (Descartes et al. 1642, p. 161). Let us turn, then, to these "explanatory remarks" outlining the sort of "a posteriori investigation of behavior" Descartes proposes for detecting the absence (in the case of the brutes) or presence (in the case of other human beings) of mental properties in others.

2.3 Two Sure and Certain Tests

In Part 5 of the Discourse Descartes describes the sort of "a posteriori investigation of ... behavior" he thinks conclusively evidences the absence of thought or consciousness in beasts (as in machines). That Descartes supposes this same sort of investigation also serves to conclusively evidence the presence of thought or consciousness in other human beings (as in himself) explains -- and even, in a manner, justifies -- his neglect of the other human minds problem. It justifies such neglect on the hypothesis that certain behavioral capacities -- especially linguistic capacities -- conclusively evidence thought in others or inductively suffice to warrant attribution of mental properties to them. It is a further question, of course, whether Descartes and his followers are entitled to this hypothesis or assumption; but we needn't belabor this further question here, because the saving hypothesis Descartes invokes (which Turing embodies in his test) is something Searle's Chinese room thought experiment claims to refute. Since what concerns us in connection with Searle's Chinese room "experiment" is the consequence of this rejection, what concerns us in connection with Descartes, then, is just the use to which he puts this saving hypothesis in the crucial experiments he endorses to prove the mindlessness (or lack of consciousness and rationality) of the brutes and, en passant, the consciousness and rationality (or mindedness) of other humans. Here, Descartes proposes the following thought experiment. Imagine "automatons, or moving machines" and suppose (in the first case) "such machines had the outward shape of a monkey or of some other beast that lacks reason" (Descartes 1637, p. 139). In this case, Descartes maintains,

"we should have no means of knowing that they did not possess entirely the same nature as these animals; whereas [in the second case] if any such machines bore a resemblance to our bodies, and imitated our actions as closely as possible for all practical purposes, we should still have two very certain means of recognizing they were not real men. (Descartes 1637, pp. 139-140)

The first "means" or test "is that [mindless unconscious things, e.g., automata] could never use words, or put together other signs, as we do in order to declare our thoughts to others" (Descartes 1637, p. 141); the second is that [mindless unconscious things, e.g., automata] could never "act in all the contingencies of life in the way in which our reason makes us act" (Descartes 1637, p. 141). I will mainly consider this first "language test" because it is the precursor of Turing's. Descartes himself regards the language test as "the principal argument ... that the brutes are devoid of reason" (Descartes 1647b).{3} Elsewhere Descartes even asserts, "speech is the only certain sign of thought hidden in a body" (Descartes 1649, p. 366, my emphasis). In connection with his use of the test(s) to exclude the brutes from the ranks of thinking (or conscious or intelligent) things, Descartes seems committed to the claim that any thought (or consciousness or intelligence) at all causally suffices for linguistic competence and hence that lack of linguistic competence in the brutes provides inductively sufficient evidence of absence of thought (the complete and utter mindlessness or lack of any mental properties at all) in them. In connection with his use of the test(s) to include other human beings in the ranks of thinking (or conscious or intelligent) things, Descartes seems committed to the claim that thought (or consciousness or intelligence) is causally necessary for linguistic competence and hence that possession of linguistic competence provides inductively sufficient evidence of the presence of thought (or consciousness or intelligence) in other human beings.

Now, I take it that Descartes' defense of the claim that (any) consciousness or thought (at all) causally suffices for linguistic competence and consequently that the absence of linguistic competence inductively suffices to evidence utter lack of thought or consciousness is the weakest part of his discussion. It is this claim, of course, that mandates Descartes' notorious views about the mindlessness of infrahuman animals. He reasons:

it is quite remarkable that there are no men so dull-witted or stupid -- and this includes even madmen -- that they are incapable of arranging various words and sounds together and forming an utterance from them in order to make their thoughts understood; whereas there is no other animal however perfect and well endowed it may be, that can do the like. This does not happen because they lack the necessary organs, for we see that magpies and parrots can utter words as we do, and yet they cannot speak as we do: that is, they cannot show that they are thinking what they are saying. On the other hand, men born deaf and dumb, and thus deprived of speech-organs as much as the beasts or even more so, normally invent their own signs to make themselves understood by those who, being regularly in their company, have the time to learn their language. This shows not merely that the beasts have less reason than men, but that they have no reason at all. For it patently requires very little reason to speak; and since as much inequality can be observed among the animals of a given species as among human beings, and some animals are more easily trained than others, it would be incredible that a superior specimen of the monkey or parrot species should not be able to speak as well as the stupidest child -- or at least as well as a child with a defective brain -- if their souls were not completely different in nature from ours. (Descartes 1637, p. 140)

Arnauld is surely right that this conclusion concerning the mental properties of the brutes, "that they have none," is a view that "will not succeed in finding acceptance in people's minds unless it is supported by very solid arguments" (Descartes et al. 1642, p. 144); and Descartes' arguments here hardly seem solid. Crucially, the theoretical identification of linguistic competence with intellectual competence per se and the further identification of intelligence with mind per se, on which the argument depends, are too dubious to support the blanket denial of mental properties to infrahuman animals in the face of the widespread and predictively fruitful practice of making mental predications ("sees the rabbit," "recognizes his master," etc.) of them. It is to such predications that Arnauld and Gassendi appeal. Chomsky even draws precisely the opposite moral from the one Descartes draws from there being virtually "no men so dull-witted and stupid as to be incapable of arranging various words together and forming an utterance from them": where Descartes concludes from this that mental capacity and linguistic capacity are identical, Chomsky concludes from this that linguistic ability is "independent of intelligence" (Chomsky 1966, p. 71). There is little to be said, it seems, either for the identification of mental activities and states generally with reasoning or for the idea that all mental states and processes presuppose the higher reasoning processes. There is much to be said, on the other hand, for the idea that cats (e.g.), despite their manifest lack of higher rational (e.g., linguistic and mathematical) abilities, nevertheless sometimes see birds, seek to catch them, etc.

So much for what I take to be the weakest part of Descartes' discussion -- his notion that any genuinely mental ability (given the universal instrumentality of reason) must bring in its train every mental ability and his consequent dismissal of the more or less partial mental endowments (compared to ours) of the lower animals. I turn now to what I see as the strength of Descartes' discussion, his notion that thought (i.e., mental properties such as understanding) is causally necessary for conversational ability and, consequently, that conversation or linguistic competence inductively suffices to evidence the mental property of understanding what's said and the language being conversed in. What commends this allowance above all, I think, is that to refuse it -- to refuse to credit behavior (e.g., conversation) with being sufficient to inductively warrant attribution of mental properties (e.g., understanding) -- threatens insoluble difficulties about our knowledge of other human minds. What else do we rely on "if ever we are challenged to say why we believe other people have minds" besides "the capacity to produce Turing-indistinguishable bodily performance" (Harnad 1991, p. 51)? What besides their conversation and other behavior? We know what other people think -- hence that they think -- foremost and most fully from what they say. This, I take it, underwrites Descartes' confident neglect of the other human minds problem: he regards speech as a "certain sign of thought hidden in a body," and conversation as providing, as it were, semi-direct access to others' thoughts.

2.4 Minds and Machines: The "Productivity" Argument

I have suggested that allowing behavior generally, and speech especially, to conclusively evidence thought is justified by the dire consequences -- intractable difficulties about other minds, even other human minds -- that otherwise threaten. Descartes also attempts to provide additional positive theoretical reasons for taking the behavior test and language test to be decisive tests of mentality or lack thereof. He reasons as follows:

For whereas reason is a universal instrument which can be used in all kinds of situations, these organs [of animals, like mechanisms in general] need some particular disposition for each particular action; hence it is for all practical purposes impossible for a machine to have enough different organs to make it act in all the contingencies of life in the way in which our reason makes us act. (Descartes 1637, p. 140)

We might also say, with regard to the language test, it is for all practical purposes impossible for a machine to have enough different organs to make it speak in all the contingencies of conversation in the way in which our reason makes us speak.

For we can certainly conceive of a machine so constructed that it utters words, and even utters words which correspond to bodily actions causing a change in its organs (e.g., if you touch it in one spot it asks what you want of it, if you touch it in another it cries out that you are hurting it, and so on). But it is not conceivable that such a machine should produce different arrangements of words so as to give an appropriately meaningful answer to whatever is said in its presence, as even the dullest of men can do. (Descartes 1637, p. 140)

Descartes continues, "even though such machines might do [or say] some things as well as we do [or say] them, or perhaps even better, they would inevitably fail in others, which would reveal that they were acting [or speaking] not through understanding but only from the disposition of their organs" (Descartes 1637, p. 140).

Descartes' would-be positive justification of the (inductive sufficiency of the) language test, then, seems to argue from (1) the unlimited novelty of speech (from speakers' unbounded capacities "to form new statements which ... are appropriate to new situations" (Chomsky 1966, p. 71)) -- what Chomsky calls the "creative aspect" and others (e.g., Fodor 1975, Lycan 1992) have called the "productivity" of speech -- together with the "fact" (2) that "it is for all practical purposes impossible for a machine to have enough organs" with enough dispositions (since "these organs need some particular disposition for each particular action [or locution]") to make it speak productively, to the conclusion (3) that "this capacity [for productive speech] is beyond the limitations of any imaginable mechanism" (Chomsky 1966, p. 73) and consequently inexplicable on purely material (i.e., for Descartes, purely mechanical) principles. Hence (4) we need to postulate some immaterial cause or principle -- "a `creative principle' alongside the `mechanical principle' that suffices to account for all other aspects of the animate and inanimate world" (Chomsky 1966, p. 73) -- to account for the creative aspect or productivity of human speech. Moreover (5) we are directly aware (via the cogito) of just such an immaterial "creative principle" -- the mind or reason -- in ourselves. Hence, finally, (6) the productive speech of other humans inductively warrants (by a kind of inference to the best explanation) the claim that they must also be actuated in their conversation by an immaterial "creative principle" -- i.e., a mind or reason -- such as we discover (via the cogito) in ourselves.

As I have already indicated, save for the supposed immateriality of the "creative principle" and the supposed discoverability of this (both the principle itself and its immateriality) via the cogito (or simply introspectively), I accept this final conclusion: I accept the inductive sufficiency of conversation to evidence thought. It is possible -- indeed, with an eye to avoiding intractable other minds problems, essential -- to accept this limited claim without accepting Descartes' further claims concerning the immateriality and introspective knowability of thought, and (consequently) without accepting any such Cartesian argument for the sufficiency of speech to evidence thought as the one just outlined. Indeed, my rejection of what I take to be the extraneous Cartesian part of the argument's final conclusion (6) is tantamount to rejecting the claim of premise (5). There certainly seem good enough reasons for rejecting this "in that it assumes that our faculty of inner observation or introspection reveals things as they really are in their innermost nature" (Churchland 1988, p. 15) when, in general, "other forms of observation -- sight, hearing, touch, and so on -- do no such thing" (Churchland 1988, p. 15). Indeed, "recent research in social psychology [e.g., Nisbett & Wilson 1977] has shown that the explanations one offers for one's own behavior have little or no origin in reliable introspection ... but are instead spontaneously confabulated on the spot as explanatory hypotheses to fit the behavior and circumstances observed" (Churchland 1988, p. 79); hypotheses, moreover, which "are often demonstrably wrong" (Churchland 1988, p. 79).

There is a second problem with this argument of Descartes: of the three things it premises, (1) the productivity or creativity of speech, (2) the nonproductivity or noncreativity of machinery, and (5) the introspectibility of the immateriality and creativity of mind, it is not just (5) which is dubious, but also (2). Indeed, if (2) were allowed to stand we would have grounds independent of the introspective ones adduced in (5) supporting the claimed immateriality of thought. If thought is supposed to be causally required for speech (so that speech, conversely, inductively suffices to evidence thought) then, since by (2) unbounded (productive or creative) capacities can't be mechanically (and hence, it seems, not materially) generated, it follows that thought (insofar as it causes speech) must be immaterial. Descartes is led to the conclusion that "matter is incapable of thinking" (Bayle 1697, p. 216) even independently of the cogito by the contrast between the productivity of thought manifested in our human capacities of responding appropriately to our aims in indefinitely many novel situations or appropriately to our conversational aims in indefinitely many novel speech situations (on the one hand) and the presumably limited inflexible nature of machines (on the other). From this contrast -- between the limited capacities of machines and the boundless productivity of thought and speech -- Descartes and his followers famously conclude "that only spiritual substances are capable of ... reasoning" (Bayle 1697, p. 216); "that all thought ... is of such a nature that the most subtle and perfect matter is incapable of it and that it can only exist in incorporeal substances" (Bayle 1697, p. 216). Turing's seminal argument (Turing 1936-7) that certain types of machines -- since called "Turing machines" -- can compute any computable function precisely demonstrates the falsity of Descartes' presumption about the necessarily limited capacities of machines. Turing machines have unbounded capacities for unlimited novel computation.{4}

3. Turing Machines, Turing's Test, and Turing Machine Functionalism

3.1 Computing Machinery: Descartes to Turing

Of views like Descartes' about the limited, inflexible nature of machines, Turing remarks,

I believe they are mostly founded on the principle of scientific induction. A man has seen thousands of machines in his lifetime. From what he sees of them he draws a number of general conclusions. They are ugly, each is designed for a limited purpose, when required for a minutely different purpose they are useless, the variety of behaviour of any one of them is very small, etc., etc. Naturally he concludes these are necessary properties of machines in general. (Turing 1950, p. 54)

Again, "the belief that machinery was necessarily limited to extremely straightforward possibly even repetitive, jobs" has been encouraged by the "very limited character of the machinery which has been used until recent times" (Turing 1969, p. 3). Such beliefs, however, were counterinstanced already in the nineteenth century by Charles Babbage's designs for an "Analytical Engine" (as he called it). "Babbage had all the essential ideas" (Turing 1950, p. 45). Babbage's Analytical Engine was (or would have been, had it been completed) a programmable (hence a flexible -- even unlimitedly flexible) machine. It was (or would have been) the first digital computer. The theoretical import of this was recognized by Babbage's associate, Lady Lovelace.{5} She writes,

The bounds of arithmetic were ... outstepped the moment the idea of applying the [instruction] cards had occurred; and the Analytical Engine does not occupy common ground with mere "calculating machines." It holds a position wholly its own; and the considerations it suggests are more interesting in their nature. In enabling mechanism to combine together general symbols, in successions of unlimited variety and extent, a uniting link is established between the operations of matter and the abstract mental processes of the most abstract branch of mathematical science. .... Thus not only the mental and the material, but the theoretical and the practical in the mathematical world, are brought into more intimate and effective connection with each other. We are not aware of its being on record that anything partaking of the nature of what is so well designated the Analytical Engine has hitherto been proposed, or even thought of, any more than the idea of a thinking or of a reasoning machine. (Augusta 1842, pp. 368-369)

Lady Lovelace's speculations were made exact -- the refutation of Descartes' presumption about the limits and inflexibility of machines implicit in Babbage's design made explicit and decisive -- by Turing.

3.2 Turing Machines and Computation

In a paper titled "On computable numbers, with an application to the Entscheidungsproblem" (Turing 1936-7), Turing establishes precisely the "link ... between the operations of matter and the abstract mental processes of the most abstract branch of mathematical sciences" Lady Lovelace foresaw. In so doing Turing develops (as Lady Lovelace forecast),

A new, a vast, and a powerful language ... for the future use of analysis, in which to wield its truths so that these may become of more speedy and accurate practical application for the purposes of mankind than the means hitherto in our possession have rendered possible. (Augusta 1842, p. 369)

The "language" Turing developed is a scheme for the abstract description of discrete state machines. This he uses to describe machines capable of computing any effectively computable function or of representing any mathematical algorithm. These abstract machines which Turing (1969) called "logical computing machines" (LCMs) to distinguish them from actual digital computers or "practical computing machines" (PCMs) have come to be called "Turing machines." Though typically extremely limited in their repertoire of basic operations, Turing machines are capable, by concatenation of these basic operations, of representing any algorithm and computing any effectively calculable mathematical function.

Discrete state machines are "machines that move by sudden jumps or clicks from one quite definite state to another" (Turing 1950, p. 439). This notion of a discrete state machine may be illustrated, and Turing's way of abstractly characterizing the operation of such machines introduced, by considering the following very simple machine:

a wheel which clicks round through 120° once a second, but may be stopped by a lever which can be operated from the outside; in addition a lamp is to light in one of the positions of the wheel. This machine could be described abstractly as follows. The internal state of the machine (which is described by the position of the wheel) may be q1, q2, or q3. There is an input signal i0 or i1 (position of lever). (Turing 1950, pp. 439-440)

The operation of this machine "can be described abstractly" as being controlled by tables A (determining the output signals o0 (lamp off) or o1 (lamp on) on the basis of the machine's internal state) and B (determining the internal state of the machine at any moment on the basis of the last state and the input "signal") below.

    Table A:

    State     q1    q2    q3
    Output    o0    o0    o1

    Table B:
                        Last state
                        q1    q2    q3
    Input     i0        q2    q3    q1
              i1        q1    q2    q3

These two tables may be combined to form a standard "machine table." Such a table completely specifies the behavior -- both state transitions and output -- of a discrete state machine at any given moment on the basis of the last state and the input. The table for our machine, then, is the following:

                        Last state
                        q1       q2       q3
    Input     i0        o0 q2    o0 q3    o1 q1
              i1        o0 q1    o0 q2    o1 q3

For each input (row) and last state (column) the table specifies an output and next state: so, the first entry (for i0 in q1) can be read, "if i0 is input when the machine is in state q1 then o0 is output and the machine goes into state q2"; and so on, mutatis mutandis, for the other entries.

Two things about this machine and its description, I think, are noteworthy. First, there is the brutally mechanical nature of the state transitions and output determined by the input "signal" and the previous state of the machine: the machine "obeys" the "instructions" in the machine table only in the sense that the table describes the sequences of states under the different input conditions. Secondly, the abstract character of the description involves a double idealization or abstraction. In the first place, as Turing observes, there is already idealization involved in characterizing this physical system as a discrete state machine since, "Strictly speaking there are no such machines. Everything really moves continuously. But there are many kinds of machine [such as this one] which can profitably be thought of as being discrete state machines" (Turing 1950, p. 439). In the second place, the machine table description abstracts from the actual physical construction of the machine insofar as any system that has (or can be profitably thought of as having) two discrete inputs, three discrete internal states, and two discrete outputs whose causal interactions answer to this table will receive an equivalent abstract description under this scheme. Under this abstract description they will be equivalent machines. Thus, Turing speaks of machines specified under this scheme as "abstract" or "logical machines" (Turing 1969, p. 6).
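
The brutally mechanical character of such table-driven operation is easy to exhibit in code. What follows is a minimal sketch (hypothetical Python of my own devising, not anything in Turing's text; the names are invented) of the wheel machine run directly off the combined machine table above:

    # Hypothetical sketch of the wheel machine, driven entirely by its
    # combined machine table: (input, last state) -> (output, next state).
    MACHINE_TABLE = {
        ("i0", "q1"): ("o0", "q2"), ("i0", "q2"): ("o0", "q3"), ("i0", "q3"): ("o1", "q1"),
        ("i1", "q1"): ("o0", "q1"), ("i1", "q2"): ("o0", "q2"), ("i1", "q3"): ("o1", "q3"),
    }

    def run(signals, state="q1"):
        """Step the discrete state machine once per input signal."""
        outputs = []
        for signal in signals:
            output, state = MACHINE_TABLE[(signal, state)]
            outputs.append(output)
        return outputs, state

    # Two clicks (i0) bring the wheel to q3, lighting the lamp (o1);
    # the lever (i1) then holds it there:
    print(run(["i0", "i0", "i1", "i1"]))  # -> (['o0', 'o0', 'o1', 'o1'], 'q3')

Nothing here "obeys" the table in any richer sense than that the dictionary lookup causally determines each transition -- which is just the first point noted above.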

Turing (1936-7) specifies a type of logical computing machine -- a "Turing machine" -- and argues that there is some LCM capable of computing every possible algorithm and thus of determining every effectively computable mathematical function. Several considerations strongly support this now generally acknowledged Turing-Church thesis{6}: crucially, (1) all known (intuitively recognized) algorithms are representable as Turing machines; and (2) other independently developed characterizations of the set of computable functions, e.g., Gödel's (1934) and Kleene's (1936) recursive functions and Church's (1936) lambda calculus, pick out the same functions as Turing's. Turing's own defense of the thesis that the Turing "computable numbers include all numbers which could naturally be regarded as computable" (Turing 1936-7, p. 116), besides attempting to show that LCM computability is coextensive with intuitively recognized cases of algorithmic calculability (Turing 1936-7, §§9-10) and that "all effectively calculable (lambda-definable) sequences are computable and its converse [that all Turing computable sequences are effectively calculable (lambda-definable)]" (Turing 1936-7, p. 149), ingeniously supports the first claim by close consideration of what human algorithmic computation involves. Thus:

Computing is normally done by writing certain symbols on paper. We may suppose the paper is divided into squares like a child's arithmetic book. In elementary arithmetic the two-dimensional character of the paper is sometimes used. But such a use is always avoidable, and ... the two dimensional character of paper is not an essential of computation. I shall also suppose that the number of symbols which may be printed is finite. .... The effect of this restriction of the number of symbols is not very serious. It is always possible to use sequences of symbols in place of single symbols. Thus an Arabic numeral such as 17 or 999999999999999 is normally treated as a single symbol. Similarly in any European language words are treated as single symbols.... The differences from our point of view between the single and compound symbols is that the compound symbols, if they are too lengthy, cannot be observed at one glance. This is in accordance with experience. We cannot tell at a glance whether 9999999999999999 and 999999999999999 are the same. (Turing 1936-7, pp. 135-136)

Reflecting further on the mechanics of human computation, Turing continues,

The behavior of the [human] computer at any moment is determined by the symbols which he is observing, and his "state of mind" at the moment. We may suppose that there is a bound B to the number of symbols or squares which the [human] computer can observe at one moment. If he wishes to observe more, he must use successive observations. We will also suppose that the number of states of mind which need to be taken into account is finite. .... Again, the restriction is not one which seriously affects computation, since the use of more complicated states of mind can be avoided by writing down more symbols on the tape. (Turing 1936-7, p. 136)

Now, in the light of the preceding,

Let us imagine the operations performed by the [human] computer to be split up into "simple operations" which are so elementary that it is not easy to imagine them further divided. Every such operation consists of some change of the physical system consisting of the [human] computer and his tape. We know the state of the system if we know the sequence of symbols on the tape, which of these are observed by the [human] computer (possibly with a special order), and the state of mind of the computer. We may suppose that in a simple operation not more than one symbol is altered. Any other changes can be split up into simple changes of this kind. The situation in regard to the squares whose symbols may be altered in this way is the same as in regard to the observed squares. We may, therefore, without loss of generality, assume that the squares whose symbols are changed are always "observed" squares. (Turing 1936-7, p. 136)

Finally,

Besides these changes of symbols, the simple operations must include changes of distribution of the observed squares. The new observed squares must be immediately recognizable by the [human] computer. It is reasonable to suppose that they can only be squares whose distance from the closest of the immediately previously observed squares does not exceed a certain fixed amount. Let us say that each of the new observed squares is within L squares of an immediately previously observed square. (Turing 1936-7, p. 136)

In sum, then, we can say, with regard to human computation,

The simple operations must therefore include: (a) Changes of the symbol on one of the observed squares. (b) Changes of one of the squares observed to another square within L squares of one of the previously observed squares. (Turing 1936-7, p. 137)

Furthermore,

It may be that some of these changes necessarily involve a change of state of mind. The most general single operation must therefore be taken to be one of the following: (A) A possible change (a) of symbol together with a possible change of state of mind. (B) A possible change (b) of observed squares, together with a possible change of state of mind. (Turing 1936-7, p. 137)

Turing continues,

We may now construct a machine to do the work of this [human] computer. To each state of mind of the computer corresponds an "m-configuration" of the machine. The machine scans B squares corresponding to the B squares observed by the [human] computer. In any move the machine can change a symbol on a scanned square or can change any one of the scanned squares to another square distant not more than L squares from one of the other scanned squares. The move which is done, and the succeeding configuration, are determined by the scanned symbol and the m-configuration. (Turing 1936-7, pp. 137-138)

Insofar as the details of Turing's description of human computational processes are warranted (as they seem to be) it seems clear that machines so constructed can do the same computational work as human computers. Indeed, Turing shows that machines so constructed can do the same computational work as a human computer even on the minimal assumptions: assuming B (the number of squares able to be scanned by the machine) = 1; assuming L (the number of squares away from the immediately previously scanned square to which the machine is supposed to be able to redirect its "gaze") = 1; and assuming n (the number of symbols in the machine's alphabet) = 2.

Now, Turing machines can be characterized (as Turing 1950 characterizes them) as consisting of a store, an executive unit, and a control. The store of the machines Turing envisaged is a tape, divided into squares, each square containing a symbol from a finite alphabet (consisting of two or more symbol types): this tape is supposed to extend infinitely (or, practically, to be indefinitely extendable) in either direction. The executive unit is a read-write mechanism or scanner capable of scanning the input-output tape just described and reading or recognizing a symbol on the tape, writing (or overwriting) a symbol on the tape, and advancing the tape one square in either direction. The control determines which operations to perform in which order on the basis of the last state of the device and the input (scanned) signal. By whatever mechanism such control is supposed to be accomplished, it can be thought of as (or represented by) a table according to which the output (symbol written) "is determined by the last state and input signal according to the table" (Turing 1950, p. 46). Each such abstract LCM or Turing machine is completely characterized by its machine table.

The following machine tables represent different Turing machines for adding two positive integers expressed (in keeping with our minimal assumptions) in unary ("simplified Roman" or "prisoner's script") notation.

    Table D:
                        q1      q2      q3
    Input     "0"       s1q2    Rq3     Rq4
              "1"       Rq1     Lq2     s0q3

    Table E:
                        q1      q2      q3      q4
    Input     "0"               Rq3     s1q4    Rq5
              "1"       s0q2            Rq3     Lq4

The rows of these matrices represent the possible inputs to the machines, which are the two symbols of their alphabets, "0" and "1". The columns correspond to the states of the machine. In each square of the matrices appears an "instruction" which may be read as follows: "s1q2" means "print the symbol s1 in the scanned square and go into state q2"; "Rq2" means "proceed to scan the square immediately to the right of the square presently being scanned (shift right) and go into state q2"; "Lq3" means "shift left and go into state q3"; etc. When a machine reaches a condition (i.e., is in a state scanning a symbol) for which no instruction is defined (i.e., for which the corresponding cell of the functional matrix is empty) the machine is said to "enter the rest state" or "halt." The machines described by these tables function as follows: they are started in state q1 scanning the first unary digit of the sum to be worked out, which sum is represented by two strings of "1"s (i.e., two unary numerals) separated by a single "0", the unused leading and trailing portions of the tape being filled with "0"s. The first machine (Table D) proceeds to work out the sum by shifting right until it scans the separating "0", replacing this "0" with a "1", shifting left until it scans the first leading "0", shifting right to position itself once more over the initial "1", replacing this "1" with a "0", and finally shifting right to halt scanning the first unary digit of the answer. The second machine (Table E) proceeds by first replacing the initial "1" in the sum to be worked out with "0", then shifting right until it reaches the separating "0" which it replaces with "1", shifting back left until it reaches the first leading "0", and finally shifting right to halt scanning the first unary digit of the answer.
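
Both machines can be run with a few lines of code. Here is a minimal sketch (hypothetical Python of my own, not anything from the text) of a generic simulator applied to Tables D and E; the tape notation, the starting condition, and the empty-cell halting convention are as just described:

    # Hypothetical sketch: a generic simulator for machine tables of the
    # above sort. Tables map (state, scanned symbol) -> instruction, where
    # an instruction either prints a symbol ("0"/"1") or shifts ("L"/"R"),
    # and names the next state. Empty (missing) cells mean: halt.
    TABLE_D = {
        ("q1", "0"): ("1", "q2"), ("q1", "1"): ("R", "q1"),
        ("q2", "0"): ("R", "q3"), ("q2", "1"): ("L", "q2"),
        ("q3", "0"): ("R", "q4"), ("q3", "1"): ("0", "q3"),
    }
    TABLE_E = {
        ("q1", "1"): ("0", "q2"), ("q2", "0"): ("R", "q3"),
        ("q3", "1"): ("R", "q3"), ("q3", "0"): ("1", "q4"),
        ("q4", "1"): ("L", "q4"), ("q4", "0"): ("R", "q5"),
    }

    def run(table, tape, pos, state="q1"):
        """Apply the table until no instruction is defined, then halt."""
        cells = dict(enumerate(tape))            # sparse tape; blanks read "0"
        while (state, cells.get(pos, "0")) in table:
            act, state = table[(state, cells.get(pos, "0"))]
            if act == "R":   pos += 1            # shift right
            elif act == "L": pos -= 1            # shift left
            else:            cells[pos] = act    # print a symbol
        return "".join(cells.get(i, "0") for i in range(min(cells), max(cells) + 1))

    # 3 + 2 in unary: "111" and "11" separated by a single "0"; start in q1
    # scanning the first "1" (position 1). Both machines leave five "1"s:
    print(run(TABLE_D, "01110110", pos=1))   # -> 00111110
    print(run(TABLE_E, "01110110", pos=1))   # -> 00111110

That the two runs print the same answer from the same input, though by different routes and with different numbers of states, is just their weak (input-output) equivalence, discussed below.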

I note several things about these machines. First, note that while, according to the Turing-Church thesis, there is a Turing machine for every effectively computable function, the relationship between computable functions and Turing machines (like that between functions and algorithms for computing them) is one-many. The two machines just described, for instance, are different machines representing different algorithms for computing the addition function in unary. Machines such as these -- which compute the same function or produce identical output when given identical input -- are said to be "weakly equivalent" or "input-output" equivalent. This will concern us later. Secondly, there's the "formal" and "mechanical" nature of computation as Turing characterizes it: our machines are machines for operating on unary numerals: Turing (1936-7) "defined computation as the formal manipulation of uninterpreted symbols by the application of formal rules" (Boden 1990, p. 4). Just as the "input" lever of our first machine causally determined the next state of the wheel in virtue of its physical properties (e.g., its rigidity), so are the input symbols of Turing machines supposed to causally determine the state transitions of the machines according to their physical properties (e.g., their shapes or electrical potentials).{7} This will concern us mightily in connection with Searle's Chinese room argument and experiment later on since Searle "takes for granted that AI programs and computer models are purely formal-syntactic (as is a Turing machine)" (Boden 1990, p. 5). On the other hand (thirdly), though, "the `logical [machine table] description' of a Turing machine does not include any specification of the physical nature of these [machine] `states' -- or indeed of the physical nature of the whole machine" (Putnam 1960, p. 371). The machine table, in other words, does not describe an actual physical device but only the "formal shadow" (Searle 1980a, p. 422) cast by (possibly many physically different) devices. "In other words, a given `Turing machine' is an abstract machine which may be realized in an almost infinite number of different ways" (Putnam 1960, p. 371). Just as the "logical description" abstracts from the physical nature of the processing and states, so does it abstract from the physical nature of the processed symbols -- whether s0 and s1 are "0" and "1" or "x" and "y" or a 0-volt and 5-volt charge is irrelevant. Just as "the `logical description' (machine table) of the machine describes the state[s] only in terms of their relations to each other and to what appears on the tape" (Putnam 1960, p. 367), it describes the symbols that appear on the tape only in terms of their relations to each other and to the machine states also. Just as "the `physical realization' of the machine is immaterial, so long as there are distinct states [q1, q2, q3, etc.] and they succeed each other as specified in the machine table" (Putnam 1960, p. 367), so is the physical realization of the alphabet immaterial.

3.3 Universal Turing Machines and Digital Computers

According to the Turing-Church thesis, a Turing machine "can do anything that could be described as `rule of thumb' or `purely mechanical'" (Turing 1969, p. 7), i.e., some such machine is capable of performing any algorithmic or rote procedure. This formulation is ambiguous. Everything said so far, perhaps, suggests only the weak reading that there is some Turing machine (a different one) capable of implementing each algorithm; but it is also the case that there exist Turing machines (individually) capable of implementing every algorithm.

Up to now we have held the point of view that different algorithms are performed by different Turing machines, each with its own functional matrix [i.e., machine table]. We can, however, construct a universal Turing machine capable of executing any algorithm, that is, in a certain sense, capable of doing the work of any Turing machine. (Trakhtenbrot 1963, p. 80)

This possibility may be shown (informally) to follow from the following considerations:

1. For every rote procedure or algorithm there exists some Turing machine which implements that procedure or algorithm. (The Turing-Church thesis)
2. Every Turing machine is completely characterized by its machine table.
3. It is possible to describe a rote procedure for applying (following) a machine table.

From these considerations it follows,

There are Turing machines that implement rote procedures for applying machine tables and, hence, for imitating or emulating any Turing machine.

Such a procedure is called an "imitation algorithm" and a machine implementing such an algorithm is called a "universal Turing machine" (Trakhtenbrot 1963, pp. 80-81). For the formal proof that there exist such machines and detailed description of one the reader is referred to Turing 1936-7, §§5-7; but it is intuitively obvious that the process of hand-simulating a Turing machine's operation from its table and initial configuration (the contents of the tape, the square being scanned, and the initial state of the machine) is rote. Given the machine table and initial configuration you simply look up the instruction in the cell cross-referenced by the current state and the scanned symbol, execute the instruction (which alters the configuration of the machine) or halt (if the cell is empty), and so on for each subsequent configuration. This "can be carried out by someone who knows nothing about Turing machines. If he has the functional matrix of any machine and an initial configuration of the tape, he will be able to imitate the operation of the machine exactly and obtain the same result" (Trakhtenbrot 1963, p. 80). Being a rote procedure this "imitation algorithm can also be given as a Turing functional matrix" which can be reexpressed as a set (representable as a string) of quadruples. The adding machine described by Table D, e.g., is described by the sequence q1s0s1q2 q1s1Rq1 q2s0Rq3 q2s1Lq2 q3s0Rq4 q3s1s0q3, where the order of the elements in each quadruple is last state, scanned symbol, operation (tape movement or output symbol), next state. Turing (1936-7, p. 126) shows how such a sequence can be coded as a "standard description (S.D.)" in the form of a sequence of "0"s and "1"s. Hence,

It is possible to invent a single machine which can be used to compute any computable sequence. If this machine U is supplied with a tape on the beginning of which is written the S.D. of some computing machine M, then U will compute the same sequence as M. (Turing 1936-7, pp. 127-128)

If the Church-Turing thesis is true then "this one machine, the universal machine, itself possesses all the computational power that any computing machine may possess" (Stillings et al. 1987, p. 322). "Given the existence of the universal machine," now, "any functional matrix [machine table] can be viewed in two ways": (1) as describing "the logical unit of a special Turing machine for solving the corresponding problem"; (2) as describing "a program to be used by the universal machine to solve the corresponding problem" (Trakhtenbrot 1963, p. 84). "The importance of the universal machine is clear" (Turing 1969, p. 7) both practically (for the design and construction of actual computing machines) and philosophically. Practically, actual computing machines of the programmable sort (PCs, mainframes, etc.) "have the essential properties of ... `Universal Logical Computing Machines' mentioned earlier. In practice, given any job which could have been done on an LCM one can also do it on one of these digital computers" (Turing 1969, p. 8). "This special property of digital computers, that they can mimic any discrete state machine" means they can aptly be "described as being universal machines" (Turing 1950, p. 441). Hence, the universal machine's philosophical importance: contrary to Descartes' contention that "it is for all practical purposes impossible for a machine to have enough different organs to make it act in all the contingencies of life in the way in which our reason makes us act," (Descartes 1637, p. 140), "it is unnecessary to design new machines to do various computing processes" (Turing 1950, p. 441).

We do not need to have an infinity of machines doing different jobs. A single one will suffice. The engineering problem of producing various machines for various jobs is replaced by the office work of `programming' the universal machine to do these jobs. (Turing 1969, p. 7)

Computing processes of all kinds "can all be done with one computer suitably programmed for each case" (Turing 1950, pp. 441-442).
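The rote character of the imitation algorithm lends itself to illustration in code. The following sketch (mine, not Turing's or Trakhtenbrot's) transcribes the six quadruples of the adding machine above into a lookup table and hand-simulates the machine exactly as described: look up the instruction cross-referenced by the current state and scanned symbol, execute it, and repeat until no instruction applies. The initial configuration assumed here -- two blocks of two strokes representing 2 and 2, with the scanner started on the blank square between them -- is my conjecture, since Table D's conventions are given earlier in the document.

# A minimal sketch of the "imitation algorithm": rote simulation of a
# Turing machine from its table of quadruples. TABLE_D transcribes the
# quadruples for the adding machine of Table D quoted in the text.
TABLE_D = {
    # (current state, scanned symbol): (operation, argument, next state)
    ("q1", "0"): ("write", "1", "q2"),
    ("q1", "1"): ("move", "R", "q2"),
    ("q2", "0"): ("move", "R", "q3"),
    ("q2", "1"): ("move", "L", "q2"),
    ("q3", "0"): ("move", "R", "q4"),
    ("q3", "1"): ("write", "0", "q3"),
}

def imitate(table, tape, pos, state):
    """Rote-execute a machine table: look up the instruction indexed by
    the current state and scanned symbol, execute it, and repeat until
    no instruction applies (i.e., the machine halts)."""
    cells = dict(enumerate(tape))            # sparse tape; blank = "0"
    while (state, cells.get(pos, "0")) in table:
        op, arg, state = table[(state, cells.get(pos, "0"))]
        if op == "write":                    # print a symbol on the scanned square
            cells[pos] = arg
        else:                                # move the scanner left or right
            pos += 1 if arg == "R" else -1
    return "".join(cells.get(i, "0") for i in range(min(cells), max(cells) + 1))

# Assumed initial configuration: tape "11011" (2 and 2 in unary strokes),
# starting in state q1 scanning the blank square between the two blocks.
# The machine fills the gap, erases the leftmost stroke, and halts,
# leaving four contiguous strokes: 2 + 2 = 4.
print(imitate(TABLE_D, "11011", 2, "q1"))    # -> 01111

Fed a different table, the same imitate procedure computes a different function; this is just Trakhtenbrot's point, quoted above, that any functional matrix can be viewed either as describing a special machine or as a program for the universal machine.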

3.4 Turing's Test: The Imitation Game

Though the oft-alleged behavioristic nature of the Turing test can be overplayed -- we have just seen Descartes, certainly no behaviorist, propose a strikingly similar test -- it is nonetheless instructive to view Turing's proposal against the background of the dominant behaviorist paradigm or research program of the psychology of his day. It will also be profitable to view behaviorism against the background of the Cartesian introspective research program the behaviorist movement superseded. According to Cartesians and introspectionists "psychology is a study ... of the phenomena of consciousness" (Watson 1913, p. 272) and no one can experience anyone else's experiences. The upshot of this, Watson complains, is that "consciousness can only be analyzed by introspection" and as a result "we find as many analyses as there are individual psychologists" (Watson 1924, p. 5). Consequently, on Cartesian assumptions, "there is no way of experimentally attacking and solving psychological problems and standardizing methods" (Watson 1924, p. 5). Though the Cartesian paradigm or introspectionist research program for psychology held virtually unchallenged sway for almost three hundred years, it had failed to make significant headway on its outstanding anomalies: its skeptical implications concerning our knowledge of other minds, and the problematic nature of the presumed causal interaction of minds with bodies. Introspectionist psychology had reached what, from our present vantage, we might describe as a state of Kuhnian paradigmatic crisis (cf. Kuhn 1970). Thus:

We have become so enmeshed in speculative questions concerning the elements of mind, the nature of conscious content (for example, imageless thought, attitudes, Bewusstseinslage, etc.) that I, as an experimental student, feel that something is wrong with our premises and the types of problems which develop from them. (Watson 1913, p. 273)

Given the essential privacy of introspection, and the seemingly irreconcilable disagreements among the introspective reports of different individuals and schools, Watson even thought, "There is no longer any guarantee that we all mean the same thing when we use the terms now current in psychology" (Watson 1913, p. 23). In a like vein, Turing complains that the question "Can machines think?" "is perhaps too meaningless for consideration" (Turing 1950, p. 442) and proposes "to replace the [original] question by another, which is closely related to it, and is expressed in relatively unambiguous words" (Turing 1950, p. 433). He proposes to replace the question of whether a machine can think with the question of whether it can pass a certain operational test -- "to play the imitation game so well, that the average observer has only a 70% chance of making the correct identification after five minutes of questioning" (Turing 1950, p. 442). This "imitation game" is designed to test, in the first instance, the candidate's conversational ability or mastery of natural language. In the second instance, since "The question and answer method seems to be suitable for introducing almost any one of the fields of human endeavor which we wish to include" (Turing 1950, p. 435) -- since language, like reason, is a "universal instrument" -- the game serves to test for general knowledge and comprehension. This method can even be made to go proxy for something like Descartes' behavior test: instead of actually putting the subject in various situations and observing how they act, we can describe the situations to them and ask how they would act.

It would not be too wide of the mark to think Turing views himself as proposing an "operational definition" or "operational test" of thinking in accordance with the methodological prescriptions of the psychology of his day. Turing speaks of replacing the naive question "can a machine think" with the question of whether a machine can pass the test he is proposing. This indicates that Turing did not take himself to be offering an explication of the pretheoretic notion of thinking or intelligence. He even doubts there is a single well-defined pretheoretic sense of "thought" to be explicated. He rejects, perhaps for this reason, the idea that "the meaning of the words `machine' and `think' are to be found by examining how they are commonly used" (Turing 1950, p. 433). Such an "ordinary language" approach to the question, he thought, "made it difficult to escape the conclusion that the meaning and the answer to the question, `Can machines think?' is to be sought in a statistical survey such as a Gallup poll" (Turing 1950, p. 433), which he judges to be "absurd" (Turing 1950, p. 433).{8} So, rather than speaking of "explication" or offering a "definition" Turing speaks of "replacement," and rather than agreement of the replacement with ordinary usage and pretheoretic notions, I suppose, Turing would have laid greater stress on the replacement's scientific utility. In the interest of scientific utility Turing proposes to cut through the Gordian knot of pretheoretic ideas and ordinary usage in a single "operational" stroke. Yet, Turing recognizes, "We cannot altogether abandon the original form of the problem, for opinions will differ as to the appropriateness of the substitution and we must at least listen to what has to be said in this connection" (Turing 1950, p. 442), and there is much to be said for the test Turing proposes as a partial explication of our ordinary use of mental terminology. It is just such conversational evidence (as the Turing test, like Descartes' language test, invokes), supplemented (or, in the case of mental attributions to nonverbal systems, e.g., infrahuman animals, supplanted) by behavioral evidence (such as Descartes' behavior test invokes) that we rely on in attributing mental properties to others. Moreover, the universal instrumentality of language just alluded to -- i.e., the possibility of questioning the candidate "about any of the fields of human endeavor" -- seems to Turing (as perhaps it seemed to Descartes also) to make the supplementation of the language test by the behavior test inessential in the case of verbal systems. Also, the language test, unlike the behavioral test perhaps, "has the advantage of drawing a fairly sharp line between the physical and intellectual capacities" (Turing 1950, p. 434): "we should feel there was little point in trying to make a `thinking machine' more human by dressing it up in ... artificial flesh" (Turing 1950, p. 434). "The form in which we have set the problem [the form of the imitation game thought experiment] reflects this fact in the condition which prevents the interrogator from seeing or touching the other competitors, or hearing their voices" (Turing 1950, p. 434).

The operational test Turing proposes to replace the question "Can machines think?" -- since called "Turing's Test" or "the Turing Test" -- is a behavioral test, in the first place, of linguistic competence and, in the second (in the light of the "universal instrumentality" of language), of general intellectual ability. Turing characterizes this test "in terms of a game which we call the `imitation game,'" (Turing 1950, p. 433) whose original version he describes as follows.

It is played by three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either `X is A and Y is B' or `X is B and Y is A'. The interrogator is allowed to put questions to A and B thus:

C: Will X please tell me the length of his or her hair?

Now suppose X is actually A, then A must answer. It is A's object in the game to try and cause C to make the wrong identification. His answer might therefore be

`My hair is shingled, and the longest strands are about nine inches long'

In order that the tones of voice may not help the interrogator the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms. .... The object of the game for the third player (B) is to help the interrogator. The best strategy for her is probably to give truthful answers. She can add such things as `I am the woman, don't listen to him!' to her answers, but it will avail nothing as the man can make similar remarks.
We may now ask the question, `What will happen when a machine takes the part of A in this game?' Will the interrogator decide wrongly as often when the game is being played like this as he does when the game is played between a man and a woman? These questions replace our original, `Can machines think?' (Turing 1950, pp. 433-434)

Concerning the version of the imitation game in which the machine takes the part of A, Turing predicted that "in about fifty years' time [by the year 2000] it will be possible to program computers ... to make them play the imitation game so well that an average interrogator will have no more than 70 per cent. chance of making the correct identification after five minutes of questioning" (Turing 1950, p. 442). From the present vantage point (1992) this prediction seems exceedingly optimistic. Barring unexpectedly dramatic breakthroughs in the software engineering of knowledge representation and ambiguity resolution, and perhaps also in the hardware engineering of true parallel processors, fully conversational computers -- capable of passably human conversation in a natural language, without restriction (e.g., to micro-worlds or other limited domains of discourse) -- seem more than a decade (perhaps much more than a decade) away.{9} Nonetheless, I believe methodological and other considerations already broached should temper the enthusiasm of the would-be AI detractor for empirical arguments against AI based on the conversational disabilities of the present generation, or even the limited prospects of the next generation, of computers.

3.5 Turing's Test: Methodological and Empirical Considerations

The empirical line of argument against AI just broached has been advocated most prominently, perhaps, by Hubert Dreyfus (Dreyfus 1979; Dreyfus & Dreyfus 1986), who stresses how far we still remain from being able to program machines to pass the Turing Test, and how dim the prospects for fulfillment of Turing's prediction remain. This, according to Dreyfus, evidences the inherent incapacity of computing machines to have, or the intrinsic inability of programming to impart, the sort of general intellectual capacity and resultant capacities for adaptable behavior and conversation the Turing Test purports to detect. The Dreyfus line is, in effect, to accept the validity of the Turing Test (that the ability to pass this test would be very good evidence of intelligence); to note that, as a matter of empirical fact, computers are very far from being able to pass Turing's test; and to conclude that this strongly evidences that AI's "successes" in imparting limited "intellectual" abilities to computers are "mere engineering feats, not steps toward generally intelligent systems" (Dreyfus 1979, p. 18), and "do not constitute ... progress toward producing general or generalizable techniques for achieving adaptable intelligent behavior" (Dreyfus 1979, p. 5). On Dreyfus' assessment, not only are natural language processing computers with prospects for conversing fluently in a natural language (as Turing's test requires) or translating from one natural language to another (a related ability Dreyfus discusses) "still over the horizon" (Dreyfus 1979, p. 92), but "the horizon seems to be receding at an accelerating rate" (Dreyfus 1979, p. 92). According to Dreyfus, moreover, such facts and appearances show that prospects for genuine AI are dim, and consequently that (what I call) the thesis of AI proper, that "Machines such as digital computers can think," is strongly empirically disconfirmed.

Now, this empirical tack Dreyfus takes against AI invites two sorts of reply. First, it may be answered on its own terms with an empirical rejoinder: it may be said that developments in AI have made progress toward the goal of conversing in (and translating between) natural languages, albeit slower progress, perhaps, than Turing envisioned when he made his prediction. But a more telling rejoinder to the empirical line of objection Dreyfus essays can be made on the methodological grounds already broached. The same methodological considerations that undercut Descartes' claim that the inability of "brutes" to pass the language test provides sure and certain evidence of the utter mindlessness of infrahuman animals also undercut appeals, such as Dreyfus', to the inability of actual computers to pass Turing's test as evidence of the utter mindlessness of computers. In the absence of considerable theoretical reasons for thinking natural language comprehension is a prerequisite for having any mental properties at all, the conversational disabilities of animals do not suffice to warrant refusal to credit such predictively fruitful attributions of mental properties (e.g., seeing the bird, seeking to catch it, etc.) to animals as we seem practically impelled to make of them. Neither, in the absence of considerable theoretical reasons for thinking natural language comprehension is required for having such mental abilities, do the conversational disabilities of computers suffice to warrant refusal to credit such predictively fruitful attributions of mental properties (detecting the carrier signal, seeking to solve equations, etc.) to computers as we seem practically impelled to make of them. Since there are no considerable theoretical reasons, I take it, for thinking natural language comprehension is the sine qua non of mind (required for having any mental properties whatsoever), Turing test passing capability ought not to be understood as a necessary condition for (warranted attribution of) mind or intelligence but rather as merely sufficient, even supererogatory. Turing himself observes, "The [imitation] game may perhaps be criticized on the ground that the odds are weighted too heavily against the machine" (Turing 1950, p. 435). He continues, "This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection" (Turing 1950, p. 435). As Turing would have it, failing this test does not conclusively evidence lack of any mind or intelligence (as Descartes thought failure to pass his language test would), but passing this test is strongly presumptive evidence of the presence of mind or intelligence. That Turing test passing capacity is not required for intelligence -- that lack of Turing test passing capacity is insufficient to warrant denial of mind or intelligence to systems lacking such capacity -- seems incontrovertible and blocks any such appeal to the conversational disabilities of computers as evidence of their utter mindlessness as Dreyfus' empirical objection attempts. While it is plausible to think Turing test passing capacity requires or presupposes a variety of other mental capacities (which Turing test passing, consequently, suffices to evidence), it is wildly implausible to suppose that every other mental capacity requires or presupposes Turing test passing capacity.
It is, consequently, implausible to argue from the conversational disabilities of computers (i.e., their Turing test failing) to their utter lack of any mental abilities whatever (as Dreyfus' empirical line of objection to AI attempts).

3.6 Turing's Test: Metaphysical Considerations

What is crucially at issue, then, is the sufficiency of Turing test passing capacity for mind or intelligence: it is this sufficiency claim on behalf of the Turing test, moreover, that Searle's Chinese room experiment claims to counterinstance. Searle's claim is that something or someone (e.g., Searle in the Chinese room) could pass Turing's test by "conversing" fluently in a natural language while totally lacking the mental property or properties (in the first place, understanding of the language) their performance seemed to evidence. Given the general admission that behavioral evidence generally, and in the case of other human minds conversational evidence particularly, is what we ordinarily do take to evidence mental properties (or lack thereof) in others, denial of evidential sufficiency to Turing's test, I take it, requires some theoretical underpinning. I propose to close this chapter with a survey of the underpinning for disputing the adequacy of Turing's test that various metaphysical views about the nature of mind or intelligence purport (or, at least, must hope) to provide.
It will be revealing, I think, both of these metaphysical views and of the issues pertaining to the adequacy of Turing's test, to present what are generally regarded to be the basic metaphysical alternatives here in the light of a modified version of Turing's test "apparatus." I propose to simplify Turing's arrangement by dropping one of the contestants, the man or the woman, and taking it to be the job of the interrogator to discover which (man or woman) the subject is. This, I take it, does not fundamentally alter the experiment. We can depict this test setup, now, as follows:

[Figure not reproduced: the modified test setup, with the subject hidden in a room and the interrogator outside, exchanging typewritten questions and answers.]
This setup aims to parallel both the epistemic situation one may plausibly be supposed to be in vis-a-vis the "thought hidden in the body" of another and the causal or metaphysical situation of thought, intervening between sensory stimuli and behavioral responses. Just as the contents of the room are hidden from the questioner, who must infer the sex of the subject on the basis of their answers to questions, so the thoughts of others may plausibly be supposed to be hidden from us, inferable only on the basis of their behavior and conversation. Whatever else thought is, it is something sometimes occasioned by sensory inputs and eventuating in behavioral outputs; and whatever else is essential to thought's being what it is, metaphysically, its sometimes mediating between sensory inputs and behavioral outputs would seem essential to its being known to others (if not to ourselves), as, I take it, it is.{10}

Now, a crucial feature of the original (man/woman) imitation game setup is this: whatever the evidence provided by the subject's answers may seem to indicate, there is a further fact (or facts) of the matter that really determines whether the subject is male or female, regardless of how typically masculine or feminine their responses: what's in their jeans (or genes). The sex organs (or chromosomal structure) of the subject are constitutive of their sex: the masculinity or femininity of their answers (or, even more generally, in ordinary life, of their dress and behavior) is merely indicative of their sex. Because having a penis or Y-chromosomes is metaphysically constitutive of maleness, the property of having a penis or Y-chromosomes may be said, epistemologically, to be criterial for being male. Being able to answer questions about football more readily and accurately than questions about fashion, on the other hand, since it is not metaphysically constitutive of maleness, may be said, epistemologically, to be merely symptomatic of being male. Note that knowledge that someone satisfies the criteria of maleness suffices to override any amount of symptomatic evidence (any number of typically feminine mannerisms, personality traits, habits, etc.) to the contrary. Note, finally, that there are such things as reliable symptoms: I shall say some property P is a "reliable symptom" of the presence of some other property Q if and only if knowing some individual x is P suffices for knowing x is Q even though P-ness is not metaphysically constitutive of Q-ness. REM sleep is, perhaps, reliably symptomatic of dreaming; turning blue litmus paper red, of acidity; thunder, of lightning; etc.
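The three evidential relations just distinguished might be summarized semi-formally as follows. The notation is mine, offered only as a compact gloss on the prose above: read $KPx$ as "it is known that x is P," and take constitution as an unanalyzed metaphysical primitive.

$P$ is criterial for $Q$: $P$ is constitutive of $Q$, whence $\forall x\,(KPx \rightarrow KQx)$.

$P$ is reliably symptomatic of $Q$: $\forall x\,(KPx \rightarrow KQx)$, though $P$ is not constitutive of $Q$.

$P$ is merely symptomatic of $Q$: $KPx$ raises the likelihood of $Qx$ without sufficing for $KQx$.

Thus criteria and reliable symptoms alike are epistemically sufficient; they differ only in whether the sufficiency is underwritten metaphysically.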

Now, Turing's proposal may be said to be this: passing this conversational test (or, better, actually having the conversational capacities the test tests for) is either criterial for, or reliably symptomatic of, having a mind (or mental properties). Having such capacities is supposed (by the proponent of Turing's test) to be at least epistemologically sufficient for having a mind (if linguistic capacity is merely reliably symptomatic of mindedness) and perhaps metaphysically sufficient (if linguistic capacity is constitutive of mindedness). Given that behavioral evidence generally (and conversational evidence especially, in the case of other human minds) is what we ordinarily go on in assessing the mental properties (or lack of same) of others, it seems that anyone who would deny that the linguistic capacities Turing's test tests for suffice for having a mind undertakes some burden to say what else is necessary. Different metaphysical views concerning the nature of mind mandate different answers to this question and may be characterized in terms of the answers they mandate.

I note in this connection that despite their radical disagreement about "what it is that I am attributing to [myself and others] when I attribute cognitive [or other mental] states to them" (Searle 1980a, p. 420), all parties to these disputes -- dualists, functionalists, identity theorists, and eliminativists, as well as behaviorists -- agree that the evidence on the basis of which we attribute (or refrain from attributing) mental properties (at least to others) is largely, if not entirely, behavioral. Behaviorism, then -- at least when shorn of the operationalist and scientistic allegiances with which it has been historically associated in the United States{11} -- might be styled, in contrast with dualism, identity theory, and functionalism, a kind of null hypothesis.{12} According to the behaviorist, nothing else is necessary for thought besides having the right (characteristic) behavioral capacities. Having the linguistic capacities Turing's test tests for is sufficient to evidence mind or thinking because it is criterial for having a mind (i.e., some mental properties) generally and criterial for the mental property of understanding whatever natural language the questions and answers are couched in, in particular. According to the behaviorist, the case of "thought hidden in a body" is not fully analogous to the case of the male or female subject hidden in the room because, in the case of understanding English (e.g.), there is no further fact of the matter about something's understanding English besides the fact that it can "produce different arrangements of words so as to give an appropriately meaningful answer to whatever is said in its presence" (Descartes 1637, pp. 139-140). Being able to respond more readily and appropriately to questions about sports than questions about soap operas plainly does not suffice for maleness because plainly something else (the right sex organs or chromosomes) is necessary: it is not so plain that anything else is necessary for knowing more about sports than soap operas. Nonbehavioristic views about the nature of mind are committed to the claim that something else is necessary in this case also -- that knowledge, understanding, and other mental properties "hidden in the bodies" of others are analogous to the maleness or femaleness of the subject hidden in the room.

Nonbehavioristic views, then, can be distinguished one from the other (as well as from behaviorism) by what they take this something else which is necessary to be. Mind-brain identity theory holds that what is necessary (besides symptomatic behavior) is that the behavior be brought about by the right material processes and states: if pain is supposed to be the firing of C-fibers (say) then what is necessary for an individual exhibiting the behavioral syndrome symptomatic of pain (flight from and subsequent avoidance of the occasioning stimulus, nursing the injured part, etc.) is that this behavior be caused in the right way (by C-fiber firings) and not otherwise. Functionalism holds that what is necessary (besides symptomatic behavior) is that the behavior be brought about in the right abstract manner, where the right manner is not defined physically (as with the mind-brain identity theory) but purely procedurally: a functionalist who takes the right manner of acting as-if pained, answering questions in English, etc., to consist in implementing the right Turing-machine table is a Turing machine functionalist. Dualism holds that what is necessary (besides symptomatic behavior) are the right associated conscious experiences (e.g., the feeling or quale of pain or the inner experience of understanding English). Since on each of these views the something else that is necessary is supposed to be constitutive of the mental phenomena in question we may say accordingly that dualism identifies specific mental states and processes (e.g., being in pain, understanding English) with specific phenomenological (or qualitative or conscious experiential) states and processes. Similarly, we may say functionalism identifies mental states and processes with specific procedural states and processes (e.g., Turing machine states and state transitions) and mind-brain identity theory identifies mental states and processes with specific physical states and processes. Behaviorism, on the other hand, does not identify specific mental "states" and "processes" with specific states or processes intervening between sensory stimuli and behavioral responses.
Note that the behaviorist needn't deny there are intervening mechanisms mediating between sensory inputs (e.g., questions) and behavioral outputs (e.g., answers) when I understand English: behaviorism is only committed to the claim that the physical, procedural, and phenomenological characteristics of these states and processes are irrelevant to their being states or processes of understanding English. So long as the intervening states and processes (whatever they are) support the (actual and counterfactual) input-output or stimulus-response correlations typifying (indeed, constituting) the mental property in question, the individual in whom these states and processes intervene can truly be said to have that mental property. Typically, behaviorists don't identify the intervening processes and states (whatever they might be) with the mental "states and processes" they support,{13} though this refusal is perhaps not essential to the view. Someone might say, I suppose, that whatever microstructural states and processes cause glass to break when sharply struck are what the fragility of glass consists in; other, much different, microstructural states and properties which cause bone china to break when sharply struck are what the fragility of bone china consists in; etc. Similarly, I suppose, someone might say that whatever internal states mediate between whatever is said in English in my presence and my answering appropriately in English are what my understanding of English consists in: one might say this, I suppose, and still be a behaviorist.{14} Still, it seems more perspicuous (and parsimonious) to say fragility just is the disposition (one and the same disposition) in anything to break when sharply struck. Similarly, it seems more perspicuous (and parsimonious) to say that understanding English just is a disposition (or capacity) to respond appropriately in English to what is said in one's presence in English; especially if (as I suspect) the mediating states and processes capable of causing appropriate answers in English to questions in English (ceteris paribus) are as varied as those capable of causing things to break (ceteris paribus) when sharply struck.

Dualist, functionalist, mind-brain identity theoretic, and behaviorist hypotheses concerning what, if anything, besides the behavioral dispositions or competencies (e.g., for conversation in English) associated with various mental properties (e.g., understanding English) is necessary in order to be actually possessed of the mental property may now be depicted by the following "Turing-box" diagrams:

[The "Turing-box" diagrams are not reproduced here.]
On this construal, identity theorist, functionalist, and dualist alike undertake a certain burden of research: to provide a theoretically fruitful and intuitively acceptable neurophysiological specification (mind-brain identity theory) or procedural specification (functionalism) or phenomenological specification (dualism) of what minds (generally) or specific mental properties essentially are. Behaviorism shorn of operationalist pretensions owes no such specification. Unlike reductive behaviorism, such modest (nonreductive) behaviorism does not try to underwrite "the likelihood that the intentional sciences might eventually produce theories whose objectivity and reliability parallel those of the physical and biological sciences" (Fodor & Lepore 1992, p. 16). Unlike eliminativism, nonreductive behaviorism does not accept the scientistic imperative that such likelihood is required for "an Intentional Realism worth having" (Fodor 1990, p. 52).{15}

4. Conclusion: Previewing the Chinese Room

The Chinese room experiment, according to Searle, is about "what it is that I am attributing ... when I attribute cognitive [and other mental] states" (Searle 1980a, p. 422): the thrust of the experiment is supposed to be "that it couldn't be just computational processes and their output because the computational processes and their output could exist without the cognitive state" (Searle 1980a, p. 422). Since the subject in the room is imagined to produce output (answers) to Chinese input (questions) "indistinguishable from ... native Chinese speakers" yet to "not understand a word of ... Chinese" (Searle 1980a, p. 418), Searle contends this experiment counterinstances behaviorism: it shows that something else is required for understanding a natural language besides behavioral competency, besides conversational fluency. Since the subject in the room is further imagined to produce this output to this input by the right computational means (whatever these are), Searle contends the experiment also counterinstances Turing machine functionalism and perhaps (given the preeminence of Turing's machinery for specifying procedures) functionalism generally. Searle's own "monist interactionist" (Searle 1980b, p. 454) position, or position of "biological naturalism" (Searle 1992, p. 1), tries to combine elements of the mind-brain identity theoretic picture with elements (denatured, he claims) of dualism: some further physico-phenomenological facts of the matter, beyond Turing-indistinguishable competency and performance, are supposed to determine whether the subject in the room really understands the Chinese text processed. These further physico-phenomenological facts (when present) constitute, for Searle, the meaning or "semantics" the subject attaches to the text or "syntax" it processes.

Endnotes

  1. I use "ontological" and "metaphysical" more or less interchangeably to denote assertions, questions, issues, etc. about being: I use "epistemological" and "epistemic" more or less interchangeably to denote assertions, questions, issues, etc. about knowing.
  2. Other presentations of the cogito "experiment" are to be found in Descartes' Rules for the Direction of the Mind, Rule Twelve (1628, p. 46), and in the Preface to the French edition of his Principles of Philosophy (1647a, p. 183f). My exposition follows the Meditations.
  3. Cited by Noam Chomsky (1966, p. 73). The "language test," "behavior test" designations follow Keith Gunderson (1971). Gunderson also stresses the priority Descartes seems to accord to the language test.
  4. Of course actual computational performance is limited by such factors as chip burnout, power failure, memory limitations, etc.; but so is actual linguistic performance limited by "such grammatically irrelevant conditions as memory limitations, distractions, shifts of attention and interest, and errors" (Chomsky 1965, p. 3). Much as a generative grammar "purports to be a description of the ideal speaker-hearer's intrinsic competence" a Turing machine table purports to describe ideal machines' competencies: it is the underlying competence (not actual performance) which is said to be "productive" or "creative" or "unbounded" in both cases.
  5. The mathematically gifted daughter of Lord Byron, Lady Lovelace has been called the first computer programmer: she wrote programs for Babbage's Analytical Engine.
  6. The Turing-Church thesis is a mathematical thesis or conjecture, not a theorem. Given the imprecise intuitive concept of an algorithm, there can be no exact proof that the precisely defined notion of a Turing machine exactly coincides with the intuitive conception of an algorithm as "a list of instructions specifying a sequence of operations which [followed to the letter, infallibly] give the answer to any problem of a given type" (Trakhtenbrot 1963, p. 3). Though the thesis is not strictly provable, however, recognition of intuitively algorithmic procedures which are not representable as Turing machines would counterinstance and could refute it. Church published his conjecture shortly before Turing. Boolos and Jeffrey (1974, p. 20) speak of "Church's thesis," Johnson-Laird (1988, pp. 50-51) of "Turing's conjecture."
  7. It is also notable that human applications of algorithms have this same formal or "syntactic" character. The familiar addition algorithm most of us learned in elementary school is an algorithm for adding-in-decimal-notation. "The elementary operations" in applying this algorithm are, like the elementary operations of computers, "purely formal in that they can be carried out automatically using an addition table" (Trakhtenbrot 1963, p. 3).
  8. I agree that it would be absurd to decide these questions on the basis of a statistical survey like a Gallup poll. I do not agree, however, that the idea that "the meaning of the words `machine' and `think' are to be found by examining how they are commonly used" commits us to any such absurdity. This is avoided if we credit people's working use of mental terminology above such "holiday" talk (Wittgenstein 1958, §38). See chapter 4 for further discussion.
  9. The recently instituted annual Loebner Prize competition plans "to confine the interrogator's questioning to narrow areas of knowledge that computer entries have been specifically programmed to handle" in order to "ensure that the earlier [annual] rounds of the test are interesting" (Stipp 1991). The first round of competition, held on November 8, 1991, was won by Joseph Weintraub's PC Therapist III, which was judged human by five of ten judges.
  10. If Wittgenstein's well-known reflections on "private language" (1958, §§243-317) are well conceived (as I believe them to be), then that thoughts should sometimes mediate between sensory inputs and behavioral outputs is essential even to our knowledge of our own minds, being presupposed by our even having such mental concepts as belief, desire, etc., to apply to ourselves. Evidence that "much of what passes for introspective reports is really ... spontaneous theorizing ... where the hypotheses produced are based on the same external evidence available to the public at large" (Churchland 1988, p. 79) is similarly suggestive.
  11. The British school of behaviorism typified by Wittgenstein (1958), Ryle (1949) and Anscombe (1963) seems to contrast with the American school typified by Skinner, Hempel (1949), and Quine in this respect.
  12. The "nullness" in question concerns whether minds or thoughts are natural kinds or have essences, I take it. Functionalism hypothesizes a computational essence. Identity theory a neurophysiological essence. Dualism a phenomenological or experiential essence. Operationalistic behaviorism presumed to stipulate (discover?) something like an essence (a nominal essence?) in the form of necessary and sufficient observable conditions for having thoughts or mental properties. "Null hypothesis" or "British behaviorism" is tantamount to "none of the above": unlike eliminativism, however it does not think "none of the above" impugns existence of the phenomena. No more so than the indefinability of "game" impugns the existence of games.
  13. Ryle (1949) and Wittgenstein (1958) are typical in this regard.
  14. Rich Hall insists this would not be behaviorism but species specific identity theory (cf. Kim 1992a). Whether the essentialist character of identity theory (which distinguishes it from behaviorism) survives the relativization of identities to specific species, however, seems, at best, doubtful to me. It seems especially doubtful if the "species" in question fail to match up one to one (as there is reason, I take it, to doubt they will) with natural biological kinds.
  15. Eliminativism is the view, advocated, e.g., by Paul Churchland (1988), that there really are no such things as minds -- no beliefs, desires, etc. Fodor himself advocates functionalism as the "only game in town" that has a chance of forfending eliminativism by underwriting the aforementioned "likelihood."

