Searle's Chinese Box: The Chinese Room Argument and Artificial Intelligence by Larry Hauser
INTRODUCTION

In considering any new subject, there is frequently a tendency, first to overrate what we find to be already interesting and remarkable; and secondly, by a sort of natural reaction, to undervalue the true state of the case, when we do discover that our notions have surpassed those that were really tenable. (Augusta 1842, p. 398)

This essay concerns the bearing of John Searle's influential Chinese room thought experiment (CRE) and connected Chinese room argument (CRA) on questions of artificial intelligence (AI): centrally, on the question, "Can machines [such as computers] think?" (Turing 1950, p. 433). Since the argument and "experiment" are not always carefully distinguished either by Searle or in the literature, the occasion and even the need will frequently arise to refer indiscriminately to either or both: on such occasions I will either speak of "the Chinese room argument/`experiment'" or simply "the Chinese room." "In view of the fact that the present interest in `thinking machines' has been aroused by a particular kind of machine, usually called an `electronic computer' or `digital computer'" (Turing 1950, p. 436), I restrict my inquiry mainly to these. By "machine" I mean such machines as your Macintosh, my IBM-PC, IBM mainframes, their kin, and (Apple and IBM able) their descendants. "Think," in the context of this discussion, means to have mental properties: cognitive properties (e.g., knowing, deducing), conative or motive properties (e.g., wanting, seeking), perceptual properties (e.g., seeing, hearing), emotional properties (e.g., anger, anxiety), etc. This does not claim to be an exhaustive or mutually exclusive categorization, just a rough, more or less ostensive, indication of the properties in question: properties answering to predicates of the familiar network of terms we use to understand each other and predict, explain, and evaluate human (and some infrahuman) behavior, the network of terms ascribing propositional attitudes and other associated properties (e.g., nervousness and pain) that I will call (following Churchland 1980) "folk psychology," though it is also known as "propositional attitude psychology" (Fodor 1986), "the intentional idiom" (Leiber 1991), "commonsense belief/desire psychology" (Fodor 1987), etc.

Thus formulated, the question of AI concerns whether such predicates as "detects keypresses," "calculates that 79*3=237," "tries to initialize the printer when I forget to turn the printer on," "looks ahead to consider and evaluate different possible continuations of chess games," etc. -- predicates we habitually apply to machines such as our IBM-PCs and Macintoshes -- truly apply to present-day computers or someday will truly apply to machines that are the descendants of present-day computers. I call the thesis that they do apply, that existing machines do think, or already have mental properties, the thesis of strong AI proper (SAIP). I call the thesis that such machines as these can think (roughly, that they someday will, if they don't already), the thesis of AI proper (AIP). This contrasts with Searle's proprietary use of the phrase "strong AI" to mean (usually?) Turing machine functionalism (the metaphysical identification of mental states and processes with computational states and processes), but also to mean (sometimes) claims of AI proper (AIP or SAIP), e.g., claims by AI researchers (on behalf of currently existing or envisaged computers) that "my machine, or the machine we're eventually going to build has thought processes in exactly the same sense that you or I have thought processes" (Searle et al. 1984, p. 146) or that "the appropriately programmed computer ... really is a mind" and "can literally be said to understand and have cognitive states" (Searle 1980a, p. 417).

Searle's Chinese room "experiment" involves imagining someone (Searle or oneself) locked in a room, hand tracing a natural language understanding and generation program for Chinese in the form of a set of written instructions (in English) for generating appropriate Chinese replies to Chinese queries. The program is imagined to be such that the "input and output capacities" and performances of Searle in the room (SIR) "duplicated those of a Chinese speaker" (Searle 1980a, p. 423). On Searle's telling,

As far as the Chinese is concerned, I simply behave like a computer; I perform computational operations on formally specified elements. For the purposes of Chinese, I am simply an instantiation of the computer program. (Searle 1980a, p. 419)

"All the same," Searle maintains, in the envisaged situation he "understands nothing of Chinese, and a fortiori, neither does the system, because there isn't anything in the system that isn't in him [Searle]" (Searle 1980a, p. 419). This Chinese room "experiment" -- along with its allied argumentation -- has been advertised by Searle and widely viewed as showing claims of artificial intelligence to be either "demonstrably false" (Searle et al. 1984, p. 146) or "scientifically out of the question" (Searle 1990a, p. 31), advertised and understood as supporting the conclusion that machines such as digital computers don't think (-SAIP) and (probably) won't ever (-AIP). I contend that neither Searle's experiment nor its embedding argument have any such refutatory force against AIP or even SAIP.

Here, it is well to distinguish claims about the mental properties (or lack thereof) of specific things (e.g., SAIP) and the mental prospects (or lack thereof) of specific things (e.g., AIP) from theoretically more ambitious metaphysical claims about the nature of minds and epistemological claims about the nature of psychological explanation. Specifically, it is well to distinguish the two claims just bruited (SAIP and AIP) from theoretically more ambitious metaphysical doctrines, and related epistemological doctrines, which along with AIP (and sometimes, but not always, SAIP), have tended, in contemporary discussions, to go together under some such banner as "computationalism." I will refer to the doctrine which identifies (a restricted class of) programs with minds, or (more weakly) holds them sufficient to cause minds, as "Turing machine functionalism" (FUN). I distinguish this high-level theoretical claim about what minds or mental properties are (their ontology) or what causes them (their etiology) from less theoretical (more practical or empirical) claims about the specific mental properties of specific (types of) things (i.e., AIP and SAIP) on the one hand; and I distinguish these (AIP, SAIP and FUN) from methodological or epistemological claims -- that "programs ... explain human cognition" (Searle 1980a, p. 417) or that "AI concepts of some sort must form part of the substantive content of psychological theory" (Boden 1990, p. 2) -- on the other. I refer to this latter epistemological or methodological thesis (or family of theses) as "cognitivism" (COG).

Searle's remarkably influential Chinese Room argument/"experiment" (Searle 1980a; 1984, p. 38f; 1990a) -- this "infamous Chinese room argument," as John Fisher has called it (Fisher 1988, p. 279) -- has been described as having "already achieved the status of a minor classic" (Fisher 1988, p. 279) and as "having shook up the entire AI field" so considerably that "things still have not settled down since" (Harnad 1991, p. 47). Searle's Chinese room (argument and experiment), for some, has even "rapidly become a rival to the Turing Test as a touchstone of philosophical inquiries into the foundations of AI" (Rapaport 1988, p. 83). AI pioneer P.J. Hayes has written that his own notion of what constitutes the core of cognitive science "could be summed up in the following way: it consists of a careful and detailed explanation of what's really silly about Searle's Chinese room argument" (Hayes 1982, p. 2). I propose to undertake such a careful and detailed explanation. On the one hand, I agree with Hayes about the silliness of the Chinese room (argument and experiment), and with Georges Rey that "this argument has commanded more respect than it deserves" (Rey 1986, p. 169): it (rather obviously, I think) lacks the logical or evidential force it has been advertised and widely supposed to have against claims of artificial intelligence (SAIP and AIP) and "relies [mightily] on ill gotten gains" (Dennett 1980, p. 429) from "impressive pictures and dim notions capturing old prejudices ... not on solid argument" (Weiss 1990, p. 180). On the other hand, I share Hayes's grudging admiration and recognition of the Chinese room's importance because, I believe, much of what turns out to be "really silly about Searle's Chinese room argument" is not unique to the argument or original with Searle but derives rather from assumptions most proponents of AI and practitioners of Cognitive Science share with Searle. In this connection, the question of why Searle's Chinese room argument has commanded such respect (despite its logical and evidential shortcomings), not just among opponents of AI, but among AI proponents, is scarcely less interesting than the main question of whether the argument has the force against claims of AI it has been advertised and widely thought to have.

Following a survey, in Chapter 1 of this essay, of the philosophical neighborhood of the Chinese room and the philosophical genealogy of Searle's thought experiment, tracing its lineage back to similar proposals of Turing (1950) and Descartes (1637), Chapter 2 shows that Searle's argument (CRA) lacks logical force against either AIP or SAIP. The force against these claims that Searle's argument has seemed to many to have, I maintain, is due to his proprietary use of the phrase "strong AI" to conflate the claim (FUN) that programs are (metaphysically) "constitutive of" or (causally) "sufficient for" mind, with claims that "my machine [SAIP], or the machine that we're eventually going to build [AIP] has thought processes in exactly the same sense as you and I have thought processes" (Searle et al. 1984, p. 146). Construed as an argument against SAIP and AIP, Searle's celebrated "refutation of AI" is an ignoratio elenchi. Yet it has a certain ad hominem or polemical force against "high church" computationalists who base their acceptance of AIP on FUN (in conjunction with the Church-Turing thesis) while denying or strongly doubting SAIP: either they deny or strongly doubt that any present-day machines really have the mental properties we habitually attribute to them or, alternatively (the line essayed, e.g., by Rapaport 1992), they deny or strongly doubt that such properties (e.g., recognizing the dir command, calculating that 7 + 5 = 12, considering and evaluating alternate possible continuations of chess games) as had by computers (PCs running DOS, pocket calculators, and Deep Thought, respectively) are really mental. I understand the high church argument to be, roughly, the following:

Thinking is a species of computation. (FUN)
A Turing machine can compute any computable function. (Church-Turing thesis)
Digital computers implement Turing machines.
Therefore,
Digital computers can think.

The second half of Chapter 2 shows that Searle's argument (CRA) is not a sound argument against FUN, the doctrine which Searle's most careful formulations of CRA, I take it, explicitly target. The argument is invalid: though the premises Searle sets out may be true, they don't logically entail -FUN. This being the case, one may wonder with Thomas Weiss, "Why did so many functionalists try to answer an invalid argument? Why are so many [others] convinced that Searle succeeded in showing the falsity of functionalism?" (Weiss 1990, p. 169) Weiss's answer "that Searle appeals to unexamined intuitions" comprising "our common-sense notion of consciousness," which is "our idea of a subject, of a being with experience" (Weiss 1990, p. 169), while part of the story, plainly isn't the whole of it. The "intuitions" about conscious subjective experience Weiss cites do, I think, account for much of the popular appeal of Searle's argument, as well as the argument's appeal for such Neo-Cartesians as Harnad (1989, 1991) and Searle himself; but Weiss's answer does not explain why even individuals unsympathetic to Searle's Neo-Cartesian intuitions about conscious subjective experience "have [also] tried to answer an apparently invalid argument" or have even been "convinced that Searle succeeded in showing the falsity of functionalism." Chapter 2 concludes by exploring one reason Searle's argument has seemed to call for refutation by functionalists or to merit allegiance from opponents of functionalism disinclined to accept Searle's Neo-Cartesian assumptions. The reason is that, despite the invalidity of Searle's stated argument, the argument supplemented by an assumption many functionalists and computationalists hold -- that anything interpretable as an instantiation of a program is a proper implementation of that program -- is valid. This assumption, which Searle terms "the standard textbook definition of computation" (Searle 1990c, p. 26), has the consequence that almost anything instantiates (under some possible interpretation function) almost any program you like. The validity of Searle's argument given this assumption and its consequence, I think, explains in part why functionalists not predisposed to credit intuitions about conscious subjective experiences have still thought it necessary to answer Searle's argument. Yet others share the intuition that Searle's thought experiment "pumps" (Dennett 1980, p. 429) -- that syntax does not suffice for semantics -- for reasons having nothing to do with the imagined lack of conscious subjective experiences of meaning on the part of SIR, but rather with what they take to be Searle-in-the-room's lack of something else. Some hypothesize that the something else SIR lacks is appropriate causal (perceptually, behaviorally, and perhaps socially mediated) connection to the referents of the Chinese symbols he processes. Such misgivings, driven by causal-theoretic intuitions about reference rather than by Neo-Cartesian intuitions about consciousness, are anticipated in Chapter 1 and addressed more fully in Chapter 3.

Chapter 3 shifts the focus of my attention from Searle's formalized argument (CRA) to his vaunted "experiment" (CRE). Chapter 3 aims to show that Searle's "experiment" is methodologically flawed and ultimately unconvincing: CRE fails to provide a telling counterexample to the Turing Test (Turing 1950) and, consequently, fails to argue as such (as a counterexample to the Turing Test) against SAIP, and thereby fails to militate (as it might be supposed, by inductive extrapolation perhaps, to do) against either AIP or FUN. The methodological flaw I allege is Searle's Neo-Cartesian grant of overriding epistemic privilege -- the privilege of overriding all public, behavioral evidence to the contrary -- to SIR's presumed lack of introspective awareness of, or presumed disavowal of, the mental property (understanding Chinese) at issue. Searle's "experiment" (on his own understanding of it) is an attempt to implement the methodological imperative, "always insist on the first person point of view" (Searle 1980b, p. 451). My objection -- that such a grant of overriding epistemic privilege to how it seems to SIR "from the point of view of the agent, from my point of view" (Searle 1980a, p. 419-420) is indefensible -- pursues what Searle (1980a, p. 421-422) calls the "other minds reply": this anticipates later discussion (in Chapter 6) of the inadequacy (for scientific theoretic or just plain practical purposes) of Searle's attempted wholesale identification of mental properties with states or modifications of consciousness. Additionally, Chapter 3 argues, even if such a grant of overriding epistemic privilege to "the first person point of view" were allowed, Searle's example would still be unconvincing unless the grant is even more implausibly extended not only to actual introspective judgments or first-person avowals, but to introspective judgments or disavowals one imagines one would make in circumstances one can scarcely imagine.

Chapter 2 concludes by observing that CRA -- while failing to refute FUN outright -- is compelling against standard versions of Turing machine functionalism which hold that it suffices for implementing a program (or machine table), in the sense relevant to cognitive attribution, merely to instantiate the program (or machine table) under some mathematically possible interpretation function. Thus CRA at least defines a research imperative for Turing machine functionalism: elaborate a more robust notion of program implementation than the anemic (mathematical) sense invoked by "the standard textbook definition of computation" (Searle 1990c, p. 26-27). Since efforts along these lines have, to date, been few and halting (Goel 1991 represents one such attempt), Searle's argument has considerable ad hominem or polemical force against extant versions of Turing machine functionalism. Should efforts along these lines continue to be few and halting, Searle's argument, perhaps, becomes scientifically decisive against Turing machine functionalism. The upshot of Chapter 3 is that if the main or only reason one has for believing that AIP is true derives from functionalist theory via something like the "high church" argument limned above, and if CRA, in the absence of progress by functionalists along the lines just mentioned, makes a compelling case against FUN, then CRA makes a polemically compelling case, ad hominem, against one's belief in AI (i.e., in the truth of AIP). Furthermore, if CRA makes a compelling case against FUN, and FUN is the only basis anyone has for believing in AI, or if it's the main basis for scientific belief in AI, then Searle's CRA may even put AI "scientifically out of the question," as Searle alleges.

Against all this, I maintain in Chapter 4 that there are reasons for accepting AI that do not depend on acceptance of FUN and that render AI itself (SAIP and consequently AIP) immune to Searle's argument: strong reasons to consider such attributions as those already mentioned (e.g., "DOS recognizes the dir command") to be true literal attributions of mental properties. What warrants crediting such attributions of mental properties to computers at face value -- what supports the idea that present-day machines, such as the IBM-PC I am writing on, really do detect keypresses, recognize commands, perform mathematical calculations, try to initialize our printers (when we forget to turn our printers on), etc., and thus "have thought processes ... in exactly the same sense that you and I have thought processes" (Searle et al. 1984, p. 146) -- is (1) that such attribution of mental properties to computers "gives predictive [and explanatory] power we could get by no other method" (Dennett 1981, p. 23) and (2) that "an ordinary application of Occam's razor places the onus of proof on those who wish to claim that these sentences are ambiguous" (Searle 1975b, p. 40). Chapter 4 argues that there are no compelling intuitive reasons, no intuitions of figurativeness or ambiguity attending such attributions of mental properties to computers, sufficient to warrant Searle's charges that such everyday attributions are uniformly figurative and equivocal. Standard ambiguity tests that Searle himself advocates and applies in other connections (Searle 1980d), in fact, strongly support the claim that the predications of mental terms to computers in question are literal, univocal predications. While agreeing with Searle that the question of whether machines (such as computers) can think "is an empirical question" (Searle 1980a, p. 422), I maintain in Chapter 4, contrary to Searle, that, prima facie, the empirical evidence is that they can, because some of them evidently do. I call this view, which bases acceptance of AIP on acceptance of SAIP, and accepts SAIP based on the unstudied judgments (working mental attributions) we make on the basis of our experiences with computers, naive AI. The prima facie evidence (what our interactions with computers lead us to say of them), together with the tendency of standard ambiguity tests to support the claim that many attributions of mental properties we make to computers are literal univocal attributions, I conclude, places the onus Searle acknowledges squarely on those (such as himself) who maintain (in the face of the prima facie empirical evidence) that the machines in question do not literally and truly have the mental properties we attribute to them with such predictive and explanatory success. In the face of such linguistic habits and their predictive success, a theoretical basis is required for denying that the attributions in question are true and literal, not for asserting that they are.

The theoretical basis Searle tries to provide for discounting everyday attributions of mental properties to computers as equivocal is a distinction between (genuine) "intrinsic intentionality" and derived intentionality or (counterfeit) "as-if intentionality" (see, e.g., Searle 1980b). The remaining issue, then, is whether Searle's notion of intrinsic intentionality provides any adequate scientific or theoretical basis for denying that our everyday working predications of mental terms (so-called "propositional attitude verbs," especially) to computers are true literal predications. Chapters 5 and 6 argue that neither of Searle's takes on intrinsicality -- neither understanding "intrinsic" to mean physically in the nervous system nor understanding it to mean phenomenologically in consciousness -- provides perspicuous theoretic or scientific reasons for dismissing predications of mental terms of computers as mere as-if attributions, distinct from the true literal attributions (of "intrinsic intentionality") we make by predicating the same mental terms of humans and some animals.

Chapter 5 considers the question of whether our mental properties (in particular, propositional attitudes or intentional mental states) are physically in us (humans) and our animate cousins in a way in which they are not physically in computers: with computers, Searle alleges, mental properties are just "in the eye of the beholder" (Searle 1980a, p. 420). Chapter 5 argues that, on the only understanding of "in" on which the propositional contents of the attitudes or intentional mental states we attribute to computers (and hence the attitudes or states themselves) aren't In them, the contents of our propositional attitudes or intentional mental states aren't In us either. Meanings fail to be physically In computers, not being uniquely determined by or supervenient on the electrical-mechanical states of their hardware, in just the same way in which Putnam shows (by his Twin Earth and other examples and arguments) that meanings "ain't in the head" (Putnam 1975, p. 227), not being uniquely determined by or supervenient on the electrical-chemical states of our individual brains. On weaker understandings of "in," on the other hand, it turns out that the intentional states and contents we attribute to computers are just as much physically in them as our intentional states and their contents are physically in us.

Chapter 6 explores the possibility of grounding a scientifically or theoretically perspicuous notion of intrinsic intentionality on the presence of human (and infrahuman animate) intentional states (or their propositional contents) to or in consciousness. Here, I conclude that Searle's consciousness-based "research" proposals are little more than warmed-over Descartes. If Searle's grant of overriding epistemic privilege to "the first person point of view" in his Chinese room thought experiment "invites us to regress to the Cartesian vantage point" (Dennett 1988, p. 336), his recent Connection Principle, requiring that every intrinsic "intentional phenomenon" be "in principle accessible to consciousness," issues an engraved invitation. In the absence of proposals by Searle or his fellow Neo-Cartesians (e.g., Nagel 1974, 1986; Gunderson 1990; Harnad 1991) for resolving the anomalies -- most notably the other minds problems and mind-body interaction problems -- that scuttled the original Cartesian research program, I argue that Searle's "new" consciousness-based proposals are ill-suited to providing scientific or theoretical vindication of his putative distinction between intrinsic and as-if intentionality (or anything else). Furthermore, for all it costs (inheriting the debts -- the other minds problems and interaction problems -- of traditional Cartesian consciousness-based "research"), Searle's appeal to conscious experience has no offsetting benefits. The "symbol grounding problem" (Harnad 1990), that "syntax is not sufficient for semantics" (Searle 1984a, p. 38) -- the crucial difficulty Searle's Chinese room (argument and experiment) poses for Turing machine functionalism -- is no less problematic for consciousness-based accounts such as Searle's. Conscious phenomenological experience -- either alone or in addition to syntax -- does not suffice for semantics either. Lines of argument originating with Ludwig Wittgenstein (1958) and pursued by Saul Kripke (1982) and Paul Boghossian (1989), and independent lines of argument developed by Ruth Millikan (Millikan 1984, pp. 89f), convincingly show this. Putnam's reflections on Twin Earth comprise a third, independent line of support for the conclusion that whatever conscious experiences attend our various uses of words are insufficient to determine their meanings or fix their references.

The overall plan of this essay is to work, first, inward (as it were) from the surrounding philosophical terrain (in Chapter 1) to the Chinese room argument (in Chapter 2) to the central Chinese Room "experiment" (in Chapter 3), then to work back outward from the "experiment" to supporting Searlean (and other possibly supportive) doctrine (in Chapters 5 and 6). The force of the argument lies in showing Searle's case to depend on more and more dubious doctrine -- driving the advocate of Chinese room objections to AI further and further out on thin ice -- until at last the ice is too thin (the doctrine insufficiently credible) to support substantial claims (-SAIP or -AIP) against AI in the face of the prima facie evidence of artificial intelligence cited in Chapter 4. Chapter 4 is the crux: it shows the onus to be on the opponent of AI who would deny that computers have the mental properties we say they have -- who would deny that our predictively successful attributions of mental properties to machines are literally true -- and not on the (naive) proponent of AI (like me) who takes these attributions at face value, as evidencing artificial intelligence in existing artifacts. High church advocates of AI, on the other hand, by disdaining the vulgar empiricism of naive AI (disdaining appeal to the manifest capacities of present-generation computers running ordinary applications programs), and by rejecting SAIP or waffling on it, take the burden of scientifically elucidating the essential nature of thinking on themselves. Rushing in where Turing feared to tread -- as would-be definers of "thought" -- by their hubris, they cede the high ground in these debates to their nemesis. Searle's Chinese room argument, given this high ground, seems unbeatable.

Searle insists that the question "Can a machine think?" is not the right question to ask: this question, he protests, is uninteresting because "The answer is, obviously, yes. We are precisely such machines" (Searle 1980a, p. 422). Again: "If by `machine' one means a physical system capable of performing certain functions (and what else can one mean?), then humans are machines of a special biological kind" (Searle 1990a, p. 26). The right question to ask, according to Searle, is rather this: "Could a machine think just by implementing a computer program? Is the program by itself constitutive of thinking?" (Searle 1990a, p. 26). Again:

But could something think, understand, and so on solely in virtue of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a sufficient condition of understanding? This I think is the right question to ask, though it is usually confused with one or more of the earlier questions ["Could a machine think?"; "Could an artifact, a man-made machine think?"; "Could a digital computer think?"], and the answer to it is no. (Searle 1980a, p. 422)

Insofar as Searle restricts himself to the question "Could instantiating a program," in the anemic mathematical sense of course, "by itself be a sufficient condition of understanding?", he is right; it couldn't. Searle's Chinese room argument supplemented by the standard textbook account of computation (or what it is to instantiate a program) does prove this. On the other hand, insofar as the conclusion that instantiating a program, by itself, is not a sufficient condition of understanding is supposed to support the claim that no existing (or soon to be existing) computer "has thought processes in exactly the same sense that you and I have thought processes"; insofar as -FUN is supposed to support claims that "in the literal sense the programmed computer [running Schank and Abelson's story understanding program SAM] understands ... exactly nothing" (Searle 1980a, p. 419) and "the same would apply to ... any Turing machine simulation of human mental phenomena" (Searle 1980a, p. 417); Searle is mistaken. Nothing in the Chinese room (experiment or argument) militates in the least against the claim that existing or soon to be existing Turing machine "simulations" of mental phenomena have the mental properties they may be said with good predictive effect to have.

Just as Searle's proprietary use of the phrase "Strong AI" to denote Turing machine functionalism conflates the issue of whether computers think with the question of whether programs "are ... constitutive of [or] sufficient for minds" (Searle 1990a, p. 27), or whether programs "by themselves are ... minds" (Searle 1989a, p. 703), so too do his disarming physicalistic pieties about humans being thinking "machines of a special biological sort" shift the ground of the discussion tendentiously away from claims of artificial intelligence proper (SAIP and AIP), which (given the wealth of empirical evidence for them and the dearth of credible theoretical reasons against them) are highly probable, toward the vastly more dubious question of whether programs are what minds essentially are and toward the Turing machine functionalist metaphysical speculation that they essentially are (a proper subset of) machine tables or programs. This question, perhaps, presupposes that minds are natural kinds, which seems unlikely. The Turing machine functionalist answer to this question supposes that minds are natural kinds with computationally specifiable essences, which, of course, is even less likely.

Searle has recently acknowledged, "I have not tried to prove that `a computer cannot think'" (1990a, p. 29): neither has he proved (nor provided any reason at all to think) that existing and soon to be existing "Turing machine simulation[s] of human mental phenomena" don't "literally have thought processes ... in exactly the same sense that you and I have thought processes." Searle's Chinese room (argument and experiment), while telling (though perhaps misleadingly) against Turing machine functionalism (on standard textbook accounts of computation), does not tell at all against claims of artificial intelligence on behalf of existing and soon to be existing computers. Contrasting "strong AI" with "weak AI" (Searle's name for his view that computers merely simulate the mental abilities they seem to manifest), as Searle does (Searle 1980a, p. 417), serves to compound the confusion introduced by Searle's use of the phrase "strong AI" to designate Turing machine functionalism: confusion between strong claims of AI (that computers really have mental abilities) and Turing machine functionalism (that thought is essentially computation). It is thus, by buttressing an ignoratio elenchi (the illicit transition from -FUN to -SAIP) with a false dichotomy (between "Weak AI" and "Strong AI"), that Searle's vaunted Chinese room argument/"experiment" musters such potent rhetorical force -- despite lacking any logical or much evidential force -- against belief in artificial intelligence.

