Searle's Chinese Box: The Chinese Room Argument and Artificial Intelligence by Larry Hauser


Chapter Three:

The idea behind digital computers may be explained by saying that these machines are intended to carry out any operations which could be done by a human computer. A human computer is supposed to be following fixed rules: he has no authority to deviate from them in any detail. We may suppose that these rules are supplied in a book, which is altered whenever he is put on a new job. He also has an unlimited supply of paper on which to do his calculations. (Turing 1950, p. 436)

As for the linking together that occurs when we reason, this is not a linking together of names but of the things that are signified by the names, and I am surprised that the opposite view should occur to anyone. .... For if he admits that the words signify something, why will he not allow that our reasoning deals with this something which is signified, rather than merely with words. (Descartes' reply to Hobbes: Descartes et al. 1642, p. 126)

1. The Chinese Room Experiment

In order to test, among other things,{1} the Turing machine functionalist (Searle 1980c) or "strong AI" (Searle 1980a) theory that thinking is a species of computation, John Searle asks us to ask ourselves, "what it would be like if the mind actually worked on the principles that the theory says all minds work on?" (Searle 1980a, p. 417). To this end Searle proposes "the following Gedankenexperiment" (Searle 1980a, p. 417):

Suppose that I'm locked in a room and given a large batch of Chinese writing. Suppose furthermore (as is indeed the case) that I know no Chinese, either written or spoken, and that I'm not even confident that I could recognize Chinese writing as Chinese writing distinct from, say, Japanese writing or meaningless squiggles. To me, Chinese writing is just so many meaningless squiggles. Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules as well as any other native speaker of English. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that "formal" means here is that I can identify the symbols entirely by their shapes. Now suppose that I am given a third batch of Chinese symbols together with some instructions, again in English, that enable me to correlate elements of this third batch with the first two batches, and these rules instruct me how to give back certain sorts of Chinese symbols with certain sorts of shapes in response to certain sorts of shapes given me in the third batch. Unknown to me, the people who are giving me all these symbols call the first batch "a script," they call the second batch "a story," and they call the third batch "questions." Furthermore, they call the symbols I give back "answers to the questions," and the set of rules in English that they gave me they call "the program." (Searle 1980a, p.418)


Suppose also that after a while I get so good at following the instructions for manipulating the Chinese symbols and the programmers get so good at writing the programs that from the external point of view -- that is, from the point of view of somebody outside the room in which I am locked -- my answers to the questions are absolutely indistinguishable from those of Chinese speakers. Nobody just looking at my answers can tell that I don't speak a word of Chinese. (Searle 1980a, p. 418)

I produce the answers by manipulating uninterpreted formal symbols. As far as the Chinese is concerned, I simply behave like a computer; I perform computational operations on formally specified elements. For the purposes of Chinese, I am simply an instantiation of the computer program. (Searle 1980a, p. 418)

Here Searle imagines what it would be like from "the point of view of the agent, from my point of view" (Searle 1980a, p. 420) to implement a story understanding Program I will call "CSAM." CSAM is supposed to be like Schank and Abelson's (1977) Script Applier Mechanism (SAM) except (1) for being imagined (for the sake of the argument) to be indisputably Turing test passing, and except (2) for the stories it processes and seems "from the external [third person] point of view" (Searle 1980a, p. 418) to understand being imagined (for dramatic effect) to be in Chinese. Such is the "experiment."{2}

The result Searle observes or avows when he imagines "what it would be like" for him to be the individual hand tracing CSAM in the room is this:

it seems to me quite obvious in the example that I do not understand a word of the Chinese stories. I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing. (Searle 1980a, p. 418)

Searle further asserts that this generalizes immediately from the case of a person (himself) implementing (by hand tracing) the imaginary Chinese story understanding Program CSAM to the case of an electronic computer implementing (by running or executing) Schank and Abelson's actual story understanding program SAM.

For the same reasons [that Searle understood no Chinese in the Chinese room] Schank's computer understands nothing of any stories, whether in Chinese, English, or whatever, since in the Chinese case the computer is me, and in cases where the computer is not me, the computer has nothing more than I have in the case where I understand nothing. (Searle 1980a, p. 418)

Furthermore, Searle maintains,

nothing [in the experiment] that follows depends on the details of Schank's programs. The same arguments would apply to Winograd's SHRDLU (Winograd 1973), Weizenbaum's ELIZA (Weizenbaum 1965), and indeed any Turing machine simulation of human mental phenomena. (Searle 1980a, p. 417)

Hence, he urges, this result generalizes to any system implementing any Program whatever. If these results are credible, Searle in the Chinese room counterinstances FUN, the Turing machine functionalist claim ((x)(Px -> Mx)) that implementing a Program suffices for possession of the mental properties that execution of the program "simulates." Since Searle lacks the mental property of understanding Chinese that implementing CSAM causes him to "simulate," -FUN ((3x)(Px & -Mx)) follows immediately from experimental result R:{3}

(R) Ps & -Ms
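The inference from R to -FUN can be checked mechanically. A minimal Lean sketch (my rendering, not Hauser's or Searle's notation: the constant `s` glosses Searle, `P x` reads "x implements the Program," `M x` reads "x understands Chinese"):

```lean
-- Sketch: R entails -FUN by existential generalization on `s`.
example {α : Type} (P M : α → Prop) (s : α)
    (R : P s ∧ ¬ M s) :          -- (R)   Ps & -Ms
    ¬ ∀ x, P x → M x :=          -- -FUN: -(x)(Px -> Mx)
  fun FUN => R.2 (FUN s R.1)
```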

Since Searle's imagining (in the thought experiment) really does seem completely generic, it does seem "nothing ... depends on the details" of the natural language understanding program being imagined: if CRE is well imagined and establishes R, it does seem R will extrapolate (as Searle alleges) to all current or pending Turing machine "simulations" of understanding. Notice, however, that this would-be result's generalizing to all current "simulations" of any mental phenomenon whatever (contrary to SAIP) and to any "machine we're eventually going to build [and program]" (Searle et al. 1984, p. 146) for "simulating" any mental phenomenon whatever (contrary to AIP) requires not only that "nothing [in the thought experiment] depend upon the details of Schank's programs," but also that nothing in the thought experiment depend upon the specific mental phenomenon being "simulated." Whether Searle's "experiment" meets this second requirement is doubtful: it seems, in fastening on understanding, Searle fastens on perhaps the most propitious intentional mental state for his purposes. However oxymoronic "unconscious understanding" seems, "unconscious seeking," "unconscious inferring," etc., seem less so.

My main criticism of CRE, however, is more fundamental: it fails to support R. Since CRE "appears to some to be valid (Maloney 1987)"{4} and "to others to be invalid (e.g., many of the `peer' commentators following Searle 1980[a], Sharvy 1983, Carleton 1984, Rey 1986, Anderson 1987)" (Cole 1991b p. 399) it appears that whether CRE seems to one to support R varies with the theory of mind or meaning one subscribes to. For this reason CRE, I maintain, cannot adjudicate between these theories or provide theory independent grounds for accepting R and hence for doubting FUN, AIP, and SAIP.  The thought experiment is evidently circular. Searle's "parable" (Searle 1984a, p. 33) is a charade.

2. First Analysis and Prospectus

What stands out first, crucially, in Searle's "experiment" is the transition from "it seems to me quite obvious that I do not understand" to "I understand nothing" (Searle 1980a, p. 418). Clearly R's following (and hence all Searle will claim follows from R) depends not just on Searle's seeming to himself not to understand but on his really not understanding the Chinese stories, questions, and answers. That it would seem thus to Searle, from his "first person point of view" (Searle 1980b, p. 451) in the situation Searle initially describes seems unexceptionable. What is exceptionable is the transition from its seeming so from "the first person point of view" to its being so. No doubt those who understand natural languages such as English or Chinese are normally aware of understanding them: it both seems to them, from their first person points of view, that they understand (English or Chinese, e.g.) and they do. Conversely, no doubt, those who do not seem to themselves to understand a natural language (when they hear it spoken or see it written) normally do not understand. This is the normal case -- but note, in the normal case of not understanding it not only seems to me from my "first person point of view" that I do not understand, it also seems that way to others (or would seem that way to others who spoke to me or corresponded with me in Chinese) from their "external point of view." In the ordinary case of someone (e.g., myself) not understanding Chinese, given some Chinese stories along with questions (in Chinese) about the stories, anyone who understands Chinese "just looking at my answers can tell that I don't speak a word of Chinese" (Searle 1980a, p. 418: my emphasis). 
Since the envisaged case, CRE, is designedly abnormal in just this respect, it seems that we cannot appeal either to the normal covariance of seeming to oneself not to understand with not understanding to decide this case in favor of Searle's not understanding or to the normal covariance of seeming to others to understand with actually understanding to decide this case to the contrary. If Searle's imagined introspective lack of awareness or sincere disavowal of understanding is all the warrant CRE provides for the conclusion that Searle would not understand the Chinese stories, questions, and answers in the envisaged scenario, CRE must be judged inconclusive.

What would enable CRE to establish the result that, despite implementing CSAM and consequently giving "answers to the questions [that] are absolutely indistinguishable from those of Chinese speakers" (Searle 1980a, p. 418), Searle in the envisaged scenario would not understand? Only an a priori tender of the epistemic privilege of overriding all "external" appearances to how it seems "from the point of view of the agent, from my [first person] point of view" (Searle 1980a, p. 420) seems to suffice; i.e., only a Cartesian grant to Searle of privileged access to his own (lack of) understanding in the experimental situation. It seems to me quite obvious from this, as from Searle's characterization of CRE as implementing the methodological imperative "always insist on the first person point of view" (Searle 1980b, p. 451), that CRE does invite us "to regress to the Cartesian vantage point" (Dennett 1987b, p. 336) via just such an a priori tender of epistemic privilege to "the first person point of view": the "experiment" succeeds (or seems to) only if we accept the invitation. If CRE does depend on such an a priori grant of overriding epistemic privilege to how it seems to the agent with regard to their own intentional mental states (e.g., understanding) "from the point of view of the agent, from my point of view" (Searle 1980a, p. 420), it seems CRE's support for would-be first result R will be as dubious as this Cartesian grant of epistemic privilege.

Whether CRE's support for R does depend on such a grant of epistemic privilege to the "first person point of view" (Searle 1980b, p. 451), however, is controversial. Searle himself protests (I think, too much) that he employs no "Cartesian apparatus" (Searle 1992, pp. xxi, 14) or "Cartesian paraphernalia" (Searle 1987, p. 146), and in several places (Searle 1992, p. 95f; 1987, p. 146) explicitly disavows privileged access. Yet others -- especially advocates of what Searle calls "the robot reply" (i.e., causal theories of reference) -- unsympathetic to the a priori tender of overriding epistemic privilege to the "first person point of view" that Searle's application of the "always insist" injunction in the "experiment" seems to involve, nonetheless incline to credit Searle's "experiment" as refutative of unadulterated or "high church" functionalism; though not augmented or "low church" functionalism.{5} In this connection, Searle's own diagnosis of what he would be lacking in the way of understanding Chinese in the "experimental" situation -- awareness of what the symbols meant or of their semantics -- perhaps encourages the idea that CRE supports R (or can be made to support R) by pumping credible intuitions about syntax and semantics rather than Cartesian intuitions about consciousness and privileged access I, for one, "had thought long ago discredited" (Searle 1992, p. xii).

Again, given the abnormality of the "experimental" case Searle describes -- of Searle's (imagined) self-avowed or introspected lack of understanding of Chinese in the room -- CRE does not suffice to establish Searle's actual lack of understanding given the (imagined) overwhelming evidence "from the external point of view" that he does understand. Again, while CRE reliably pumps the intuition that Searle would not be conscious of understanding the Chinese (stories, questions, and answers) he was processing, it fails to reliably pump the intuition that this processing would not amount to understanding. The suggestion we are now considering is that this criticism might be avoided by taking the intuitions CRE pumps directly to be about the meaninglessness (to Searle) of the symbols Searle shuffles; i.e., we are considering the possibility of taking CRE to support R at one remove, so to speak, as follows:

(R') Ps & -Ss
(A2) (x)(Mx -> Sx)
(R) Ps & -Ms
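The validity of this two-step schema is easily confirmed; a minimal Lean rendering (my own gloss of the predicate letters, with `s` for Searle):

```lean
-- Sketch: R' together with A2 yields R.
example {α : Type} (P M S : α → Prop) (s : α)
    (R' : P s ∧ ¬ S s)            -- (R')  Ps & -Ss
    (A2 : ∀ x, M x → S x) :       -- (A2)  (x)(Mx -> Sx)
    P s ∧ ¬ M s :=                -- (R)   Ps & -Ms
  ⟨R'.1, fun hM => R'.2 (A2 s hM)⟩
```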

If this "indirect construal" of CRE can be sustained and intuitions about syntax and semantics underwriting R' (unlike intuitions about consciousness and privileged access directly underwriting R) do not depend on a dubious a priori tender of epistemic privilege to the "first person point of view," then CRE, indirectly construed, will be immune to the usual objections to claims of privileged access.

3. Searle's Diagnosis: Why Searle in the Room Doesn't Understand

Both, it seems, to reinforce the "intuition" that one would not understand Chinese in the imagined scenario, and to show (by comparison) what the occupant of the room lacks in the way of understanding, Searle contrasts his imagined doings in the Chinese room with the ordinary case in which "people give me [a native English speaker] stories in English, which I understand, and they then ask me questions in English about these stories, and I give them back answers in English" (Searle 1980a, p. 418). Searle then asks, "Well, then, what is it that I have in the case of the English sentences that I do not have in the case of the Chinese sentences?" He replies, "The obvious answer is that I know what the former mean, while I haven't the faintest idea what the latter mean" (Searle 1980a, p. 418). On this diagnosis Searle fails to understand because "understanding implies the possession of mental (intentional) states" (Searle 1980a, p. 418n) and when you ask yourself what it would be like for you to be the person in the room it seems clear to you that your awareness would lack the requisite intentional directedness toward the referents of symbols you were processing (you would not know what the stories were about) and thus the requisite intentionality for the mental states (e.g., understanding) at issue.

Because the formal symbol manipulations by themselves don't have any intentionality; they are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything. In the linguistic jargon, they have only a syntax but no semantics. (Searle 1980a, p. 422)

According to this diagnosis, what Searle in the room lacks for Chinese (and programmed computers, Searle maintains, lack altogether) is intentionality or meaning or semantics. To the human computer in the room (hence also to the electronic computer running SAM) the Chinese questions and answers are "just so many meaningless squiggles" (Searle 1980a, p. 418). According to this diagnosis, the intuition that Searle would not understand the Chinese stories, questions, and answers he processed in the room depends on the more fundamental intuition that, in the "experimental" situation described, Searle couldn't attach the right semantics and needn't attach any semantics to the symbols he was processing.

Now it might rightly be objected that much as Searle's unawareness of understanding does not suffice, in the face of overwhelming evidence to the contrary, to conclusively establish that he doesn't understand unawares, neither does Searle's failure to consciously attach meaning to the symbols he's processing establish that the symbols and processing themselves lack meaning or intentionality. Here, however, perhaps a wedge might be driven between the two "intuitions" (supportive of R and R'): while it is doubtful that it follows from Searle's seeming to himself not to understand Chinese that he is actually not understanding, it is perhaps less doubtful that it follows from Searle's unawareness of the meaning of the symbols being processed that these symbols are meaningless to Searle, that they lack what Searle calls "intrinsic intentionality" (Searle 1980b, p. 450: my emphasis). When this line is pursued (as it is in chapters 4-6, below) it turns out the "intuitions" CRE's support of R' depends on are, crucially, not about the (lack of) meaning of the symbols processed and the (lack of) intentionality of the processing per se, but about the (lack of) intrinsicality of this meaning and intentionality.

4. Argument and "Experiment"

Let us now look at the interface between the formal Chinese room argument (CRA) and Searle's Gedankenexperiment (CRE). Stresses at this interface expose CRE's question-begging reliance on the very theories of mind that R (or R') is supposed to support in order to support R (or R').

4.1 Anomalies of the Standard Construal of CRE's Relation to CRA

Searle writes that the formal Chinese room argument (CRA) "has a very simple logical structure, so you can see whether it is valid or invalid" (1984a, p. 38): I have shown (Chapter Two, above) that the "derivation from axioms" (1989a, p. 701) Searle offers is invalid! The "brutally simple" crux (BSA) of Searle's argument fails to entail the anti-functionalist conclusion (C1) "Programs aren't (sufficient for) minds" Searle alleges, because the possibility remains, on the stated premises (A1-A3), that the syntaxes which are supposed (by way of satisfying A3) to lack semantics are not Programs. Thus, it is consistent with Searle's premises that all syntaxes that are Programs are (sufficient for) semantics and minds. In Chapter Two I indicated one possible reason why this logical gap in Searle's putative derivation has been overlooked by Searle and others: the allegiance of partisans to these disputes to a "standard textbook definition of computation" (Searle 1990c, p. 26) on which you can describe any sufficiently complex object as instantiating any sufficiently simple program (hence any sufficiently simple Program) you like fills the logical gap in the derivation nicely. I now note that the Chinese room experiment itself, if it succeeds, fills this logical gap in the argument too; but fills it too well. Searle doesn't dispute others' analyses of his "thought experiment [as] devoted to shoring up axiom 3" (Churchland & Smith Churchland 1990, p. 34) -- and himself claims,

The point of the parable about the Chinese room is to reveal a deep point about the character of artificial intelligence research and human thinking. This is the point ... from syntax alone you can't get the mental [semantic] content. (Searle et al. 1984, p. 147)

Syntax by itself is not the same as, nor is it sufficient for, semantics. This is shown by my Chinese room [experiment]. (Searle 1990b, p. 58)

Yet, in the example here supposed to counterinstance the thesis that syntax alone suffices for semantics (the example of Searle instantiating CSAM yet failing to attach meaning to the Chinese symbols he processes), the syntax instantiated is already supposed to be a Program. If CRE succeeds as advertised in establishing R, then CRE either proves "Programs by themselves are not (sufficient for) minds" directly, independently of all the "axioms" of Searle's would-be derivation (the "direct interpretation" of CRE); or else (on the "indirect construal") the experiment, if successful, proves "Programs by themselves are not sufficient for semantics or intentionality," from which C1 follows in conjunction with A2 alone, independently of A3 and A1. Since CRE, if it succeeds, does not support CRA by filling the logical gap in BSA but rather entirely (on the direct reading) or partly (on the indirect) supplants BSA, what is the point of Searle's "derivation from axioms" (Searle 1989a, p. 702)? Indirectly construed, CRE yields intermediate result IR ((3x)(Px & -Sx)) as follows:

(R') Ps & -Ss
(IR) (3x)(Px & -Sx)

IR then entails C1 by an argument I will term the Chinese room experimental argument (CREA):

(A2) (x)(Mx -> Sx)
(IR) (3x)(Px & -Sx)
(C1) -(x)(Px -> Mx)

It is odd, to say the least, to suggest a more complicated invalid argument (BSA) in place of this simpler valid one.
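The contrast between invalid BSA and valid CREA can be confirmed by brute force over small finite models. The following sketch is my own illustration (the predicate letters P, M, S, F follow the text's usage); it searches every model on a two-element domain for a countermodel to each argument, i.e., a model making the premises true and the conclusion false:

```python
# Brute-force model check: BSA vs. CREA over a two-element domain.
# A model assigns each domain element a truth value for each of the
# unary predicates P (Program), M (mind), S (semantics), F (formal).
from itertools import product

PREDS = ("P", "M", "S", "F")

def models(size):
    """Every assignment of the four predicates over a domain of `size` elements."""
    extensions = list(product([False, True], repeat=len(PREDS)))
    for combo in product(extensions, repeat=size):
        yield [dict(zip(PREDS, row)) for row in combo]

def holds_all(model, antecedent, consequent):
    """(x)(antecedent x -> consequent x) in the model."""
    return all(x[consequent] for x in model if x[antecedent])

def bsa_countermodel(model):
    a1 = holds_all(model, "P", "F")        # (A1) (x)(Px -> Fx)
    a2 = holds_all(model, "M", "S")        # (A2) (x)(Mx -> Sx)
    a3 = not holds_all(model, "F", "S")    # (A3) -(x)(Fx -> Sx)
    c1 = not holds_all(model, "P", "M")    # (C1) -(x)(Px -> Mx)
    return a1 and a2 and a3 and not c1

def crea_countermodel(model):
    a2 = holds_all(model, "M", "S")                   # (A2)
    ir = any(x["P"] and not x["S"] for x in model)    # (IR) (3x)(Px & -Sx)
    c1 = not holds_all(model, "P", "M")               # (C1)
    return a2 and ir and not c1

bsa_gap = [m for m in models(2) if bsa_countermodel(m)]
crea_gap = [m for m in models(2) if crea_countermodel(m)]

print(len(bsa_gap) > 0)   # True: BSA has countermodels, hence is invalid
print(len(crea_gap) > 0)  # False: no countermodel to CREA turns up
```

The search turns up countermodels to BSA (e.g., one element that is F but neither P nor S, alongside another that is P, F, S, and M) but none to CREA, whose validity is of course a matter of first-order logic generally, not of any particular domain size.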

4.2 Some Would-be Strengthenings of BSA Rejected

One possible explanation for this perplexing shift is suggested by Searle's initial advertisement of the Chinese room argument/experiment as being intended to show that "a computer program is never by itself a sufficient condition of intentionality [or semantics]" (Searle 1980a, p. 417 abstract, my emphasis). The hypothesis is that this is a modal "never"; that Searle wants to show that programming is necessarily insufficient. But CRE would only show, at most, that instantiating a computer Program is not actually sufficient for mind (-(x)(Px -> Mx)) or semantics (-(x)(Px -> Sx)). Perhaps, more weakly yet (there being doubt whether "simulation" of Chinese understanding by the means imagined by Searle is practically, or even nomologically, possible), CRE may only show that it is (nomologically or logically) possible for Programming to be insufficient; may only show that "linguistic communication could in principle (though perhaps not in practice) be no more than mindless symbol manipulation" (Harnad 1991, p. 50: my emphasis). Call the strong modal version of C1 ([]-(x)(Px -> Mx)), that programming never suffices for mind, the "strong conclusion" (C1S). Call the strong modal version of IR ([]-(x)(Px -> Sx)), that programming never suffices for semantics, the "strong intermediate result" (IRS).{6} What would show that Programming is necessarily insufficient for intentional mental states would be if C1 could be derived from "the very definition of a digital computer" (Searle 1984a, p. 30, my emphasis) or program (A1), a definitional truth (A2) about intentional mental states, and (A3) a "logical truth, namely, syntax alone is not sufficient for semantics" (Searle 1984a, p. 34, my emphasis). According to the interpretive hypothesis we are now considering, Searle invokes BSA to get the conclusion that programming is necessarily insufficient for mind that CRE alone won't yield.

Representing the modal commitments implicit in the definitional or logical truth of Searle's axioms explicitly yields the following strong modal version (BSAS) of BSA:

(A1S) [](x)(Px -> Fx)
(A2S) [](x)(Mx -> Sx)
(A3S) []-(x)(Fx -> Sx)
(C1S) []-(x)(Px -> Mx)

This envisaged modal strengthening of the premises is of no avail. BSAS is invalid for much the same reasons as BSA. A3S is satisfied so long as at every possible time or causally possible world there exists some uninterpreted formalism lacking semantics: the possibility remains, on the stated premises (A1S-A3S), that the formalisms or syntaxes which are supposed (by way of satisfying A3S) to be necessarily insufficient for semantics are not Programs, i.e., that there are possible worlds at which all the syntaxes that are Programs have semantics and are minds. The possibility even remains that this is actually the case: that all Programs have semantics and are minds at our world, the actual world. Merely styling the premises necessary does nothing to repair the logical gap in BSA. The strong modal intermediate result (IRS) that programming is necessarily insufficient for semantics ([]-(x)(Px -> Sx)), of course, would suffice to fill this gap; but, again, the gap is filled too well. Adding IRS to the modal formulation above renders A1S and A3S otiose. The operative argument now -- the strong modal CREA (CREAS) -- is just:

(A2S) [](x)(Mx -> Sx)
(IRS) []-(x)(Px -> Sx)
(C1S) []-(x)(Px -> Mx)
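Reading the box as truth at every world, CREAS's validity can likewise be confirmed; a Lean sketch with an explicit world parameter `w` (my rendering of the modal operators, not notation from the text):

```lean
-- Sketch: CREAS is valid when []phi is read as "phi holds at every world w".
example {W α : Type} (P M S : W → α → Prop)
    (A2S : ∀ w, ∀ x, M w x → S w x)     -- (A2S) [](x)(Mx -> Sx)
    (IRS : ∀ w, ¬ ∀ x, P w x → S w x) : -- (IRS) []-(x)(Px -> Sx)
    ∀ w, ¬ ∀ x, P w x → M w x :=        -- (C1S) []-(x)(Px -> Mx)
  fun w h => IRS w (fun x hP => A2S w x (h x hP))
```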

This strong modal construal accords well with the a prioristic spirit of Searle's insistence that his "refutation" of AI (unlike, e.g., Dreyfus's) "is completely independent of any state of technology" and rather "has to do with the very definition of a digital computer" (Searle 1984a, p. 30). This construal also seems consistent with Searle's advertisement of the conclusion of the Chinese room argument/experiment as "a very powerful conclusion" (Searle 1984a, p. 39). On the other hand, this strong modal construal of CREA squares ill with Searle's rendering of the Chinese room's conclusion elsewhere as "It is possible that (program and not mind)" (Searle 1989a, p. 702) or equivalently "It is not the case that (necessarily (program implies mind))" (Searle 1989a, p. 703).

Represent the weak modal construal of IR (IRW) answering to this weak modal reading of C1 as follows:

(IRW) <>-(x)(Px -> Sx)

Perhaps CRE merely proves IRW. We have, then a weak modal version of CREA (CREAW) in the following:

(A2) (x)(Mx -> Sx)
(IRW) <>-(x)(Px -> Sx)
(C1W) <>-(x)(Px -> Mx)

If Turing machine functionalism is aptly construed as holding "Necessarily(program implies mind)," as Searle (1989a, p. 701) construes it, C1W is appropriately antifunctionalist. Note, however, that C1W, unlike C1S, is not "a very powerful conclusion" (Searle 1984a, p. 39) but a very weak one.

Searle says that the conclusion (C1?) he derives from his axioms (A1?-A3?) is "very powerful ... because it means the project of trying to create minds solely by designing programs is doomed from the start."{7} But C1W does not mean or imply any such thing: not that "the project of trying to create minds is doomed from the start" (Searle 1984a, p. 39), and not that "no computer program can ever be a mind" (Searle 1984a, p. 31) as Searle maintains his conclusion (C1?) does. C1W no more implies any of this than the nomological possibility of a human being surviving hanging implies that the project of executing people "solely" by hanging them is "doomed from the start" or that nobody "can ever be" executed by hanging.

Perhaps less obviously, the objection just raised against C1W -- that it is not so "very powerful" that it "means that the project of trying to create minds solely by designing programs is doomed from the start" (Searle 1984a, p. 39) -- applies equally to C1S. Suppose by way of analogy with IRS that at every nomologically possible world hanging is not sufficient to kill someone ([]-(x)(Hx -> Kx)). It still does not follow that the project of trying to execute people by hanging alone is "doomed from the start" at our world or indeed at any world. It might be the case at any nomologically possible world (as it is at ours) that given certain background conditions that generally obtain at that world hanging generally suffices to cause death without being nomologically sufficient by itself to cause death. It is even possible for something to always suffice to cause death at our world without being nomologically sufficient by itself to do so: e.g., decapitation. Nomological insufficiency does not entail actual or practical insufficiency.

What would have the extreme implications Searle intends his conclusion to have would be if the Chinese room experiment established the extreme result (XR) that Programming suffices to preclude semantics ((x)(Px -> -Sx)).{8} Alas, there is also Searlean text to support this interpretation. Searle asserts, "we should not attribute intentionality" to anything if "we knew it had a formal program" that would "account for its [apparently intelligent] behavior" and "as soon as we knew the behavior was the result of a formal program ... we would abandon the assumption of intentionality" (Searle 1980a, p. 421). Searle's repeated assertions that "a computer program is only syntactical" (1984a, p. 30) or "purely formal" (1990a, p. 27) [my emphases] even suggest he may subscribe to something like the "purity principle" that programming excludes a thing from having, not just semantics, but any other (nonsyntactic) properties! Though XR might well fall out from this purity principle, were it true, the principle is absurdly false: this computer I'm working at instantiates (at this time) a word-processing program yet obviously has other properties -- being beige, being on my desk -- besides. Neither is it easy to square Searle's contention that "as soon as we knew the behavior [of anything] was the result of a formal program ... we would abandon the assumption of intentionality" (Searle 1980a, p. 421) with his recent assertion that "formal symbols [hence, by A1, programs?] have no physical causal powers" (Searle 1990a, p. 30): if formal programs "have no physical causal powers," how could behavior be "the result of a formal program"?

I conclude that none of the interpretive hypotheses or would-be strengthenings of BSA considered in this section merit adoption: each finds but equivocal textual warrant and each proves on close consideration unavailing for Searle's larger argumentative purposes. Either the reconstructed arguments forthcoming on these hypotheses are invalid (as BSAS is) or their premises are obviously false (as XR is). It is more plausible, I think, to read the "never" of "programming alone is never by itself a sufficient condition of intentionality" as a "never" of second order predication, claiming that being Programmed is insufficient for semantics whatever the Programming ((P)-(x)(Px -> Sx)).{9}

4.3 The "Upside Down" Take

Perhaps CRE does not really support A3 -- does not really go to prove A3 or even, perhaps, confirm it -- but merely "reminds us" (Searle 1989a, p. 701) that syntax is not sufficient for semantics. Even this is odd. A3 is not dubious. A3 follows immediately from the acknowledged "arbitrariness" of linguistic representation and is a corollary of the Löwenheim-Skolem theorem guaranteeing that any consistent formalism "can always be interpreted as referring to integers and arithmetic operations over them" (Pylyshyn 1984, p. 44). Moreover, if one did need "reminding" (Searle 1989a, p. 701) of A3, there are far clearer, more convincing reminders that might be assembled than CRE; e.g., uninterpreted calculi such as those presented in logic texts and logic classes. The more the attempt to construe the relationship between CRE and A3 as one of proof or confirmation of the latter by the former is pressed, it seems, the curiouser it gets. This is why, I believe: the relation of support between CRE and A3 runs in the opposite direction from the one suggested by the standard account. Searle suggests as much in asserting that CRE, his "demonstration" that "a computer, me for example, could run the steps in the program for some mental capacity, such as understanding Chinese, without understanding a word of Chinese" (Searle 1992, p. 200), "rests on the simple logical truth that syntax is not the same as, nor is it by itself sufficient for, semantics" (Searle 1992, p. 200: my emphasis). Consistent with this "upside down" take, Searle even goes so far as to deny that his "experiment" relies on intuitions at all!

The point of the argument is not that somehow or other we have an `intuition' that I don't understand Chinese, that I find myself inclined to say that I don't understand Chinese but, who knows, perhaps I really do. That is not the point. The point of the story is to remind us of a conceptual truth that we knew all along; namely, that there is a distinction between manipulating the syntactical elements of languages and actually understanding the language at the semantic level. What is lost in AI simulation of cognitive behavior is the distinction between syntax and semantics. (Searle 1988, p. 214)

This "upside-down" take on CRE has the additional virtue of explaining Searle's tendency to strengthen the claim (A1) that "programs are formal" to the claim (call it A1*) that "programs are purely formal" (Searle 1990a, p. 31: cf. Searle 1980a, p. 423; 1984a, p. 31). A1* would support IR, thus buttressing CREA, as follows:

(A1*) Programs are purely formal (syntactical).
(A3) Syntax does not suffice for semantics.
(IR) Programming does not suffice for semantics.

Rather than "Searle's thought experiment [being] devoted to shoring up axiom 3" (Churchland & Smith Churchland 1990, p. 34), I suggest that Searle invokes "the simple logical truth" that syntax isn't (sufficient for) semantics (A3) (in supplementing CRE with CRA) to shore up CRE against the possibility that "the intuitions to which he appeals are unreliable" (Boden 1988, p. 92). Given how unreliable these "intuitions" are (as the following section, along with Section Two, above, shows), I submit, CRE's would-be support of R desperately needs such shoring up. The trouble with this upside-down take, however, is that this way of shoring up would-be intermediate result IR undercuts CRE's alleged support of this result: if IR is supported by A1* and A3 rather than by R, then CRE provides no support for IR and hence provides no support (via CREA) for C1. Clearly if CRE is to support C1 it has to be via R; and if CRE is to provide support for R it has to be experimental support, based on observations or (in the thought experimental case) on intuitions. Again the question arises -- more pointedly now -- of whether CRE reliably pumps the requisite intuitions.

5.1 Turing, Bombes, Wrens, and Enigma | 5.2 Newell and Simon: Blind Protocols | 5.3 Conclusion
5. The CPU Simulator Reply

I have already maintained that the "intuitions" CRE pumps about understanding (which, if reliable, directly support R) rely on dubious Cartesian assumptions about consciousness and privileged access. I have also suggested that "intuitions" about syntax and semantics per se (which, if reliable, directly support R' and hence indirectly support R via CREA) similarly rely on these Cartesian assumptions and, so, are similarly unreliable. There is a further consideration that undermines the intuitions (whether about understanding or semantics) CRE is designed to pump: similar (thought) experimental procedures can pump "intuitions" contrary to those Searle commends. Comparable (thought) experimental procedures to Searle's -- blind CPU simulations or blind tracings -- yield (for Turing 1950, Newell & Simon 1963) contrary "observations" to Searle's: Searle's "experimental" results are not reliably replicable and hence not credible.

Searle imagines himself to be hand tracing CSAM on Chinese input (stories and questions), producing appropriate Chinese output. Hand tracing, stepping through a program by hand, on paper, is something computer programmers characteristically do in the process of checking and debugging programs. Normally, the programmer knows what the input means, what the output means, and what the program is supposed to be doing (the function it's supposed to be computing). Generally, the programmer must know these things in order to use the hand trace to check the program: you need to know what the input and output mean in order to tell whether the program is processing the former into the latter correctly. The case Searle imagines is unusual in that the tracing is "blind":

Unknown to me, the people who are giving me all of these symbols call the first batch "a script," they call the second batch a "story," and they call the third batch "questions." Furthermore, they call the symbols I give them back in response to the third batch "answers to the questions," and the set of rules in English that they gave me they call "the program." (Searle 1980a, p. 418)

Searle in the Chinese room not only doesn't know the meanings of the input and output strings he processes; he doesn't even know that the input strings are stories and questions or that the output strings are answers to questions; he may not even "recognize the Chinese writing as Chinese" (or, for that matter, as writing as opposed to "just so many meaningless squiggles" (Searle 1980a, p. 418)); and he doesn't know that the set of instructions he is following, transforming input strings into output strings, is a natural language understanding (or any other kind of) program. Searle's blind trace procedure here is precedented: first, by Turing's wartime use of "blind" human "computers" to decipher coded German naval communications during the Second World War, and second, by Newell and Simon's use of "protocols" derived from "blind" inferences of human subjects to gather information about human reasoning processes in the hope of embedding these same processes in their General Problem Solver (GPS) program.
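The blindness of the trace can be caricatured in a few lines of code. The rule table and strings below are invented placeholders (nothing from Schank's SAM or from Searle's actual scenario); the point is only that the procedure consults the shapes of the symbols, never their meanings:

```python
# A caricature of "blind" symbol manipulation: the output is fixed by the
# shapes of the input symbols alone; the procedure consults no meanings.
# The rule table is an invented placeholder, not a real language program.
RULES = {
    "squiggle squoggle": "squoggle",
    "squoggle squiggle": "squiggle",
}

def blind_trace(symbols: str) -> str:
    """Return whatever output the rules dictate for this input shape.

    The tracer need not know that the inputs are questions, that the
    outputs are answers, or even that the marks are writing at all.
    """
    return RULES.get(symbols, "")  # unrecognized shapes get no response

print(blind_trace("squiggle squoggle"))  # prints: squoggle
```

However elaborate the rule table, nothing in the procedure itself requires, or supplies, any knowledge of what the shapes signify.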

5.1 Turing, Bombes, Wrens, and Enigma

During World War Two, Alan Turing directed a project aimed at breaking the German naval code, "Enigma," and deciphering coded German naval communications. The work was initially done by members of the Women's Royal Naval Service (Wrens) acting as human computers following decryption programs Turing devised. To maintain secrecy, the Wrens were kept in the dark about the meaning of the input they received and output they produced (that these were messages about the locations of submarines, etc.), about the input and output being enciphered and deciphered German, and even about the input and output being encrypted or decrypted messages. They were unaware that the project in which they were involved was a decryption project; unaware the instructions they were following were decryption programs; etc. "The Wrens did their appointed tasks without knowing what any of it was for" (Hodges 1983, p. 211), like Searle in the Chinese room. Overseeing this veritable Chinese gymnasium (cf. Searle 1990a, p. 28) Turing, in Andrew Hodges' words, "was fascinated by the fact that people could be taking part in something quite clever, in a quite mindless way" (Hodges 1983, p. 211). More precisely, Turing judged that the Wrens were doing something mental, i.e., deciphering, unawares; and as the work of the Wrens was taken over by machines of Turing's devising, called "Bombes," so too, Turing judged, were the Bombes doing something mental, i.e., deciphering, just as the Wrens had been. This intuition -- occasioned by a situation comparable to the situation Searle imagines in his thought experiment -- seems to have been a major inspiration for Turing's famous (1950) defense of machine intelligence. Had he thought to pose the problem in the manner of our indirect reading of Searle's thought experiment, I submit, Turing would also have judged Wrens and Bombes alike to be processing information about the locations of German U-boats, etc., unawares.

5.2 Newell and Simon: Blind Protocols

Another prominent blind trace experiment was undertaken by Allen Newell and H. A. Simon in connection with their GPS (General Problem Solver) program. Newell and Simon used a blind trace procedure to gather "protocols" (running verbal commentaries) of deductive reasonings performed blindly by human subjects in order to "extract information" (Newell & Simon 1963, p. 282) about procedures humans use or manipulations humans perform in the course of their deductive reasonings. The information extracted was then to be used to "write [GPS] programs that do the kinds of manipulation humans do" (Newell & Simon 1963, p. 283).

Here is how Newell and Simon describe the experimental situation:

A human subject, a student in engineering in an American college, sits in front of a blackboard on which are written the following expressions:

(R -> -P) & (-R -> Q)          -(-Q & P)

This is a problem in elementary symbolic logic, but the student does not know it [my emphasis]. He does know that he has twelve rules for manipulating expressions containing letters connected by ["ampersands" (&)], "wedges" (v), ["arrows" (->)] and ["minus signs" (-)], which stand [which the subject does not know] for "and," "or," "implies," and "not." These rules [inference and equivalence rules -- though the subject doesn't know this] show that expressions of certain forms ... can be transformed into expressions of somewhat different form.... .... The subject has practiced applying the rules, but he has previously done only one other problem like this. The experimenter has instructed him that his problem is to obtain the expression in the upper right corner from the expression in the upper left corner using the twelve rules. .... The subject was also asked to talk aloud as he worked; his comments were recorded and then transcribed into a "protocol" -- i.e., a verbatim record of all that he or the experimenter said during the experiment. (Newell & Simon 1963, pp. 278-280)

Here is an excerpt from the initial portion of this subject's protocol.

Well, looking at the left hand side of the equation, first we want to eliminate one of the sides by using rule 8 [A & B -> A / A & B -> B]. It appears too complicated to work with first. Now -- no, -- no, I can't do that because I will be eliminating either the Q or the P in that total expression. I won't do that first. Now I'm looking for a way to get rid of the horseshoe inside the two brackets that appear on the left and right sides of the equation.
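For what it is worth, the transition the subject is asked to effect is sound: the expression in the upper right corner does follow from the expression in the upper left. A brute-force truth-table check (a sketch in ordinary propositional semantics, independent of Newell and Simon's twelve numbered rules) confirms the entailment:

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Material conditional: a -> b."""
    return (not a) or b

def entails() -> bool:
    """Check (R -> -P) & (-R -> Q) |= -(-Q & P) over all assignments."""
    for r, p, q in product([True, False], repeat=3):
        premise = implies(r, not p) and implies(not r, q)
        conclusion = not ((not q) and p)
        if premise and not conclusion:
            return False  # a countermodel would refute the entailment
    return True

print(entails())  # prints: True
```

The check also makes vivid the point at issue: the evaluation proceeds entirely on the forms of the expressions, just as the subject's manipulations do.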

Revealingly, the possible partial breach of "blindness" evidenced by the subject's characterization of the expressions (logical formulae) as "equations" is irrelevant. A subject who had never heard of an equation might produce a protocol much like this. So might a subject who recognized the expressions as logical formulae, the rules as inference and equivalence rules, the transitions as deductions, etc. The intuition here (like Turing's intuition in the Enigma case) is that the subject's knowing what they're doing to be deduction (or decryption) is not a requirement for what they're doing being deduction (or decryption). Here too, should we pose the issue in terms reminiscent of our indirect reading of CRE, the intuition of Newell and Simon, I take it, would be that in applying Rule 8 the subject was applying a rule about conjunction unawares where Searle's "intuition" would be that the subject's unawareness of the rule's being about conjunction makes the rule, as the subject applies it, not be about conjunction; or perhaps Searle would merely say it's not about conjunction for the subject. Note, however, that it will not suffice to object along these lines -- that Rule 8 is not about conjunction for the subject, the transitions not deductions for the subject -- unless their not being such for the subject implies their not being such period. If the transitions not being deductive inferences "for the subject" or "to the subject" just means the subject is unaware of their being deductive inferences, this is nothing Newell and Simon do not already allow; but if the transitions not being deductive inferences "for the subject" means the subject is not really deducing, then the protocol data could not provide information about how humans deduce. Neither will it avail in this connection to try to stop short of saying, "If it's not deduction for the subject it's not deducing period" by saying, "It's still deduction to the experimenter."
The subject's doings not being deductive except "in the eye of the beholder" (Searle 1990g, p. 637: my emphasis) discountenances the notion that subjects' protocols inform us about their deductions no less than the subject's doings not being deductions at all discountenances that notion. In treating protocols of "blind" deductions as sources from which information about the subject's deductive thought processes can reliably be extracted Newell and Simon, like Turing, credit intuitions about blind traces contrary to Searle's.

5.3 Conclusion

We have been considering the proposal -- implicit in Searle's advertisement of CRE as a knockdown "refutation" (Searle 1988, p. 213) of AI -- that blind trace or CPU simulation examples only pump intuitions, such as Searle's "intuitions" that "blind understanding" really isn't understanding, "blind deducing" really isn't deducing, etc. The examples just cited -- of Turing and Newell and Simon taking such examples contrarily -- confute this immodest proposal. Perhaps CRE can be construed, more modestly, as merely showing that CPU simulation examples can also pump "intuitions" contrary to Turing's and Newell and Simon's. Thus, modestly construed, CRE would underwrite a "nonreplicable therefore noncredible" verdict against Turing's intuition that blind trace or CPU simulation examples support claims of AI; the nonreplicability argument would cut both ways. Though my own intuitions run strongly counter to Searle's even in the thought experimental situation he carefully constructs, many readers do seem to share Searle's intuitions concerning this thought experimental situation. Perhaps the conclusion to draw from this is that intuitions evoked by blind trace cases are unreliable and incapable of weighing decisively either in favor of claims of artificial intelligence (i.e., of AIP or SAIP) or against them. Since prior to Searle's "experiment" blind trace cases had seemed (to Turing and Newell and Simon, notably) reliably to pump intuitions favoring claims of AI, this moderate result itself, if it stands, is not unimportant. On the other hand the prominence of the contrary intuitions of Turing and Newell and Simon seems to decisively undercut Searle's advertisement of the blind CPU simulation he envisions (CRE) as unequivocally infirmatory of claims of AI. Like his formal argument, CRA, CRE, Searle's putative experimental "refutation" of Turing machine functionalism and associated claims of AI, is inconclusive.

6.1 Systems Reply and Network Theory: High Church Functionalism | 6.2 The Robot Reply and Causal Theory of Reference: Low Church Functionalism | 6.3 The Brain Simulator Reply | 6.4 The Combination Reply | 6.5 The Other Minds Reply | 6.6 Many Mansions
6. Replies and Rejoinders

Much of Searle's original (1980a) Chinese room article is concerned with stating and answering "a variety of replies" suggested when Searle "had the occasion to present this [Chinese room] example to a number of workers in artificial intelligence" (Searle 1980a, p. 418). I conclude this chapter with a consideration of these replies and Searle's rejoinders. My main concern in so doing is to elucidate a suggestion made earlier, and confirmed, I take it, by the results of the preceding section, that the "intuitions" CRE provokes in one depend on what theory of mind or meaning one subscribes to; that these "intuitions" consequently are too theory dependent to comprise evidence for or against -- that they are incapable, as such, of deciding between -- these theories. I have already outlined (in Chapter One) the theories of mind at issue and argued (in the preceding sections of the present chapter) that one's direct intuitions about whether Searle in the Chinese room understands seem to depend on the as if dualistic identification of understanding and other intentional mental states with "modalities" of consciousness (Searle 1992, pp. 128, 168). Besides reinforcing this result -- that CRE, directly construed, relies, in order to support R, on the very theory of mind that R is supposed to support -- consideration of these replies and Searle's rejoinders brings out the theories of meaning at issue and how CRE, indirectly construed, relies, in order to support R', on the very theory of meaning or intentionality that R' is supposed to support.

6.1 Systems Reply and Network Theory: High Church Functionalism

The systems reply to the Chinese room example, as Searle presents it, is as follows:

"While it is true that the individual person who is locked in the room does not understand the story, the fact is that he is merely part of a whole system, and the system does understand the story. The person has a large ledger in front of him in which are written the rules, he has lots of scratch paper and pencils for doing calculations, he has `data banks' and sets of Chinese symbols. Now understanding is not being ascribed to the mere individual; rather it is being ascribed to this whole system of which he is a part." (Searle 1980a, p. 419)

Call this whole system "Searle-in-the-room" (SIR). Searle's main response, which he deems "quite simple" (Searle 1980a, p. 419), is this:

let the individual internalize all the elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand then there is no way the system could understand because the system is just a part of him. (Searle 1980a, p. 420)

Call the memorized ledger, data banks, etc. the "room-in-Searle" (RIS). Is this "swallow-up-strategy" (Weiss 1990, p. 167), as Thomas Weiss calls it, successful?

In considering the force of the systems reply and the adequacy of the swallow-up-strategy Searle offers in response, we will have occasion to refer to Weiss's take on the argument according to which CRE counterinstances functionalism. That take is as follows:

P1 If functionalism is right any system with the right structure and the right input-output pattern understands Chinese.
P2 Searle instantiates such a system.
P3 Searle does not understand Chinese in the thought experiment.
C Functionalism is wrong. (Weiss 1990, p.167)

Weiss asserts, "The functionalist has two options for coming to terms with [the] change" (Weiss 1990, p. 167) Searle's swallow-up-strategy effects in the example. One can "doubt that Searle really does not understand Chinese" (Weiss 1990, pp. 167-168); or one can deny "that Searle -- even working outdoors -- instantiates the required functional structure" (Weiss 1990, p. 168). Weiss' version of the second and, he thinks, "stronger rejoinder ... the denial of premise two" -- "No system lacking any motor or sensory capacities counts as a system with the right functional structure and the Chinese speaking subsystem of Searle [and] his senses and movements are almost totally unconnected" (Weiss 1990, p. 168) -- effects a shift toward the "robot reply" to be considered in the next subsection. In the present subsection I consider, first, alternate grounds (other than those Weiss gives) for disputing Weiss' P2: these grounds, like Weiss', foreshadow the robot reply. Second, I consider grounds Weiss offers for disputing P3 (i.e., for directly disputing putative result R). These provide a bridge of sorts between the direct rejoinders to R offered above (in Section 2 and Section 5) and the other minds reply (of subsection 6.5, below).

Weiss' P2 invokes the vexed notion of Program instantiation; but previously noted troubles with this notion aside, there is considerable reason to doubt that the situation "as [Searle] has described it" (Searle 1984a, p. 32) is nomologically possible. Hence, there is considerable reason to doubt that Searle has imagined a counterexample to the causal sufficiency of Programming for imparting semantic content and intentional mental states. It is beyond the realm of present human capability to hand trace a complicated program (expressed as a set of English instructions or in a programming language or however), as in the original scenario, fast enough and accurately enough that "from the external point of view -- that is, from the point of view of somebody outside the room in which I am locked -- my answers are absolutely indistinguishable from those of native Chinese speakers" (Searle 1980, p. 418). Patrick Hayes observes, "The whole idea of being able to memorize (or even follow for that matter) a piece of code sufficiently complicated to simulate a Chinese speaker is complete fantasy" (Hayes et al. 1992, p. 236). Roger Penrose similarly notes that one "must take into account the fact that the execution of even a rather simple computer program would normally be something extraordinarily lengthy and tedious if carried out by human beings manipulating symbols. .... If Searle were actually to perform Schank's algorithm in the way suggested, he would be likely to be involved with many days, months, or years of extremely boring work in order to answer just a single question ...." (Penrose 1989, p. 19)

In the situation originally envisaged the responses from SIR would be distinguishable, from "the external point of view," from those of a native Chinese speaker by their taking hours (if not weeks and months and years) instead of seconds. It is just barely within the realm of nomological possibility for a person to hand trace a complex program such as SAM accurately enough (even forgetting real time constraints) to give responses "indistinguishable from those of native Chinese speakers"; nor is the situation helped much (if at all) by having Searle "swallow up" the program by memorizing the program, data structures, etc. and doing all the calculations in his head. Any gain in speed from such internalization would have to be balanced against an inevitable loss of accuracy and the consideration that this would be an all but impossible (if not completely impossible) feat of memorization. It would be contrary to the spirit of Turing machine functionalism, however, to press this objection too strongly. Recall that the basis of Turing's claim that digital computers or Turing machines are "universal instruments" is that "considerations of speed apart, it is unnecessary to design various machines to do various computing processes" (Turing 1950, p. 441: my emphasis). That considerations of speed are irrelevant to mental attribution would also seem to be a consequence of FUN given the "standard textbook definition" of computation. On the usual "pure" or "high church" understanding of functionalism time is irrelevant: if SIR hand traces the right algorithm SIR must be thinking (i.e., understanding Chinese and Chinese stories) albeit slowly.

Yet even if we should forgo the speed/accuracy objection in considering CRE's claim to counterinstance pure functionalism, clearly this objection is relevant and does undermine Searle's contention that "precisely one of the points at issue" in this "experiment" is "the adequacy of the Turing test" (Searle 1980a, p. 419) and that the person in the Chinese room "satisfies the behavioral criterion for understanding without actually understanding" so that the refutation of pure functionalism (if this succeeds) is "a fortiori a refutation of behaviorism" (Searle 1987, p. 124 note). Searle claims, "The example shows that there could be two "systems" [the Chinese processing subsystem including (or included in) Searle and his English speaking subsystem], both of which pass the Turing test, but only one of which understands" (Searle 1980a, p. 419): the speed/accuracy objection is that Searle has not shown that there nomologically could be a Turing test passing system such as SIR or RIS are imagined to be. If Searle in the room or with the room in him is really supposed to be an actual or nomologically possible human being, he will not be Turing test passing; if he is imagined to be Turing test passing, he must be imagined to have superhuman powers and abilities of memorization or paper shuffling.

Let us understand CRE, then, to be directed in the first place against pure, high church functionalism and its associated "network theory" (Churchland 1980, p. 56) of meaning, according to which the meaning of information bearing components of program states and sequences is determined by their causal/inferential interconnections (the program) plus (one hopes) the structural similarity or isomorphism (Cummins 1989, chapt. 8) between the program states and states of the domain the program models (hence, is about); determined by these factors independently of considerations of time or place of program execution. Modified in accord with the "swallow-up strategy," CRE's providing a counterexample to high church functionalism depends on RIS implementing a Chinese language processing Program yet understanding "not ... a word of Chinese" (Searle 1980, p. 420). Now, while the considerations of speed and accuracy considered above cast doubt on whether RIS really would be a proper Program implementation, high church functionalism, we have just seen, is not free to avail itself of any such considerations. If CRE is viewed as posing a counterexample just to high church functionalism, it can freely be assumed that RIS implements a Chinese story understanding program: the high church functionalist belief that considerations of time or speed of program execution are irrelevant licenses this assumption. To rebut Searle's would-be counterexample in a way consistent with high church principles, then, one needs to rebut Weiss's premise P3, the claim that RIS, despite being a proper (high church) implementation of CSAM, still does not understand Chinese. We turn our attention now to Weiss's P3: what, in the Chinese room scenario, is supposed to establish this?

Well, for Searle, the crucial datum (which is supposed to establish that SIR or RIS wouldn't understand Chinese) seems to be Searle's introspection or avowal that "I [having internalized the room] haven't the faintest idea what the [Chinese symbols] mean" (Searle 1980a, p. 418). Searle (having internalized the room) would still not be conscious of, or aware of, what the Chinese stories were about. But again, it is only from the point of view of the agent, from the imagined point of view of Searle, that SIR or RIS seems to be "obviously failing to understand" (Harnad 1991, p.50) or obviously failing to attach the correct meanings to the symbols; from the point of view of an external observer outside the room, of course, it seems perfectly obvious that the correct meanings are being attached to the symbols and the input and output strings are being understood. The reply that I essayed (in the preceding sections) is that given this preponderance of evidence from "the external point of view" and the inadvisability (which the equivocal evidence of blind trace or CPU simulation experiments underscores) of according how it seems to the agent from their "first person point of view" the epistemic privilege of overriding this preponderance of evidence from the "third person point of view" to the contrary, Searle's "experiment" is inconclusive, at best.

Where the preceding reply seeks to use "external" evidence of understanding to counterbalance "internal" appearances to the contrary, the systems reply disputes the relevance of the "internal" appearances Searle cites. According to the systems reply, doubts concerning the identity of the understanding subject in the "experiment" undercut the relevance of Searle's introspective unawareness or sincere disavowal of any understanding of the Chinese symbols being processed because the subject who is properly accorded understanding in the scenario is not Searle. In the first scenario, Searle is only a subsystem that is part of the candidate (would-be understanding) system, not the whole system: even granting the dubious principle that intentionality implies consciousness, the candidate whose avowals or introspection should be consulted (on the neo-Cartesian principles being granted) would not be Searle but SIR, the larger system. Conversely, in the second scenario, once again, the subject whose introspective lack of awareness or avowal that "I haven't the faintest idea what the [Chinese symbols] mean" counts is not Searle himself but RIS, a subsystem in Searle. As Weiss puts this objection,

A functionalist can simply claim that the [system] has consciousness. Searle, he can then go on, is not in the right position to realize that. All the functionalist has to claim is that the [system] has a certain experience, not that Searle has it. As a last step a functionalist can deny the very basis of Searle's argument. (Weiss 1990, p. 171)

Weiss's point would seem to apply whether Searle is part of the system or the system part of Searle: in either case, "Searle ... is simply in the wrong position to know [introspectively] whether or not the [system] has consciousness" (Weiss 1990, p. 171). In a similar vein, David Cole suggests that the case where the system is in Searle might be conceived on the model of multiple personality disorder: "Each personality is a fully integrated and complex unit with unique memories, behavior patterns, and social relationships that determine the nature of the individual's acts [and avowals] when that personality is dominant."{10} Weiss and Cole grant Searle all the neo-Cartesian apparatus he wants and still resist CRE's would-be anti-functionalist conclusion. Thus, by Weiss's reckoning:

It is Searle's first person point of view that makes for the intuitive force of his argument. But it is that very perspective which makes it fail. Searle has a first person point of view only of himself, not of the robot [or system] as he purports to have. Searle only tells us, that he, Searle, does not have the flavour of understanding: he lacks a certain sort of consciousness. (Weiss 1990, p. 171)

Yet, while it makes for a pretty irony to resist Searle's conclusion on the basis of the very "intuitions" his Chinese room experiment is designed to evoke -- on the basis of the very Neo-Cartesian assumptions on which it seems to depend -- it is not, I think, advisable to credit such "intuitions" or accept such assumptions as Weiss and Cole seem to do. The deeper difficulty with the Chinese room scenario is with these assumptions and "intuitions" themselves.

Finally, two additional, closely related gambits (besides the main "swallow-up" gambit) Searle uses in his rejoinder to the systems reply are worth mention here. Stevan Harnad attempts to use the first of these -- the "incredulous stare"{11} by which the systems reply is not so much argued against as "lampooned" (Harnad 1991, p. 48) -- to give a "plausibility defense" of Searle's experiment against such rejoinders as Weiss's and Cole's. This plausibility defense seeks to show up such systems replies as "theory-saving on the basis of [implausible] sci-fi fantasies" (Hayes et al. 1992, p. 220). In the same vein Searle mocks the "idea ... that while a person doesn't understand Chinese, somehow the conjunction of that person and bits of paper might understand" (Searle 1980, p. 420). Harnad acknowledges,

It is logically possible that Searle would be understanding Chinese under those conditions (after all, it's just a thought experiment -- in real life, memorizing and manipulating the symbols could conceivably give rise to an emergent conscious understanding of Chinese in Searle, as some systems-repliers have suggested). It is also logically possible that memorizing the symbols and rules and doing the manipulations would generate a second conscious mind inside Searle, to which Searle would have no access, like a multiple personality, which would in turn be understanding Chinese. This too has been proposed. Finally, it has been suggested [which is also logically possible] that "understanding would be going on" inside Searle under these conditions, but not in anyone's or anything's mind, just as an unconscious process, like many other unconscious processes that normally take place in Searle and ourselves. (Hayes et al. 1992, p. 219)

While acknowledging, "All these logical possibilities exist, so there is clearly no question of a `proof'" (Hayes et al. 1992, p. 219), Harnad continues,

But then neither is there a question of a proof that a stone does not understand Chinese, or indeed that anyone (other than myself, if I do) does. Yet one can draw quite reasonable (and indeed almost certainly correct) conclusions on the basis of plausibility alone, and on that basis neither a stone nor Searle understands Chinese (and, until further notice, only beings capable of conscious understanding are capable of unconscious `understanding', and the latter must in turn be at least potentially conscious -- which it would not be in Searle if he merely memorized the meaningless symbols and manipulations). (Hayes et al. 1992, pp. 219-220)

Harnad concludes, "the logic of Searle's simple argument is no more nor less than this. An apparently plausible thesis is shown to lead to implausible conclusions, which are then taken (quite rightly in my view) as evidence against the thesis" (Hayes et al. 1992, p. 220). Similarly Searle judges the systems reply to lead to implausible conclusions that are "independently absurd" (Searle 1980a, p. 419). But are they?

Consider Searle's incredulity at the notion that "somehow the conjunction of ... person and bits of paper" might have mental properties the person (sans paper) would lack. The possibility here being lampooned is not just plausible but commonplace. The availability of paper and pencil can drastically alter one's mathematical calculative abilities; the conjunction of a monolingual English speaker with a Chinese-English phrase book has Chinese understanding abilities (slight though they are) that the speaker without the phrase book lacks; etc. The implausibility of thinking a stone understands Chinese is simply irrelevant, because stones don't act like they understand Chinese at all, whereas SIR and RIS (ignoring considerations of time) do act like they understand Chinese. What's at issue is not the plausibility of crediting inanimate "systems," e.g., stones, with mental properties they don't behave at all like they have, but the plausibility of not crediting inanimate systems, e.g., computers, with mental properties they do behave like they have. If the brute intuition that we "just know" that the computer, like "the hunk of metal on the wall that we use to regulate temperature" (Searle 1980a, p. 420), can no more rightly be credited with mental properties than a stone can were enough to settle the matter, there would be no problem about AI and no need for a Chinese room argument/experiment to try to counter claims of AI in the first place.

A further argument Searle invokes in tandem with the incredulous stare purports to reduce the systems reply to an absurd panpsychism which would have it that "mind is everywhere" (Searle 1980a, p. 420). To the extent that the absurd panpsychic conclusion depends on the assumption of the "standard textbook definition" of computation, as does Searle's assertion in this connection that "there is a level of description at which my stomach does information processing" because "there is nothing to prevent [human interpreters] from treating the input and output of my digestive organs as information if they so desire," the absurdity attaches not to the systems reply but to this "standard textbook definition." I have already acknowledged (and tried to show) the absurdity of this, and the vexed character of the anemic mathematical notion of program implementation associated with it, in the preceding chapter. That such panpsychic implications as the systems reply itself has (independently of those flowing from the "standard textbook definition" of computation) are not "independently absurd" is argued in the next chapter.

6.2 The Robot Reply and Causal Theory of Reference: Low Church Functionalism

I turn, now, to Weiss's "stronger rejoinder" (Weiss 1990, p. 168), which agrees that the symbol manipulation SIR (or RIS) does lacks semantics, but not for the as-if dualistic reasons that commend this judgment to Searle. According to Weiss, what SIR and RIS lack are "motor or sensory capacities" or both. The symbol manipulations of RIS lack semantics because "the Chinese speaking sub-system of Searle [and] his sense and movements are almost totally unconnected" (Weiss 1990, p. 168). This broaches what Searle calls the "robot reply" -- but Weiss's formulation differs from Searle's in taking such robotic considerations to be supportive of functionalism and, so to speak, continuous with the systems reply: as being "the denial of premise two, namely that Searle -- even working outdoors -- instantiates the required functional structure. No system lacking any motor or sensory capacities counts as a system with the right functional structure" (Weiss 1990, p. 168). Searle, on the other hand, takes the robot reply to be genuinely distinct from the systems reply since "it tacitly concedes," contrary to pure functionalism, "that cognition is not solely a matter of formal symbol manipulation, since this reply adds a set of causal relation[s] with the outside world" (Searle 1980a, p. 420).

Here is Searle's version of the robot reply:

"Suppose we wrote a different kind of program from Schank's program. Suppose we put a computer inside a robot, and this computer would not just take in formal symbols as input and give out formal symbols as output, but rather would actually operate the robot in such a way that the robot does something very much like perceiving, walking, moving about, hammering nails, eating, drinking -- anything you like. The robot would, for example, have a television camera attached to it that enabled it to `see,' it would have arms and legs that enabled it to `act,' and all of this would be controlled by its computer `brain.' Such a robot would, unlike Schank's computer, have genuine understanding and other mental states." (Searle 1980a, p. 420)

Whether this reply is contrary to the thesis (P2) that Searle instantiates a "system with the right structure and the right input output pattern" (Weiss 1990, p. 167), as Weiss would have it, or whether it is, rather, tacitly contrary to the functionalist thesis (understood as in Weiss's P1) that "any system with the right structure and the right input output pattern understands Chinese" (Weiss 1990, p. 167), as Searle maintains, depends on what is understood by "the right input output pattern." If the input and output are just supposed to be symbol strings (abstracting from the actual physical properties of the symbols and the actual external causes of the input and effects of the output strings, as in conceptualizing an apparatus as instantiating an abstract Turing machine) Searle is right: the robot reply "concedes that cognition is not solely a matter of formal symbol manipulation" and that pure functionalism, at least, is wrong. On the other hand, if "the right input-output pattern" is understood to include actually "perceiving, walking, moving about, hammering nails, eating, drinking" (Searle 1980a, p. 420), so that the system would actually "react appropriately if somebody yelled `Fire' in Chinese" (Weiss 1990, p. 168), Weiss is right about neither RIS nor SIR instantiating such a system; but Searle is still right about the hypothesis that "cognition is ... solely a matter of formal symbol manipulation" (i.e., pure functionalism) having been tacitly given up. Here (on this second construal) it is given up ab initio by taking a low church view of "the right input-output pattern."

Now I take it that the position Weiss ultimately wants to defend is a kind of chastened, or perhaps doubly chastened, functionalism that may be expressed thus: "the instantiation of a certain functional structure plus transducers and motor capacities can capture all [cognitive] mental properties" (Weiss 1990, p. 166: my emphasis). The second proviso of this formulation -- restricting the scope of functional explanations of mind to the "cognitive capacities" and excluding "phenomenal properties" from the scope of such explanations -- Weiss clearly accepts: thus he distinguishes his view from the view that functional structure plus sensorimotor capacities can capture "all mental properties" (Weiss 1990, p. 166: my emphasis), which he calls "full-blown functionalism in its strongest form" and styles to be "the position attacked" (Weiss 1990, p. 165) by Searle's experiment. Yet, Weiss's "full-blown functionalism," in allowing "functionalism" recourse to "transducers and motor capacities" in addition to "functional structure" for explaining cognition, seems weaker than the position -- that "cognition ... is solely a matter of symbol manipulation" -- which Searle's response to the systems reply specifically attacks.

Now it is possible to make the kind of systems-robot reply that Weiss attempts -- urging robotic considerations against the premise that Searle instantiates a system with the right structure and right input-output pattern -- I take it, if "transducers and motor capacities" are construed "narrowly" (cf. Fodor 1986b) or "solipsistically" (cf. Fodor 1980); that is to say, if the criticism is that the internalized system (RIS) lacks any modules of the sort that would suffice to "hook up" the Chinese symbols to sensory input and motor output if the system were properly ensconced in a robot body, whether or not the system is actually so situated. Perhaps Weiss intends something like this "narrow robot reply." But this "narrow" or "internalist" defense of pure functionalism is unstable. The instability that emerges here may be seen, I believe, to cut deeper and to motivate a true "broad" or "externalist" robot reply out of the inner dynamics and inherent difficulties of the pure functionalist position itself. The instability concerns, at first view, the specification of the sorts of processes that would suffice for sensorimotor capacities if the system were properly ensconced in a robot body: the difficulty is that it will be impossible to give any purely formal specification of what would suffice for such counterfactual sensorimotor capacities. Specifically, it seems impossible to abstract from considerations of time or speed of operation in this connection, as high church doctrine requires. It is nomologically impossible, I take it, for a system capable of performing one elementary operation per hour -- or like SIR, perhaps one elementary operation every several seconds -- to hear the clock strike twelve or watch the sunset, catch a thrown ball or "catch" more than a phoneme here and there of spoken conversation. Furthermore, operation speed is dependent on the physical properties of the instantiating system.
Different implementations of the same program in different media -- e.g., SAM implemented by Schank's main-frame computer vs Searle-in-the-room vs "an elaborate set of water pipes with valves connecting them" (Searle 1980a, p. 421) -- may differ radically in speed and accuracy of operation.
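The point about operation speed can be made with simple back-of-envelope arithmetic. All the rates below are illustrative assumptions of my own (the text fixes only SIR's rate, at roughly one elementary operation every few seconds):

```python
# Back-of-envelope sketch: why operation speed matters for real-time
# perception. All rates here are illustrative assumptions, not measurements.
phoneme_duration_s = 0.1      # a spoken phoneme lasts roughly 100 ms
ops_per_phoneme = 1_000       # suppose "catching" one phoneme takes
                              # 1000 elementary rule applications
sir_op_time_s = 3.0           # SIR: one hand-simulated operation every
                              # few seconds, as the text supposes
cpu_op_time_s = 1e-9          # an electronic computer: ~1 ns per operation

sir_lag_s = ops_per_phoneme * sir_op_time_s   # SIR's time per phoneme
cpu_lag_s = ops_per_phoneme * cpu_op_time_s   # the computer's time per phoneme

# On these figures SIR needs about 50 minutes per 100 ms phoneme: it could
# never keep pace with running speech, however faithfully it follows the
# very same program that the fast implementation follows.
print(f"SIR: {sir_lag_s:.0f} s per phoneme; CPU: {cpu_lag_s * 1e6:.0f} us")
```

On these (made-up) figures SIR falls behind real time by a factor of tens of thousands, which is why no purely formal specification, abstracting from the physics of the implementing medium, can settle whether a system has the counterfactual sensorimotor capacities in question.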

Hayes observes,

The Turing Test is not directly related to Turing computability. For example, if one program runs 1000 times as fast as another, it makes no difference whatever to their Turing Equivalence, but it could make all the difference to one passing and the other failing the Turing Test. (Hayes et al. 1992, p. 232)

This difficulty, which precludes a pure functionalist take on the robot reply or any purely formal account of perceptual and agential mental abilities, is also continuous with the difficulty (besetting functionalism itself) concerning what counts as a program implementation: how to specify this in a way that does not have the absurdly undiscriminating result that every sufficiently complex system or object is running every sufficiently simple program at every instant. Here, I submit, any would-be plausible take on functionalism must, at a minimum, distinguish between the way in which SAM is instantiated on a diskette or by a hardcopy of the program text (or perhaps by the molecules of Searle's wall, "suitably interpreted") and the way it is instantiated in being run on a computer. William Rapaport is on the right track, I think, when he suggests that "a boundary is crossed when we move from static to dynamic systems" (Rapaport 1993): "no one claims (or should claim) ... that a program qua algorithm understands" (Rapaport 1986, p. 272). More fully:

the question [of whether a computer can understand natural language, e.g.] is whether a computer that is running (or executing) a suitable program -- (a suitable program being executed or run) -- can understand natural language. A program actually being executed is sometimes said to be a "process" (cf. Tanenbaum 1976, p. 12). Thus, one must distinguish three things: (a) the computer (i.e., the hardware; in particular, the central processing unit), (b) the program (i.e., the software), and (c) the process (i.e., the hardware running the software). A program is like the script of a play; the computer is like the actors, sets, etc.; and a process is like an actual production of the play -- the play in the actual process of being performed. (Rapaport 1988, pp. 81-82)

Though we often "revert to easier ways of speaking (`computers understand', `the program understands')" (Rapaport 1988, p. 82), in truth the only plausible candidate for understanding (or any other mental property) here is "a computer that is running (or executing) a suitable program" (Rapaport 1988, p. 81: my emphases). A program run, like a performance of a play -- unlike programs and plays considered as abstract objects -- is something that occurs at a place and time, and takes some time. Once it is recognized that thinking (understanding no less than perceiving) is essentially something that happens (at a place, for a time) it seems not implausible to propose that whether a given execution of an abstract program constitutes thought and what thought it constitutes -- i.e., whether the "symbols" processed are about anything and what in particular they're about -- depends not only on the form of the program being executed but on where and when and how quickly it is executed (cf. Dennett 1987b).
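Rapaport's three-way distinction can be put in concrete terms with a toy sketch (mine, purely illustrative): the program is a static text, the computer is the interpreter that executes it, and the process is the execution itself, an event with a location and a duration.

```python
import time

# (b) the "program": a static text, like the script of a play.
# As an abstract object it does nothing; "no one claims (or should
# claim) ... that a program qua algorithm understands."
program_text = "result = sum(range(10))"

# (a) the "computer": here, the running Python interpreter, playing
# the role of the actors, sets, etc.

# (c) the "process": the program actually being executed -- an
# occurrence at a particular place and time, taking some
# (hardware-dependent) amount of time, like a performance of the play.
namespace = {}
start = time.perf_counter()
exec(program_text, namespace)
elapsed_s = time.perf_counter() - start

print(namespace["result"])   # produced by the run, not by the text
print(f"this 'performance' took {elapsed_s:.6f} s on this hardware")
```

The same program text run on different hardware (or hand-simulated with paper and pencil) yields processes of radically different durations, which is one way of seeing why the run, rather than the text, is the only plausible locus of mental properties.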

Call the view that Programming plus something else suffices for mind (for having some mental properties) "impure" or "low church" functionalism. The most prevalent species of low church functionalism holds that the something which, added to Programming, suffices to make it semantical and hence promotes it to being meaningful thought (understanding, calculating, detecting, etc.) is transduction and motor capacity (cf. Weiss 1990; Harnad 1991; Dretske 1985) -- a line of speculation inspired and strengthened by causal theories of reference for proper names and natural kind terms proposed by Hilary Putnam (1975) and Saul Kripke (1971). "According to this paradigm," as Michael Devitt expounds it, "the reference of a term is determined by an appropriate causal chain" (Devitt 1990, p. 79). A natural extension of this paradigm would seem to lead to the conclusion that, as Stevan Harnad puts it, "computation is not enough" to make symbols meaningful but "what ... might indeed be enough" would be something like "symbol systems causally connected to and grounded bottom-up in their robotic capacity to categorize and manipulate on the basis of their sensory projections, the objects, events and states of affairs that their symbols refer to" (Hayes et al. 1992, p. 221).

Now, Searle's response to the impure/causal theorist who thinks "if we put a computer in a robot" with sensory inputs and motor outputs it would "unlike Schank's computer [or SIR], have genuine understanding and other mental states" is "that the same experiment applies in the robot case" (Searle 1980a, p. 420) as follows:

Suppose that instead of the computer inside the robot, you put me inside the room and, as in the original Chinese case, you give me more Chinese symbols with more instructions in English for matching Chinese symbols to Chinese symbols and feeding back Chinese symbols to the outside. Suppose, unknown to me, some of the Chinese symbols come from a television camera attached to the robot and other Chinese symbols that I am giving out serve to make the motors inside the robot move the robot's legs or arms. It is important to emphasize that all I am doing is manipulating formal symbols: I know none of these other facts. I am receiving "information" from the robot's "perceptual" apparatus and I am giving out "instructions" to its motor apparatus without knowing either of these facts. I am the robot's homunculus, but unlike the traditional homunculus, I don't know what's going on. I don't understand anything except the rules for symbol manipulation. Now in this case I want to say that the robot has no intentional states at all; it is simply moving about as a result of its electrical wiring and its program. And furthermore, by instantiating the program I have no intentional states of the relevant type. All I do is follow formal instructions about manipulating formal symbols. (Searle 1980a, p. 420)

What's right about this rejoinder, I submit, is Searle's contention that "the same experiment applies": what's wrong, I submit, is the supposition -- which the impure/causal theorist is wont to grant! -- that it succeeded in the preceding case(s) of Searle, SIR and RIS. If the intuitions pumped in the nonrobotic case by Searle's experiment undermine the claims of Searle, SIR and RIS to understand (as so many low church causal theorists want to grant) then the revised experiment (since it pumps the same intuitions in the robotic case) undermines the causal theory too. If your causal theory implies that meaning requires richer causal (sensorimotor) associations between symbols and things than Searle, SIR/RIS or any computer of the present (non-robot-ensconced) ilk running SAM musters you should agree (as, e.g., Dretske 1985 does) with Searle's conclusion that SIR and RIS and Schank's machines running SAM only process meaningless symbols and don't really understand; but not for Searle's thought experimental reasons. Searle's "experiment" can't provide supplementary support (in addition to theoretic considerations deriving from causal theories of reference) for this contention because the intuition Searle's example pumps -- that SIR/RIS, not being consciously aware of the meanings of the symbols, thereby "understands nothing of Chinese" (Searle 1980a, p. 419) -- can be pumped just as readily in the case of SIR/RIS ensconced in a robot as in the original case. CRE claims to establish the same conclusion the causal theory commends, but the way CRE purports to establish this conclusion -- by pumping intuitions about the person in the room's lack of awareness of meaning of the symbols being processed, which intuitions are credited in turn as overriding proof of the symbols' lack of (intrinsic) meaning -- if accepted, undermines the "robot reply" of impure functionalism no less than it undermines the systems reply of pure functionalism.

The equal (in)vulnerability of robot and systems replies to the "intuitions" CRE pumps confirms the suspicion that CRE depends essentially on the assumptions of privileged access and ontological subjectivity feeding these "intuitions." The notion some causal theorists seem to have that their theory buttresses CRE's support of R by enabling CRE to dispense with such Cartesian assumptions is untenable. For the same reason CRE cannot provide independent support for the causal theory of reference -- the "intuitions" CRE pumps being contrary to that theory's robot reply no less than to pure functionalism's systems reply -- the causal theory can't prop up CRE either. If you "intuit" that Searle in the Chinese room does not mean anything by the symbols he processes it seems you "intuit" -- contrary to causal theory -- that Searle in the robot room does not mean anything either. If, on the other hand, you infer on the basis of causal theory in conjunction with the sensorimotor deficits of Searle in the Chinese room (like present day computers) that Searle in the Chinese room (like present day computers) doesn't attach any meaning to the strings processed, you cannot (on pain of circularity) take the experiment as support for the causal theory. To regard CRE as providing independent support for causal theories of reference you need to credit the conclusion that SIR/RIS does not attach meaning to, and hence does not understand, the Chinese writing because of the intuitions CRE pumps, not just because causal theory (together with the unconnectedness of the symbols SIR/RIS processes to sensory inputs and motor outputs) implies it; yet crediting these "intuitions" about consciousness of meaning CRE pumps undermines the robot reply and causal theory. Imagine the robot to stand in whatever sensorimotor relation you please to the things denoted by the strings of Chinese characters -- imagine with Weiss, e.g., that when the Chinese characters meaning "Fire!"
are input the robot hastens for the nearest exit or grabs a fire extinguisher -- and there still seems no reason to suppose Searle in the room in the robot's head (or the robot-room-Searle system) would be conscious of the meanings of the symbols.

The essential reliance of CRE on as-if dualistic assumptions of privileged access and ontological subjectivity (confirmed by the inability of causal theory to support the experiment) is a very strong result. It means, I contend, that the attempt to use CRE to establish even so much as the weak antifunctionalist conclusion (C1W) is doomed from the start by the consideration that these assumptions lead unavoidably to the independently absurd conclusion that "there's no way I can know" (Harnad, p. 45) "that anyone else but me has a mind" (Harnad, p. 45); a conclusion Harnad -- a staunch proponent of Searle's "experiment" -- accepts! To my mind, this "other minds reply" invalidates Searle's experiment as a counterexample either to Turing machine functionalism or to the claim of SAM "or any Turing machine simulation of human mental phenomena" (Searle 1980a, p. 417) to have the mental properties it acts as if it has. But first, the "brain simulator reply" and the "combination reply" take up other loose ends. All the replies, however, neglect perhaps the most substantial loose end remaining in connection with the robot reply: this concerns the curiously strawmanish character of Searle's restriction of the robot reply to individualistic causal theoretic considerations. This is strawmanish because the strongest causal theoretic considerations favoring claims of AI, it seems to me, are social, having to do with what Putnam terms the "division of linguistic labor" (Putnam 1975, p. 245). It is such social considerations as that accounting programs perform the intellectual labor previously performed by accountants, after all, that lead us to say, e.g., that computers running accounting programs calculate net profits, determine gross revenues, etc.
This consideration gives theoretic bite to the naive empirical considerations favoring claims of mental abilities on behalf of already existing computers, to be developed in the next chapter, and foreshadows the emergence of intrinsicality, rather than intentionality per se, as the crucial point at issue at the next chapter's conclusion.

6.3 The Brain Simulator Reply

The brain simulator reply, as Searle understands it, insists that following the brain's program must suffice for intentionality. He recounts it as follows:

"Suppose we design a program that doesn't represent information that we have about the world, such as the information in Schank's scripts, but simulates the actual sequence of neuron firings at the synapses of the brain of a native Chinese speaker when he understands stories in Chinese and gives answers to them. The machine takes in Chinese stories and questions about them as input, it simulates the formal structure of actual Chinese brains in processing these stories, and it gives out Chinese answers as outputs. We can even imagine that the machine operates, not with a single serial program, but with a whole set of programs operating in parallel, in the manner that actual human brains presumably operate when they process natural language. Now surely in such a case we would have to say that the machine understood the stories; and if we refuse to say that, wouldn't we also have to deny that native Chinese speakers understood the stories? At the level of the synapses, what could be different about the program of the computer and the program of the Chinese brain?" (Searle 1980a, p. 420)

Searle responds,

Before countering this reply I want to digress to note that it is an odd reply for any partisan of artificial intelligence (or functionalism, etc.) to make: I thought the whole idea of strong AI is that we don't need to know how the brain works to know how the mind works. The basic hypothesis, or so I had supposed, was that there is a level of mental operations consisting of computational processes over formal elements that constitute the essence of the mental and can be realized in all sorts of different brain processes, in the same way that any computer program can be realized in different computer hardwares: on the assumptions of strong AI, the mind is to the brain as the program is to the hardware, and thus we can understand the mind without doing any neurophysiology. (Searle 1980a, p. 421)

Searle continues,

However, even getting this close to the operation of the brain is still not sufficient to produce understanding. To see this, imagine that instead of a monolingual man in a room shuffling symbols we have the man operate an elaborate set of water pipes with valves connecting them. When the man receives the Chinese symbols, he looks up in the program, written in English, which valves he has to turn off and on. Each water connection corresponds to a synapse in the Chinese brain, and the whole system is rigged so that after doing all the right firings, that is after turning on all the right faucets, the Chinese answer pops out at the output end of the series of pipes. Now where is the understanding in this system? .... the man certainly doesn't understand Chinese, and neither do the water pipes, and if we are tempted to adopt what I think is the absurd view that somehow the conjunction of man and water pipes understands, remember that in principle the man can internalize the formal structure of the water pipes and do all the "neuron firings" in his imagination. The problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only the formal structure of the sequence of neuron firings at the synapses it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states. And that the formal properties are not sufficient for the causal properties is shown by the water pipe example: we can have all the formal properties carved off from the relevant neurobiological properties. (Searle 1980a, p. 421)

Despite its popularity, I believe this reply and Searle's rejoinder merit only the briefest comment.

The popularity of this reply, I believe, stems from the difficulty, for functionalism, of specifying what procedures for producing proper output for a given input comprise the right ones. Now -- since we think -- it seems obvious that whatever means we (humans) use are right ones: so, the proponent of the brain-simulator reply reasons, it seems, that if computers were programmed to produce the requisite performances or competencies in our way (by the procedures we use), surely this would suffice for their having the same mental properties manifested by such performances or competencies on our part. One reason this merits only short consideration is that it's not enough -- since specifying the right procedures is required (on pain of behaviorism) to provide grounds for denying that certain subjects' Turing test passing performances (those produced by the wrong methods) evidence the same mental properties in these subjects that comparable performances or competencies would evidence in us -- to urge that whatever procedures we use must be sufficient. What needs to be claimed in this connection is that brain-simulation (producing appropriate output by the same procedures we or our brains use) is necessary: and the objection to this (canvassed in Chapter One) is that -- though the chauvinism here is of manner not (as with the mind-brain identity theory) of matter -- this procedural chauvinism is still objectionably chauvinistic. Though brain simulation may be an appropriate research strategy for discovering a way to produce the behavioral competencies that evidence or (for the behaviorist) comprise intelligence, no theoretic importance can attach (on pain of chauvinism) to a way's being our way.
Furthermore, while it is apparent that the same programs will support the same input-output functions or timeless behavioral dispositions in computers as in us, it is by no means apparent that implementing the same procedures in materially different devices (with different "clock speeds") will underwrite either the same real time behavioral dispositions (for Turing test passing) in them as in us, as the behaviorist would stress, or even the same "dispositional capacity to cause conscious thoughts" (Searle 1992, p. 188), as Searle stresses. Searle is right in this: if you accept the original counterexample on the grounds he proposes -- on the basis of the intuition that Searle in the room would not be having the right conscious experiences of understanding or "giving off" the right qualia for understanding -- adding the proviso that the program he is implementing is a brain simulation program seems not to have the least tendency to shake that intuition.{12}

6.4 The Combination Reply

The "combination reply," as Searle describes it, is as follows:

"While each of the previous three replies might not be completely convincing by itself as a refutation of the Chinese room counterexample, if you take all three together they are collectively much more convincing and even decisive. Imagine a robot with a brain-shaped computer lodged in its cranial cavity, imagine the computer programmed with all the synapses of a human brain, imagine the whole behavior of the robot is indistinguishable from human behaviour, and now think of the whole thing as a unified system and not just as a computer with inputs and outputs. Surely in such a case we would have to ascribe intentionality to the system." (Searle 1980a, p. 421)

Searle offers this rejoinder:

I entirely agree that in such a case we should find it rational and indeed irresistible to accept the hypothesis that the robot had intentionality, as long as we knew nothing more about it. Indeed, besides appearance and behavior, the other elements of the combination are really irrelevant. If we could build a robot whose behavior was indistinguishable over a large range from human behavior, we would attribute intentionality to it pending some reason not to do so. We wouldn't need to know in advance that its computer brain was a formal analogue of the human brain. But I really don't see that this is any help to claims of strong AI; and here's why: According to strong AI, instantiating a formal program with the right input and output is a sufficient condition of, indeed is constitutive of, intentionality. As Newell (1979) puts it, the essence of the mental is the operation of a physical symbol system. But the attributions of intentionality that we make to the robot in this example have nothing to do with formal programs. They are simply based on the assumption that if the robot looks and behaves sufficiently like us, then we would suppose, until proven otherwise, that it must have mental states like ours that cause and are expressed by its behavior and must have an inner mechanism capable of producing such mental states. If we knew independently how to account for its behavior without such assumptions we would not attribute intentionality to it, especially if we knew it had a formal program. And this is precisely the point of my earlier reply to objection II [the robot reply]. (Searle 1980a, p. 421)

Though there is much here, in Searle's rejoinder, that is objectionable -- some, e.g., the suggestion that programming excludes intentionality or semantics, to which I have already objected -- I believe here, too, his rejoinder is, in its main outlines, correct. If you accept the original counterexample on the grounds Searle proposes -- on the basis of the intuition that Searle in the room would not be having the right conscious experiences of understanding or "giving off" the right qualia for understanding -- adding all the provisos envisaged by the various replies no more serves to counteract this intuition than adding each separately did. Indeed, since each reply taken separately seems not at all to counteract this initial "intuition" -- zero times three being still zero -- it is hard to see how taking them all together would.

Finally, it seems that all these replies -- systems, robot, and brain-simulator -- are "odd" for a proponent of functionalism (or indeed for any nondualist) to make because they all seem to accept Searle's invitation to "regress to the Cartesian vantage point" (Dennett 1987b, p. 336) by allowing Searle's introspective unawareness or conscious disavowal of understanding to override overwhelming "external" evidence of understanding in the initial scenario. This -- contrary to the spirit if not the letter of functionalism -- accepts the dualistic idea that what's required for thought, in addition to intelligent seeming (or Turing test passing) behavior, is (the right) conscious experiences.{13} Once the consciousness criterion is tacitly accepted in crediting Searle's verdict concerning his lack of understanding or the meaninglessness of the "symbol" processing he does in the initial scenario, one bears the burden of giving reasons for thinking that fulfillment of whatever further criterion one proposes guarantees or (at least) increases the likelihood of the right conscious experiences being forthcoming or the right qualia being given off. The burden is to provide some theoretical warrant for supposing that SIR (the room-Searle system) is "giving off" the right qualia or having the right conscious experiences (on the systems reply), or that Searle would be having the right conscious experiences if only he were following a more brainlike program than CSAM (on the brain simulator reply), or if only he were ensconced in a robot getting input from the robot's transducers and sending output to the robot's effectors (the robot reply), or that SIR (rather than Searle) would be having the right conscious experiences if only these further robotic or brain simulation conditions were also met (the combination reply). 
This burden is insupportable for reasons having little to do with the network theory or the brain simulation addendum or the causal addendum but for reasons having much to do, rather, with the well-known methodological vexedness and empirical vagary of appeals to consciousness.

6.5 The Other Minds Reply

The other minds reply and the related "many mansion reply" (to be considered next) are characterized by Searle as "two other responses to my example that come up frequently (and so are worth discussing) but really miss the point" (Searle 1980a, p. 421). The other minds reply, as Searle states it, is as follows:

"How do you know that other people understand Chinese or anything else? Only by their behavior. Now the computer can pass the behavioral tests as well as they can (in principle), so if you are going to attribute cognition to other people you must in principle also attribute it to computers." (Searle 1980a, p. 421)

Searle's rejoinder (in full!) is the following:

This objection really is only worth a short reply. The problem in this discussion is not about how I know that other people have cognitive states, but rather what it is that I am attributing to them when I attribute cognitive states to them. The thrust of the argument is that it couldn't be just computational processes and their output because the computational processes and their output can exist without the cognitive state. It is no answer to this argument to feign anesthesia. In "cognitive sciences" one presupposes the reality and knowability of the mental in the same way that in physical sciences one has to presuppose the reality and knowability of physical objects. (Searle 1980a, pp. 421-422)

This rejoinder misses the point and is really only worth a short reply.

The Chinese room example only works -- only counterinstances functionalism and (perhaps) claims of AI, as it has been the burden of my preceding discussion to show -- given the Cartesian assumptions (1) that meaning or intentionality either is a mode or modification of consciousness (as Descartes would have put it) or presupposes consciousness and (2) that consciousness (hence, given the preceding assumption, intentionality) can be directly discerned only from "the first person point of view" (Searle 1980b, p. 451) or the "point of view of the agent" (Searle 1980a, p. 420). On these assumptions, it seems the only way to tell if another agent or body really is conscious (and really has intentionality) and is not just acting as if conscious -- having only "as if forms of intentionality" (Searle 1989b, p. 197) licensing only "metaphorical ascriptions" (Searle 1984b, p. 5) -- is "by being the other body" (Harnad 1991, p. 42 abstract). Since Searle's putative counterexample to functionalism and (perhaps) AI depends on these two assumptions, and these assumptions give rise to the classical Cartesian other minds problem, Searle is obliged to address this difficulty if he expects us to credit his "experiment." The inadequacy of Searle's rejoinder to the "other minds reply" is especially damning since his "experiment" (CRE) is a variation on Turing's (1950) "imitation game" test, which in turn adapts Descartes's (1637) "language test": Descartes proposes this test expressly to account for the possibility of knowledge of other minds given these very assumptions of privileged access and ontological subjectivity on which the would-be force of CRE depends. Descartes is almost universally acknowledged to have other minds problems despite his allowance that thought is necessary for creatively productive behavior (e.g., conversation) and consequently that such behavior suffices to evidence thought. 
Since Searle denies the causal necessity of thought for creatively productive behavior, and hence (it seems) the empirical adequacy of such behavior to evidence thought (by positing "as if" thought or "as if" intentionality as an alternative explanation of such behavior), Searle would seem to have other minds problems in spades.

In truth, the other minds problem is both about how I know that other people have cognitive states and about what it is that I am attributing to them when I attribute cognitive states to them. The thrust of the objection is that it couldn't be conscious experiences I am attributing because, if it were, one could never know -- as the other minds objection assumes one does know -- that other people have minds. The trouble is that for any candidate thinker other than oneself,

all possible empirical evidence (and any theory of brain function that explains it) is just as compatible with the assumption that a candidate is merely behaving exactly as if it had a mind [= consciousness] (but doesn't) as with the assumption that a candidate really has a mind. So ... "having a mind" can't do any independent theoretical work for us [cf. Fodor (1980)] in the way that physics' unobservables do. Hence, consciousness can be affirmed or denied [in others] on the basis of precisely the same evidence [Harnad (1982, 1989b)]. (Harnad 1991, p. 46)

The trouble with CRE's reliance on a consciousness criterion of understanding is that it can't be substantiated just how far (beyond myself) this mysterious "inner light" of consciousness extends. This other minds reply does not, as Searle (1980a, p. 422) jeers, "feign anesthesia": it only requires critics of AI to apply consistently the criterion they propose to disqualify computers' claims to think. What the other minds reply says is that if consciousness were our basis for deciding whether any intelligent seeming thing was really a thinking subject, then one should have skeptical doubts about other minds. So, if we don't, and shouldn't, seriously entertain such doubts, this seems to show that we don't (or shouldn't) appeal to consciousness to decide what is and isn't thinking. It is no answer to this longstanding trouble with Cartesianism to feign amnesia.{14}

6.6 Many Mansions

The "many mansions reply," as Searle formulates it, is this:

"Your whole argument presupposes that AI is only about analogue and digital computers. But that just happens to be the present state of technology. Whatever these causal processes are that you say are essential for intentionality (assuming you are right), eventually we will be able to build devices that have those causal processes, and that will be artificial intelligence. So your arguments are in no way directed at the ability of artificial intelligence to produce and explain cognition." (Searle 1980a, p. 422)

Searle responds as follows:

I really have no objection to this reply save to say that it in effect trivializes the project of Strong AI by redefining it as whatever artificially produces and explains cognition. The interest of the original claim made on behalf of artificial intelligence is that it was a precise, well defined thesis: mental processes are computational processes over formally defined elements. I have been concerned to challenge that thesis. If the claim is redefined so that it is no longer that thesis, my objections no longer apply because there is no longer a testable hypothesis for them to apply to. (Searle 1980a, p. 422)

On the contrary, there is a precise, well defined thesis that Searle's Chinese room objections to AI have been widely held to apply to, and that Searle has advertised these objections as applying to: the thesis that extant "computers with the right inputs and outputs ... literally have thought processes" (Searle et al. 1984, p. 146), which I call "strong AI proper" (SAIP) and which Searle claims he has shown to be "demonstrably false" by his "parable about the Chinese room" (Searle et al. 1984, pp. 146-147). What's more, unless we accept Searle's invitation to "regress to the Cartesian vantage point" (Dennett 1987b, p. 336) by identifying "thought processes" with streams of conscious experiences we "each know exactly what it's like to have" (Harnad 1991, p. 44) but could only know "whether any body other than our own has" by "being the other body" (Harnad 1991, p. 43 abstract) -- which would render the hypothesis untestable -- there is no reason to think SAIP not to be "a testable hypothesis" (Searle 1980a, p. 422).

Searle has recently said, "The epistemological and methodological questions are relatively uninteresting because they always have the same answer: Use your ingenuity. Use any weapon at hand, and stick with any weapon that works" (1990g, p. 640). Given his proposed theoretical identification of mind or thought with consciousness, this, of course, is no answer to Searle's problem of providing some tolerably precise way of measuring (or at least detecting) the presence or absence of thought in others independently of its behavioral effects, since behavioral evidence can't decide (on Searle's views) between the hypotheses of genuine and mere "as if" intentionality or thought. As Harnad puts it, in this connection, "I either accept the dictates of my intuition (which is the equivalent of 'If it looks like a duck, walks like a duck, quacks like a duck ... it's a duck') or I admit that there's no way I can know"; accept that there's "no evidence for me that anyone else but me has a mind" (Harnad 1991, p. 44). To accept the dictates of "intuition" and "ingenuity" as the only "weapon at hand," the only "weapon that works" -- though it is not an answer to which Searle is entitled -- is, I submit, the correct answer. What ingenuity suggests to explain why my pocket calculator displays "4" (or to predict it will display "4") after having "2", "+", "2", "=" entered on its keypad is the hypothesis "It adds 2 + 2 and gets 4." This works. If we allow with Searle that whether machines have mental properties is "an empirical question" (Searle 1980a, p. 422); if we credit (as we should) working attributions of mental properties to machines -- "DOS recognizes the dir command," "Deep Thought considers more continuations of play at greater length than any human chess player," etc. -- above speculative "holiday" talk (Wittgenstein 1958, §38) about the nature of mind or the essence of thinking (such as Searle's); then, I submit, the empirical evidence suggests that pocket calculators, machines running DOS, and Deep Thought have the mental properties of calculating that two plus two equals four, recognizing the dir command, and considering alternative continuations of play, respectively.


  1. Searle's Chinese room "experiment" (CRE) and his axiomatized Chinese room argument (CRA) are advertised by Searle and generally taken together as a kind of all-purpose AI-be-gone remedy effective against Turing machine functionalism (FUN); against claims of AI proper (AIP), that computers can (or someday will) think; against claims of what I call "strong AI proper" (SAIP), that some computers already do think; and against behaviorism and the Turing Test being a valid (inductively adequate) test for thinking.
  2. According to Schank's description, SAM, besides being able to generate paraphrases of input stories and "summaries that rely on measures of the relevant importance of events," and to "answer questions about the input story," can "translate the stories ... into Chinese, Russian, Dutch, and Spanish" (Schank 1977, p. 198). The Turing test passing capacity in question here is what I term "partial Turing test passing" capacity (i.e. the capacity to "simulate" isolated mental abilities) not full Turing test passing capacity (to "simulate" something like the full range of characteristically human mental abilities). By "Program" (with a capital "P") I mean the subset of computer programs at issue: either all the Turing test passing programs (insofar as the Chinese room is supposed to be directed against the Turing test and SAIP) or the subset of these which, additionally, compute their input-output functions by the right method or in the right manner (insofar as the Chinese room is additionally targeting FUN). Other abbreviatory conventions previously introduced and continued in this chapter (besides FUN, AIP, SAIP, CRA, CRE: see note 1, above) include A1, A2, A3, and C1 for the "axioms" and conclusion of the "brutally simple" argument (BSA) at the crux of CRA. (Chapter 2, subsection 3.1, above, provides a detailed reconstruction of BSA.)
  3. For expository purposes ¬FUN is represented here as (∃x)(Px & ¬Mx), an equivalent of ¬(x)(Px → Mx), the syntactic negation of FUN. s = Searle.
  4. Also, Harnad 1989, 1991; Puccetti 1980.
  5. By "pure" or "high church" functionalism I mean the view that programming "by itself" (Searle 1980a, p. 417 abstract) suffices for mind (i.e., for various mental properties); by "impure" or "low church" functionalism I mean views that programming plus something else -- typically, it is urged, the right causal (sensorimotor) connections of processed symbols to their referents -- suffices.
  6. Though Searle's "never" suggests understanding "□" here as a temporal necessity operator signifying truth at all times, it is more in line with Searle's subsequent discussion, perhaps, to understand the necessity in question here as causal or nomological necessity. Note also that, strictly speaking, we should have to render C1S as □((∃x)Px → ¬(x)(Px → Mx)) and IRS as □((∃x)Px → ¬(x)(Px → Sx)) to avoid the problem of the simplified renderings being vacuously counterinstanced by times at which nothing instantiates a (partial Turing test passing) program: I ignore this complication.
  7. Read the "?"s, here, as "wildcards" for which "S" "W" or nothing at all may be substituted to give the three possible renderings of Searle's axioms and conclusion being considered.
  8. XR is the direct version of S2 of argument BS2A (Chapter 2, subsection 3.4).
  9. In accord with Searle's claim that the "same arguments would apply to ... any Turing machine simulation of any human mental phenomena" I have implicitly been treating CRA as, in effect, quantifying over Program types (all Ps) and mental properties (all Ms) all along: the "arguments" of the preceding chapter are really argument schemata.
  10. This description from the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders, III, p. 257 is cited by Cole 1991b, pp. 410-411.
  11. I owe this happy phrase to Gene Cline, who gets it from David Lewis: this is the title of §2.8 of Lewis's On the Plurality of Worlds (Lewis 1986).
  12. It has been noted (e.g., by Rapaport 1993, Block 1978, Weiss 1990) that functionalism seems ill suited to account for the "qualitative" side of the mental. There is no reason to suppose that qualia do vary as a function of programming and some reason -- e.g., inverted qualia "experiments" -- for supposing they don't.
  13. Dualism and functionalism, as Putnam (1967) points out, are compatible; so it is merely contrary to the would-be physicalistic spirit of functionalism -- not to its letter -- to allow (as would-be functionalist proponents of all these various replies seem to) that in addition to Turing test passing behavioral capacity both the right programming and the right conscious experiences are required for genuine mentation.
  14. Searle's more recent (1992) attempt to really address the other minds problem rather than merely stonewalling, as here (1980a), will be considered below, in Chapter 6.
