Searle's Chinese Box: The Chinese Room Argument and Artificial Intelligence by Larry Hauser

1. Introduction | 2. The Chinese Room Argument and Artificial Intelligence | 3. The Chinese Room Argument and Functionalism | 4. Conclusion | Endnotes
Chapter Two

Involved expressions with such inbuilt Chinese-box complexity impede validatory operations or render them unperformable. (Edmund Husserl, Logical Investigations vol.1, p.69).

1. Introduction

John Searle has frequently characterized his Chinese room argument (CRA) as being "directed at" the claim that "the appropriately programmed computer literally has cognitive states" (Searle 1980a, p.417) or as being intended to "demonstrate the falsity of" the claim that "appropriately programmed computers ... literally have thought processes" (Searle et al. 1984, p.146). He has always contended also that, as he puts it, the argument's "points are brutally simple" (Searle 1989a, p.703). These characterizations are misleading. CRA has no brutal consequences against claims of Artificial Intelligence on behalf of digital computers: put bluntly, the argument or "derivation from axioms" (Searle 1989a, p.701) Searle sketches in the abstract of his original article (1980a, p.417) and elaborates in later discussions (1984a, 1988, 1989a, 1990a) is invalid. Claims of artificial intelligence proper (AIP), that computers can think (i.e., that artificial intelligence is possible), even strong claims of artificial intelligence proper (SAIP) that computers already do think, are entirely consistent with the premises of the Chinese room argument. In fact, despite Searle's continuing advertisement of the argument as targeted against AI -- against the claim that "appropriately programmed computers ... literally have thought processes" (Searle et al. 1984, p.146), or against the claim that "a machine [can] have conscious thoughts in exactly the same sense that you and I have" (Searle 1990a, p.26) -- CRA establishes no such conclusions.

Searle himself sometimes even seems to admit his argument lacks the force against the thesis of AI he seemed originally to suggest (and many still seem to suppose) it has. He allows, "I have not tried to prove a computer cannot think" (Searle 1990a, p.27). Far from being a brutal refutation of AI, then, CRA does not properly refute AI at all. Against claims of artificial intelligence properly so called (as opposed to the claims of "strong AI" in Searle's proprietary sense of that phrase), CRA is a non sequitur; and insofar as Searle continues to misleadingly style it an argument against "strong AI," an ignoratio elenchi. Even if CRA did establish its explicit conclusion that "programs by themselves are not minds" (Searle 1989a, p.703), this alone would not entail that it is impossible for machines such as digital computers ever to think, nor even that they do not think already. And in fact -- as I will show (in Chapter 4) -- there are substantial reasons for thinking computers literally have mental properties, or literally think, already; reasons quite independent of the (roughly functionalist) hypothesis that programs by themselves causally suffice for mental activity, or that minds are really just programs, that Searle explicitly stalks.

Section 2 of this chapter shows that the first, would-be antifunctionalist, conclusion of Searle's argument -- "Programs are [not] sufficient for minds" (Searle 1989a, p.703) -- even were it proven by Searle's argument (or simply true), would not logically impugn the claims (that computers can think or do think) of AI proper: this Searle has rightly (if not always consistently) acknowledged. But the "brutally simple" crux of the argument (BSA) purporting to prove the aforementioned antifunctionalist conclusion (-FUN) also fails: the conclusion, "Programs are [not] sufficient for minds," is not a valid consequence of Searle's stated premises. This is shown in Section 3. Besides lacking the brutal consequences against AI Searle advertises, then, the unsupplemented CRA does not even make a valid (much less sound) case for the antifunctionalist conclusion it targets in the first instance. In Section 3 I also consider various supplementary assumptions which would, if admissible, validate Searle's argument against functionalism after all. One such assumption -- that all things are computers or instantiate programs -- proves most interesting. Implausible though it seems on its face, this assumption turns out to be neither unique to Searle nor original to him. In fact, it is an assumption that (most) functionalists share. Despite the unsupplemented CRA's failure to provide a sound (or even valid) argument against functionalism, then, and despite the implausibility, taken by itself, of this supplementary assumption which would suffice (given Searle's other premises) validly to entail the antifunctionalist conclusion that "programs, by themselves, aren't minds," Searle's CRA does make a compelling ad hominem case against any version of functionalism which accepts (as most do) the view that "just about any system has a level of description where you can describe it as a digital computer" or "as instantiating a formal program" (Searle et al. 1984, p.153).
Perhaps this explains the pique and consternation Searle's argument has inspired in functionalists and cognitivists, despite its logical and evidential failings.

2. The Chinese Room Argument and Artificial Intelligence

2.1 Some Preliminary Concepts and Distinctions

I reiterate some distinctions at the outset, beginning with Searle's initial characterization of the view he calls "strong AI," which he claims CRA is "directed at" (1980a, p. 417). In his original Chinese room article (Searle 1980a) Searle first characterizes "strong AI" as "the claim that the appropriately programmed computer literally has cognitive states and that the programs thereby explain human cognition." (Searle 1980a, p.417) The view Searle says his Chinese room argument is "directed at," then, on this construal, is (roughly) the conjoint assertion of two claims. First, what I call "AI proper":
(AIP) Computers (if appropriately programmed) can literally have mental states.
This is, roughly, the classic AI claim -- articulated by Turing (1950), for instance -- that machines can think. A stronger version of this claim -- which I term "strong AI proper" -- would maintain that not only can computers think (not only could an appropriately programmed computer have mental states), but some present day computers already do:
(SAIP) Some computers actually have mental states already.
Since there seems to be no logical impossibility about computers thinking, both AIP and SAIP would seem to be best understood as making empirical claims: roughly, AIP predicts that computers can (be made to) think (if they don't already); SAIP claims some existing computers (have been made to) think or have mental properties already. Searle himself acknowledges the empirical status of the claim that artificial intelligence is possible, along with the empirical status of particular claims on behalf of actual devices, when he asks, "is there some logically compelling reason why [computers] could not also give off [thought or] consciousness?" (Searle 1990a, p.31), and answers that there is not; though "scientifically," he claims, "the idea is out of the question" (Searle 1990a, p.31).{1}

What Searle calls "strong AI," initially, then, is something like the conjunction of the claim that computers can think (or will, if they don't already), with the further claim that programs explain how we (humans and certain higher animals) think. Call this second tenet of Searle's "strong AI," which proposes "to elucidate the workings of the mind by treating them as computations" (Johnson-Laird 1988, p.9), cognitivism. It might be stated as follows:

(COG) To explain human cognitive (and other mental) properties is to specify the programs from whose execution those properties derive.
So Searle's first formulation of the claim he calls "strong AI," which he says CRA is directed against, is roughly AIP & COG;{2} and if the conclusion of CRA is supposed to be just the denial of this, then the conclusion of Searle's argument should be equivalent to -AIP v -COG. What Searle actually says, however -- "my discussion here will be directed at the claims I have defined as those of strong AI" [my emphasis] -- suggests he aims to deny both conjuncts, i.e., to prove a conclusion equivalent to -AIP & -COG. Of course only this latter, stronger conclusion entails -AIP.
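The logical difference between these two readings can be set out schematically (a sketch in propositional notation, abbreviating the two tenets as AIP and COG):

```latex
% Weaker reading: denial of the conjunction
\neg(\mathrm{AIP} \land \mathrm{COG}) \;\equiv\; \neg\mathrm{AIP} \lor \neg\mathrm{COG}

% Stronger reading: denial of each conjunct
\neg\mathrm{AIP} \land \neg\mathrm{COG}

% Only the stronger reading yields the falsity of AI proper:
\neg\mathrm{AIP} \land \neg\mathrm{COG} \;\vdash\; \neg\mathrm{AIP},
\qquad
\neg\mathrm{AIP} \lor \neg\mathrm{COG} \;\nvdash\; \neg\mathrm{AIP}
```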

Unfortunately (for simplicity's sake), the following is what Searle eventually does conclude from CRA: "Programs by themselves are not minds" (Searle 1989a, p.703), or (alternately) "Programs are neither constitutive of nor sufficient for minds" (Searle 1990a, p.27). Perhaps we can combine these variant formulations and represent Searle's explicit conclusion, then, as follows:

(C1) Programs by themselves are not (sufficient for) minds.
Rather than being directly targeted against the empirical claims of AIP and SAIP, or the epistemological (or methodological) doctrines of cognitivism, Searle's explicit conclusion seems instead to be a denial of the metaphysical doctrine of Turing machine functionalism (just "functionalism," for short); the view which Searle takes to be informally summarized by the slogan, "mind is to brain as program is to hardware" (Searle 1980a, p.423). This slogan suggests the following first approximation of the view that CRA explicitly concludes against:
(F1) Minds are (caused by) programs: i.e., mental states are (caused by) program states, mental operations are (caused by) program operations, etc.
Since Searle's explicit conclusion is not the denial of something like F1 (All minds are programs) but rather the denial of its subalternate (Some programs are minds), call these programs that are claimed (by FUN) to be sufficient for minds "Programs" (with a capital "P"). It will now be more appropriate to state the (Turing machine) functionalist view Searle expressly denies as follows:
(FUN) The right Programs are (sufficient to cause) minds: i.e., some Program states are (sufficient to cause) mental states, some Program operations are (sufficient to cause) mental operations, etc.
It is not at all clear that either AIP or COG, or their conjunction, entails FUN; and it is not clear, consequently, that -FUN entails -AIP or -COG; so, it is not clear what force CRA is supposed to have against "strong AI" as Searle initially styles it.

2.11 Cognitivism and Functionalism

Obviously SAIP, the claim that (some) computers do think, entails AIP, the claim that (some) computers can think: what's actual is possible. And -SAIP obviously does not entail -AIP: we cannot conclude from the fact (if it is a fact) that no computer yet designed and programmed really has mental abilities and mental states, that no computer can or ever will be so designed and programmed as to have such abilities and states. This much is obvious. How SAIP and AIP are logically related to COG and FUN, and how COG and FUN are related, however, are vexing questions.

Does COG entail FUN? If so, were CRA a sound argument for -FUN, then CRA would entail -(AIP & COG); though of course, unless -COG entails -AIP, this will still not make CRA an argument against AI proper. I take it that COG does entail FUN, in conjunction with the following principle of scientific realism:

(SR) Whatever the best (most precise, extensive, interconnected, etc.) scientific theory ultimately postulates for explanatory purposes really exists and constitutes or causes the phenomena thus explained.
Unless we accept SR, COG will not entail that the programs that explain mental states and operations (assuming there to be such explanations) even exist, much less that they exist and are identical to what we naively refer to as mental states and processes. Though SR is not wholly uncontroversial or perhaps even clear (given its need to appeal, Peirce-like, to the ultimate explanatory posits of the "completed science" to have any chance of being acceptable), I don't propose to make heavy weather of SR, or belabor fictionalist worries, on this score. It is plausible enough to think that in order for a program to explain some particular mental state or performance, or type of mental state or performance, the program must exist. And surely this is what (most) cognitivists do believe: not just that, when we understand stories, e.g., we act as if we were following certain programs -- e.g., as if we apply scripts in the manner of Schank and Abelson's (1977) SAM -- but that our nervous systems actually implement such programs. Thus cognitivists typically hold that computational explanations of mind "treat computations ... as a literal description of some aspect of nature (in this case, mental activity)" (Pylyshyn 1984, p.xv); and most cognitivists, I take it, would think a program that our nervous systems could not possibly implement (which required more storage capacity, or faster "clock speeds," e.g., than our brains could possibly muster) could not possibly explain our thought processes, no matter how exactly (for every given input) our behavior matched the output one would expect from the program in question; no matter how precisely (as a consequence) the program predicted our behavioral response (output) for any given stimulus input. (Cf. Marr 1977).

So, I forego misgivings about SR and accept that for a program to explain some particular human performance, or type of human performance, the program must really exist; presumably in us, in our nervous systems: the states and operations of programs posited to explain our mental states (if they really do explain our mental states), moreover, must cause or constitute the mental states and operations they explain. Accepting SR, then, COG entails FUN: but as our formulation of FUN (following Searle) is equivocal (between "causes" and "constitutes"), which version of FUN does it entail? This depends on the sort of explanatory accounts cognitivism takes programs to provide: whether reductive accounts, subsuming the mental phenomena (or mental descriptions of the phenomena) they explain under biconditional laws of the form (x)(Px <-> Mx), or nonreductive causal accounts, subsuming mental phenomena under conditional laws of the form (x)(Px -> Mx).{3}

2.12 Multiple Realizability

This brings us to the so called "multiple realizability" principle. All who identify themselves as cognitivists or functionalists (to the best of my knowledge) accept some such principle as this:
(MR) Identical types of mental attributes can be realized or caused differently (by different programs or algorithms) in different types of systems (and perhaps even different individual systems of a single type).
The multiple realizability thus proclaimed is really twofold. First, there is the (possible) fitness of different hardwares for implementing the same algorithms or programs: thus we encounter such claims as that a computer "could be made out of cogs and levers like an old-fashioned calculator; it could be made out of a hydraulic system through which water flows; it could be made out of transistors through which an electrical current flows" (Johnson-Laird 1988, p.39); etc. Stones and toilet paper (Weizenbaum 1976, chap. 2), cats and cheese (Block 1990, p.260), trained pigeons (Pylyshyn 1984, p.57), and beer cans (Searle et al. 1984, p.159), are some of the more outré possibilities envisaged in the literature. Secondly, under the general heading of MR, there is also the (possible) fitness of various procedures for the same task; as there are various algorithms for sorting lists, for instance; as "it is possible to have different programs ... for a particular algorithm" (Pylyshyn 1984, p.89). While the literature tends to stress hardware MR, it will be this procedural MR that most concerns us; and it is procedural MR that suggests -- given its acceptance by cognitivists -- that cognitivism mustn't be viewed as proffering reductive explanations whose explanans cite biconditional laws predicating properties coextensive with the properties to be explained (predicated by the explanandum). What MR opposes then, along with the notion that there is some specific type of hardware that is necessary for implementing any specific program, is the idea that any specific program (P) is necessary for a given class of mental performances in general, for systems of all types. What COG in the light of MR then proposes is that there are programs or program states such that having the program P is causally sufficient for being in some mental state M; and perhaps we must add, for a specific (type of) system (S).
Note that MR, besides denying that programs sufficient for a specific mental property in one (type of) system are necessary for mind in all (types of) systems, also seems to allow that there could be programs or program states sufficient for M in some specific (type of) system S without being sufficient for M in general, in all (types of) systems; a possibility we'll presently come to consider more fully. Thus understood, MR would counsel or might suggest that it may be fruitless to seek causal laws of the form (x)(Px -> Mx): all it maintains there are, and perhaps all it enjoins us to seek, are laws of the form (x)((Sx & Px) -> Mx). Let us allow then that COG, given SR, entails FUN; or rather (given MR also) that it entails FUN as understood to allow that having one sort of program causally suffices for a given sort of mental capacity (say for belief) in people, perhaps another (and not this) for cows, probably another (and not this) for Martians (if there are any), etc. Letting our formulation of FUN explicitly reflect these allowances yields the following:
(FUN') For systems of any specific kind, some program(s) are (sufficient for) minds in systems of that specific kind: i.e., some program operations are (sufficient for) mental operation in systems of one kind; some program states are (sufficient for) mental states in systems of another kind, etc.
Allowing, then, that COG, given SR and MR, entails FUN', it follows that -FUN', given SR and MR, entails -COG (hence -(AIP & COG)): if Searle's CRA establishes -FUN', then, CRA refutes the cognitivist half of the conjoint claim which Searle (misleadingly) calls "strong AI." As for the claim of AIP that (appropriately programmed) computers can think: if CRA entails -FUN', and -FUN' (given SR and MR) entails -COG, then if AIP entails COG, CRA will have force (by modus tollens) against AIP after all.
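The envisaged chain of entailments can be set out schematically (a sketch; each arrow is hedged by the auxiliary principles SR and MR, as noted in the text):

```latex
% Given SR and MR, cognitivism entails (the relativized) functionalism:
\mathrm{COG} \;\Rightarrow\; \mathrm{FUN}'

% Contraposing: refuting FUN' refutes COG, hence the conjunction:
\neg\mathrm{FUN}' \;\Rightarrow\; \neg\mathrm{COG}
\;\Rightarrow\; \neg(\mathrm{AIP} \land \mathrm{COG})

% Only with the further (disputed) premise that AIP entails COG
% would modus tollens carry the refutation to AI proper:
\mathrm{AIP} \Rightarrow \mathrm{COG}
\quad\text{and}\quad
\neg\mathrm{COG}
\quad\therefore\quad
\neg\mathrm{AIP}
```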

2.13 Cognitivism and AI

Does AIP entail COG? Searle's initial statement of the position he calls "strong AI" suggests that proponents of this view propose just such an inference: he says, "strong AI" claims "the appropriately programmed computer literally has cognitive states and that the programs thereby [my emphasis] explain human cognition" (Searle 1980a, p.417). Let us understand AIP to be an empirical hypothesis (as previously suggested), or prediction to the effect that AI research confirms (or eventually will confirm, if pursued sufficiently) hypotheses of the general form (x)((Cx & Px) -> Mx): thus understood AIP is the claim that for some computer, or better, perhaps, all computers of some type (C), there is some program type (P) which, implemented on such a computer, would suffice to cause some type of mental state or performance (M).

Suppose there were a program of a given type (P) which, implemented in the hardware of a TI-1706 pocket calculator (C), would cause such a calculator (so programmed) to perform (mental) acts of calculation (M).{4} It does not plainly follow from this that the same program or algorithm, if it could be implemented in different hardware, would have the same effect; nor that any other systems having the same calculating abilities as the TI-1706 have them in virtue of implementing the same program (or algorithm). It would not follow, in other words, from the fact that P sufficed to cause the TI-1706 to calculate that P explained the calculative performance of humans or any other calculative system. Indeed, P might not even explain the calculative performances of other pieces of TI-1706 hardware (even the same piece on a different occasion!), since it might be the case that more than one program would suffice to make TI-1706 type devices calculate: the program that makes TI-1706a calculate might not even explain TI-1706b's calculation (much less all calculation, including human calculation); or if TI-1706a were reprogrammed between time t and t', the program that made it calculate at t would not even explain this very same (hardware) device's calculation at t'.

Clearly, given MR, AIP can't entail COG: the fact that a given P suffices for M in systems of type S does not (given MR) entail that every system that manifests M must likewise implement a program of type P -- or, for that matter, any program at all. It would not follow, in other words, from the fact that a particular type of computer running appropriate programs "literally has cognitive states" that "the [same] programs thereby explain human [my emphasis] cognition." It would not even follow from the fact that some programs suffice for some species of cognition in some (specific types of) computers, that there must be some (perhaps different) programs that suffice for and explain the same species of cognition in humans. At most, the fact that computers can be programmed to "literally have cognitive states" might provide confirmation or inductive support for the hypothesis that the mental states in question result from programming in us also.
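The quantificational point can be made explicit (a sketch, using the schematic letters of the text: C for the computer type, P for the program type, H for humans, M for the mental property):

```latex
% What AIP (or SAIP) asserts: some program type suffices for M
% in computers of a given type:
\forall x\,\big((Cx \land Px) \rightarrow Mx\big)

% This does not entail that P is necessary for M in general:
\forall x\,\big((Cx \land Px) \rightarrow Mx\big)
\;\nvDash\;
\forall x\,(Mx \rightarrow Px)

% In particular, adding that humans manifest M still does not entail
% that humans implement P (or any program at all):
\forall x\,\big((Cx \land Px) \rightarrow Mx\big),\;
\forall x\,(Hx \rightarrow Mx)
\;\nvDash\;
\forall x\,(Hx \rightarrow Px)
```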

Thus, Searle is right, it seems, to confess that CRA does not go any way toward demonstrating the falsity of AIP or proving that "a computer cannot think" (Searle 1990a, p.27); though whether the considerations CRA brings forward disconfirm AIP, or provide evidence that "a computer cannot think," remains an open question. If the argument does not even do this, then Searle's characterization of the Chinese room argument as "directed at" AI is completely unwarranted.

2.14 Interregnum

It is well to recall, at this point, the argumentative purpose that motivated the preceding discussion of multiple realizability: its purpose was to discover -- given the somewhat equivocal nature of Searle's "retraction" of his claim to have refuted AI -- whether CRA's explicit, antifunctionalist conclusion entails the falsity of AIP. The thought was that, if AIP entailed COG, and COG entailed FUN, then -FUN would entail -AIP by modus tollens; and in pursuit of that thought I was led to inquire whether there was some plausible way of construing the various claims in question so that the entailments at issue hold. There is not, because AIP does not entail COG. This was shown by assuming, in effect, SAIP (the existence of a TI-1706 which has some mental, calculative properties in virtue of its programming) and noting that this would still not entail that we calculate in virtue of the same programming, or for that matter in virtue of any programming -- unless we assumed that there were only one way to carry out a given mental performance, or (alternately) that conditions sufficient to cause one type of system to manifest some mental property are thereby shown to be necessary conditions for any system's having that property. But this last principle -- what suffices for one is necessary for all -- is surely unacceptable: this unacceptability is what the procedural part of the MR principle expresses.

Now in the course of discussing MR I suggested an even stronger rendering of the principle: to allow, not only that different causes might suffice for the same mental effect in different types of systems, but that the same causes that sufficed for one type of system might not suffice for another -- as equiangularity suffices for equilaterality in triangles, but not in rectangles (cf. Sharvy 1985). For my present expository and analytic purposes, it is best that we set this aside; for it is contrary to the whole thrust of Searle's understanding of the view he opposes, which he characterizes as the view that programs by themselves suffice for mind. "Strong AI," as Searle styles it, seems committed to the proposition that if a program P suffices for having mental property M in one type of system, the same program must suffice in every other type of system. Perhaps FUN' is the doctrine that functionalists should hold; yet something like the original, simpler thesis, FUN, is what proponents of functionalism, by and large, do seem to hold; and it is something like FUN, and not FUN', which is Searle's actual target. I propose to table FUN', then, and proceed to consider whether CRA entails -FUN, as Searle expressly alleges.

2.15 The Chinese Room Argument and Experiment

I reiterate a distinction between Searle's Chinese room argument (CRA), and what I shall term the Chinese room example or experiment (CRE). By CRE I mean the thought experiment whereby Searle invites us to imagine that we are paper shuffling instantiations of a Chinese natural language understanding (NLU) program in order to support (Searle suggests, and according to the usual construal) the crucial premise -- "Syntax by itself is neither sufficient for nor constitutive of semantics" (Searle 1989a, p.703) -- of his argument.{5} By CRA I mean the encompassing argument, which invokes this claim (that syntax doesn't constitute or suffice for semantics) as a premise, in order to draw (first off) the antifunctionalist conclusion C1 that programs by themselves are not (sufficient for) minds. In the next section my aim is expository, to set out the argument Searle elaborates. I will then inquire whether CRA might plausibly be supplemented so as to have force against the claims of AI proper (against AIP and SAIP). If it lacks the sort of deductive force that Searle seems to have initially thought (and continues, misleadingly, and contrary to his explicit disavowal, to suggest) it has, perhaps CRA can be augmented and extended so as to have at least some evidential or inductive force against AIP and SAIP; even such force, perhaps, as to render these doctrines "scientifically ... out of the question" (Searle 1990a, p.31).

2.2 The Chinese Room Argument

Searle gives perhaps his clearest and fullest statement of the Chinese room argument in his recent Scientific American article "Is the Brain's Mind a Computer Program?" (1990a); though much the same argument can be culled from various other presentations. The version of the argument Searle presents in Minds, Brains and Science (1984a), for instance, differs hardly at all from his "Is the Brain's Mind a Computer Program?" (1990a) presentation. The derivation sketched in the abstract of the original Chinese room article (Searle 1980a) is perhaps best regarded as an enthymematic first approximation of these later, explicit formulations. According to Searle's recent (1990a) presentation, CRA first states the following three premises or "axioms" (as Searle terms them):
(A1) Programs are formal (syntactic).
(A2) Minds have contents (semantics).
(A3) Syntax by itself is neither constitutive of nor sufficient for semantics.
These three axioms, according to Searle, entail CRA's first conclusion, the antifunctionalist conclusion we have been canvassing:
(C1) Programs by themselves are not (sufficient for) minds.
Now to his first three premises or "axioms" and this conclusion, Searle adds a fourth axiom:
(A4) Brains cause minds.
And from this fourth axiom -- whether alone (Searle 1980a, p.417 abstract), or "in conjunction with my earlier derivation" (Searle 1990a, p.29) is unclear -- Searle claims to "immediately derive, trivially" (Searle 1990a, p.29),
(C2) Any other system capable of causing minds would have to have causal powers (at least) equivalent to those of brains.
Finally, given this result (C2), Searle claims to derive two further conclusions:
(C3) Any artifact that produced mental phenomena, any artificial brain would have to be able to duplicate the specific causal powers of brains, and it could not do that just by running a formal program.

(C4) The way human brains actually produce mental phenomena cannot be solely by virtue of running a computer program.

My appraisal of this argument will focus on the derivation of C1 and C2, and on the premises from which Searle claims, and possible auxiliary assumptions from which he might charitably be construed, to derive them. Though Searle does not outline how the derivation of C3 and C4 is supposed to go, it seems they depend on the derivation of C1 and C2 as intermediate results; so, my explicit criticism of the soundness of Searle's cases for C1 and C2 will be implied criticism of his cases for C3 and C4 also. (Though, in truth, C3 and C4 will not much concern me).

With respect to C2, I contend that the only construal of this claim that would allow it to be an immediate trivial consequence of A4 makes C2 a triviality in its own right: on this construal C2 becomes tantamount to the truism that whatever suffices to cause some effect has the same causal powers as whatever else suffices to cause that effect, in the sense that each has the causal power to bring about this very effect. Thus understood, C2 is too trite to be of interest, and also will be too weak for the use -- to argue against SAIP -- to which Searle, I take it, wants it put. On the other hand, construed in such a way as to suit it to figure in the sort of implied argument against (the scientific thinkability of) AIP or SAIP Searle may well envisage, C2 is not a consequence (obvious or otherwise) of A4 (whether taken alone, or "in conjunction with [Searle's] earlier derivation [of C1]").

2.21 Causal Powers (at least) Equivalent to Brains

Perhaps in styling CRA as aimed at the doctrines I have labeled AIP and SAIP, Searle is implicitly appealing to something like the following argument, as some commentators have suggested. (Call this argument SCRA: the supplemental Chinese room argument.)
(C2) Any other system capable of causing minds would have causal powers (at least) equivalent to a brain's.
(S) No (presently existing) computer has causal powers (at least) equivalent to a brain's.
(-SAIP) No (presently existing) computers are capable of causing minds.
In a similar vein Lawrence Carleton suggests that in styling CRA as "directed at" something like SAIP, Searle must be relying on argumentation along these lines:
"Certain brain process equivalents produce intentionality" and "X [a digital computer] does not have these equivalents," therefore "X does not have intentionality." (Carleton 1984, p. 221)
Perhaps Searle has some such argument as this in mind when he says that anything like SAIP is "scientifically ... out of the question" (1990a, p.31).

Now C2, in a sense, is an immediate and trivial consequence of A4, "Brains cause minds"; while S just seems like an obvious empirical truth about the causal powers of (existing) computers. But the appearance of soundness here is deceiving, and derives from equivocal use of the phrase "causal powers (at least) equivalent to brains." In the sense of that phrase in which the claim "Any system causing mind must have causal powers equivalent, in this sense, to a brain" follows immediately and trivially from the statement "Brains cause minds," it is not at all obvious -- in fact, it merely begs the question against SAIP to assume -- that presently existing computers don't have causal powers equivalent to brains: the sense that "equivalent causal powers" must bear for C2 to be a trivial consequence of A4 is just that of equivalence with respect to causing minds or mental states (which is the very thing at issue). On the other hand, in the sense in which it is empirically obvious that presently existing computers fall short of brains in their causal powers, "causal powers (at least) equivalent to brains" must mean something like equivalent in all respects; but understood in this sense, the inference from A4 to C2 is anything but obvious and trivial. In fact it seems that nothing but another brain -- perhaps even nothing but the very same brain -- could have causal powers exactly equivalent to a brain's. If the inference from A4 to C2 were valid given this understanding of "causal powers (at least) equivalent," I might just as well argue, "Since hydrogen bombs cause death, to cause death requires causal powers (at least) equal to an H-bomb's. Cyanide lacks causal powers equal to an H-bomb's. So, cyanide can't cause death."

So, if "(at least) equal causal powers" just means "equally capable of causing the specified effect," C2 follows trivially enough from A4; but then the truth of S is no more obvious than the falsity of SAIP. On this construal of "equal causal powers," S is tantamount to -SAIP, and premising that computers lack equal causal powers in this sense just begs the disputed question. On the other hand, if "equal causal powers" means "equivalent in all respects," then the inference from A4 to C2 is far from trivial. Rather, it seems guilty of something like the formal fallacy of denying the antecedent (which is explicit in Carleton's formulation, as he notes) in its seeming assumption that if A's (e.g., brains) cause B's (e.g., mental states), then only A's can cause B's. Thus construed, the argument seems little better than this: All functioning brains cause mental states, and no digital computers are functioning brains; therefore, no digital computers cause mental states.

Apparently, then, the argument requires an understanding of "equal causal powers" which is stronger than merely "equal with respect to being able to cause this effect" (the question begging alternative), yet weaker than "equivalent in all respects" (the invalidating alternative); something like "equivalent in the causally relevant respects," which would require something like a specification of which causal powers (e.g., of brains) are necessary for producing a given mental effect. Presumably such an (intermediately strong) understanding of "equal causal powers" would also speak to the difficulty that there are not only effects H-bombs have (e.g., fallout) that cyanide lacks, there are also causal powers cyanide has that H-bombs lack (e.g., you can swallow cyanide). The problem is: since cyanide has powers that H-bombs lack, and H-bombs powers that cyanide lacks, which is causally more powerful? What could Searle mean by "at least"?

2.22 Brainpower: A Captivating Picture

Searle frequently seems to succumb to the thought that just as a certain amount of horsepower is required to lift a given weight a given distance, a certain amount of mindpower or brainpower is required to produce specific mental effects. Thus he explains, concerning C2's claim that "Anything else that caused minds would have to have causal powers at least equivalent to those of the brain,"
It is a bit like saying that if my petrol engine drives my car at seventy-five miles an hour, then any diesel engine that was capable of doing that would have to have a power output at least equivalent to that of my petrol engine. (Searle 1984a, p. 40-41)
Presumably more brainpower (something like human brainpower) is needed to produce mental effects of natural language understanding or mathematical calculation that are only met (or were only met, previous to the advent of computers) in humans; and less brainpower, presumably, is needed to produce mental states, such as sensations of smell, that even clams and starfish seem to have. Just as a 100 horsepower diesel engine and a 100 horsepower electric motor are causally equivalent with respect to their capacity to lift a unit weight over a unit distance; so, perhaps, chimp brains and porpoise brains are causally equivalent with respect to their capacity to X (where X is some measure of this hypothesized general mental force), and consequently with respect to their mental performance or endowments. So if the brainpower of an average adult human is 1, and chimps have .3 and dogs .1; and if tool use requires brainpower of .25, say, and language understanding requires brainpower of .7; then chimps (but not dogs) use tools; and neither chimps nor dogs will have any language understanding. The underlying thought here is similar to Descartes' portrayal of reason (like Newtonian force) as "a universal instrument" and even suggestive of Descartes' allusions to this universal force (which he, like Searle, identifies with consciousness) as a kind of "light of nature": there is a mental force of consciousness (as there are physical forces of electricity and light), and some brains (e.g., Aristotle's and Einstein's) have, so to speak, megawatt capacities, as compared to the microwatt capacities of a frog's brain. That is the picture. Such a picture is even enshrined in figures of speech, as when we describe a person we think lacking in intelligence as "a dim bulb." But how are we to understand it?
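
The horsepower-style picture amounts to a simple threshold model. A sketch in Python, using the text's illustrative figures (1 for humans, .3 for chimps, .1 for dogs; .25 for tool use, .7 for language understanding), which are stipulations of the picture, not measurements:

```python
# The "brainpower" picture made explicit. All numbers are the text's
# illustrative stipulations, not empirical values.
brainpower = {"human": 1.0, "chimp": 0.3, "dog": 0.1}
threshold = {"tool use": 0.25, "language understanding": 0.7}

for creature, bp in brainpower.items():
    # a creature has every ability whose brainpower threshold it meets
    abilities = [task for task, need in threshold.items() if bp >= need]
    print(creature, abilities)
```

On these stipulations, humans clear both thresholds, chimps clear only tool use, and dogs clear neither, just as the text's example requires.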

One alternative is to read "is conscious" as tantamount to "thinks" in the weak, generic sense of "having (some) mental properties." Descartes himself appears to suggest something like this weak or generic sense of "thought" in the famous passage where he characterizes a "thinking thing" as "a thing which doubts, understands, affirms, denies, wills, refuses, which also imagines and feels" (Descartes, 1642, p.19).{6} But the trouble with this alternative is that, so interpreted, the proposal that X = consciousness just seems to follow the question begging path of "equal causal powers" weakly construed. The question of whether something has the requisite consciousness to allow it to display some particular mental ability (e.g., the question of whether the robot with its computerized visual processing really sees) is not independent of (or answerable antecedently to answering) the question of whether it actually displays those abilities (or merely acts as if it did).

A more sophisticated variant of this approach -- going Aristotelian -- insists there's a hierarchy of mental powers in which the higher powers (e.g., mathematical calculation) presuppose lower ones (e.g., sense perception), such that nothing can calculate that can't also perceive. Searle seems to avail himself of some such Aristotelian picture when he says,

From an evolutionary point of view, just as there is an order of priority in the development of other biological processes, so there is an order and priority in the development of Intentional phenomena. In this development language and meaning, at least in the sense in which humans have language and meaning, comes very late. Many species other than humans have sensory perception and intentional action, and several species, certainly the primates, have beliefs, desires and intentions, but very few species, perhaps only humans, have the peculiar but also biologically based form of intentionality we associate with language and meaning. (Searle 1983, p.8)
This plainly represents an advance over the simplistic approach, which made a system's actual manifestation of a specific mental property the measure of whether it had sufficient brainpower to have that property: now we can take the system's manifestation of (or failure to manifest) "lower" mental faculties as evidence of its lacking brainpower sufficient for it to be truly possessed of whatever "higher" capacities (e.g., capacities for mathematical calculation) it might give the appearance of having. But observe that this Neo-Aristotelian line, on closer consideration, seems to partake of much the same circularity as the simplistic view that preceded it -- the circle is just larger. It seems as plausible to regard pocket calculators' seeming abilities to calculate as evidencing that calculation doesn't presuppose such "lower" mental capacities as sense perception as to regard calculators' seeming lack of such "lower" faculties as signaling that (appearances to the contrary) they don't really calculate. (This also applies to the preceding point: if Searle's intuition that calculators and their ilk haven't a shred of conscious awareness is correct, this finding will still be equivocal in its implications between "Calculators don't calculate" on the one hand and "Calculation doesn't require consciousness" on the other.)

Alternately -- instead of going Aristotelian -- we might try being thoroughly Cartesian and identifying different levels of brainpower with having varying degrees of private, introspectable, conscious experience. This, of course, runs afoul of familiar philosophical difficulties. How do I know whether anything else is conscious, or (if they are) what their private conscious experiences are like? (Other minds problems.) There are doubts (of Watsonian vintage) about the public scientific utility of private introspected "facts"; doubts (of a Wittgensteinian stripe) about the explanatory relevance of conscious experience even where it might seem germane in one's own case, e.g., for explaining what it is to know what "red" or "pain" means; problems about the possible existence of unconscious mental states such as most (nonbehaviorist) research programs in modern psychology (from Mach and Freud, to Marr and Chomsky) have posited; etc. Certainly, much of what Searle has said in these connections over the years -- from his insistence, early on, "on the first person point of view" (Searle 1980c, p.421) to his recent talk of "ontological subjectivity" (Searle 1989b, p.194) -- suggests that he is all too willing to appeal to consciousness in the Cartesian manner; yet he has, all along, been all too unwilling to face up to the sorts of difficulties with such appeals, just noted. He dismisses other minds objections as attempts to "feign anesthesia" (Searle 1980a, p.421) and dismisses Yorick Wilks' attempt to remind him of Wittgenstein's discussion "to the effect that understanding and similar states cannot consist in a feeling or experience" (Wilks 1982, p.344) as "just irrelevant to my views" (Searle 1982, p.348) -- though this discussion seems precisely relevant to his views -- without explanation.

Well, if there is "no problem" in Searle's discussion "about how I know that other people have cognitive states" (Searle 1980a, p.421); and if we can understand Searle's insistence that, "I, of course, do not claim that understanding [sic] is the name of a feeling or experience" (Searle 1982, p. 348) to be denying that understanding is a feeling or experience that the word "understanding" names; then perhaps -- despite such appearances to the contrary as mentioned above -- we should credit Searle's explicit disavowal of "any of the Cartesian paraphernalia" (Searle 1987, p.146) and not view him as positing degrees of unitary mental force or "brainpower" identifiable with introspectable degrees of consciousness.{7}

What seems needed is to cash out such scientifically questionable appeals to the private data of conscious experience in terms of some publicly detectable correlates or causes of those experiences. Since it is largely through his disavowal of dualism in favor of a "monist interactionist" (Searle 1980b, p.454) or "biological naturalist" (Searle 1983; 1992) view that "mental phenomena ... are both caused by the operations of the brain and realized in the structure of the brain" (Searle 1983, p. ix) by which Searle seeks to distance himself from full-blooded Cartesianism, let us consider whether we mightn't (along these lines) indirectly measure "brainpower" by referring directly to such operations and structures in brains as produce it. Though we may be barred from directly measuring the consciousness levels produced by brains other than our own -- indeed, the notion of measurable degrees of consciousness is obscure enough even in its first person applications -- we might at least infer the levels of consciousness other brains can produce from some measurable (or at least publicly detectable) structural properties or causal features of brains. (As we can calculate the horsepower capacities of internal combustion engines from their displacements and compression ratios.)

2.23 Engines of Intentionality

One trouble with the proposal just bruited is we don't really know what the relevant structural properties of brains are: if mental output is to brains as horsepower output to engines, we don't know what features of brains are crucial to determining the mental capacities of brains as displacement size and compression ratio are crucial to determining the power capacities of internal combustion engines. Besides certain vague and scientifically eccentric suggestions that the crucial elements and structural features of brains are chemical rather than computational, such that "intentionality is ... causally dependent on the specific biochemistry of its origins" (Searle 1980a, p. 424), Searle has little to say concerning what the crucial mental power producing features of brains are; though he's adamant enough about what they aren't. Whatever the relevant, brainpower determining features of brains turn out to be, they will not (according to Searle) be computational features. It seems, then, that the mental power determining features of brains comparable to the physical power producing features of displacement and compression ratio in internal combustion engines, could not (according to Searle) be computational properties like operation speed and memory capacity but must be something else. Unfortunately, besides the sort of vague suggestions about the brain's chemical properties just cited -- speculation with only the slightest empirical basis (mainly in the fact that our brains are prodigious chemical "

Of course even the discovery that the crucial causal features of brains were chemical would not be wholly decisive against AI and SAIP. Suppose future psychologists come to see the wisdom of Searle's conjecture -- it's the chemistry that counts -- begin to pursue a Searlean research program, and come to establish that the crucial mind producing features of brains really are chemical. Suppose they additionally discover that the electrical switching capacities of brains are as irrelevant to their mental power outputs as the electromagnetic properties (most) internal combustion engines have (as accidental side effects of being made of steel) are irrelevant to their horsepower outputs. It would still be entirely consistent with these findings about our brains that computers could produce mindpower by different means, despite lacking whatever chemical production capabilities turned out to be crucial determinants of the mindpower of brains -- much as electric motors, despite having neither displacements nor compression ratios, produce horsepower. It might even turn out that the very electric switching capacities or computational properties (e.g., storage capacity and operation speed) which, according to Searle's conjecture, are irrelevant to the mental power outputs of brains are nonetheless the crucial determinants of the mental capacities of computers; much as the very electromagnetic properties which are irrelevant to the horsepower capacities of internal combustion engines are crucial determinants of the horsepower capacities of electric motors. But now it is an easy extrapolation from this to see that, whatever the crucial determinants of the mental powers of brains turn out to be -- contrary to the line of argument we are canvassing as a possible application of CRA against SAIP -- artificial intelligence cannot be shown to be "scientifically ... out of the question" (Searle 1990a) by showing that computers lack these crucial determinants (whatever they turn out to be).

Suppose we understood the physiological basis of the brain's production of mental powers as well as we understand how internal combustion engines produce horsepower. Suppose also that we knew for certain that no computer had any of the features which are the crucial determinants of the mental powers of brains. We still cannot conclude, "Therefore, no computer has mental powers" without arguing fallaciously (in effect, denying the antecedent). We would be arguing, in effect: "Of brains, only those with chemical power Z produce any mindpower"; "No computer has chemical power Z"; therefore, "No computer produces any mindpower." This is no more valid than to argue: "Of internal combustion engines, only those having displacements of (at least) one cubic centimeter produce any horsepower"; "No electric motor has a displacement of (even) one cubic centimeter"; "So no electric motor produces any horsepower." You might as well argue: "Of triangles, only those which are equilateral are equiangular"; "Some rectangles are not equilateral"; therefore, "Some rectangles are not equiangular" (Sharvy 1985, p.126).
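
The engine version of the fallacy can likewise be exhibited by a finite countermodel; a Python sketch (the objects and their property sets are illustrative):

```python
# Countermodel for "Of ICEs, only those with displacements produce
# horsepower; no electric motor has a displacement; so no electric
# motor produces horsepower." The premises quantify over internal
# combustion engines only, so they are silent about motors.
things = {
    "big_ice":  {"ice", "displacement", "horsepower"},
    "electric": {"horsepower"},  # no displacement, yet produces power
}

# P1: of internal combustion engines, only those with a displacement
#     produce any horsepower
P1 = all("displacement" in props
         for props in things.values()
         if "ice" in props and "horsepower" in props)
P2 = "displacement" not in things["electric"]  # motors lack displacements
C  = "horsepower" not in things["electric"]    # alleged conclusion
print(P1, P2, C)  # True True False: premises true, conclusion false
```

The model makes both premises true while the electric motor produces horsepower all the same, so the conclusion fails.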

2.24 Conclusions Concerning Brainpower

By allowing Searle the charity of the dubious (though familiar) idea of a unitary mental force (brainpower) comparable to the unitary conception of physical force (horsepower) in Newtonian physics, we explored the possibility of staking out a sense of "equal causal powers" intermediate between "equal in respect of causing the particular effect at issue" (which made SCRA beg the question) and "equal in all respects" (which made the argument invalid). It turns out this is a charity that the proponent of SAIP can well afford: the intermediate sense of "equal causal powers" it allows, the sense of "equal brainpower" (whatever this turns out to be), leads the argument into invalidity as surely as the strong rendering of the phrase "equal causal powers" as "equal in all respects." It seems that anything stronger than the weak (question begging) interpretation of C2 leads down this fallacious path: the question begging interpretation of C2 (as a truism, trivially true) is the only interpretation of C2 validly (much less obviously) entailed by A4. The specific (and considerable) demerits of taking "equal" to mean "equal in brainpower" do not even have to be considered. Any attempt to spell out specific respects in which anything capable of producing mental effects must be "(at least) equal to brains" will transgress the bounds of valid entailment -- committing something like the fallacy of denying the antecedent -- just as taking "equal" to mean "in all respects" does. Inferring C2 from A4 is not going to be valid if "equal" in C2 is understood to include any property besides the property of causing the specific mental effect at issue.

I conclude that the only way of interpreting C2 as a valid (much less immediate and trivial) consequence of A4 yields no interesting (non-question-begging) argument in conjunction with facts about the comparative physical endowments of humans and computers against SAIP: it seems Searle's Chinese Room Argument is very far from yielding any argument with any logical force against SAIP (or any inductive or evidential force against AIP, if -SAIP is supposed to disconfirm AIP) at all.

2.25 A "Polemical" Tack

Perhaps, while neither CRA nor the extended version of it (SCRA) just considered yield any logically compelling argument against the claims of SAIP and AIP, they nonetheless make an effective ad hominem case. CRA may make a polemically compelling argument against AI for someone whose main reason, or only reason, for holding such views as AIP and SAIP is their acceptance of functionalism. If I believe AIP because I believe FUN, and FUN entails AIP; and if Searle's CRA really does refute FUN as he alleges; then CRA is a polemically effective argument against my belief in AIP, despite its not having -AIP as a valid logical consequence. It undermines my basis for holding AIP in the first place. And if functionalism is the only basis anyone has for believing in AI, or if it's the only basis or main basis for belief in AI on the part of researchers in Cognitive Science, then perhaps, if CRA is a sound argument for -FUN, it will put AI "scientifically ... out of the question" without making it logically unthinkable. If it's true that it's mainly because they accept functionalism that cognitive scientists find AIP thinkable, and presuming CRA refutes FUN, then CRA makes AIP unthinkable for such cognitive scientists, and perhaps (if these are all, or all the most influential, cognitive scientists) scientifically disconfirms it.

On behalf of this polemical tack it can be observed that CRA must speak to cognitive scientists' reasons for accepting views such as AIP or SAIP (where they do), or else it couldn't have "struck a nerve" (Searle 1982, p. 345) and "achieved the status of a minor classic" (Fisher 1988, p. 279) in the field of Cognitive Science, as it has. Against this polemical tack, however, there are two weighty questions to be asked. Are there substantial independent reasons for accepting AI, which do not presume the truth of functionalism? In Chapter 4, below, I argue that there are. Does CRA validly deduce -FUN from Searle's premises: does the Chinese room argument refute functionalism? This is the question I now consider. Chapter 3 will consider the true relationship between argument (CRA) and experiment (CRE) and the force of CRE itself against the various claims (FUN, AIP, and SAIP) centrally at issue.

3.1 Searle's "Brutally Simple" Argument | 3.2 A Formalization of BSA | 3.3 Supplementary Hypothesis One: All Things Instantiate Programs | 3.4 Supplementary Hypothesis Two: Syntax Precludes Semantics | 3.5 The Minds and Programs at Issue | 3.6 Minds Have Semantics | 3.7 Interim Conclusions | 3.8 Virtually Universal Realizability | 3.9 The Concept of Computation
3. The Chinese Room Argument and Functionalism

Anyone who wishes to challenge the central thesis owes us a precise specification of which `axioms' and which derivations are being challenged. (Searle 1988, p. 232)

3.1 Searle's "Brutally Simple" Argument

As he has recently seemed to retract former claims to have refuted AIP or SAIP by his Chinese Room Argument, allowing, "I have not tried to show a machine cannot think" (1990a, p.27), Searle has, at the same time, increasingly emphasized the antifunctionalist thrust of his argument and come to style the following "brutally simple" points (Searle 1989a, p. 703) its crux:
(A1) Programs are syntactical.
(A2) Minds have semantics.
(A3) Syntax by itself is neither sufficient for nor constitutive of semantics.
(C1) Programs by themselves are not minds. (1989a, p.703)
I'll refer to this part of CRA as Searle's Brutally Simple Argument (BSA). Searle's demotion of the premise A4, "Brains cause minds," from the place of honor as CRA's first premise (as in 1983, 1984a), also seems indicative of this shift of emphasis. Along with his seeming surrender of previous claims that his CRA manages to "demonstrate the falsity" of "the view that appropriately programmed computers ... literally have thought processes" (1984b, p.146), it seems that Searle takes the premise "Brains cause minds" to be less central to his argument than he suggested when he first proposed CRA as,
an attempt to explore the consequences of two propositions. (1) Intentionality in humans (and animals) is a product of the causal features of the brain. .... (2) Instantiating a computer program is never by itself a sufficient condition of intentionality. (1980a, p.417 abstract)
Searle's most recent formulations of CRA (1989a, 1990a) make it clear that A4 (together with C1) is supposed to support the conclusion C2 (which is supposed, in turn, to supply a basis for arguing against the scientific tenability of AIP or SAIP). Clearly, A4 doesn't serve to support the primary (antifunctionalist) conclusion, C1. The preceding section examined ways of arguing for -SAIP or -AIP from C2, the conclusion that "Any other system capable of causing minds would have to have causal powers (at least) equivalent to those of brains." I now consider whether CRA makes a sound argument for its first conclusion, the antifunctionalist conclusion C1, which Searle has come increasingly to emphasize, and which may yet (if CRA proves it) provide some basis for an inductive (or polemical or ad hominem) case against the scientific tenability of SAIP. What emerges from a close analysis of this supposedly "brutally simple" part of CRA is that, simply (e.g., nonmodally) construed, BSA is flatly invalid. Nor does C1 seem to follow validly from A1, A2, and A3 on any obvious, uncontroversial causal or modal principles (as AIP follows from SAIP, for instance, via the obvious and uncontroversial modal principle that existence entails possibility).{8} Nor does C1 seem to follow validly from A1, A2, and A3 in conjunction with any plausible supplementary assumptions. In this last connection I consider two supplementary hypotheses:
(S1) All things have (or instantiate) Programs.
(S2) Programming precludes semantics.
Each of these theses finds some support (albeit equivocal) in Searle's writings. It turns out that either of these supplementary assumptions yields valid arguments (in conjunction with A1-A3) for C1. However, assumption of both S1 and S2 (in conjunction with A2) would entail that nothing has a mind or any mental properties at all -- a conclusion which, if not unacceptable simpliciter (if eliminativism is not simply unthinkable), must surely be unacceptable to Searle. Both of these suggested supplementary hypotheses, S1 and S2, seem wildly implausible on their face; though insofar as functionalists commit themselves to S1 (insofar as the notions of programs and having programs do not distinguish between the sense in which computers -- and perhaps nervous systems -- implement programs and the sense in which any system whatsoever can be characterized as instantiating programs), Searle's BSA in conjunction with S1 comprises an effective ad hominem rejoinder to the functionalist thesis that minds are programs or that some programs constitute or suffice for mind.

3.2 A Formalization of BSA

As the preceding section showed CRA to have no such brutal consequences for AI (properly understood) as Searle had prominently advertised, our present inquiry aims to show that CRA does not provide a simple argument (valid on obvious and generally accepted principles) against functionalism. On the one hand, Searle's unsupplemented BSA is simply invalid as stated; on the other, supplementary hypotheses which do validate inferences to antifunctionalist conclusions (given A1-A3), and whose attribution to Searle is not entirely unwarranted, seem either absurd or obviously false.

I adopt the following dictionary for the predicates of Searle's argument:

P := is a Program
F := is formal (syntactical)
S := has semantics
M := is a mind
Now, consider the following reconstruction of Searle's "brutally simple" argument:
A1. (x)(Px -> Fx)
A2. (x)(Mx -> Sx)
A3. -(x)(Fx -> Sx)
C1. -(x)(Px -> Mx)
Thus reconstructed, the argument is admirably simple, but invalid. A3 -- being equivalent to (3x)(Fx & -Sx) -- asserts that some formal things or formalizations lack semantics. This is undoubtedly true: uninterpreted calculi presented in logic classes lack semantics, for instance. Yet it might still be the case, since there are formalisms (e.g., the aforementioned uninterpreted calculi) which are not Programs (or even programs), that all the formalisms that are Programs also have semantics and are minds. This is graphically illustrated by the following Venn diagram:
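
The invalidity can also be verified mechanically. The following Python sketch builds the two-object countermodel just described (an uninterpreted calculus alongside a hypothetical Program that is a mind) and evaluates each premise and the conclusion:

```python
# Countermodel to BSA: a two-object domain on which A1-A3 all come out
# true but C1 comes out false. `calc` is an uninterpreted calculus
# (formal, without semantics, not a Program, not a mind); `prog` is a
# hypothetical Program that has semantics and is a mind.
P = {"prog"}            # is a Program
F = {"calc", "prog"}    # is formal (syntactical)
S = {"prog"}            # has semantics
M = {"prog"}            # is a mind

A1 = all(x in F for x in P)      # (x)(Px -> Fx)
A2 = all(x in S for x in M)      # (x)(Mx -> Sx)
A3 = not all(x in S for x in F)  # -(x)(Fx -> Sx)
C1 = not all(x in M for x in P)  # -(x)(Px -> Mx)
print(A1, A2, A3, C1)  # True True True False: premises true, conclusion false
```

Since `calc` witnesses A3 without being a Program, nothing stops every Program in the model from being a mind; the premises hold while C1 fails.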

3.3 Supplementary Hypothesis One: All Things Instantiate Programs

Consider the supplementary claim, that all things instantiate Programs, represented as follows:
(S1) (x)Px
Clearly this claim, if it were acceptable, would yield a valid argument against FUN in conjunction with BSA's other premises. (Call BSA, thus supplemented, BS1A). S1 rules out the sort of counterexamples which were seen to invalidate the unsupplemented BSA. If all things have Programs, then it is no longer possible that the formalism(s) which A3 asserts lack semantics are not Programs; ergo, there are Programs which aren't minds, as illustrated below:
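
That BS1A is valid can likewise be checked by machine: a brute-force search of small models for a countermodel (all premises and S1 true, C1 false). For a monadic argument like this, a one-element check is already decisive, since any countermodel could be restricted to the single witness of A3. A Python sketch:

```python
from itertools import product

# Search all models of a given domain size for a countermodel to BS1A
# (A1-A3 plus S1, against C1). Each predicate P, F, S, M is a tuple of
# truth values, one per domain element.
def countermodel_exists(size):
    for bits in product([False, True], repeat=4 * size):
        P = bits[0:size]
        F = bits[size:2 * size]
        S = bits[2 * size:3 * size]
        M = bits[3 * size:4 * size]
        A1 = all(not P[x] or F[x] for x in range(size))  # (x)(Px -> Fx)
        A2 = all(not M[x] or S[x] for x in range(size))  # (x)(Mx -> Sx)
        A3 = not all(not F[x] or S[x] for x in range(size))  # -(x)(Fx -> Sx)
        S1 = all(P[x] for x in range(size))              # (x)Px
        C1 = not all(not P[x] or M[x] for x in range(size))  # -(x)(Px -> Mx)
        if A1 and A2 and A3 and S1 and not C1:
            return True
    return False

print(countermodel_exists(1), countermodel_exists(2))  # False False
```

No countermodel turns up: with S1 in place, the witness of A3 must itself be a Program lacking semantics, and (by A2) anything lacking semantics is no mind, so C1 follows.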

There is ample text to suggest that Searle holds something like S1, such as the following:

...just about any system has a level of description where you can describe it as a digital computer. You can describe it as instantiating a formal program. (Searle et al. 1984, p.153)

From a mathematical point of view, anything whatever can be described as if it were a computer. And that's because it can be described as instantiating or implementing a computer program. In an utterly trivial sense, the pen that is on the desk in front of me can be described as a digital computer. It just happens to have a very boring computer program. The program says: `Stay there.' Now ... in this sense, anything whatever is a digital computer, because anything whatever can be described as implementing a computer program.... (1984a, p.36)

Nor is Searle alone in holding such views, strange though they seem on their face. If functionalism holds that mind is to brain as program is to hardware in this "utterly trivial sense" in which "anything whatever is a digital computer" (Searle 1984a, p.36), it seems Searle does have the wherewithal for a valid argument against functionalism: BS1A, BSA supplemented by S1.{9} On the other hand it is surely right to complain that the sense in which programs are supposed (by functionalists and other partisans of AI) to cause the things which have them to display mental properties cannot just be in this utterly trivial sense in which the disk on the shelf instantiates Schank's SAM (for instance), no less than the computer running SAM.{10} Surely no one who claimed that SAM understands the stories it paraphrases would mean to suggest that it does so by being instantiated on a diskette sitting on a shelf (much less by being instantiated -- in the sense of being mappable in principle onto it -- by the shelf itself!); they mean to say that SAM understands (or the computer running it understands) when it is running or in virtue of its execution.

To keep BSA in conjunction with S1 from being a logically compelling argument against functionalism, then, the functionalist needs to distinguish implementing (i.e., running or executing) a program from merely instantiating one in the mathematical sense in which,

On the standard textbook definition of computation,
(1) For any object there is some description of that object such that under that description the object is a digital computer.
(2) For any program there is some sufficiently complex object such that there is some description of the object under which it is implementing the program. (Searle 1990c, p.26-27)
Yet many (and perhaps most) cognitivists and functionalists insist that the sense of "instantiating a program" they have in mind is just the mathematical sense. E.g., Philip Johnson-Laird asserts:
Cognitive science ... tries to elucidate the workings of the mind by treating them as computations, not necessarily of the sort carried out by the familiar digital computer, but of a sort that lies within this broader framework of the theory of computation. (Johnson-Laird 1988, p.9)
Since virtually everything instances programs in the mathematical sense, if this way of instancing programs is what functionalism holds constitutes or suffices for mind (and cognitivism takes to explain mental phenomena), it is clear that cognitivists and functionalists should find Searle's argument as disconcerting as many have. The silliness we have found both textual grounds and a logical need (to validate antifunctionalist conclusion C1) to impute to Searle -- this anemic understanding of "being programmed" or "having a program" which underwrites S1 -- is not original to Searle. Rather, it seems a staple of functionalist thought and cognitivist literature, and efforts to specify a more robust sense of "being programmed" or "having a program" (e.g., Dennett 1987b, Goel 1991) -- in which running computers have the programs they're running but the disk on the shelf and the shelf itself do not -- have been few and halting. S1 is silly, and perhaps it needs to be imputed to Searle to validate his antifunctionalist argument (as I am here proposing); but this charge of silliness is of no avail to the functionalist against Searle's counterargument if it is a piece of silliness built into functionalist doctrine itself. Unless functionalism specifies an appropriately robust sense of "programmed," the imputation of silliness falls, in the first place, on functionalism itself and BS1A provides a compelling ad hominem or polemical refutation of functionalism.{11}
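
The implement/instantiate contrast the functionalist needs can be illustrated in miniature: the same program can exist as inert text (the disk on the shelf) or as an executed process, and only the latter does anything. A toy Python sketch (the one-line program is an arbitrary example):

```python
# A program as mere instantiated syntax vs. as an implemented
# (running) process. The program text here is an arbitrary example.
program_text = "total = sum(range(10))"  # inert: the "disk on the shelf"

namespace = {}
exec(program_text, namespace)            # implemented: actually executed
print(namespace["total"])                # 45 -- only the run produces anything
```

The string alone causes nothing; execution is what gives the program any causal role, which is the sense of "having a program" a robust functionalism would need.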

3.4 Supplementary Hypothesis Two: Syntax Precludes Semantics

I now consider an alternate way of supplementing BSA -- with the claim that "having a formal program as such disqualifies a system from having intentional states" (Savitt 1982, p.342). Since it is supposed, by Searle, to be their "formal" or "purely syntactic" nature that disqualifies programs from causing mental phenomena, I propose the following (indirect) rendering of Savitt's "unofficial thesis":{12}
(S2) Syntax precludes semantics.
Though the textual evidence that Searle actually holds this "unofficial thesis" (Savitt 1982, p.342) is equivocal (as we shall see), it is not hard to discover. Most notably, there is the following:
If we knew independently how to account for [any system's, even an intelligent acting robot's] behavior without such assumptions [of mental properties] we would not attribute intentionality to it. .... ... as soon as we knew that the behavior [of an intelligent acting robot] was the result of a formal program ... we would abandon the assumption of intentionality. (Searle 1980a, p.421)
Though, in response to Savitt's imputation of this "unofficial thesis," Searle dismissively remarks, "I obviously don't hold any such view" as "that any system that had any program at all would thereby be disqualified from having intentional states" (Searle 1982c, p. 347), this denial is itself equivocal. It seems possible -- and even charitable, in that it reconciles these seemingly conflicting passages -- to take Searle to hold that, while having just any program at all doesn't disqualify a system from having intentional states, having a program that makes it display intelligent seeming behavior does. Though it is difficult to see how to square this claim with Searle's more recent contention that "programs have no physical, causal properties" (Searle 1990a, p. 27), it is consistent with comments Searle makes in connection with his original claim that knowing some behavior resulted from programming would make us abandon the assumption of intentionality: "If we knew independently how to account for [a system's] behavior without such assumptions [of intentionality] we would not attribute intentionality to it" (1980a, p. 421). So it seems it's only having something like a Turing test passing program that actually causes a system's intelligent seeming behavior which, for Searle, disqualifies a system from having the semantics (or intentional mental properties) its behavior seems to evidence. This way of supplementing BSA (with S2), like the preceding way (with S1), it seems, is ultimately going to throw us back on the following question: What programs, and what manner of having them, should the functionalist be understood to be claiming suffice for or constitute mental states?

Deferring consideration of these overarching questions, let us formalize S2 as follows:

(S2) (x)(Fx -> -Sx)
Now BSA, in conjunction with S2, given the uncontroversial presupposition that some things have programs,
(P1) (∃x)Px
yields a valid argument (BS2A) for conclusion C1 (as Figure 3 shows). Since assuming S2 and P1 obviates the need for A3 to validate the entailment of C1, I will take these to replace A3 in the new, supplemented argument.{13}
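The new derivation can be sketched as follows (my reconstruction, assuming A1 and A2 are formalized, as in BSA, as (x)(Px -> Fx) and (x)(Mx -> Sx) respectively):

```latex
\begin{align*}
1.\;& Pa                            && \text{P1, existential instantiation}\\
2.\;& Fa                            && \text{1, A1: } (x)(Px \to Fx)\\
3.\;& \neg Sa                       && \text{2, S2: } (x)(Fx \to \neg Sx)\\
4.\;& \neg Ma                       && \text{3, A2: } (x)(Mx \to Sx)\text{, contraposition}\\
5.\;& Pa \land \neg Ma              && \text{1, 4}\\
6.\;& (\exists x)(Px \land \neg Mx) && \text{5, existential generalization}\\
\text{C1.}\;& \neg(x)(Px \to Mx)    && \text{6, quantifier equivalence}
\end{align*}
```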

There are several points of interest, I think, about this derivation and the premise S2 it assumes.

First, S2, like S1, is implausible on its face: S1 seemed absurd; S2 is just obviously false. If being a formalism (being syntactic or having syntax) precluded having intentional content or meaning or semantics, English sentences could not have meanings (since English has syntax). Second, the assumption of both S1 and S2, in conjunction with A1 and A2, entails that there is no meaning and there are no minds (-(∃x)(Sx v Mx))! This is illustrated by the following:
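A sketch of this eliminativist derivation (my reconstruction, assuming A1 and A2 are formalized, as in BSA, as (x)(Px -> Fx) and (x)(Mx -> Sx) respectively):

```latex
\begin{align*}
1.\;& (x)Px                       && \text{S1}\\
2.\;& (x)(Px \to Fx)              && \text{A1}\\
3.\;& (x)Fx                       && \text{1, 2}\\
4.\;& (x)(Fx \to \neg Sx)         && \text{S2}\\
5.\;& (x)\neg Sx                  && \text{3, 4}\\
6.\;& (x)(Mx \to Sx)              && \text{A2}\\
7.\;& (x)\neg Mx                  && \text{5, 6, contraposition}\\
8.\;& \neg(\exists x)(Sx \lor Mx) && \text{5, 7, quantifier equivalence}
\end{align*}
```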

Limitation of the programs under consideration to just the (partial) Turing test passing programs -- next to be considered -- turns out to be of much less avail for avoiding this eliminativist outcome than one might think.{14} It turns out -- as we shall see -- that the same principles which underwrite claims like S1, to the effect that everything instantiates some program(s), and, moreover, that "we [humans] are instantiations of any number of programs" (Searle 1980a, p.422), would also seem to underwrite claims that at least every middle sized object instantiates some (partial) Turing test passing program(s), and that we humans instantiate any number of such (partial) Turing test passing programs. Whether or not this eliminativist outcome suffers from pragmatic incoherence, as some (e.g., Baker 1989) have alleged, Searle is no eliminativist: it seems Searle cannot hold both S1 and S2, but it is unclear which, if either, he does hold.

Having noted this possible inconsistency in Searle's view between his affirmations of S1 and S2 and his obvious belief that some things (e.g., ourselves) really have minds or mental properties, and having noted text seeming to warrant imputation to Searle of both S1 and S2, I close this section by noting a consideration which might incline us toward one alternative rather than the other, if only one of these two supplementary hypotheses can be maintained (on pain of eliminativism). The consideration is this: S1 makes a claim that most functionalists and cognitivists accept. Thus, if both S1 and S2 are implausible on their faces (and since neither BS1A nor BS2A, consequently, seems to be a sound argument against functionalism), perhaps we (or rather Searle) should forsake S2 and maintain S1 to make a compelling ad hominem case against functionalism (as we have seen that BS1A does). Since assuming the supplementary premise S1 makes a valid argument (BS1A) out of Searle's BSA, for anyone who accepts S1 (along with A1-A3), as functionalists seem wont to do, BS1A is not only valid but (as it were) sound ad hominem.

3.5 The Minds and Programs at Issue

On all accounts (earlier and later) Searle has stressed the centrality of A3; and indeed, not only whether C1 follows from A1-A3 but also how C1 is best understood seems to depend crucially on what we understand A3 to claim. As he premises that syntax "by itself" is "neither constitutive nor sufficient for" semantics (1990a), Searle likewise concludes that programs "by themselves" (1989b) are "neither constitutive of nor sufficient for minds" (1990a). Given the close parallel between the crucial premise A3 and the antifunctionalist conclusion C1, a plausible conjecture is that whatever sense "neither constitutive nor sufficient" bears in A3 must be the same as it bears in C1. But the difficulty with this conjecture is that Searle claims that A3 is "a conceptual truth" (1984a, p.39) or "a logical truth" (1990c, p.21) or "true by definition" (1990a), while the thesis of functionalism (which C1 purports to deny) is generally understood by its proponents not as a conceptual claim but as an empirical one. Thus, what FUN claims, and C1 must deny, is that certain programs are causally sufficient for mind; and the problem is to understand how the logical insufficiency of syntax to determine semantics is supposed to entail (given A1 & A2) the causal insufficiency of programs for minds or thinking. Heating water at sea level under one atmosphere of pressure to 100°C, e.g., is not logically sufficient for boiling it: nonetheless, it causally suffices. Even if we accept Searle's claim that syntax does not logically suffice for semantics, it no more follows from this that syntax doesn't causally suffice for semantics than it follows from the logical possibility of water at sea level under one atmosphere of pressure not boiling at 100°C that heating water to this temperature under these conditions won't causally suffice for boiling it.

In this light, then, consider BSA's would-be antifunctionalist conclusion C1. This conclusion, according to Searle, should be understood to be claiming, "It is not the case that (necessarily(program implies mind))" (Searle 1989a, p. 703). To be directed against functionalism (which functionalists hold to be an empirical doctrine), as just remarked, I suppose that we must understand "necessarily" here to signify causal or perhaps metaphysical{15} necessity. The closest nonmodal approximation of this, I take it, is our reconstructed C1 above: -(x)(Px -> Mx).{16}

One notable thing about C1 thus construed (and again, much the same could be said of A3, as we shall see) is that it seems too obviously true to need supporting argument. Consider, e.g., the following BASIC program (PRO):

1. Goto 2
2. Goto 1
It would be surprising if anyone thought to attribute any mental states to their computer on the basis of its execution of PRO: no proponent of AI (that I know of) has proposed that any and every program suffices (upon execution) to endow the computer executing it with mental attributes. But if everyone agrees ab initio that some programs, e.g., PRO, do not endow the machines that execute them with any mental attributes whatsoever, then, since (∃x)(Px & -Mx) is logically equivalent to -(x)(Px -> Mx), it seems we have a conclusion no functionalist (or anyone else) would care to dispute. It would seem -- if we read him this way -- that Searle's conclusion does not tell against functionalism or any other doctrine that anyone actually holds. It seems we need to restrict P to, say, all and only programs which "satisfy the Turing test" (1989a, p.702) (making C1 the denial of a recognizably behaviorist claim) or all and only programs which "satisfy the Turing test" by the right procedure (making C1 the denial of a recognizably functionalist claim) to save it from obviousness and triviality. Note that this does not adversely affect the acceptability of A1: if all programs are formal or syntactic, then so, obviously, are all the Turing test passing ones.

Now if (all) minds are programs, then (assuming there are minds), some programs are minds. And if the programs that are minds are supposed, by functionalists, to be only those that pass the Turing test, and only a restricted class of those, let us understand "program" in Searle's argument to refer to just these partial Turing test passing programs: the programs in question, then, would seem to be only (some of) those which, when implemented, suffice to make the implementing system behave as if it had certain mental properties; as a sieve of Eratosthenes program behaves as if it were searching for prime numbers, as the program of my pocket calculator behaves as if it calculates sums, products, and remainders, as DOS behaves as if it recognizes the dir command, etc. It is just such Programs as these,{17} or the machines that implement such Programs, whose mental properties are at issue.
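For concreteness, here is a minimal sieve of Eratosthenes sketch (my own illustration, not any particular program under discussion). Nothing in the code mentions primes except under our interpretation of its bookkeeping, which is just the "as if" point:

```python
def sieve(n):
    """Return the primes up to n; the program behaves *as if* it were
    searching for primes, whatever we say about its mental properties."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]          # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # Cross out every multiple of p from p*p upward.
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [p for p, flag in enumerate(is_prime) if flag]

print(sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```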

Similarly, some terminological clarification is necessary with regard to Searle's use -- which I have followed -- of the term "mind" in these discussions, a clarification which bears on my characterization of partial Turing test passing programs as those at issue. "Mind" is being used here, in CRA and attendant discussion, to "abbreviate" something like "mental processes" (Searle 1984a, p.39); so that "brains cause minds" is supposed to abbreviate, or is being used "as a slogan" (Searle et al. 1984, p.153) for, something like "the brain causes mental states" (Searle et al. 1984, p.153) or "mental processes are caused ... by processes going on inside the brain" (Searle 1984a, p.39). By the same token, then, "Programs are not (constitutive of or sufficient for) minds" should be understood to abbreviate, or be a slogan for, something like "Programs are not (constitutive of or sufficient for) mental properties." This means that the holistic connotations of "mind" are not supposed to be figuring in Searle's argument -- though he sometimes seems to invoke them in supporting discussion (e.g., Searle 1983, p.8). CRA, according to Searle's avowed use of "mind," should not depend on holistic appeals to the unity or universal powers of minds.

This proviso, just scouted, concerning how "mind" is to be understood in Searle's discussions, reflects back in turn on our proposed understanding of "Program" as something like "Turing test passing program": it means that the Turing test passing in question concerns what have been called "partial Turing tests" of specified mental properties or abilities (e.g., DOS's recognition of the print command) and not the full Turing test (described in Turing 1950) of whether a system resembles a person in all (conversationally distinguishable) mental respects. Unlike the restriction of the term "program" as it figures in these discussions to Turing test passing programs, however, which didn't affect the truth of A1, the allowance that "mind" is to be understood to mean "mental property" does affect the truth of the premise A3. Perhaps all minds (in the usual sense in which a mind is something like a system of interconnected mental properties or states) have semantics or intentionality; but it seems not all mental states (taken in isolation) do. It seems most plausible to think that mental states like undifferentiated anxiety and (perhaps) pain -- which are distinguished chiefly, perhaps, by their experiential or "qualitative" contents -- are not referential (intentional or semantically contentful) states at all. But perhaps this does nothing much to derail Searle's argument. Pains and undifferentiated anxiety are not the sorts of mental properties that computers are characteristically thought to be endowed with, or have some prospects of becoming endowed with; though, on the other hand, several authors (e.g., Putnam 1960) have sketched the general outlines of functional accounts of pain. Still, the sorts of mental properties that computers are characteristically thought to have some prospect of being or becoming endowed with are, generally, intentional ones (calculating that 2+2=4, seeking to checkmate its opponent, searching for primes, etc.). 
These also seem to be the kind of mental properties that most invite cognitivistic explanations and functionalist identifications.

3.6 Minds Have Semantics

If "mind" is understood to abbreviate "mental state" or "mental property" as Searle suggests, then A2 is tantamount to (half of) Brentano's (1874) thesis that intentionality is the mark (here, necessary condition) of the mental. What A2 literally claims, then, once we unpack the "slogan" or "abbreviation" Searle intends by "mind," is that all mental states or processes are intentional or, alternately, that nothing that lacks intentionality is a mental phenomenon. As already observed, examples such as undirected anxiety seem to falsify this half of Brentano's thesis: not all mental states have semantics, it seems, since undifferentiated anxiety is a mental state seemingly without intentionality. Moreover -- what makes Searle's advocacy of A2 even odder -- the half of Brentano's thesis that would hold intentionality to be a necessary condition for being a mental state is something Searle himself seems expressly to deny, in the following passage:
I do not think that intentionality is a "distinguishing feature of the mental," because it seems to me quite possible that there may be organisms that have mental phenomena in the form of conscious states, but have no intentionality. I do not know whether this is the case, but I certainly have no philosophical arguments against it. If we are talking about the "distinguishing features of the mental," I would think consciousness is a more plausible candidate than intentionality. (Searle 1989a, 707-708)
Perhaps no system has any mental states without having some mental states which are intentional; perhaps nothing can suffer undifferentiated anxiety, for instance, which is not also capable of suffering ordinary directed anxiety; perhaps nothing can have pains, for instance, which does not also have desires. Such speculation seems, to me, most probable: I am more chary than Searle, it seems, of the possibility of systems "that have mental phenomena in the form of conscious states, but have no intentionality." Since pain, diffuse anxiety, and the like (mental states most plausibly identified as nonintentional, as being distinguished by their qualitative and not their intentional content) seem to influence behavior mainly in connection with other mental states (e.g., beliefs and desires) which are intentional, whether there are systems of the sort Searle envisages seems to be a mostly empty question, empirically. Consciousness without intentionality seems to be consciousness largely without behavioral implications, for which there would be virtually no public or observable evidence.

Such worries aside -- it seems, in addition to understanding "mind" as abbreviating something like "mental properties," we need to understand that the mental properties at issue are just the intentional ones. Thus understood, A2 becomes a tautological claim to the effect that intentional states are intentional, or that semantically contentful states have semantic content. To square this premise with the seeming facts (about undifferentiated anxiety, e.g.), and to reconcile it with Searle's own expressed opinions, then, we need to understand the mental states at issue to be just the intentional or semantically contentful states.

Now the first consequence of this further constraint on our understanding of "mind" as it occurs in BSA is that it will be perfectly consistent with the conclusion that "Programs aren't (sufficient for) mind" (where "mind" means "intentional mental properties") for programs to be sufficient for and constitutive of nonintentional mental properties; and if you incline, like Searle, to allow that it's "quite possible that there could be organisms that have mental phenomena in the form of conscious states, but have no intentionality," I suppose you would have to allow that the same is quite possible for inorganic systems, such as computers, as well. It may even be -- for all BSA has shown -- that computers or organisms have these nonintentional states in virtue of their programming; though, again, it's hard to know how we could ever tell whether this were so or not, if having such states were disconnected from having intentional states and thus (by my reckoning) largely (if not entirely) without behavioral consequences.

So, perhaps we can dismiss this possibility of Searle's as idle speculation -- though it's speculation that drives a further wedge between the claims of CRA and its advertised force against AIP or SAIP.  Since the mental properties we are most inclined to attribute to computers are intentional let us simply understand CRA (insofar as it is directed against AIP or SAIP) to be denying that computers have or can have intentional mental properties; and insofar as we are restricting our attention at present to the antifunctionalist crux of Searle's argument, let us simply understand BSA to be directed against the claim that program states and transitions (i.e., computational processes) are (constitutive of or sufficient for) intentional mental states and processes.

3.7 Interim Conclusions

The main conclusion of Section 2 of the present chapter was that Searle's vaunted Chinese room argument lacks the logical force against the claims of AIP (that computers might someday become endowed with genuine mental properties), or even SAIP (that some computers are already so endowed), that Searle originally seemed to suggest his argument had, and which his proprietary use of the phrase "strong AI" to refer to functionalism continues, misleadingly, to suggest. I then urged that, nevertheless, CRA might turn out to have inductive (or polemical or ad hominem) force against these doctrines if the main or only rational support the claims of AIP and SAIP have derives from functionalist theory, and CRA turns out to successfully refute functionalism. I then turned to a consideration of whether BSA, the "brutally simple" crux of CRA, succeeds in establishing its antifunctionalist conclusion. The main conclusion of this second part of this chapter is once again equivocal: while the unsupplemented BSA does not comprise a valid argument for C1 or -FUN, C1 does follow as a valid consequence of the premises (A1-A3) of BSA given the supplementary premise S1, that all things have or instantiate programs. I noted the prima facie implausibility of S1, but was brought up short of thinking to use this as an objection to the supplemented BSA's case against functionalism by the remarkable fact that, by and large, functionalists accept the idea that everything has or instantiates some program(s). Perhaps any number of programs. It turns out that S1, implausible though it is on its face, and given which BSA yields -FUN (i.e., C1) as a valid consequence, is a premise that (with few exceptions) functionalists are not only willing to admit, but even keen to insist on.
So, as Section 2 concludes that CRA might provide a telling ad hominem case against AIP and SAIP for anyone whose allegiance to these doctrines depends on their acceptance of FUN, if it turns out that CRA successfully refutes FUN, this present section has led to the conclusion that the part of CRA directed against FUN, BSA, though it does not by itself entail the denial of FUN, does entail -FUN granting the supplementary premise S1 which (most) functionalists quite willingly grant.

A major conclusion of this chapter so far, then, is that BS1A makes a compelling ad hominem case against any version of functionalism that grants S1 (as most do) along with Searle's other premises, A1-A3. This means, I take it, that functionalists must make good the (as yet few and halting) attempts to distinguish implementing programs (as computers and perhaps brains do) from simply instantiating programs, as hurricanes (Searle 1980a, p.420), Searle's wall (Searle 1990c, p.27), Searle's pen (Searle 1984a, p.36), and seemingly everything else do. Thus, while it does not flatly refute functionalism, BSA imposes a research imperative on functionalism -- to make out the just mentioned distinction -- because (given the difficulty of disputing Searle's other premises, as we shall see) this seems the only way to blunt the refutative force of Searle's argument. We now need to consider how some of the other discoveries of this chapter might either allow functionalism (in some as yet unforeseen way) to avoid the research imperative just alleged, or else (alternately) how they might give this imperative some particular shape.

The most relevant datum in this connection -- most supportive of some remaining functionalist hopes to avoid Searlean refutation without rethinking the account of computation functionalist theory invokes -- would seem to be the restriction we found it necessary to impose on the sense of "program" invoked by BSA, the discovery that "program" as Searle uses the term here, must refer only to (some subset of) partial Turing test passing programs. The strategy this might suggest, as a way for functionalism to evade the force of BS1A, is this: cleave to the idea that everything instantiates some program(s), while denying that everything instantiates some partial Turing test passing program(s). The question is whether S1 ((x)Px) remains true -- or rather whether it remains acceptable or even obligatory on functionalist principles -- once we understand P to refer, not to instantiations of just any program, but just to instantiations of (a subset of) partial Turing test passing programs.

3.8 Virtually Universal Realizability

Searle notes that "the standard textbook definition of computation" has two interesting consequences:
1. For any object there is some description of that object such that under that description the object is a digital computer.
2. For any program there is some sufficiently complex object such that there is some description of the object under which it is implementing the program. (Searle 1990c, p.26-27)
Our supplementary premise S1 ((x)Px), that all things (under some description) instantiate programs, derives, of course, from 1. Given the operative sense of "program" in Searle's BSA, however, the question is this: Is there some description under which everything is a Turing test passing computer or instances a Turing test passing program? According to our second consequence of "the standard textbook definition of computation," it seems, the answer to this question is that every "sufficiently complex object" is a Turing test passing computer and does instantiate a Turing test passing program. But how much complexity is sufficient; and what objects, if any -- contrary to S1 -- does this complexity requirement exclude from the ranks of things instancing Turing test passing programs? I believe the thought, when functionalists and cognitivists say or suggest such things as Searle reports, is something like this: it is possible to map the successive states of anything with enough parts and possible configurations onto the succession of states executed by a program being run; there are interpretation functions under which a one-to-one correlation between the states of the object and states of the program can be stipulated. So long as the object has enough distinct states it will be able to map any program of a given complexity.
Thus for example the wall behind my back is implementing the Wordstar program, because there is some pattern of molecule movements which is isomorphic with the formal structure of Wordstar. But if the wall is implementing Wordstar then if it is a big enough wall it is implementing any program. (Searle 1990c, p.27)
So we shouldn't need IBMs and MACs, we could just use our walls -- which come, if big enough, prepackaged with every program ever written and many unwritten -- if only we knew how to encode their input and interpret their output. Note, it is not a constraint on these mathematical interpretation functions that they be such as people like us could actually ever learn and apply; nor is it required, I take it, that the distinct states in question actually be distinguishable by us.
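The unconstrained character of such interpretation functions can be made vivid with a toy sketch (the "wall" states and program states below are invented for illustration): given any record of pairwise distinct states of an object, a mapping onto any equally long run of program states can simply be stipulated after the fact.

```python
def stipulate_interpretation(physical_states, program_states):
    """Stipulate a post-hoc mapping from an object's (distinct) physical
    states onto a program's successive computational states."""
    if len(set(physical_states)) < len(physical_states):
        raise ValueError("object lacks enough distinct states")
    return dict(zip(physical_states, program_states))

# Any four distinct "molecular configurations" of the wall...
wall = ["config_%d" % i for i in range(4)]
# ...and any four successive states of some addition program's run.
addition_run = ["FETCH 2", "FETCH 2", "ADD", "OUTPUT 4"]

f = stipulate_interpretation(wall, addition_run)
# Under f, the wall "implements" the addition run, state for state.
assert [f[state] for state in wall] == addition_run
```

Nothing about the wall's physics constrained the choice of f; the same four configurations could as easily have been mapped onto a run of Wordstar.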

This means an awful lot of things are going to qualify as Turing test passing computers -- there are presumably any number of partial Turing test passing programs simultaneously executing at any one time on the head of a pin. And since molecular states are terribly coarse-grained -- if we help ourselves to subatomic particles and states of these -- there may even be any number of full Turing test passing programs simultaneously executing on every molecule on the head of a pin! Here I agree with Searle: "Philosophically speaking, this does not smell right to me" (Searle 1990c, p.22) either. But for now, these observations suggest an easy way -- agreeable with the conception of computation invoked by cognitivists and functionalists themselves -- to make S1 true and BS1A sound. Why sweat the small stuff? Let us just stipulate that the universe of discourse to which BS1A is to be taken to apply includes no objects smaller than a pinhead. Then, on the principles just canvassed, S1 ((x)Px) is true. Everything bigger than the head of a pin will instantiate some (indeed, it seems, any number of) partial Turing test passing program(s). Now, by BS1A, it follows, for things bigger than pinheads, for all intentional mental states, that instancing a program does not constitute or causally suffice for having such states.

I conclude that if functionalism invokes a concept of computation which licenses S1, then (if A1-A3 are unassailable, as they seem) functionalists license their own refutation along the Searlean lines of our proposed argument BS1A. Given that humans and all higher animals and almost all computers (in the usual sense of the term "computer," in which not everything is a computer) are bigger than pinheads, and that mental attributions to computers are almost exclusively attributions of intentional states, the limitations on the scope of BS1A just proposed are inconsequential.

Having just seen why functionalism is ill advised to propound an account of computation which implies not just multiple realizability but virtual "universal realizability" (Searle 1990b, p.27), I defer discussion of whether functionalism must unavoidably propound such an account. For our present purposes, let it suffice to say that many functionalists and cognitivists (e.g., Johnson-Laird 1988, Pylyshyn 1984) do propound accounts of computation that have this consequence.{18}

3.9 The Concept of Computation

According to standard definitions of computation, an object O can be described as instantiating a state of some program P just in case there is a function which maps discrete physical states of O into discrete informational states of P; or, thinking of programs diachronically, as sequences of states, an object O instantiates a program P if and only if there is a function which maps a sequence of the physical states of O into the sequence of informational states of P. Suppose P is a (partial) Turing test passing addition program which uses 1 kilobyte, or 8192 bits, of RAM: any object having (at least) 8192 discriminable components, each having (at least) two distinct possible states, will then be describable (in principle, under the right mapping) as instantiating some states of this program; and any sequence of n many states of O will instantiate some sequence of n program operations. As we have already noted, it is not a constraint on such mappings that the interpretation function be one that we humans could actually apply (or even conceive), or that the distinct physical states of O which map onto states of the program should actually (or even conceivably) be distinguishable by us; and, not surprisingly, it is this absence of constraint on acceptable mappings or interpretation functions which leads, counterintuitively, to the result of (all but) universal realizability of (partial) Turing test passing programs, and hence (via BS1A) to grave consequences for functionalism.
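Put slightly more formally (a paraphrase of the definition just stated, not a quotation): where ⟨s_1, ..., s_n⟩ is a sequence of discrete physical states of O and ⟨p_1, ..., p_n⟩ the corresponding sequence of informational states of P,

```latex
O \text{ instantiates } P
\;\iff\;
(\exists f)\,\forall i \in \{1, \ldots, n\}\; f(s_i) = p_i
```

with no further constraint on what the function f may be -- which is precisely the absence of constraint just remarked.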

For our present purposes we may characterize programs as "systems of symbols" (Johnson-Laird 1988, p.29): "A program is a piece of text, it is the encoding of a particular algorithm in some programming language" (Pylyshyn 1984, p.88-89). More precisely, we could say that a program is like a piece of text with blanks (i.e., variables); that different ways of filling in the blanks comprise different program states: actual sequences of such states comprise an execution or running of the program. Now,

systems of symbols ... have three components: a set of primitive symbols, together with principles for constructing complex symbols out of them; a set of entities constituting the domain that is symbolized; and a method of relating symbols to the entities that they symbolize, and vice versa. (Johnson-Laird 1988, p.29)
Since the relation of (linguistic) signs to what they signify is conventional or "arbitrary" -- such that English, e.g., didn't have to take "dog" to refer to dogs but might just as well have taken "dog" to signify, e.g., cats -- it seems, "Any domain of entities can be represented by many different systems of symbols" (Johnson-Laird 1988). Also, conversely, "Any system of external symbols, such as numerals, or an alphabet, is capable of symbolizing many different domains" (Johnson-Laird 1988, p.31). Accordingly,
in many models that take the form of computer programs there is much latitude in assigning semantic interpretations to the symbols, hence, to the models' states. Indeed, we routinely change our interpretation of the functional states of a typical computer, sometimes viewing the states as numbers, sometimes as alphabetic characters, sometimes as words or descriptions of a scene. Even when it is difficult to think of a coherent interpretation different from the one the programmer had in mind, such alternatives are, in principle, always possible. (There is an exotic result in model theory, the Löwenheim-Skolem theorem, which guarantees that such programs can always be coherently interpreted as referring to integers and to arithmetic relations over them.) (Pylyshyn 1984, p.44)
Here we are approaching Searle's "axiom" A3 -- that syntax doesn't suffice to determine semantics -- in a way that lends credence, even, to Searle's taking A3 "as a logical truth" (1990a, p.31). Here too is the doctrine underlying Searle's contention that someone (e.g., the inmate of his Chinese room) hand-simulating a Chinese natural language understanding (NLU) program could "decide to interpret the symbols as standing for moves in a chess game" (Searle 1990a, p.31); by which he is merely claiming, I take it, that such an interpretation is in principle possible, not that I (or anyone) could really "cook up" such alternative interpretation functions (especially in real, conversational time). Taking the sequence of states of an NLU program as a sequence of states of a chess playing program (in a way that respected the rules of chess), while possible in principle, would be an intellectual tour de force which might be (and probably is) quite beyond the intellectual abilities of any actual person. We are concerned here with interpretations that are mathematically possible, which needn't be psychologically possible (much less easy) for beings such as us.
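Pylyshyn's point about the latitude of interpretation is easy to exhibit in miniature. In this Python sketch, the same two uninterpreted 8-bit states are read, under three different interpretation functions, as an integer, as English text, and as (say) positions on a board; nothing in the states themselves selects among the readings. (The "board coordinates" reading is invented for illustration.)

```python
# The same symbol states (a sequence of bytes) under three
# different, equally coherent interpretation functions.
states = bytes([72, 105])  # two uninterpreted 8-bit states

as_integer = int.from_bytes(states, byteorder="big")  # arithmetic domain
as_text = states.decode("ascii")                      # linguistic domain
as_coords = [(b >> 4, b & 0xF) for b in states]       # e.g., board positions

print(as_integer)  # 18537
print(as_text)     # Hi
print(as_coords)   # [(4, 8), (6, 9)]
```

Which interpretation is "the" right one is fixed by our use of the states, not by their syntax.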

Now, just as the vast panoply of mathematically possible interpretation functions underwrites something like Searle's A3, a similarly vast panoply of mathematically possible mappings between symbol systems -- or between various physical instantiations of symbol systems -- underwrites the two pregnant features of "the standard textbook definition of computation" that Searle remarks (1990c, p.26). Again, these are 1) that any object is a digital computer (or instantiates some program(s) under some description) and 2) that any program is instantiated by any sufficiently complex object. Despite the brave talk of proponents of "syntactic" theories of mind -- e.g., Fodor (1975) and Stich (1983) -- about the suitability of their "syntactic" theories for providing causal explanations (of intelligent behavior) consistent with materialist presuppositions, and despite the lip-service that cognitivist and functionalist accounts of computation pay to the causal properties and connections among the computational states of actual devices, Searle is right about this: "syntax is not intrinsic to physics" (1990c, p.26), and "if computation is defined syntactically, then nothing is a digital computer solely in virtue of its physical properties" (1990c, p.28). This is because, given the (practically and psychologically unconstrained) mathematical sense of "interpretation" or "instantiation" that functionalists generally suppose in their accounts, the physical makeup and causal properties of an object place almost no constraints on which program(s) the object instantiates (under some instantiation function). Indeed any object (of sufficient complexity) will instantiate virtually any program you choose under some mathematically possible instantiation function.

Again, the problem with the "standard textbook definition" of computation that functionalists characteristically accept is that "the physics is irrelevant [to the question of what program(s) a thing instantiates] except insofar as it admits of the assignments of 0's and 1's and of state transitions between them" (Searle 1990c, p.26): the trouble is that, in the absence of any psychological or other constraints on admissible instantiation functions, the computational properties of a thing come almost entirely unconnected from its actual construction and causal connections. Thus the obligatory obeisances paid to the material and causal characteristics of computers by textbook accounts are misleadingly beside the point. Pylyshyn, for instance, informs us, unhelpfully,

A computer is a physical object whose properties vary over time in accordance with the laws of physics. In principle one might characterize a computer's dynamic behavior as a causally connected sequence of physical state descriptions, with transitions subsumed under various physical laws. (Pylyshyn 1984, p.55)
But every physical object's physical properties "vary over time in accordance with the laws of physics"! In principle one might characterize anything's behavior as "a causally connected sequence of physical state descriptions." Even among those physical objects (e.g., our IBMs and Macintoshes) that we think are computers in practice and not just in principle, it can be said, "Because an unlimited number of physical properties and their combinations can be specified in the physical description, there is, in fact, an unlimited number of such [causal] sequences" (Pylyshyn 1984, p.55) going on in our computers, besides the causal sequence which we would actually identify as the execution of the program in the machine. Thus "by itself, the record of a particular sequence of physical states tells us nothing about the computational processes going on in the machine" (Pylyshyn 1984, p.55) because,
Very few of the physically discriminable properties of the machine are relevant to its computational function -- for example, its color, temperature, and mass, as well as its very fast or very slow electrical fluctuations, variations in electrical properties above or below relevant ranges, variations in electrical properties that are within tolerance levels, and so on. .... Thus in a given computer only a minuscule subset of physically discriminable states are computationally discriminable. Of course each of the infinitely many sequences of physical descriptions could correspond to some computational sequence. At least in principle there is nothing to prevent such an interpretation. In other words, computational states are relativized to some convenient mapping, which I shall call the instantiation function (IF). By mapping from physical to computational states, such a function provides a way of interpreting a sequence of nomologically governed physical state changes as a computation, and therefore of viewing a physical object as a computer. (Pylyshyn 1984, p.55).
But the mapping needn't be convenient at all: on standard textbook accounts it only needs to be mathematically possible. This account has exactly the consequence that Searle discerns:
The same principle that implies multiple realizability would seem to imply universal realizability. If computation is defined in terms of the assignment of syntax then everything would be a digital computer, because any object whatever could have syntactical ascriptions made to it. You could describe anything in terms of 0's and 1's. (1990c, p.26)
Not only, on this account, can any object whatever have some program(s) ascribed to it, but any "sufficiently complex" object can have every (sufficiently simple) program ascribed to it.{19}
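This virtually universal realizability is itself almost trivially programmable. The following Python sketch (the wall and its temperature readings are of course invented for illustration) builds an instantiation function, in Pylyshyn's sense, by sheer stipulation: any object whose history passes through enough distinct physical states thereby "instantiates" any chosen program run.

```python
# Sketch of the universal realizability point: pairing the i-th physical
# state with the i-th computational state yields a mathematically possible
# instantiation function (IF), however physically arbitrary the pairing.

def instantiation_function(physical_history, computational_run):
    """Stipulate a mapping from distinct physical states to program states."""
    if len(physical_history) < len(computational_run):
        raise ValueError("object not 'sufficiently complex'")
    # Nothing causal constrains the pairing; it need only be a function,
    # which it is so long as the physical states are pairwise distinct.
    return dict(zip(physical_history, computational_run))

# The computational states of a trivial two-state program run.
program_run = ["q0", "q1", "q0", "q1"]

# Four distinct "physical states" of a wall (hypothetical temperatures).
wall_history = [291.03, 291.17, 291.02, 291.40]

IF = instantiation_function(wall_history, program_run)
print([IF[s] for s in wall_history])  # ['q0', 'q1', 'q0', 'q1']
```

Under IF the wall "computes" the two-state program; all the work was done by the stipulated mapping, which is exactly Searle's complaint.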

4. Conclusion

Though CRA does not logically exclude the possibility of computers thinking -- CRA does not entail -AIP -- barring some alternative account (to the standard textbook one) of computation, we have seen that Searle's "brutally simple" argument BSA supplemented by S1 (BS1A) does entail -FUN. Thus, if such alternative accounts are not forthcoming, and if the only good reasons for believing AIP true derive from FUN, BS1A will disconfirm AIP without logically disproving it. I will not pursue the possibility of providing an alternative account of computation which would avoid this consequence (-FUN), because I believe there are powerful reasons independent of BSA and S1 for doubting the truth of functionalism (which I consider in Chapter 5) and quite considerable reasons (set forth in Chapter 4) independent of FUN for believing AIP to be true. Before proceeding to these matters, however, we owe an account of Searle's vaunted Chinese room experiment: what is CRE's relation to CRA, and how does CRE bear upon the centrally disputed questions (FUN, AIP, and SAIP)?

Endnotes

  1. Searle, like Descartes, identifies thought with consciousness.
  2. I assume the following standard notational conventions: "&" for "and" (conjunction), "v" for "or" (disjunction), "-" for "not" (negation), "->" for "if ... then" (material implication), "<->" for "if and only if" (material equivalence), "(x)" for "all x" (universal quantification), "(3x)" for "some x" (existential quantification).
  3. Take "P" to be some program attribution and "M" to be some mental attribution: read "(x)(Px <-> Mx)," then, as "x implements (or runs) program P if and only if x has mental property M" and read "(x)(Px -> Mx)" as "If x implements (or runs) program P then x has mental property M."
  4. TI-1706 is a registered trademark of Texas Instruments.
  5. Though Searle characterizes the "point of the parable about the Chinese room" as being to reveal that "from syntax alone you can't get the mental [semantic] content" (Searle 1989a, p.147) and commentators have by and large accepted this notion that "Searle's thought experiment is devoted to shoring up axiom 3 [the claim that syntax is not sufficient for semantics] specifically" (Churchland & Smith Churchland 1990, p.34), I suggest (in Chapter 3, Section 4.3) that the relation of logical and evidential support between Searle's experiment and this "axiom" of Searle's argument might better be understood as the reverse: the axiom is needed to shore up the experiment.
  6. This appearance is deceiving. Descartes actually seems to hold that the unity of minds is such that it's all or nothing: minds either have all these powers (or at least all the purely intellectual ones) or none (i.e., don't exist). A res cogitans is not a thing that doubts or understands, etc., but one which doubts and understands, etc.
  7. This is a controversial point to which I return in Chapter 6.
  8. Possible modal construals will be canvassed in Chapter 3, subsection 4.2, below.
  9. Herb Hendry points out (personal communication) that S1 yields a valid argument for C1 independently of A1, in conjunction with A2 and A3 alone. Hendry also points out that replacing A3 with the weaker (3x)-Sx still yields a valid argument (assuming S1 and A2) for C1. Whether these observations impugn the reconstruction of Searle's argument set forth here or evidence instead the logical imperspicuity of Searle's argument (as I believe) depends on whether more perspicuous reconstructions are available. Reasons for resisting Hendry's suggestion that Searle's A3 should be rendered (x)(Fx -> -Sx) -- its evident falsity and inconsistency with other theses Searle seems to hold -- are considered in subsection 3.4, below. Hendry's suggestion that conclusion C1 be similarly strengthened to (x)(Px -> -Mx) likewise seems objectionable. Not only would this stronger conclusion not follow if Hendry's supposed strengthening of A3 is not admitted, but (x)(Px -> -Mx), like (x)(Fx -> -Sx), is inconsistent with other claims Searle seems to accept: (x)(Px -> -Mx) in conjunction with S1 (which Searle apparently accepts) entails (x)-Mx -- there are no minds! -- which Searle plainly does not accept. Hendry notes that the conclusion -(x)(Px -> Mx) -- which asserts that at least one program is not a mind -- seems too weak. I agree, and argue (in subsection 3.5, below) that to avoid such trivialization of the conclusion the programs at issue need to be understood to be just the (partial) Turing test passing programs or a subset of these.
  10. SAM is the acronym of Schank and Abelson's "Script Applier Mechanism" (Schank & Abelson 1977), the story understanding program Searle is supposed to be implementing (by hand tracing) in the Chinese room thought experiment. (Chapter 3, note 2, provides a fuller description of SAM.)
  11. Subsection 3.9, below, explicates the "standard textbook definition" Searle alleges (rightly I think) to have these consequences.
  12. The direct alternative, (x)(Px -> -Sx), is canvassed in Chapter 3, subsection 4.3, below.
  13. I agree with Hendry (personal communication) that this supplanting of A3 seems contrary to Searle's understanding of his own argument (contrary to the purported relation of support by the experiment (CRE) for the allegedly crucial premise A3). But note, this is not the only oddity about A3 and the support supposedly provided for it by the CRE. One further oddity in this connection is this: the occupant of the Chinese room seems to exemplify not merely a formalism without semantics, but -- more directly -- to instantiate a program yet lack semantics: CRE provides more direct and compelling warrant for conclusion C1 than for the premise A3 Searle claims CRE supports. These issues are taken up in Chapter 3, subsection 4.3, below.
  14. I owe this "partial Turing test" designation to Jon Sticklen.
  15. Perhaps Turing machine functionalism can be construed as an essentialist doctrine (cf. Putnam 1975) proposing that the "microstructure" constitutive of minds is computationally specifiable; that decomposition of mental processes into elementary data processes (as with Turing machine specifications) amounts to characterization of their essential "internal structure."
  16. I consider an alternative modal reconstruction of Searle's conclusion, []-(x)(Px -> Mx) (understanding `[]' to signify metaphysical or temporal or causal necessity), in the next chapter (subsection 4.2).
  17. I.e., the subset of programs I previously distinguished as "Programs" (with a capital `P').
  18. Rich Hall (personal communication) points out the similarity of considerations supporting this virtually universal realizability result for computer programs to considerations advanced by Goodman and Quine in "Steps Toward a Constructive Nominalism." Goodman and Quine observe, "The stock of available inscriptions [for nominalist construction] can be vastly increased if we include, not only those that have colors or sounds contrasting with the surroundings, but all appropriately shaped spatio-temporal regions even though they be indistinguishable from their surroundings in color, sound, texture, etc. But the number and length of inscriptions will still be limited insofar as the spatio-temporal world itself is limited" (Goodman & Quine 1947, p. 175).
  19. Putnam (1988, Appendix) purports to prove what seems an even stronger result: that "there is a sense in which everything has every functional organization."
