J. Preston and M. Bishop (eds.), Views into the Chinese Room (Oxford: Oxford University Press, 2002), pp. 123-143.

Nixin' Goes to China

Larry Hauser

Abstract

The intelligent-seeming deeds of computers are what occasion philosophical debate about artificial intelligence (AI) in the first place.  Since the evidence of AI is not bad, arguments against it seem called for. John Searle's Chinese Room Argument (1980a, 1984, 1990, 1994) is among the most famous and long-running would-be answers to the call. Surprisingly, both the original thought experiment (1980a) and Searle's later would-be formalizations of the embedding argument (1984, 1990) are quite unavailing against AI proper (claims that computers do or someday will think). Searle lately even styles it a "misunderstanding" (1994, p. 547) to think the argument was ever so directed!  The Chinese room is now advertised to target Computationalism (claims that computation is what thought essentially is) exclusively.  Despite its renown, the Chinese Room Argument is totally ineffective even against this target.

"Strong AI," AI proper, and Computationalism

Searle always describes the Chinese Room as targeting "Strong AI": what this phrase is invoked to mean, however, varies. At the outset of his original presentation, Searle says,
My discussion here will be directed at the claims I have defined as those of strong AI, specifically the claim that the appropriately programmed computer literally has cognitive states and that the programs thereby explain human cognition. (Searle 1980a, p. 417)
"Strong AI," here, seems to denote the conjunction of the metaphysical claim, "the appropriately programmed computer literally has cognitive states" and the methodological one "that the programs thereby explain human cognition."1  It is natural to think that the metaphysical claim being targeted is the affirmative answer to Turing's (1950) question "Can a machine think?" The plan of the original Chinese room article (mirroring Turing 1950) and not a few of Searle's subsequent remarks encourage just this interpretation. Take, for instance, Searle's characterization of Strong AI in the 1984 Panel Discussion of the question "Has Artificial Intelligence Research Illuminated Human Thinking" as
the view that says: "It isn't just that we're simulating thinking or studying thinking. Our appropriately programmed computers with the right inputs and outputs will literally have thought processes, conscious and otherwise, in the same sense that you and I do." (Searle et al. 1984, p. 146)
Searle continues,
I like that thesis, because it's clear that we know exactly what someone is saying when he says, "Look, my machine or the machine we're going to eventually build has thought processes in exactly the same sense that you and I have thought processes." It's clear, and it's false, and it's demonstrably false. (Searle et al. 1984, p. 146)
The "demonstration" that follows is the Chinese Room, patently here being advertised as targeting the thesis that computers already do or someday will think; as targeting AI proper.  Likewise, Chapter 2 of Minds Brains, and Science -- where Searle first formalizes the Chinese Room Argument (as below) -- is titled "Can Computers Think?"

Compare now:

There are several common misunderstandings of the Chinese Room Argument and of its significance. Many people suppose that it proves that `computers cannot think'. But that is a misstatement. (Searle 1994, p.546)
Retraction noted and accepted.

But if the argument doesn't target the thesis that "the appropriately programmed computer literally has cognitive states," what does it target? "Strong AI," as ever, is what the targeted view is called, but Strong AI is now to be understood, strictly, as the view that "the mind is a computer program" (1994, p. 546): the target is Computationalism.

Computationalism says that computation is what thought is essentially: (the right) computation is metaphysically necessary for thought, i.e., regardless of anything else that could be the case, nothing would be thinking if it weren't (right) computing; and (right) computation metaphysically suffices, i.e., such computation would be thought under any circumstances, no matter what else might be the case. Much as water has a chemical essence that chemical science has discovered, Computationalism maintains, thought has a computational essence that cognitive science will discover. Thus aimed, against Computationalism,

this is the point of the parable -- if I don't understand Chinese on the basis of implementing the program for understanding Chinese, then neither does any digital computer solely on that basis [my emphasis] because no digital computer has anything that I do not have [in the example]. (Searle 1994, p. 546)
Depending, as it does, on the modal force of "solely on that basis," this "simple refutation of Strong AI" (Searle 1994, p. 546) is deceptively unsimple and, in fact, is very far from being "obviously sound and valid" (Searle 1994, p. 546).

Of course, if you dismiss the apparent mentality of extant machines as bogus or "as if," as many, if not most, cognitive scientists incline to do, and if you make Computationalism your sole basis for belief that computers can or someday will think, then the Chinese room may seem to you, unproblematically, to target both.  But while the psychological gap between Computationalism and AI may, for many cognitive scientists, be slight, the real logical and epistemological gap is considerable.  What Searle proposes to bridge the gap -- "[b]rains cause minds" and "[a]nything else that caused minds would have to have causal powers at least equivalent to those of a brain" (Searle 1984, pp. 39-40) -- are, moreover, argumentative matchsticks.2  Consider, for instance, the following would-be bridging argument:

1.  Brains cause minds (mentality).
2.  Anything else that caused minds would have to have causal powers at least equivalent to those of a brain.
3.  Computation alone does not cause minds.
:. 4.  Something else about the brain causes mentality.
5.  Digital electronic computers lack this something else.
:. 6.  Digital electronic computers don't have mentality.
The trouble is that premise 5 is not so easily supported.  In fact, it's insupportable unless we're told what this additional something is and wherein computers are supposed to be lacking it.  Most importantly -- as difficult as it may be for those who have antecedently dismissed the possibility to allow it -- observational evidence strongly suggests that computers already do seek, compare, and decide things.  On the evidence, the would-be bridging argument might more plausibly be continued from 4 as follows:
5a.  Digital computers exhibit mentality.
:. 6a.  Digital computers have this something else.
Bridge out.

The Chinese Room Argument v. Computationalism

Searle (1994, pp. 546-547) presents the argument targeting computationalism as having the following "logical structure" (see Searle 1984, pp. 39-42, Searle 1989a, Searle 1990 for earlier related formulations):
It is a derivation of a conclusion from three premises:
premise 1: programs are formal (syntactical),
premise 2: minds have contents (semantics),
premise 3: syntax is not sufficient for semantics.
"The story about the Chinese Room," as Searle explains it, "illustrates the truth of premise 3" (Searle 1994, p. 546). Searle continues,
From these three propositions the conclusion logically follows:
programs are not minds.
"And this conclusion," Searle tells us "refutes Strong AI."

But this conclusion misses the point. Computationalism holds "the essence of the mental is the operation [my emphasis] of a physical symbol system" (Searle 1980a, p. 421 cites this formulation by Newell 1979). The Computationalist hypothesis identifies minds with processes: with program operations or program runs and not simply with the programs themselves. Obviously, no one ever proposed to identify static program instantiations, e.g., on diskettes, with cognition.  Only operating programs are candidate thinkers.

Suppose we modify the conclusion to actually speak to the Computationalist hypothesis, then, by substituting "processes" for "programs", substituting likewise in the first premise to preserve logical structure. Thus revised, the argument is to the point; but it's unsound. Reformulated premise 1 -- "processes are syntactic" -- is false. Processes are not syntactic: not purely so, as the would-be modal force of the argument requires. The sentence tokens `Ed ate' and `Ed ate', though physically different (differently shaped: imagine the one rendered in roman type, the other in italics), have the same syntactic form.  This sameness of syntactic form is what makes them instances of the same sentence. Similarly, every instantiation of a given program is syntactically identical with every other instance: this is what makes them instances of the same program. In the simplest case, since the spatial sequence of stored instructions and the temporal sequence of operations resulting from the execution of those instructions have the same syntax, the difference between inert instantiation and dynamic instantiation is not syntactic. The property of being a process is not, then, a purely formal or syntactic property but includes, essentially, a nonsyntactic element -- an element of dynamism -- besides. (See Hauser 1997, p. 211).3
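
The program/process point can be put concretely. The following minimal sketch -- mine, not Searle's or the Computationalists', and in Python purely for convenience (the program text and variable names are invented for illustration) -- shows one and the same piece of program text first sitting inert and then running; the difference between the two conditions is plainly not a difference in syntax.

    # The very same program text -- one and the same syntactic object --
    # can be inertly stored or dynamically executed.
    program_text = "answer = sum(range(1, 11))"

    inert_copy = program_text          # another inert token, syntactically identical

    namespace = {}
    exec(program_text, namespace)      # an operating instance: the program actually runs
    print(namespace["answer"])         # prints 55; only the running instance does any such thing

Nothing distinguishes program_text from inert_copy syntactically; what distinguishes the executed instance from both is that something happens, in real time -- just the nonsyntactic element of dynamism at issue.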

Argument and Experiment

The Chinese room experiment itself might seem to circumvent the preceding objection: the man-in-the-room is supposed to be running the envisaged natural language understanding program and still not understanding Chinese. This seems to show not just that "Syntax by itself is neither constitutive of nor sufficient for semantics" (Searle 1990: original emphasis) but that syntax-in-motion isn't sufficient by itself either. The overarching argument would now be more perspicuously rendered as follows:4
premise 2a: thought has contents (semantics),
premise 3a: computational processing (by itself) is not sufficient for semantics.
Therefore,
thought is not computational processing.
Note that the first premise drops out. The argument as restated has nothing to do with syntax.5  Also note that where Searle's original premise 3, "syntax is not sufficient for semantics," can credibly be styled "a conceptual truth that we knew all along" (Searle 1988, p. 214), 3a cannot. So, it seems, the experiment must bear some empirical weight; and here it's the experiment that doesn't suffice. It's methodologically biased, depending (as it does) on a dubious tender of epistemic privilege to first-person disavowals of understanding (and other intentional mental states).  That other "blind trace" experiments in the recent history of cognitive science have even seemed to their advocates to yield intuitions not unsupportive of AI and Computationalism confirms the diagnosis of methodological bias and, perhaps, raises further issues about the robustness of Searle's thought-experimental design.

The Chinese Room Experiment

Methodological Bias: Searle's Cartesian Paraphernalia

Consider the transition from "it seems to me quite obvious that I do not understand" to "I understand nothing" (Searle 1980a, p. 418). Result 3a depends not just on Searle's seeming to himself not to understand but on his really not understanding. That it would seem thus to Searle, from his "first person point of view" (Searle 1980b, p. 451), in the situation described is, perhaps, unexceptionable. What is exceptionable is the transition from its seeming so from "the first person point of view" to its being so. Those who understand natural languages such as English or Chinese are normally aware of understanding them: it both seems to them that they understand English or Chinese, and they do. Conversely, those who do not seem to themselves to understand a natural language (when they hear it spoken or see it written) normally do not understand. But notice that in the normal case of not understanding it also seems to others, from the "third-person" or "external" point of view, that I do not understand. Since the Chinese Room setup is designedly abnormal in just this respect, we cannot appeal to the normal covariance of seeming to oneself not to understand with not understanding to decide this case in favor of Searle not understanding. Since Searle's imagined introspective lack of awareness or sincere disavowal of understanding is all the warrant the Chinese Room Experiment provides for the conclusion that Searle would not understand the Chinese stories, questions, and answers in the envisaged scenario, the experiment fails.

It seems that only an a priori tender of the epistemic privilege of overriding all "external," "third person" appearances to how it seems "from the point of view of the agent, from my [first person] point of view" (Searle 1980a, p. 420) will suffice to save the experiment. This confirms what Searle's characterization of the experiment as implementing the methodological imperative "always insist on the first person point of view" (Searle 1980b, p. 451) suggests. The experiment does invite us "to regress to the Cartesian vantage point" (Dennett 1987b, p. 336), and it seems to succeed only if we accept the invitation. The experiment's support for would-be result 3a is just as dubious as the Cartesian grant of epistemic privilege on which it depends.  Of course, one needn't be a metaphysical dualist (Cartesian or otherwise) to hold some methodological brief for first person authority.  However, to sustain a case based on first person authority against AI -- especially in the face of overwhelming third person evidence of it -- I submit, you do.  To counter the third-person evidence you need to answer the question of what the additional something is in us that promotes our computation to thought: that's metaphysical.  And you need to answer, as Searle does, that "consciousness" or "ontological subjectivity" or "qualia" are what's essential: that's dualistic.6  The further question of wherein computers are lacking in ontological subjectivity -- and why we should think so -- remains.  It remains because on the third person evidence the natural conclusion would seem to be -- again -- that whatever else besides computation is required for thought, computers have this too, since in seeking, comparing, and deciding things, as they evidently do, they evidently think.

To reiterate: given the abnormality of the case Searle describes, one's self-avowed or introspected lack of understanding of Chinese in the example does not suffice to establish actual lack of understanding in the face of overwhelming evidence of understanding "from the external point of view". If the experiment reliably showed that one would not be conscious of understanding the Chinese (stories, questions, and answers) one was processing, it would still fail to show that this processing would not amount to understanding; and even the antecedent here is ill-supported by Searle's thought experiment.  Even if overriding epistemic privilege were granted to actual introspective deliverances or first-person disavowals, the Chinese room experiment's extension of such privilege to imagined introspections (under scarcely imaginable circumstances) would still be highly dubious.

Other Blind Trace Scenarios

A further consideration undermines the Chinese Room experiment: similar (thought) experimental procedures seem to elicit contrary intuitions. Turing (1950) and Newell & Simon (1963) seem to draw conclusions contrary to Searle's on the basis of similar blind trace scenarios.

Searle imagines himself to be hand tracing a natural language understanding program on Chinese input (stories and questions), thereby producing appropriate Chinese output (answers). Hand tracing is something computer programmers do to check and debug programs. In such cases, the programmer knows what the input means, what the output means, and what the program is supposed to be doing (the function it's supposed to be computing). The programmer must know these things in order to tell whether the program is processing the former into the latter correctly. The case Searle imagines is unusual in that the trace is "blind":

Unknown to me, the people who are giving me all of these symbols call the first batch "a script," they call the second batch a "story," and they call the third batch "questions." Furthermore, they call the symbols I give them back in response to the third batch "answers to the questions," and the set of rules in English that they gave me they call "the program." (Searle 1980a, p. 418)
Not only isn't Searle conscious of the meanings of the input and output strings he processes, he isn't even aware that the input strings are stories and questions, or that the output strings are answers. He may not even "recognize the Chinese writing as Chinese" or, for that matter, as writing as opposed to "just so many meaningless squiggles" (Searle 1980a, p. 418). And he doesn't know that the set of instructions he is following, transforming input strings into output strings, is a natural language understanding (or any other kind of) program.

Searle's blind trace procedure has precedents in Turing's wartime use of "blind" human "computers" to decrypt the German naval code, and in Newell and Simon's use of "protocols" derived from "blind" inferences of human subjects to gather information about human reasoning processes.  Yet these experimenters, reflecting on these blind trace cases, fail to draw anything like Searle's conclusion -- indeed, they even seem moved to advocate opposing conclusions.

Turing, Bombes, Wrens, and Enigma

During World War Two, Turing directed a project aimed at breaking the German naval code, "Enigma." The work was initially done by members of the Women's Royal Naval Service (Wrens) acting as human computers following decryption programs Turing devised. To maintain secrecy, the Wrens were kept in the dark about the meaning of the input they received and the output they produced (that these were messages about the locations of submarines, etc.) -- even about the input and output being enciphered and deciphered German, about their being encrypted and decrypted messages at all. "The Wrens did their appointed tasks without knowing what any of it was for" (Hodges 1983, p. 211), like Searle in the Chinese room. Overseeing this veritable Chinese gymnasium (compare Searle 1990a, p. 28), Turing, in Andrew Hodges' words, "was fascinated by the fact that people could be taking part in something quite clever, in a quite mindless way" (Hodges 1983, p. 211): more incisively, it seemed that the Wrens were doing something intelligent (deciphering encoded messages) unawares.  As the work of the Wrens was taken over by machines of Turing's devising, called "Bombes," Turing seems to have surmised that the Bombes were likewise doing something intelligent -- same as the Wrens. This intuition, occasioned by a situation comparable to the one Searle imagines in his thought experiment, helped, I take it, to inspire Turing's famous (1950) defense of machine intelligence.

Newell and Simon: Blind Protocols

In developing their General Problem Solver program (GPS), Allen Newell and Herbert Simon used a blind trace procedure to gather "protocols" (running verbal commentaries) of deductive reasonings performed in the blind by human subjects. The protocols were intended to "extract information" (Newell & Simon 1963, p. 282) about procedures humans use or manipulations humans perform in the course of their deductive reasonings. The information extracted was then to be used to "write [GPS] programs that do the kinds of manipulation humans do" (Newell & Simon 1963, p. 283).

Here is how Newell and Simon describe the experimental situation:

A human subject, a student in engineering in an American college, sits in front of a blackboard on which are written the following expressions:

(R -> -P) & (-R -> Q) | -(-Q & P).

This is a problem in elementary symbolic logic, but the student does not know it [my emphasis]. He does know that he has twelve rules for manipulating expressions containing letters connected by ["ampersands" (&)], "wedges" (v), ["arrows" (->)] and ["minus signs" (-)], which stand [which the subject does not know] for "and," "or," "implies," and "not." These rules [inference and equivalence rules though the subject doesn't know this] show that expressions of certain forms . . . can be transformed into expressions of somewhat different form. . . . The subject has practiced applying the rules, but he has previously done only one other problem like this. The experimenter has instructed him that his problem is to obtain the expression in the upper right corner from the expression in the upper left corner using the twelve rules. . . .  The subject was also asked to talk aloud as he worked; his comments were recorded and then transcribed into a "protocol"; i.e., a verbatim record of all that he or the experimenter said during the experiment. (Newell & Simon 1963, pp. 278-279)
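
For orientation: the expression on the left of the blackboard is the given premise and the expression on the right is the target to be obtained. One way to see that the target does follow -- offered purely as an illustration, not as the subject's route and not as an application of his particular twelve rules -- is by reductio:

  1. (R -> -P) & (-R -> Q)    [given]
  2. -Q & P                   [supposed, for reductio]
  3. P                        [from 2]
  4. -R                       [from 3 and the first conjunct of 1: were R true, -P would follow, contradicting 3]
  5. Q                        [from 4 and the second conjunct of 1]
  6. -Q                       [from 2, contradicting 5]
  Therefore, -(-Q & P).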

Here is an excerpt from the initial portion of this subject's protocol.
Well, looking at the left hand side of the equation, first we want to eliminate one of the sides by using rule 8 [A & B -> A / A & B -> B]. It appears too complicated to work with first. Now -- no, -- no, I can't do that because I will be eliminating either the Q or the P in that total expression. I won't do that first. Now I'm looking for a way to get rid of the [arrow] inside the two brackets that appear on the left and right sides of the equation. (Newell & Simon 1963, p. 280)
In treating protocols of such "blind" deductions as sources from which information about the subject's deductive thought processes can reliably be extracted, Newell and Simon, like Turing, seem to credit intuitions about these blind traces contrary to Searle's.

Nix

The point of experiments is to adjudicate between competing hypotheses.  Since nondualistic (functionalist, behaviorist, and neurophysicalist) hypotheses about the nature of thought do not privilege the first person, to tender overriding epistemic privileges to the first person fatally prejudices the experiment. The tendency of other blind trace scenarios to yield intuitions conflicting with Searle's reinforces this indictment. Though Searle is (understandably) keen to style his experiment an attempt just to "remind us of a conceptual truth that we knew all along," it's not. Neither, as just seen, can it bear any empirical weight.7

Unequivocal Intentionality

Searle frequently insists that the intuitions his thought experiment provokes are not Cartesian and have nothing to do with Dualism, just common sense. He would, no doubt, dismiss contrary intuitions evoked in others, or by other blind trace experiments, as the theory-laden and biased ones. The intentionality Turing thought he descried in the Wrens and Bombes, and that Newell & Simon perhaps took for granted in the protocols they elicited -- according to Searle's elaboration on this -- is not real intrinsic intentionality like ours. Neither Wrens, nor Bombes, nor Newell and Simon's "deducers," to this way of thinking, really has thought processes; not in "exactly the same sense that you and I have" (Searle et al. 1984, p. 146: my emphasis). Similarly, when we describe the doings of computers in intentional terms -- as when we say they're "seeking" and "deciding" things -- such speaking, Searle insists, does not attribute actual intrinsic intentionality to the devices. It's figurative speaking: as-if attribution. Since the mental terms are not being used in their literal senses, they're being used ambiguously. Would-be contrary blind trace examples and intuitions, like Turing's and Newell & Simon's, then, are undone.  Undone, Searle would have it, by equivocation on the mental terms in their would-be conclusions.  By the same stroke all independent-seeming evidence of machine intelligence is also undone.  Computers just act as if intelligent.  Deep Blue doesn't even play chess.  Computers don't even compute.  Not really; not literally. (Cf. Searle 1999).

Of course,

If we face a putative counterexample to our favorite philosophical thesis, it is always open for us to protest that some key term is being used in a special sense, different from its use in the thesis. We may be right, but the ease of the move should counsel a policy of caution. Do not posit an ambiguity unless you are really forced to, unless there are really compelling theoretical or intuitive grounds to suppose that an ambiguity really is present. (Kripke 1977, p. 268).
Searle himself, in a different context (where AI is not in question), agrees,
an ordinary application of Occam's razor places the onus of proof on those who claim that these sentences are ambiguous. One does not multiply meanings beyond necessity. (Searle 1975b, p. 40).
I submit there are no compelling intuitive reasons for accepting the ambiguity between "intrinsic" and "as if" (attributions of) intentionality Searle alleges. Intuitive tests for ambiguity yield no evidence of ambiguity in such contexts. Tests, for instance, which enable us to "hear" ambiguity as zeugma or punning in certain contexts yield no sense of zeugma or punning when applied to mental predications of computers.8 There are, it seems, then, compelling intuitive grounds to suppose that such predications are unambiguous literal predications.  The theoretical grounds Searle does offer for positing an ambiguity here, where intuition recognizes none, are woefully inadequate. Consciousness, on Searle's account, is what confers real intrinsic intentionality. Yet he confesses, "The real gap in my account is . . . that I do not explain the details of the relation between Intentionality and consciousness" (Searle 1991, p. 181).  Indeed, he scarcely explains it even in outline.

Ambiguity Tests

Following Zwicky & Sadock,
From here on the count noun understanding is a neutral term to cover both those elements of `meaning' (in a broad sense) that get coded in semantic representations, and those that do not. Each understanding corresponds to a class of contexts in which the linguistic expression is appropriate -- though, of course, a class of contexts might correspond to several understandings, as in examples like Someone is renting the house (courtesy of Morgan [1972]). (Zwicky & Sadock 1975, p. 3, n. 9).
Though "philosophers perennially argue for ambiguities on the basis of a difference in understanding alone," Zwicky & Sadock note (1975, p.4), nevertheless,
It will not do, of course, to argue that a sentence is ambiguous by characterizing the difference between two understandings. (Zwicky & Sadock, p.3)
A difference in understanding is a necessary, but not a sufficient, condition for ambiguity. (Zwicky & Sadock, p.4)
The choice between ambiguity, "several underlying syntactic (or semantic) representations" (Zwicky & Sadock, p. 2), and lack of specification, "a single representation corresponding to different states of affairs" (Zwicky & Sadock, p. 2), remains open. To illustrate this second notion, and the contrast with ambiguity, Zwicky & Sadock consider as an example the sentence (Zwicky & Sadock, p. 2)
My sister is the Ruritanian secretary of state.
This sentence, it may be observed,
is unspecified (general, indefinite, unmarked, indeterminate, vague, neutral) with respect to whether my sister is older or younger than I am, whether she acceded to her post recently or some time ago, whether the post is hers by birth or by merit, whether it has an indefinite tenure or will cease at some specific future time, whether she is right-handed or left-handed, and so on. (Zwicky & Sadock, p. 2-3)
Yet it shouldn't be said that this sentence is
many ways ambiguous just because we can perceive many distinct classes of contexts in which it would be appropriate, or because we can indicate many understandings with paraphrases. (Zwicky & Sadock, p.4)
Compare Deep Blue considers sacrificing a pawn and Kasparov considers sacrificing a pawn. The difference between my understandings of "considers" in these two sentences seems quite like the difference between these various understandings of "Secretary of State" in Zwicky & Sadock's example. It seems unlike the difference between the disparate understandings of such clearly ambiguous sentences as "They saw her duck" and "He cooked her goose" (Zwicky & Sadock, p. 3). The disparate understandings of "her duck" here -- i.e., "a certain sort of bird" and "a certain kind of action" (Zwicky & Sadock, p. 4) -- don't seem to be "the sort of thing that languages could plausibly fail to specify" (Zwicky & Sadock, p. 4) semantically. In such cases, where "lack of specification is implausible" (Zwicky & Sadock, p. 4), "the burden of proof falls on anyone who insists that [the] sentences . . . are unspecified rather than ambiguous" (Zwicky & Sadock, p. 4). On the other hand, sentences like "My sister is the Ruritanian Secretary of State," despite being "unspecified with respect to some distinction" (Zwicky & Sadock, p. 4), and indeed any number of them, nevertheless "have otherwise quite similar understandings" (Zwicky & Sadock, p. 4). The distinctions are all "the sort of thing that languages could plausibly fail to specify" (Zwicky & Sadock, p. 4), and the burden of proof falls on anyone who insists that the sentences are ambiguous. Zwicky & Sadock propose a number of tests whereby this burden may be discharged, or by means of which to assess possible borderline cases.

In another context -- not in connection with the claim that predications of mental terms to computers are figurative "as-if" predications -- Searle himself accepts tests for distinguishing ambiguity from lack of specification in questionable cases. Searle asks us to consider "the following sequence of rather ordinary English sentences, all containing the word `cut'" (Searle 1980d, p.221):

1. Bill cut the grass.
2. The barber cut Tom's hair.
3. Sally cut the cake.
4. I cut my skin.
5. The tailor cut the cloth.
---------------------------------------
6. Sam cut two classes last week.
7. The President cut the salaries of the employees.
8. The Raiders cut the roster to 45.
---------------------------------------
9. Bob can't cut the mustard.
10. Cut the cackle!
11. Cut it out!
Searle deems it "more or less intuitively obvious" (Searle 1980d, p.221) that "the occurrence of the word `cut' in the utterances of 1-5 is literal" (1980d, p.221). Similarly, he deems it obvious that "the sense or senses in which `cut' would be used in the utterances of 6-8," on the other hand, "is a figurative extension of the literal meaning in 1-5" (Searle 1980d, p.222). In 9-11 "the occurrences of the word `cut' are clearly in idioms" (Searle 1980d, p. 222): these will not concern us. The main problem is how justify the distinction between the first group (1-5) and the second group (6-8) "if someone wanted to deny it" (Searle, 1980d, p.222). Searle proposes, that the distinction between the literal use of "cut" in 1-5 and its figurative employment in 6-8 can be made out by four different tests.9
Asymmetrical Dependence of Understanding: "A person who doesn't understand 6-8, but still understands 1-5, understands the literal meaning of the word `cut'; whereas a person who does not understand 1-5 does not understand that literal meaning; and we are inclined to say he couldn't fully understand the meaning of `cut' in 6-8 if he didn't understand the meaning in 1-5." (Searle 1980d, p.222)
Translation: "in general, 1-5 translate easily into other languages; 6-11 do not." (Searle 1980d, p.222)
Conjunction Reduction: "certain sorts of conjunction reductions will work for 1-5 that will not work for the next group. For example,
12. General Electric has just announced the development of a new cutting machine that can cut grass, hair, cakes, skin, and cloth.
But if I add to this, after the word `cloth', the expression, from 6-8, `classes, salaries, and rosters', the sentence becomes at best a bad joke and at worst a category mistake." (Searle 1980d, p.222)
Comparative Formation: "the fact that we can form some comparatives such as, `Bill cut more off the grass than the barber did off Tom's hair', is further evidence that we are not dealing with ambiguity as it is traditionally conceived." (Searle 1980d, p.224)
Let us put our problematic class of mental predications of machines to these tests to see how they fare -- whether they support the judgment that such usages are figurative, like the uses of "cut" in 6-8.

Translation

It is manifestly false that statements like "Deep Blue considers possible continuations of play," "DOS recognizes the dir command," "My pocket calculator calculates that the square root of 2 is 1.4142135," and the like, are particularly difficult to translate into other languages. Computer manuals help themselves generously to locutions of this sort and are published in and translated to and from English, Japanese, French, German, etc. By the translation test, it seems attributions of mental properties to computers, as in computer manuals, are literal and not figurative.

Comparative Formation

Comparatives such as "Deep Blue considers more continuations to a greater depth than its human opponent" and, "My pocket calculator extracts square roots more quickly and accurately than I" are familiar locutions. They provoke no sense of zeugma: we "hear" no punning. This argues that "we are not dealing with ambiguity as it is traditionally conceived" (Searle 1980d, p.224) here just as it argued that we were not using "cut" ambiguously when we spoke of Bill cutting the lawn and the barber cutting Tom's hair. We can form the comparison without making a bad joke or a category mistake.

Conjunction Reduction

Here again the test seems to support the verdict that the usages at issue are literal and not figurative. Consider the sentence, "Kasparov considered the possible continuation QxR check." If I add "and so did Deep Blue," there is no zeugma. The Comparative Formation and Conjunction Reduction tests enable us to "hear" ambiguity as zeugma or punning ("a bad joke"). Thus the humorous impression made by a conjunction reduction like "She came home in a flood of tears and a sedan chair" reveals an ambiguity of "in" between the sentential context, "She came home in a flood of tears" and the sentential context "She came home in a sedan chair." Unlike "GE's new cutting machine cuts cloth and salaries," I hear no pun (or otherwise have any intuition of semantic anomaly) in "Kasparov considered the continuation QxR check, and so did Deep Blue."

Asymmetrical Dependence of Understanding

This test yields no firm verdict against the literalness of the attributions in question. The other three tests appeal beyond our original intuitions, either to empirical facts of translatability or to special comparative or conjunctive contexts where ambiguities can be heard as zeugma or punning. The understanding test, however, does not seem to extend our original intuitions. If your original intuition, like Searle's, is that mental predications of computers are figurative and equivocal, you'll intuit that someone who didn't know how to make such predications of people but only knew how to ascribe these predicates to machines would "not understand the literal meaning" and that one who hadn't first learned how to make such predications of people "couldn't fully understand" the meaning of such terms as applied to computers. If you don't share Searle's original intuition you won't share these further intuitions either. The other tests have a degree of theory neutrality our intuitions about priority and asymmetrical dependence of understanding lack. (Searle himself presents the Understanding Test as continuous with his initial intuitive demarcation of literal, figurative, and idiomatic uses (Searle 1980d, p.221-222), and seems to admit its indecisiveness in this connection.) The other three tests are what come in when the distinction between the literal uses and the figurative uses isn't so obvious as in the "cut" examples. The other three tests come in "if someone wanted to deny" (Searle 1980d, p.222) your initial intuitions . . . as I deny Searle's intuitions about the figurativeness of mental attributions to computers. The verdict these tests render in this case -- in the case of such mental attributions as Searle would deem figurative (and equivocal), which I deem literal (and unequivocal) -- is clear: it's unequivocal.

Nix Nix

None of the ambiguity tests Searle proposes warrants any judgment of figurativeness, or supports a charge of ambiguity, against mental attributions to computers. The Understanding Test is too theory dependent to serve as a check on equally theory dependent intuitions about figurativeness and literalness. The more theory neutral tests of Translation, Conjunction Reduction, and Comparative Formation, on the other hand, all tell in favor of the literalness and univocality of the predications in question. If Searle's Chinese Room Experiment motivates and licenses a semantic distinction between "as-if" and "intrinsic" attributions of mental properties it will have to do so in opposition to, and not in agreement with, the semantic intuitions tapped by these tests. If the Chinese room thought experiment is supposed to license and motivate the claim that attributions of intentional phenomena to computers and their ilk are literally false, it will have to do so in opposition to a wealth of actual observations.  We say, "my calculator calculates that 5/12 = .4166666", "DOS recognized the DIR command," "Deep Blue considered castling queen side," and the like. And we mean it.

Naive AI

I don't claim to know what thought or intelligence essentially is, myself.  I even have grave doubts that it is essentially any way (but that's another story).  Still, I think, I know it when I see it.   I agree with Searle (1980a, p. 422) that whether computers think (or someday will) is "an empirical question".  I further confess that, beyond the platitudinous -- "Use your ingenuity. Use any weapon at hand, and stick with any weapon that works" (Searle 1990b, p. 640) -- I have next to nothing to say about what it is I think I see when I think I see it, and little to say about how I know it when I see it either.  Except this: what ingenuity suggests to explain Deep Blue's intercourse with Kasparov is "Deep Blue considers castling queen side," "seeks to protect its king," "decides to castle king side," and the like.  It may seem uncomfortably like chicken sexing; but it works (cf. Hauser 1993b).

Here, some may worry, with Searle, that failure to distinguish between as-if attribution (as to computers) and literal attribution (as to ourselves) will result in panpsychic proliferation: not just to simple devices such as thermostats, but even to plants; even to "water flowing downhill" which "tries to get to the bottom of the hill by ingeniously seeking the line of least resistance" (or so it may seem).  And "if water is mental," Searle adds, "everything is mental" (1989b, p. 198).  I won't speak to thermostats and plants; at least not yet.  I will speak to the water.  There is a genuine difference between the sense in which water seeks the bottom of the hill and the sense in which I do, I think.  A difference having to do with Grice's (1957) distinction between natural and nonnatural (attributions of) meaning, however, not with Searle's alleged distinction between literal and as-if (attributions of) intentionality.  Just as the smoke from my neighbor's chimney cannot (be said to) naturally-mean there's a fire on his hearth unless there is a fire on his hearth, neither can water be said to seek the bottom of the hill by the line of least resistance unless there is a hill and a line of least resistance.  Ponce de Leon, on the other hand, may well seek to find the Fountain of Youth, regardless of its nonexistence, much as the sentence "There's fire on his hearth" can nonnaturally-mean there is, even when there isn't. This ambiguity -- akin to Grice's natural and nonnatural senses of meaning -- is both theoretically well motivated, I take it, and one that standard ambiguity tests arguably do reveal.  Consider:

  1. That smoke and 'Feuer' mean fire.
  2. Jack and Jill sought to fetch a pail of water in going up the hill and the water the line of least resistance in coming down.
  3. Jack and Jill sought the pail of water less successfully than the water the line of least resistance.
If our intuitions are unclear with regard to simpler devices than computers, and simpler organisms than us -- or even the cases just cited, themselves -- I say, let the chips fall where they may.  Let them fall wherever working (predictive-explanatory) ingenuity, differences of meaning that do get intuited in standard ambiguity tests, and warranted scientific theoretic considerations dictate.  And if the chips fall too far towards panpsychism for comfort in Christendom . . . this is not a scientific consideration.  Searle's would-be blanket distinction between as-if thought or intentionality (theirs) and real literal thought or intentionality (ours) promises to salve such discomfort; but, alas (for panpsychic discomfort sufferers), Searle's distinction is baseless.  Ex hypothesi -- since as-if-thinking things behave exactly as if they think -- the distinction lacks any predictive-explanatory basis.  Given the failure of standard ambiguity tests to reveal any such ambiguity (as we have seen), it lacks any semantic-intuitive basis.  And given the inadequacies of the Chinese room experiment (shown above) it lacks any scientific-theoretic basis.

Conclusion

If it looks like a duck and walks like a duck and quacks like a duck it ain't necessarily a duck; but it is prima facie a duck.  Where applications of standard ambiguity tests yield intuitions to the contrary, accusations of ambiguity -- or introductions thereof -- need theoretical warrant.  Under the present (mainly, received "folk psychological") conceptual dispensation, for all we presently scientifically understand, computers literally do seek and decide things and have other characteristically intentional qualities such as we observe of them.  Perhaps cognitive science will eventually precisify everyday "folk psychological" concepts in such a way as to rule out (certain) computational would-be cases; or future research will discover that what we've mistakenly been calling "seeking" and "deciding" in (certain) computers isn't really of the same nature as human seeking and deciding (as research previously discovered that whales aren't biologically fish and that solid-seeming glass is really, chemically, liquid).  Allowing for this, given an explanation of "the details of the relation between Intentionality and consciousness" (Searle 1991, p. 181), Searle could have theoretical grounds for distinguishing real intrinsic conscious intentionality from bogus as-if intentionality; he could, but doesn't.10  He doesn't provide so much as an abstract -- much less the details -- of the wanted phenomenological account of intentionality.  Small wonder: Searle's would-be differentiation of intrinsic intentionality (ours) from as-if intentionality (theirs) depends crucially on discredited dualistic notions of subjective intrinsicality according to which meaning is phenomenologically "in" consciousness (see Hauser 1993a, Chap. 5).  Since scientific rehabilitation of such notions seems prerequisite for spelling out the "details of the relationship between consciousness and intentionality," I, for one, am not holding my breath.

Contrary to Searle's failed thought experiment, there is ample evidence from real experiments -- e.g., intelligent findings and decisions of actual computers running existing programs -- to suggest that processing does in fact suffice for intentionality.  The same evidence likewise supports (to a lesser degree, of course) the more daring claim that (the right) processing suffices essentially or in theory.  Nothing in the Chinese room stands in the way here either.

Acknowledgments

I am indebted to Carol Slater and Herbert Simon for their extensive comments on earlier drafts of this paper.  I also thank the other members of the Michigan State University Philosophy of Language Discussion Group -- Barbara Abbott, Aldo Antonelli, Gene Cline, Rich Hall, Myles McNally, and Paul Rusnock -- for their many useful comments and suggestions on earlier drafts.  Additionally, I owe a considerable debt of gratitude to John Preston and Mark Bishop for their editorial guidance, and especially for their insightful comments and pressing questions regarding several key issues.  Mistakes, misunderstandings, and other infelicities remaining are, of course, my own.

Notes

  1. Since the focus of this paper is the bearing of Searle's Chinese room argument on claims relating to the existence of artificial intelligence (not on claims that programs explain human intelligence); since the relation between the two questions is by no means simple or uncontroversial (see Hauser 1993a, Chap. 2, Sec. 2 for discussion); and since Searle's own discussion is almost wholly couched in metaphysical (not epistemological) terms; I pursue the methodological question no further.
  2. See Hauser 1997 for further discussion.
  3. Less simple cases -- e.g., processes that loop -- complicate the simple isomorphism between typographic order and temporal sequence.  It might also be asked (as one reviewer of this essay asked), "Why shouldn't there be a perfectly good dynamic conception of syntax?"  In either case a more concrete notion of implementation may be invoked.  Such a notion needs to be made out anyhow -- if computation is to explain actual behavior -- and amounts here just to the (rightful) recognition that candidate thinkings are not processes in the abstract, but in real time.  David Chalmers, in a similar vein, observes, "Certainly no mere program is a candidate for possession of a mind. Implementations of programs, on the other hand, are concrete systems with causal dynamics, and are not purely syntactic" (1996, p. 327).
  4. I speak of thoughts (in place of minds) here. Since "mind", for Searle, abbreviates "mental processes" (Searle 1984, p. 39) -- so, e.g., the phrase "brains cause minds" serves "just as a slogan" for "the brain causes mental states" -- nothing except expository ease turns on this move from talk of minds to talk of thoughts (i.e., of mental processes or states). For me, as for Searle, I take it, a thing is or has a mind just in case it thinks or has mental properties.
  5. Hauser 1997 discusses this oddity in more detail.
  6. Searle frequently protests that he employs no "Cartesian apparatus" (Searle 1992, pp. xxi, 14) or "Cartesian paraphernalia" (Searle 1987, p. 146), and in several places (Searle 1992, p. 95f; 1987, p. 146) explicitly disavows "privileged access" (at least in name). He protests too much, I think.
  7. It might be urged that the Chinese room experiment at least shows that when the first-person point of view is considered to be privileged -- or even equal -- blind trace scenarios speak against AI; so, such cases provide no such support for AI and Computationalism as Turing, Newell, and Simon, perhaps, have taken them to provide.  At least, to enlist third-person intuitions about blind trace scenarios as supportive of AI, it may here be urged, exhibits a contrary methodological bias -- against dualistic hypotheses -- on a par with the pro-dualistic bias of the Chinese room experiment.  I think the parity here is overstated. Ab initio (considering past performance) dualistic hypotheses are more worthy of scientific dismissal than competing physicalist hypotheses.  Likewise, private "first person" introspection reports or (dis)avowals are less worthy of scientific credence than public "third person" observation reports. Imagined introspective observations or (dis)avowals (as in the Chinese room scenario) are less worthy yet.  But even if parity is granted, the accusation of methodological bias against the Chinese room remains, along with independent evidence of AI from actual intellectual accomplishments by real machines; for all Searle imagines.
  8. Zeugma involves, roughly, the use of a single token of an expression to govern or modify two or more words to which the expression applies in different senses as in, "She came home in a flood of tears and a sedan chair," "The President let his pants and the country down," etc.
  9. The conjunction reduction and comparative formation tests are advocated by Zwicky & Sadock along with several other tests Searle doesn't mention.  I only consider the tests Searle himself proposes here.  The other tests described by Zwicky & Sadock (not mentioned by Searle) similarly fail to discover ambiguity in the cases of mental predications of computers at issue.
  10. Such an explanation might be grounds for dismissing the apparent intellectual attainments of computers as counterfeits; but this is very far from assured.  Even given a triumphant phenomenological theory of intentionality, the application of the account to exclude computers would not be easy (to say the least) due to the "ontological subjectivity" (Searle 1989b, p.194) and consequent epistemic privacy of the explanans.

Works Cited