My discussion here will be directed at the claims I have defined as those of strong AI, specifically the claim that the appropriately programmed computer literally has cognitive states and that the programs thereby explain human cognition. (Searle 1980a, p. 417)

"Strong AI," here, seems to denote the conjunction of the metaphysical claim, "the appropriately programmed computer literally has cognitive states," and the methodological one, "that the programs thereby explain human cognition."1 It is natural to think that the metaphysical claim being targeted is the affirmative answer to Turing's (1950) question "Can a machine think?" The plan of the original Chinese room article (mirroring Turing 1950) and not a few of Searle's subsequent remarks encourage just this interpretation. Take, for instance, Searle's characterization of Strong AI in the 1984 panel discussion of the question "Has Artificial Intelligence Research Illuminated Human Thinking?" as
the view that says: "It isn't just that we're simulating thinking or studying thinking. Our appropriately programmed computers with the right inputs and outputs will literally have thought processes, conscious and otherwise, in the same sense that you and I do." (Searle et al. 1984, p. 146)

Searle continues,
I like that thesis, because it's clear that we know exactly what someone is saying when he says, "Look, my machine or the machine we're going to eventually build has thought processes in exactly the same sense that you and I have thought processes." It's clear, and it's false, and it's demonstrably false. (Searle et al. 1984, p. 146)

The "demonstration" that follows is the Chinese Room, patently here being advertised as targeting the thesis that computers already do or someday will think; as targeting AI proper. Likewise, Chapter 2 of Minds, Brains, and Science -- where Searle first formalizes the Chinese Room Argument (as below) -- is titled "Can Computers Think?"
There are several common misunderstandings of the Chinese Room Argument and of its significance. Many people suppose that it proves that `computers cannot think'. But that is a misstatement. (Searle 1994, p. 546)

Retraction noted and accepted.
But if the argument doesn't target the thesis that "the appropriately programmed computer literally has cognitive states," what does it target? "Strong AI," as ever, is what the targeted view is called, but Strong AI is now to be understood, strictly, as the view that "the mind is a computer program" (1994, p. 546): the target is Computationalism.
Computationalism says that computation is what thought is essentially: (the right) computation is metaphysically necessary for thought, i.e., regardless of anything else that could be the case, nothing would be thinking if it weren't (right) computing; and (right) computation metaphysically suffices, i.e., such computation would be thought under any circumstances, no matter what else might be the case. Much as water has a chemical essence that chemical science has discovered, Computationalism maintains, thought has a computational essence that cognitive science will discover. Thus aimed, against Computationalism,
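Schematically (the notation is mine, not Searle's: read $T(x)$ as "$x$ thinks" and $C(x)$ as "$x$ performs the right computations"), the two modal claims might be rendered:

$$\Box\,\forall x\,\bigl(T(x) \rightarrow C(x)\bigr) \;\;\text{(necessity)} \qquad\qquad \Box\,\forall x\,\bigl(C(x) \rightarrow T(x)\bigr) \;\;\text{(sufficiency)}$$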
this is the point of the parable -- if I don't understand Chinese on the basis of implementing the program for understanding Chinese, then neither does any digital computer solely on that basis [my emphasis] because no digital computer has anything that I do not have [in the example]. (Searle 1994, p. 546)

Depending, as it does, on the modal force of "solely on that basis," this "simple refutation of Strong AI" (Searle 1994, p. 546) is deceptively unsimple and, in fact, is very far from being "obviously sound and valid" (Searle 1994, p. 546).
Of course, if you dismiss the apparent mentality of extant machines as bogus or "as if," as many, if not most, cognitive scientists incline to do; if you make Computationalism your sole basis for belief that computers can or someday will think; then the Chinese room may seem, to you, unproblematically, to target both. But while the psychological gap between Computationalism and AI may, for many cognitive scientists, be slight, the real logical and epistemological gap is considerable. What Searle proposes to bridge the gap -- "[b]rains cause minds" and "[a]nything else that caused minds would have to have causal powers at least equivalent to those of a brain" (Searle 1984, pp. 39-40) -- moreover, are argumentative matchsticks.2 Consider, for instance, the following would-be bridging argument:
1. Brains cause minds (mentality).
2. Anything else that caused minds would have to have causal powers at least equivalent to a brain.
3. Computation alone does not cause minds.
∴ 4. Something else about the brain causes mentality.
5. Digital electronic computers lack this something else.
∴ 6. Digital electronic computers don't have mentality.

The trouble is that premise 5 is not so easily supported. In fact, it's insupportable unless we're told what this additional something is and wherein computers are supposed to be lacking it. Most importantly -- as difficult as it may be for those who have antecedently dismissed the possibility to allow it -- observational evidence strongly suggests that computers already do seek, compare, and decide things. On the evidence the would-be bridging argument might more plausibly be continued from 4 as follows:

5a. Digital computers exhibit mentality.
∴ 6a. Digital computers have this something else.

Bridge out.
The Chinese Room Argument itself, as Searle formalizes it, is a derivation of a conclusion from three premises:
premise 1: programs are formal (syntactical),
premise 2: minds have contents (semantics),
premise 3: syntax is not sufficient for semantics.
From these three propositions the conclusion logically follows:
programs are not minds.
But this conclusion misses the point. Computationalism holds that "the essence of the mental is the operation [my emphasis] of a physical symbol system" (a formulation of Newell 1979, cited at Searle 1980a, p. 421). The Computationalist hypothesis identifies minds with processes: with program operations or program runs and not simply with the programs themselves. Obviously, no one ever proposed to identify static program instantiations, e.g., on diskettes, with cognition. Only operating programs are candidate thinkers.
Suppose, then, we modify the conclusion actually to speak to the Computationalist hypothesis by substituting "processes" for "programs," substituting likewise in the first premise to preserve logical structure. Thus revised, the argument is to the point; but it's unsound. Reformulated premise 1 -- "processes are syntactic" -- is false. Processes are not syntactic: not purely so, as the would-be modal force of the argument requires. The sentence tokens `Ed ate' and `Ed ate', though physically different (differently shaped: rendered in different typefaces), have the same syntactic form. This sameness of syntactic form is what makes them instances of the same sentence. Similarly, every instantiation of a given program is syntactically identical with every other instance: this is what makes them instances of the same program. In the simplest case, since the spatial sequence of stored instructions and the temporal sequence of operations resulting from the execution of those instructions have the same syntax, the difference between inert instantiation and dynamic instantiation is not syntactic. The property of being a process is not, then, a purely formal or syntactic property but includes, essentially, a nonsyntactic element -- an element of dynamism -- besides. (See Hauser 1997, p. 211.)3
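The point can be put concretely. Here is a minimal sketch (mine, in Python; the one-line program text is invented for illustration) of two physically distinct tokens that are instances of one and the same program, and of the nonsyntactic difference between that program's inert and dynamic instantiations:

```python
import ast

# Two tokens of the same program text: imagine one rendered in roman type,
# the other in italics; physical differences between tokens don't matter.
source_a = "total = sum(range(4))"
source_b = "total = sum(range(4))"

# Syntactic identity: both tokens parse to the same abstract syntax tree.
# This sameness of form is what makes them instances of the same program.
assert ast.dump(ast.parse(source_a)) == ast.dump(ast.parse(source_b))

# Inert instantiation: the text just sits in memory; nothing is computed.
# Dynamic instantiation (a process): executing the very same syntax
# unfolds in time. The difference is one of dynamism, not of syntax.
namespace = {}
exec(source_a, namespace)
assert namespace["total"] == 6  # 0 + 1 + 2 + 3: something happened
```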
Perhaps, then, the argument can be restated without the offending premise:

premise 2a: thought has contents (semantics),
premise 3a: computational processing (by itself) is not sufficient for semantics.

Therefore,

thought is not computational processing.

Note that the first premise drops out. The argument as restated has nothing to do with syntax.5 Also note that where Searle's original premise 3, "syntax is not sufficient for semantics," can credibly be styled "a conceptual truth that we knew all along" (Searle 1988, p. 214), 3a cannot. So, it seems, the experiment must bear some empirical weight; and here it's the experiment that doesn't suffice. It's methodologically biased, depending (as it does) on a dubious tender of epistemic privilege to first-person disavowals of understanding (and other intentional mental states). That other "blind trace" experiments in the recent history of cognitive science have even seemed to their advocates to yield intuitions not unsupportive of AI and Computationalism confirms the diagnosis of methodological bias and, perhaps, raises further issues about the robustness of Searle's thought experimental design.
It seems that only an a priori tender of epistemic privilege -- privilege for how it seems "from the point of view of the agent, from my [first person] point of view" (Searle 1980a, p. 420) to override all "external," "third person" appearances -- will suffice to save the experiment. This confirms what Searle's characterization of the experiment as implementing the methodological imperative "always insist on the first person point of view" (Searle 1980b, p. 451) suggests. The experiment does invite us "to regress to the Cartesian vantage point" (Dennett 1987b, p. 336), and it seems to succeed only if we accept the invitation. The experiment's support for would-be result 3a is just as dubious as the Cartesian grant of epistemic privilege on which it depends. Of course, one needn't be a metaphysical dualist (Cartesian or otherwise) to hold some methodological brief for first person authority. However, to sustain a case based on first person authority against AI -- especially in the face of overwhelming third person evidence for AI -- I submit, you do. You need to answer the question of what the additional something is in us that promotes our computation to thought, to counter the third-person evidence: that's metaphysical. And you need to answer, as Searle does, that "consciousness" or "ontological subjectivity" or "qualia" is what's essential: that's dualistic.6 The further question of wherein computers are lacking in ontological subjectivity -- and why we should think so -- remains. It remains because, on the third person evidence, the natural conclusion would seem to be -- again -- that whatever else besides computation is required for thought, computers have this too, since in seeking, comparing, and deciding things, as they evidently do, they evidently think.
To reiterate: given the abnormality of the case Searle describes, one's self-avowed or introspected lack of understanding of Chinese in the example does not suffice to establish actual lack of understanding in the face of overwhelming evidence of understanding "from the external point of view". If the experiment reliably showed that one would not be conscious of understanding the Chinese (stories, questions, and answers) one was processing, it would still fail to show that this processing would not amount to understanding; and even the antecedent here is ill-supported by Searle's thought experiment. Even if overriding epistemic privilege were granted to actual introspective deliverances or first-person disavowals, the Chinese room experiment's extension of such privilege to imagined introspections (under scarcely imaginable circumstances) would still be highly dubious.
Searle imagines himself to be hand tracing a natural language understanding program on Chinese input (stories and questions), thereby producing appropriate Chinese output (answers). Hand tracing is something computer programmers do to check and debug programs. In such cases, the programmer knows what the input means, what the output means, and what the program is supposed to be doing (the function it's supposed to be computing). The programmer must know these things in order to tell whether the program is processing the former into the latter correctly. The case Searle imagines is unusual in that the trace is "blind":
Unknown to me, the people who are giving me all of these symbols call the first batch "a script," they call the second batch a "story," and they call the third batch "questions." Furthermore, they call the symbols I give them back in response to the third batch "answers to the questions," and the set of rules in English that they gave me they call "the program." (Searle 1980a, p. 418)

Not only isn't Searle conscious of the meanings of the input and output strings he processes, he isn't even aware that the input strings are stories and questions, or that the output strings are answers. He may not even "recognize the Chinese writing as Chinese" or, for that matter, as writing, as opposed to "just so many meaningless squiggles" (Searle 1980a, p. 418). And he doesn't know that the set of instructions he is following, transforming input strings into output strings, is a natural language understanding (or any other kind of) program.
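A toy rendering of the tracer's predicament (my own sketch, in Python; the symbols and rule table are invented and stand in for no real natural language understanding program) makes the blindness vivid: shapes in, shapes out, no meanings consulted.

```python
# Hypothetical rule table pairing opaque input shapes with opaque output
# shapes. Following it takes only matching and copying symbols; nothing
# tells the follower that outsiders call the inputs "questions" and the
# outputs "answers to the questions."
RULES = {
    ("SQUIGGLE", "SQUOGGLE"): "SQUAGGLE",
    ("SQUOGGLE", "SQUIGGLE"): "SQUIGGLE",
}

def blind_step(batch):
    """Process one batch as the blind tracer does: look up by shape alone."""
    return RULES[tuple(batch)]

print(blind_step(["SQUIGGLE", "SQUOGGLE"]))  # SQUAGGLE -- an "answer," unbeknownst
```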
Searle's blind trace procedure has precedents in Turing's wartime use of "blind" human "computers" to decrypt the German naval code, and in Newell and Simon's use of "protocols" derived from "blind" inferences of human subjects to gather information about human reasoning processes. Yet these experimenters, reflecting on these blind trace cases, drew nothing like Searle's conclusion; they even seem moved to advocate opposing conclusions.
Here is how Newell and Simon describe the experimental situation:
A human subject, a student in engineering in an American college, sits in front of a blackboard on which are written the following expressions:

(R -> -P) & (-R -> Q) | -(-Q & P).

This is a problem in elementary symbolic logic, but the student does not know it [my emphasis]. He does know that he has twelve rules for manipulating expressions containing letters connected by ["ampersands" (&)], "wedges" (v), ["arrows" (->)] and ["minus signs" (-)], which stand [which the subject does not know] for "and," "or," "implies," and "not." These rules [inference and equivalence rules, though the subject doesn't know this] show that expressions of certain forms . . . can be transformed into expressions of somewhat different form. . . . The subject has practiced applying the rules, but he has previously done only one other problem like this. The experimenter has instructed him that his problem is to obtain the expression in the upper right corner from the expression in the upper left corner using the twelve rules. . . . The subject was also asked to talk aloud as he worked; his comments were recorded and then transcribed into a "protocol"; i.e., a verbatim record of all that he or the experimenter said during the experiment. (Newell & Simon 1963, pp. 278-279)

Here is an excerpt from the initial portion of this subject's protocol.

Well, looking at the left hand side of the equation, first we want to eliminate one of the sides by using rule 8 [A & B -> A / A & B -> B]. It appears too complicated to work with first. Now -- no, -- no, I can't do that because I will be eliminating either the Q or the P in that total expression. I won't do that first. Now I'm looking for a way to get rid of the [arrow] inside the two brackets that appear on the left and right sides of the equation. (Newell & Simon 1963, p. 280)

In treating protocols of such "blind" deductions as sources from which information about the subject's deductive thought processes can reliably be extracted, Newell and Simon, like Turing, seem to credit intuitions about these blind traces contrary to Searle's.
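Incidentally, the entailment the subject is groping toward does hold. A brute-force check (my verification, in Python; not anything Newell & Simon ran) confirms that the right-hand expression follows from the left-hand one on every assignment of truth values:

```python
from itertools import product

def implies(a, b):
    """Material conditional: a -> b."""
    return (not a) or b

# Does (R -> -P) & (-R -> Q) entail -(-Q & P)?
for P, Q, R in product([False, True], repeat=3):
    left = implies(R, not P) and implies(not R, Q)
    right = not ((not Q) and P)
    assert implies(left, right)  # holds on all eight valuations

print("Entailment holds on every truth-value assignment.")
```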
Searle would dismiss such seeming attributions of thought as equivocal: "as if" rather than literal. But recall Kripke's counsel:

If we face a putative counterexample to our favorite philosophical thesis, it is always open for us to protest that some key term is being used in a special sense, different from its use in the thesis. We may be right, but the ease of the move should counsel a policy of caution. Do not posit an ambiguity unless you are really forced to, unless there are really compelling theoretical or intuitive grounds to suppose that an ambiguity really is present. (Kripke 1977, p. 268)

Searle himself, in a different context (where AI is not in question), agrees:
an ordinary application of Occam's razor places the onus of proof on those who claim that these sentences are ambiguous. One does not multiply meanings beyond necessity. (Searle 1975b, p. 40)

I submit there are no compelling intuitive reasons for accepting the ambiguity between "intrinsic" and "as if" (attributions of) intentionality Searle alleges. Intuitive tests for ambiguity yield no evidence of ambiguity in such contexts. Tests, for instance, which enable us to "hear" ambiguity as zeugma or punning in certain contexts yield no sense of zeugma or punning when applied to mental predications of computers.8 There are, it seems, then, compelling intuitive grounds to suppose that such predications are unambiguous literal predications. The theoretical grounds Searle does offer for positing an ambiguity here, where intuition recognizes none, are woefully inadequate. Consciousness, on Searle's account, is what confers real intrinsic intentionality. Yet he confesses, "The real gap in my account is . . . that I do not explain the details of the relation between Intentionality and consciousness" (Searle 1991, p. 181). Indeed, he scarcely explains it even in outline.
Zwicky & Sadock supply useful terminology here:

From here on the count noun understanding is a neutral term to cover both those elements of `meaning' (in a broad sense) that get coded in semantic representations, and those that do not. Each understanding corresponds to a class of contexts in which the linguistic expression is appropriate -- though, of course, a class of contexts might correspond to several understandings, as in examples like Someone is renting the house (courtesy of Morgan). (Zwicky & Sadock 1975, p. 3, n. 9)

Though "philosophers perennially argue for ambiguities on the basis of a difference in understanding alone," Zwicky & Sadock note (1975, p. 4), nevertheless,
It will not do, of course, to argue that a sentence is ambiguous by characterizing the difference between two understandings. (Zwicky & Sadock, p.3)
A difference in understanding is a necessary, but not a sufficient, condition for ambiguity. (Zwicky & Sadock, p. 4)

The choice between ambiguity, "several underlying syntactic (or semantic) representations" (Zwicky & Sadock, p. 2), and lack of specification, "a single representation corresponding to different states of affairs" (Zwicky & Sadock, p. 2), remains open. To illustrate this second notion, and the contrast with ambiguity, Zwicky & Sadock consider, as an example, the sentence (Zwicky & Sadock, p. 2)
My sister is the Ruritanian secretary of state.This sentence, it may be observed,
is unspecified (general, indefinite, unmarked, indeterminate, vague, neutral) with respect to whether my sister is older or younger than I am, whether she acceded to her post recently or some time ago, whether the post is hers by birth or by merit, whether it has an indefinite tenure or will cease at some specific future time, whether she is right-handed or left-handed, and so on. (Zwicky & Sadock, pp. 2-3)

Yet it shouldn't be said that this sentence is
many ways ambiguous just because we can perceive many distinct classes of contexts in which it would be appropriate, or because we can indicate many understandings with paraphrases. (Zwicky & Sadock, p. 4)

Compare "Deep Blue considers sacrificing a pawn" and "Kasparov considers sacrificing a pawn." The difference between my understanding of "considers" in these two sentences seems quite like the difference between the various understandings of "Secretary of State" in Zwicky & Sadock's example. It seems unlike the difference between the disparate understandings of such clearly ambiguous sentences as "They saw her duck" and "He cooked her goose" (Zwicky & Sadock, p. 3). The disparate understandings of "her duck" here -- i.e., "a certain sort of bird" and "a certain kind of action" (Zwicky & Sadock, p. 4) -- don't seem to be "the sort of thing that languages could plausibly fail to specify" (Zwicky & Sadock, p. 4) semantically. In such cases, where "lack of specification is implausible" (Zwicky & Sadock, p. 4), "the burden of proof falls on anyone who insists that [the] sentences . . . are unspecified rather than ambiguous" (Zwicky & Sadock, p. 4). On the other hand, sentences like "My sister is the Ruritanian Secretary of State," despite being "unspecified with respect to some distinction" (Zwicky & Sadock, p. 4), and indeed any number of them, nevertheless "have otherwise quite similar understandings" (Zwicky & Sadock, p. 4). The distinctions are all "the sort of thing that languages could plausibly fail to specify" (Zwicky & Sadock, p. 4), and the burden of proof falls on anyone who insists that the sentences are ambiguous. Zwicky & Sadock propose a number of tests whereby this burden may be discharged, or by means of which to assess possible borderline cases.
In another context -- not in connection with the claim that predications of mental terms to computers are figurative "as-if" predications -- Searle himself accepts tests for distinguishing ambiguity from lack of specification in questionable cases. Searle asks us to consider "the following sequence of rather ordinary English sentences, all containing the word `cut'" (Searle 1980d, p.221):
1. Bill cut the grass.
2. The barber cut Tom's hair.
3. Sally cut the cake.
4. I cut my skin.
5. The tailor cut the cloth.
6. Sam cut two classes last week.
7. The President cut the salaries of the employees.
8. The Raiders cut the roster to 45.
9. Bob can't cut the mustard.
10. Cut the cackle!
11. Cut it out!

Searle deems it "more or less intuitively obvious" (Searle 1980d, p. 221) that "the occurrence of the word `cut' in the utterances of 1-5 is literal" (1980d, p. 221). Similarly, he deems it obvious that "the sense or senses in which `cut' would be used in the utterances of 6-8," on the other hand, "is a figurative extension of the literal meaning in 1-5" (Searle 1980d, p. 222). In 9-11 "the occurrences of the word `cut' are clearly in idioms" (Searle 1980d, p. 222): these will not concern us. The main problem is how to justify the distinction between the first group (1-5) and the second group (6-8) "if someone wanted to deny it" (Searle 1980d, p. 222). Searle proposes that the distinction between the literal use of "cut" in 1-5 and its figurative employment in 6-8 can be made out by four different tests.9
Asymmetrical Dependence of Understanding: "A person who doesn't understand 6-8, but still understands 1-5, understands the literal meaning of the word `cut'; whereas a person who does not understand 1-5 does not understand that literal meaning; and we are inclined to say he couldn't fully understand the meaning of `cut' in 6-8 if he didn't understand the meaning in 1-5." (Searle 1980d, p.222)
Translation: "in general, 1-5 translate easily into other languages; 6-11 do not." (Searle 1980d, p.222)
Conjunction Reduction: "certain sorts of conjunction reductions will work for 1-5 that will not work for the next group. For example,
12. General Electric has just announced the development of a new cutting machine that can cut grass, hair, cakes, skin, and cloth.
But if I add to this, after the word `cloth', the expression, from 6-8, `classes, salaries, and rosters', the sentence becomes at best a bad joke and at worst a category mistake." (Searle 1980d, p. 222)

Comparative Formation: "the fact that we can form some comparatives such as, `Bill cut more off the grass than the barber did off Tom's hair', is further evidence that we are not dealing with ambiguity as it is traditionally conceived." (Searle 1980d, p. 224)
Here, some may worry, with Searle, that failure to distinguish between as-if attribution (as to computers) and literal attribution (as to ourselves) will result in panpsychic proliferation: mentality will spread not just to simple devices such as thermostats but even to plants; even to "water flowing downhill," which "tries to get to the bottom of the hill by ingeniously seeking the line of least resistance" (or so it may seem). And "if water is mental," Searle adds, "everything is mental" (1989b, p. 198). I won't speak to thermostats and plants; at least not yet. I will speak to the water. There is, I think, a genuine difference between the sense in which water seeks the bottom of the hill and the sense in which I seek things. It is a difference having to do with Grice's (1957) distinction between natural and nonnatural (attributions of) meaning, however, not with Searle's alleged distinction between literal and as-if (attributions of) intentionality. Just as the smoke from my neighbor's chimney cannot (be said to) naturally-mean there's a fire on his hearth unless there is a fire on his hearth, neither can water be said to seek the bottom of the hill by the line of least resistance unless there is a hill and a line of least resistance. Ponce de Leon, on the other hand, may well seek to find the Fountain of Youth, regardless of its nonexistence, much as the sentence "There's fire on his hearth" can nonnaturally-mean there is, even when there isn't. This ambiguity -- akin to Grice's natural and nonnatural senses of meaning -- is both theoretically well motivated, I take it, and one that standard ambiguity tests arguably do reveal. Consider:

- That smoke and 'Feuer' mean fire.
- Jack and Jill sought to fetch a pail of water in going up the hill and the water the line of least resistance in coming down.
- Jack and Jill sought the pail of water less successfully than the water the line of least resistance.

If our intuitions are unclear with regard to simpler devices than computers, and simpler organisms than us -- or even with regard to the cases just cited themselves -- I say, let the chips fall where they may. Let them fall wherever working (predictive-explanatory) ingenuity, differences of meaning that do get intuited in standard ambiguity tests, and warranted scientific theoretic considerations dictate. And if the chips fall too far towards panpsychism for comfort in Christendom . . . this is not a scientific consideration. Searle's would-be blanket distinction between as-if thought or intentionality (theirs) and real literal thought or intentionality (ours) promises to salve such discomfort; but, alas (for panpsychic discomfort sufferers), Searle's distinction is baseless. Ex hypothesi -- since as-if-thinking things behave exactly as if they think -- the distinction lacks any predictive-explanatory basis. Given the failure of standard ambiguity tests to reveal any such ambiguity (as we have seen), it lacks any semantic-intuitive basis. And given the inadequacies of the Chinese room experiment (shown above), it lacks any scientific-theoretic basis.
Contrary to Searle's failed thought experiment, there is ample evidence from real experiments -- e.g., the intelligent findings and decisions of actual computers running existing programs -- to suggest that processing does in fact suffice for intentionality. The same evidence likewise supports (to a lesser degree, of course) the more daring claim that (the right) processing suffices essentially or in theory. Nothing in the Chinese room argument withstands this evidence either.