The Chinese Room

John Searle's (1980a) Chinese room argument is one of the best-known and most widely credited counters to claims of artificial intelligence (AI): to claims that computers can think, and to the allied doctrines of computationalism and cognitivism, which hold that thought generally, and human thought in particular, are species of computation.  Together these claims and doctrines comprise what Searle (1980a) dubs "strong AI."

Searle's would-be refutation of these claims, in the first instance, takes the form of a thought experiment: imagine yourself to be a monolingual English speaker hand-executing a computer program for understanding Chinese.  As Searle describes it, you are "given a large batch of Chinese writing" plus "a second batch of Chinese script" and "a set of rules" in English "for correlating the second batch with the first"; a third batch of Chinese symbols and more instructions in English enable you "to correlate elements of this third batch with elements of the first two batches" and instruct you "to give back certain sorts of Chinese symbols with certain sorts of shapes in response."  Imagine "you get so good" at using these look-up tables and following these instructions that your responses are "absolutely indistinguishable from those of Chinese speakers."  Just by looking at your answers, nobody on the outside can tell you "don't speak a word of Chinese."  By "manipulating uninterpreted formal symbols" according to rote instructions, you "behave like a computer."  Yet, in imagining yourself to be the person in the room, Searle declares, it is obvious that you "do not understand a word of the Chinese": you have "inputs and outputs that are indistinguishable from those of the native Chinese speaker," you can "have any formal program you like," but you "still understand nothing."  And since "the computer has nothing more than I have in the case where I understand nothing" (1980a, p. 418), Searle concludes, neither does the computer understand anything.  Furthermore, since "nothing . . . depends on the details of [the] programs," the same verdict "would apply to any [computer] simulation" of any "human mental phenomenon" (1980a, p. 417).  Contrary to "strong AI," then, no matter how intelligent a computer's behavior seems, and no matter what programming makes it behave that way, since the symbols it processes are meaningless to it, it will not really be thinking.  Just "as if" (Searle 1984, p. 50) thinking; mere simulation; not the real thing.
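To make vivid what "manipulating uninterpreted formal symbols" by rote amounts to, consider the following deliberately crude sketch (my illustration, not Searle's: the rule-book entries, the name room_operator, and the fallback reply are all invented for the example).  A program that actually passed as a Chinese speaker would need vastly more than a lookup table, but the point survives at any scale: every step is shape-matching on symbols the operator need not interpret.

    # Toy "rule book": purely formal pairings of input shapes with output
    # shapes. Nothing below assigns the symbols any meaning; the operator
    # just matches shapes and hands back whatever the book dictates.
    RULE_BOOK = {
        "你好吗": "我很好",        # invented entries; the person applying
        "你会说中文吗": "当然会",  # the rules need not know what they mean
    }

    def room_operator(input_symbols: str) -> str:
        """Return the output shapes the rule book pairs with the input shapes."""
        # When no rule matches, fall back to a stock reply ("please say it
        # again"); still pure shape-matching, no understanding required.
        return RULE_BOOK.get(input_symbols, "请再说一遍")

    print(room_operator("你好吗"))  # prints 我很好; nothing here "understands"

However sophisticated the rule book, even one that generates replies by elaborate computation rather than lookup, the operator's procedure remains, in Searle's terms, syntax without semantics.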

Searle (1984) elaborates on his initial (1980a) presentation by developing a "derivation" from "axioms," which Searle (1990) re-presents as follows:

(A1) Programs are formal (syntactic).
(A2) Minds have mental contents (semantics).
(A3) Syntax by itself is neither constitutive of nor sufficient for semantics.

The thought experiment supports A3.  From these axioms, Searle claims, we may derive this conclusion:  

(C1) Programs are neither constitutive of nor sufficient for minds.
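
The logical form of the derivation can be made explicit.  What follows is one possible regimentation, sketched in Lean; it is mine, not Searle's notation.  "X is sufficient for Y" is read as entailment over possible systems, and A3 is taken in the strengthened form the thought experiment is meant to support, namely that the room's occupant lacks semantics while running the program.  Under those readings, the inference to C1 checks out mechanically.

    -- One regimentation of Searle's derivation (an assumption-laden sketch,
    -- not Searle's notation). "X suffices for Y" = entailment over systems.
    variable {System : Type}
    variable (runsProg hasSyntax hasSem isMind : System → Prop)

    theorem C1
        (A1 : ∀ s, runsProg s → hasSyntax s)   -- (A1) programs are syntactic
        (A2 : ∀ s, isMind s → hasSem s)        -- (A2) minds have semantics
        -- (A3), as the room supports it: some system has the syntax and runs
        -- the program, yet still lacks the semantics.
        (A3 : ∃ s, runsProg s ∧ hasSyntax s ∧ ¬ hasSem s) :
        -- (C1) programs are not sufficient for minds.
        ∃ s, runsProg s ∧ ¬ isMind s :=
      match A3 with
      | ⟨s, hrun, _hsyn, hnosem⟩ => ⟨s, hrun, fun hmind => hnosem (A2 s hmind)⟩

So regimented, the logical work is done by A2 together with the existential witness; A1 is recorded for fidelity but idle, which accords with the observation above that it is the thought experiment that supports A3.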

The derivation speaks of "semantics"; the initial presentation was couched in terms of "intentionality"; in either case, meaning is featured as the missing ingredient that thought processes have and "formal" computational processes lack.  On further review, however, it has seemed to many "that consciousness is at the root of the matter" (Chalmers 1996, p. 322), owing to the experiment's reliance on the "first-person point of view" (Searle 1980b, p. 451).  Searle – not wanting to allow computers to think at all (not even unconsciously) – resists this construal.  However, by Searle's own reckoning, "Only a being that could have conscious intentional states could have intentional states at all" (Searle 1992, p. 132); so if the Chinese room is not in the first instance about consciousness, it is in the next.  Searle's defense of the argument, and his attendant doctrines of "intrinsic intentionality," "ontological subjectivity," and the "Connection Principle," are all about consciousness.

Initial objections to the Chinese room argument took two notable tacks.  One decried Searle's determination to "always insist on the first person point of view" (Searle 1980b, p. 451) as objectionably dualistic.  Another noticed that the symbols Searle-in-the-room processes are not meaningless ciphers but Chinese inscriptions.  As such, they are meaningful; and so is Searle's processing of them in the room, whether he knows it or not.  In reply to this second sort of objection, Searle insists that what is at issue here is intrinsic intentionality, not the mere derived intentionality had by inscriptions and other linguistic signs.  Whatever meaning might attach to Searle-in-the-room's computations due to the meaning of the Chinese symbols he processes will not be intrinsic to the process or the processor but "observer relative," existing only in the minds of beholders such as Chinese speakers outside the room.  The nub of the experiment, then, is that "instantiating a program could not be constitutive of intentionality, because it would be possible for an agent to instantiate the program and still not have the right kind of intentionality" (Searle 1980b, pp. 450-451; emphasis added): the intrinsic kind, which is, by Searle's own reckoning, the conscious kind.  Whether or not this is objectionably dualistic (as the first tack alleges), it certainly makes consciousness the root matter.

In the usual case, when someone doesn't understand a word of Chinese, this is apparent both from the "first-person point of view" of the agent and from the "third-person perspective" of querents.  The Chinese room scenario is designedly abnormal in just this regard: third-person and first-person evidence of understanding drastically diverge.  To the agent himself it seems clear he doesn't understand; to Chinese-speaking interlocutors, it seems equally clear that he does.  The initial experimental result, that Searle-in-the-room doesn't understand a word of Chinese, credits his imagined introspective sense of not understanding in the face of overwhelming third-person evidence to the contrary.  What is exceptionable is the transition from its seeming so from "the first person point of view" to its being so.  Why should the first-person perspective be so epistemically privileged as to override the third-person perspective?  This is warranted, Searle maintains, by the "ontological subjectivity" (Searle 1989b, p. 194) of the mental, meaning it "exists only as experienced by a human or animal subject and in that sense … exists only from a first-person point of view" (Searle 2004, p. 94).  "Consciousness and intentionality are unique in that they have a first-person ontology" (Searle 2004, p. 84).  Whether or not such posits of epistemic privilege and ontological subjectivity "regress to the Cartesian vantage point" (Dennett 1987b, p. 336), they do make consciousness "the essence of the mental" (Searle 1991b, p. 144).

Since it seems to me-in-the-room that I don't understand, yet seems to them outside that I do, why not describe this odd imagined phenomenon as "understanding unawares" – much as the much-discussed phenomenon of blindsight is described as a kind of "seeing unawares" – and dub it "blind" or "unconscious" understanding?  (It might even be said that since the program I'm imagined to implement in the room is a program for understanding Chinese, not a program for being aware of understanding Chinese, the imagined result – that I'm not aware of understanding – is in no way unexpected or counter to claims of strong AI.)  Searle's notorious "Connection Principle" – if it were tenable – would close off this avenue of escape by insisting that "ascription of an unconscious intentional phenomenon to a system implies the phenomenon is in principle accessible to consciousness" (Searle 1990f, p. 586).  The Connection Principle, however, is untenable.  As Searle explains it, "if we think of the ontology of the unconscious in the way suggested – as an occurrent neurophysiology capable of causing conscious states and events," the unconscious phenomenon does not "have aspectual shape ... right there and then" (Searle 1992, p. 169), while unconscious, "for the only occurrent reality of that shape is the shape of conscious thoughts" (Searle 1992, p. 171).  The trouble is that many well-established psychological phenomena seem explicable only on the hypothesis that unconscious mental states and processes do have intentionality (hence "aspectual shape") right there and then, for they are subject to intentional (semantically mediated) effects while unconscious.  "For example, subjects memorized the word pair 'ocean-moon' with the expectation that when they were later asked to name a detergent they would be more likely to give the target 'Tide' than would subjects who had not previously been exposed to the word pairs" (Nisbett & Wilson 1977, p. 243).

Searle himself allows that "[t]he real gap in my account is ... that I do not explain the details of the relation between intentionality and consciousness" (Searle 1991c, p. 181).  But the priority of consciousness is, for him, beyond doubt: "the mind consists of qualia [subjective conscious experiences] . . . right down to the ground" (Searle 1992, p. 20).

  • Searle, John. 1980a. "Minds, Brains, and Programs." Behavioral and Brain Sciences 3: 417-424.
  • Searle, John. 1980b. "Intrinsic Intentionality." Behavioral and Brain Sciences 3: 450-456.
  • Searle, John. 1984. Minds, Brains, and Science. Cambridge: Harvard University Press.
  • Searle, John. 1989a. "Reply to Jacquette." Philosophy and Phenomenological Research XLIX: 701-708.
  • Searle, John. 1989b. "Consciousness, Unconsciousness, and Intentionality." Philosophical Topics XVII(1): 193-209.
  • Searle, John. 1989c. "How Performatives Work." Linguistics and Philosophy 12: 535-558.
  • Searle, John. 1990. "Is the Brain's Mind a Computer Program?" Scientific American 262: 26-31.
  • Searle, John. 1991b. "The Mind-Body Problem." In John Searle and His Critics, ed. Ernest Lepore and Robert Van Gulick, 141-147. Cambridge, MA: Basil Blackwell.
  • Searle, John. 1991c. "Perception and the Satisfactions of Intentionality." In John Searle and His Critics, ed. Ernest Lepore and Robert Van Gulick, 181-192. Cambridge, MA: Basil Blackwell.
  • Searle, John. 1992. The Rediscovery of the Mind. Cambridge, MA: MIT Press.