The Ur Article
Searle, J., 1980a, "Minds, brains, and programs", Behavioral and Brain Sciences 3:417-424.
Against "strong AI" - the claim
that "the appropriately programmed computer really is a mind in
the sense that computers given the right programs can be literally said
to understand and have other cognitive states" (p. 417) -
Searle imagines himself locked in a room, receiving (as input) Chinese
writing (stories and questions); he processes the writing by following
a set of written instructions in English specifying a Natural Language
Understanding (NLU) program for Chinese modeled on Schank & Abelson
1977's Script Applier Mechanism (SAM); and he produces (as
output) Chinese writing (answers to the questions)
"indistinguishable from ... native Chinese speakers" (p. 418). In
the room, Searle is a Turing-test-passing (cf., Turing 1950) human computer (cf., Turing 1937); yet he doesn't
understand a word of Chinese. So, Searle concludes, neither does
a computer running SAM or any other NLU program. "The computer
has nothing more than I have in the case where I understand nothing"
(p. 418): it attaches no meaning (semantics) to the physical symbols
(syntax) it processes and hence has no genuine mental states.
This result is said to generalize to "any Turing machine simulation of
human mental phenomena" (p. 417).
Searle considers several would-be rejoinders to the experiment. The systems reply says the
thinker (in the scenario) isn't Searle, it's the whole
Searle-in-the-room system. Searle responds by imagining himself
to "internalize all the elements of the system" by memorizing the
instructions, etc.: "all the same," he intuits, he "understands nothing of the Chinese" and "neither does the system" (p. 419). The
robot reply would add "a set of causal
relation[s]" between the symbols and "the outside world" by putting
the computer in a robot. Searle replies that "the addition
of such 'perceptual' and 'motor' capacities adds nothing in the way
of understanding": imagining himself in the room in the robot,
computationally acting as "the robot's homunculus," still, he insists, in instantiating the program "I have no intentional states of the relevant type" (p. 420). The
brain simulator reply envisages a program that "simulates the
sequence of neuron firings in the brain of a native Chinese speaker"
(p. 420). Searle replies, "even getting close to the operation
of the brain is still not sufficient to produce understanding" (p.
421). The combination reply imagines
all of the above and Searle replies, in effect, that three times nil
is nil. The other minds reply insists "if you are going to attribute cognition to other people" on the basis
of their behavior "you must in principle also attribute
it to computers" on the basis of theirs. Searle dismisses this
as an epistemological worry beside his metaphysical point. "The
problem in this discussion," he says, "is not about how I know that
other people have cognitive states. But rather what it is that I am
attributing to them when I attribute cognitive states" and "it couldn't be just computational processes and their outputs because the computational processes and their outputs can exist without the cognitive state" (pp. 421-422). To the many mansions
reply - that would-be AI-crafters might succeed by supplemental (or
wholly other) noncomputational devices, if computational means don't
suffice - Searle retorts, this "trivializes the project of strong AI by
redefining it as whatever artificially produces and explains cognition"
(p. 422). In conclusion, Searle advances his own view that the
brain must produce intentionality by some noncomputational means
which are "as likely to be as causally dependent on ... specific
... as lactation, photosynthesis, or any other biological phenomenon" (p. 424).
Searle: Elaboration and Defense
Searle, J., 1980b, "Intrinsic Intentionality", Behavioral and Brain Sciences 3.
In this companion piece Searle
rebuts objections targeting the Ur (1980a)
article raised in the accompanying Open Peer
Commentary. This, he observes, requires him to "make fully explicit
some of the points that were implicit in the target article" and to address objections that "involve recurring themes in the commentaries" (p. 450). As
Searle explains it, "the point of the Chinese room example" was to show
that "instantiating a program could not be constitutive of
intentionality, because it would be possible for the agent to
instantiate the program and still not have the right kind of
intentionality" (p. 450-451: my emphasis); intrinsic intentionality. Cases of "intrinsic intentionality are cases of
actual mental states." Assertions that computers "decide"
or "represent" things, by contrast, are just "observer relative
of intentionality, which are ways that people have of talking about
entities figuring in our activity but lacking intrinsic intentionality"
(p. 451), like words, sentences, and thermostats. Much opposition
to the Chinese room argument rests "on the failure to appreciate this
distinction" (p.452). The difference between intrinsic
and observer-relative concerns awareness. "Who
does the interpreting?" (p. 454), that is the question; a question
the methodological imperative "in these discussions [to] always insist
upon the first person point of view" (p. 451). (Cf., Hauser 2002; Hauser 1993.)
Against complaints (by Block, Dennett,
Pylyshyn, and Wilensky) "that the argument [is] just based on intuitions of mine," Searle insists that intuitions "in [the] deprecatory sense have nothing to do with the argument" (p. 451): in the room, it is a plain "fact about me that I don't understand Chinese"; from "the first person point of view" there can be no doubt.
Searle holds it to be likewise indubitable "that my thermostat lacks [beliefs]"; professed doubts (of Marshall and McCarthy) on this score he attributes to "confusing observer-relative ascriptions of intentionality with ascriptions of intrinsic intentionality" (p. 452). The curiosity is that the relevant first person point of view - the thermostat's - seems here inaccessible (cf., Nagel 1974).
Against Minsky, Block,
and Marshall's suggestions that
psychology might "assimilate intrinsic intentionality"
under a "more general explanatory apparatus" that "enables us
to place thermostats and people on a single continuum" Searle insists
"this would not alter the fact that under our present concept of
people literally have beliefs and thermostats don't" (p. 452). As for those - like Dennett and Fodor - who "take me to task because
I don't explain how the brain works to produce intentionality," Searle
replies, "no one else does either, but that it produces
mental phenomena and that the internal operations of the brain
are causally sufficient for the phenomena is fairly evident from
what we do know": we know, for instance, that light "reflected
from a tree in the form of photons strikes my optical apparatus"
which "sets up a series of neuron firings" activating "neurons in the
visual cortex" which "causes a visual experience, and the visual
experience has intentionality" (p. 452: cf., Explanatory Gaps). The objection
"that Schank's program is just not good enough, but newer and better
programs will defeat my objection" (Block,
Sloman & Croucher, Dennett, Lycan, Bridgeman, Schank), Searle says, "misses the
point" which "should hold against any program at all, qua formal
computer program"; and "even if the formal tokens in the program have
some causal connection to their alleged referents in the real world, as
long as the agent has no way of knowing that, it adds no intentionality
whatever to the formal tokens," and this applies (contra Fodor) whatever "kind of causal
linkage" is supposed (p. 454). Haugeland's demon assisted
brain's neurons "still have the right causal powers: they just need
some help from the demon" (p. 452); the "semantic activity" of which
Haugeland speaks "is still observer-relative and hence not sufficient
for intentionality" (p. 453).
Contrary to Rorty, Searle protests his view "does
not give the mental a `numinous Cartesian glow,' it just implies that
mental processes are as real as any other biological processes" (p.
452). Hofstadter is similarly mistaken: Searle advises him to
"read Eccles who correctly
perceives my rejection of dualism" (p. 454): "I argue against strong
AI" Searle explains, "from a monist interactionist position" (my emphasis: cf., Searle 1992 ch. 1). Searle thanks Danto, Libet, Maxwell,
Puccetti, and Natsoulas for adding "supporting
arguments and commentary to the main thesis." He responds to Natsoulas'
and Maxwell's "challenge to provide some answers to questions about the
relevance of the discussion to the traditional ontological and
mind-body issues" as follows: "the brain operates causally both at the
level of the neurons and at the level of the mental states,
in the same sense that the tire operates causally both at the level of
the particles and at the level of its overall properties" (p.
455). (Note that the properties of tires in question -
"elasticity and puncture resistance" - being dispositions of
matter arranged as in "an inflated car tire" are materialistically
relatively unproblematic. But mental states, according to Searle, are not dispositions (as behaviorism and functionalism maintain) but something else; not "a fluid" (p. 451), he assures us, but something made of qualia and partaking of ontological subjectivity (cf., Searle 1992), though without the numinous Cartesian glow.)
Searle complains, "seems to think that it is an objection
that other sorts of mental states besides intentional ones could
have been made the subject of the argument," but, Searle says,
"I quite agree. I could have made the argument about pains,
tickles, and anxiety," he continues, but "I prefer to attack strong
AI on what its proponents take to be their strongest ground" (p.
453). (Note this response implicitly concedes that the experiment is "about consciousness rather than about semantics" (Searle 1997): pains, tickles, and anxiety have
no semantics, yet the experiment, Searle allows, would be none
the worse for this.) The remaining commentators (Abelson, Rachlin, Smythe, Ringle,
Menzel, and Walter), in Searle's estimation, "missed the point or concentrated on peripheral issues" (p. 455):
the point, he avers, is that "there is no reason to suppose that
instantiating a formal program in the way a computer does is any
reason at all for ascribing intentionality to it."
Searle, J., 1984a, Minds,
Brains, and Science, Cambridge: Harvard University Press.
Chapter two is titled "Can
Machines Think?" (N.B.).
After initially summarizing the view being opposed - "strong AI" - as
the view "that the mind is to the brain as the program is to the
computer" (Computationalism) Searle
proceeds to advertise the Chinese room as "a decisive refutation" of
claims such as Herbert Simon's claim that "we already have machines
that can literally think" (AI Proper: cf., Hauser 1997a). The argument that follows reprises the thought experiment and several of the replies to objections from Searle 1980a, with a notable addition: a "derivation from axioms" (Searle 1989a) supposed to capture the
argument's "very simple logical structure so you can see whether it is
valid or invalid" (p. 38). The derivation proceeds from the
following premises (p. 39):
to the following conclusions (pp.
2. Syntax is not
sufficient for semantics.
3. Computer programs are
entirely defined by their
formal, or syntactical, structure.
4. Minds have mental
contents; specifically they have
1. No computer program by itself is sufficient to give a
system a mind. Programs, in short, are not minds and they are not
by themselves sufficient for having minds.
The stated "upshot" (p. 41) is that
Searle's own "monist interactionist" (Searle 1980b, p. 454) hypothesis
of "biological naturalism" (Searle 1983,
p. 230) - "namely, mental
states are biological phenomena" (p. 41) - is confirmed. (cf., Pinker 1997).
CONCLUSION 2. The way
that the brain functions to cause
minds cannot be solely in virtue of running a computer program.
CONCLUSION 3. Anything
else that caused minds would have to have causal powers at least
equivalent to those of the brain.
CONCLUSION 4. For any
artefact that we might build which had mental states equivalent to
human mental states, the implementation of a computer program would not
by itself be sufficient. Rather the artefact would have to have
powers equivalent to the powers of the human brain.
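Whether the derivation is valid is, as Searle says, something the reader can check. As an aid, here is one regimentation of its skeleton - my rendering, not Searle's, with predicate letters of my own choosing, along the lines of the "logical reply" pressed by Copeland 1993 (discussed below):

```latex
% One hedged regimentation (predicate letters mine, not Searle's):
%   Prog(x): x is a system running a program    Syn(x): x is entirely syntactical
%   Sem(x):  x has semantic contents            Mind(x): x has a mind
\begin{align*}
\text{P2.}\quad & \neg\,\forall x\,\bigl(\mathit{Syn}(x)\rightarrow\mathit{Sem}(x)\bigr)
   && \text{syntax does not suffice for semantics}\\
\text{P3.}\quad & \forall x\,\bigl(\mathit{Prog}(x)\rightarrow\mathit{Syn}(x)\bigr)
   && \text{programs are entirely syntactical}\\
\text{P4.}\quad & \forall x\,\bigl(\mathit{Mind}(x)\rightarrow\mathit{Sem}(x)\bigr)
   && \text{minds have semantic contents}\\
\text{C1.}\quad & \forall x\,\bigl(\mathit{Prog}(x)\rightarrow\neg\,\mathit{Mind}(x)\bigr)\;?
   && \text{no program suffices for a mind}
\end{align*}
```

So read, C1 does not follow: P2 says only that not every syntactical thing has semantics, which is consistent with some programmed systems having semantics (and minds). Closing the gap requires a stronger reading of P2 - e.g., that nothing has semantics in virtue of its syntax alone - which, critics such as Copeland contend, the thought experiment does not establish.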
Searle, J. (1990a), "Is
the brain's mind a computer program?", Scientific American
Searle again rehearses his 1980a thought experiment here as "a
decisive refutation" of the computational theories of mind, or "strong
AI," and restates the derivation from axioms with minor variations
(cf., Searle 1984a). He then
proceeds to address the Connectionist Reply and the Luminous Room
Counterexample, both posed by Paul and Patricia Churchland in a
companion (1990) article. The
Connectionist Reply has it that Searle-in-the-room's lack of
understanding is due to the system's serial computational architecture.
The experiment, consequently, fails to show that symbol processing by a
more brainlike parallel or connectionist system would similarly lack
semantics and similarly fail to understand. Searle replies that the
insufficiency of connectionism is easily shown by a "Chinese gym"
variation on the original thought experiment. Imagine that a gym full
of "monolingual English-speaking men" implements a connectionist
architecture conferring the same Chinese language processing abilities
envisaged in the original experiment. Still, "No one in the gym speaks
a word of Chinese, and there is no way for the gym as a whole to learn
the meanings of any Chinese words" (p. 28). The
Luminous Room Counterexample presents an absurd "refutation" of
Maxwell's electromagnetic wave theory of light: a man in a dark room causes electromagnetic waves by waving a bar magnet around; he concludes from the failure of the waving to
illuminate the room that electromagnetic waves are neither
constitutive of nor sufficient for light. The Chinese room example,
according to the Churchlands, is completely analogous and equally
ineffectual as a "refutation" of the computational theory of
mind. Searle disputes the analogy. It breaks down, he
claims, "because syntax [purely formally] construed has no physical
powers and hence
no physical, causal powers" such that it might be possibly be "giving
off consciousness" (p. 31) at an undetectably low level as with the
light (cf., Searle 1997).
Searle, J. (1992), The Rediscovery of the Mind.
Cambridge, MA: MIT Press.
Though the Chinese room argument
is not itself prominently featured in it, this work can be viewed as an
attempt to shore up the foundations on which that argument rests, and
to nurture background assumptions (e.g., the Connection Principle) and
supplementary contentions (e.g., the observer relativity of syntax)
which encourage Searle's wonted "intuition" about the room.
Chapter 1 - countering criticism that the Chinese room argument depends on
dubious dualistic assumptions (Hauser
1993a, chap. 6; forthcoming) or
requires us to "regress to the Cartesian vantage point" (Dennett 1987, p. 336) - defends
Searle's claim to "give a coherent account
of the facts about the mind without endorsing any of the discredited
Cartesian apparatus" (Searle 1992: 14). He then deploys - by
my count - at least five Cartesian devices in developing his own
naturalist" account in the pages immediately following. He
(1) the essential ontological subjectivity of mental phenomena
("the actual ontology of mental states is a first-person ontology" (p.
16)) and its correlative "Connection Principle" that "[b]eliefs,
desires, etc. ... are always potentially conscious" (p. 17); (2) a
distinction "between something really having a mind, such as a human
being, and something behaving as if it had a mind, such as
a computer" (p. 16), a distinction Descartes deploys to deny nonhuman
animals any genuine mentality which Searle redeploys (with similar
intent) against computers; (3) a methodological principle of privileged
access according to which "the first-person point of view is primary"
(p. 20); (4) a distinction between primary ("intrinsic") and
secondary ("observer relative") properties; and, perhaps most notably,
(5) a Cartesian ego, i.e., "a `first person' an `I,' that has these
mental states" (p. 20). He even dots the "`I'" (cf., Descartes 1642, Meditation 2), and
appropriately so, for Searle's "`I'" is no more identifiable with body
or brain than Descartes'. Not every property had (e.g.,
grayness), nor every event undergone (e.g., a hemorrhage), nor even
every biological function performed by a brain (e.g.,
cooling the blood) is mental: but being had by a subject is supposed to
constitute the mental as mental. Nor would it avail - it would be circular here - to say, "thoughts are subjective properties of
brains": it is precisely in order to explicate what it is for a
property to be subjective that Searle introduces "a `first
person' an `I' that has these mental states" in the first
place. Given his acceptance of all of this, it's hard
then to see what it is about Cartesian dualism - besides the name -
that Searle thinks "discredited."
Chapter 3 is notable for
acknowledging, finally, "the three hundred years of discussion of the
`other minds problem'" about which Searle had hitherto - in his
original (1980a) presentation and subsequent discussions (1980-1990) of the other minds reply - feigned amnesia.
Searle's proposed "solution" to this problem, however, is not new but, essentially, a reworking of the well worn argument from
analogy. Neither is it improved. The
analogical argument in its original form - wherein behavioral effects are held to provide independent confirmation of the hypothesis suggested by physiological
resemblance (cf., Mill 1889, p. 204-205n) - is generally thought
too weak to ward off the solipsism "implicit in ... any theory of [mind] which adopts the Cartesian egocentric approach as its basic frame of reference" (Thornton 1996). Yet Searle's "solution" is to weaken the argument further
by discounting the evidentiary import of behavior. In so doing
Searle regresses in this connection not only to Cartesianism, but beyond it, employing stronger as-ifness apparatus to exclude computers from the ranks of thinking things than Descartes did to exclude infrahuman animals (cf., Harnad 1991, Hauser 1993b).
Chapter 7 elaborates and defends what Searle calls the "Connection Principle": "The
notion of an unconscious mental state implies accessibility to
consciousness" (p. 152: cf., Searle
1990f). As the credulity of Harnad (1991) and Bringsjord (1992) attests, such inviolable
linkage of mentality to consciousness facilitates acceptance of
Searle's example: if the argument is to be "about semantics" (Searle 1997, p. 128) and thought in general - not just
consciousness thereof/therein - the possibility of unconscious understanding must be foreclosed. Enter "the Connection
Principle" (p. 162).
Chapter 9 (p. 208) reprises the Wordstar-on-the-wall argument of Searle 1990c (p. 27) in pursuit of the
supplemental stratagem (cf., Searle 1997)
of maintaining that "syntax is essentially an observer-relative
notion. The multiple realizability of computationally
equivalent processes in different physical media is not just a
sign that the processes are abstract, but that they are not intrinsic
to the system at all. They depend on interpretation from
the outside" (p. 209: original italics). This buttresses
the Chinese room argument against the rejoinder that, while the experiment (as characterized by Searle) merely "reminds" us of the "conceptual
truth that we knew all along" (Searle 1988,
p. 214) that syntax alone doesn't suffice for semantics by
definition or in principle, whether implemented syntax
processes suffice for semantics causally or in fact
(what is chiefly at issue here) is an empirical question. The
Chinese room experiment, the rejoinder continues, is ill equipped to
answer this empirical question due to the dualistic methodological bias
introduced by Searle's tender of overriding epistemic privileges to the
first-person. Furthermore, Searle's would-be thought
experimental evidence that computation doesn't suffice for meaning
(provided by Searle-in-the-room's imagined lack of
introspective awareness of the meaning) is controverted
by real experimental evidence (provided by the actual
intelligent-seeming-behavior of programmed computers) that it does
(cf., Hauser 2002). However,
if syntax and computation "exist only relative to observers and
interpreters," as Searle insists, arguably, empirical claims of causal-computational
sufficiency are "nonstarters" (Searle 1997,
p. 176) and the possibility that implemented syntax causally suffices
for thought (or anything else) is foreclosed (cf., Searle 1990a's rejoinder to the Luminous Room Counterexample of the Churchlands 1990).
Searle, J. (1994), "Searle, John R." in A Companion to the Philosophy of Mind, ed.
S. Guttenplan, Basil Blackwell Ltd., Oxford, pp. 544-550.
Searle strenuously disavows his previously advertised claim to have "demonstrated the falsity" of the claim that "computers ... literally have
thought processes" (Searle et
al. 1984, p. 146) by the Chinese room argument. He here
styles it "a misstatement" to "suppose that [the Chinese room] proves
that 'computers cannot think'" (p. 547). The derivation from
axioms contra Computationalism is reprised (from Searle 1984a, 1989a, 1990a). Characterizing "the
question of which systems are causally capable of producing
consciousness and intentionality" as "a factual issue" Searle relies on
renewed appeal to the need for "causal powers ...
at least equal to those of human and animal brains" to implicate the
inadequacy of actual computers for "producing consciousness and
intentionality" (p. 547).
Searle, J. (1997), The
Mystery of Consciousness, New York: A New York Review Book.
This book is based on several
consciousness-related-book reviews by Searle that were originally
published in the New York Review of Books (1995-1997). Notably, it includes Daniel Dennett's reply to Searle's review of Consciousness Explained (and
Searle's response) and David Chalmers' reply to Searle's review of The Conscious Mind (and
Searle's response). Though in defending the Chinese room argument
against Dennett, Searle bristles, "he misstates my position as being about consciousness rather than about semantics" (p. 128), The Mystery of
Consciousness, ironically, features the Chinese room argument quite
prominently: beginning, middle, and end.
Chapter One re-rehearses the
thought experiment and re-presents the intended argument as "a simple
three-step structure" as elsewhere (cf., Searle 1984a, 1989a, 1990a, 1994).
Its validity is high-handedly presumed ("In order to refute the
argument you would have to show that one of those premises is
false") and its premises touted as secure ("that is not a likely
prospect" (p. 13)), as always, despite, as Searle himself notes,
"over a hundred published attacks" (p. 11). To these attacks,
Dennett complains, Searle "has never ... responded in detail": rather,
Dennett notes, despite "dozens of devastating criticisms," Searle
"has just presented the basic thought experiment over and over again"
(p. 116) unchanged. Unchanged, but not, I observe, unsupplemented, as here. Searle continues, "It now seems to
me that the
Chinese Room Argument, if anything, concedes too much to Strong AI
in that it concedes that the theory is at least false," whereas,
"I now think it is incoherent" because syntax "is not intrinsic to
the physics of the system but is in the eye of the beholder." If I choose to interpret them so, Searle explains, "Window open = 1, window closed = 0" (p. 16). On yet another interpretation (to recur to
an earlier formulation) "the wall behind my back is implementing the
Wordstar program, because there is some pattern of molecule movements
which is isomorphic to the formal structure of Wordstar" and "if
it is a big enough wall it is implementing any program" (Searle 1990c, p.27). This
supplemental argument "is deeper," Searle says, than the Chinese room
argument. The Chinese room argument "showed semantics was not
intrinsic to syntax"; this argument "shows that syntax is not intrinsic
to physics" (p. 17). (cf., Searle 1992,
Since the Chinese room argument
is so "simple and decisive" that Searle is "embarrassed to have to repeat it" (p.
11) - yet has so many critics - it must be we critics misunderstand:
so Searle steadfastly maintains. We think the
argument is about consciousness somehow, or that it's "trying to prove that `machines can't think' or even `computers can't
think'" when, really, it's directed just at the "Strong AI" thesis
that "the implemented program, by itself, is sufficient for
having a mind" (p. 14). This oh-how-you-misunderstand-me
plaint is familiar (cf., Searle 1984a, 1990a, 1994)
and fatuous. Searle takes it up again, in conclusion
here, where he explains,
I do not offer a proof that computers are not conscious. Again, if
by some miracle all Macintoshes suddenly became conscious, I could not
disprove the possibility. Rather I offered a proof that
computational operations by themselves, that is formal symbol
manipulations by themselves, are not sufficient to guarantee the presence of consciousness. The proof was that the symbol
manipulations are defined in abstract syntactical terms and syntax by
itself has no mental content, conscious or otherwise.
Furthermore, the abstract symbols have no causal powers to cause
consciousness because they have no causal powers at all. All the
causal powers are in the implementing medium. A particular medium
in which a
program is implemented, my brain for example, might independently have
causal powers to cause consciousness. But the operation of the
program has to be defined totally independently of the implementing medium, since the definition of the program is purely formal and thus allows
implementation in any medium whatever. Any system - from men
sitting on high stools with green eyeshades, to vacuum tubes, to
silicon chips - that is rich enough and stable enough to carry the
program can be
the implementing medium. All this was shown by the Chinese Room
Argument. (pp. 209-210)
Here it is all about
consciousness, yet Searle bristled that Dennett "misstates my position
as being about consciousness rather than about semantics" (p.
128). Searle is right: I don't understand. Furthermore, if it all comes down to programs as abstract entities
having no causal powers as such - no power in abstraction to
cause consciousness or intentionality or anything -
then The Chinese Room Argument is gratuitous. "Strong AI," thus
construed, is straw AI: only implemented programs were ever
candidate thinkers in the first place. It takes no fancy "Gedankenexperiment"
or "derivation from axioms" to show this! Even the Law of Universal Gravitation is causally impotent in the abstract
- it is only as instanced by the shoe and the earth that the shoe is caused to drop. Should we say, then, that the
earth has the power to make the shoe drop independently of
gravitation? Of course not. Neither does it follow from the
causal powers of programs being powers of their implementing
media (say brains) that these media (brains) have causal powers to
cause consciousness "independently" of computation. That brains
"might," for all we know, produce consciousness by (as
yet unknown) noncomputational means, I grant. Nothing in the
Chinese room, however, makes the would-be-empirical hypothesis that
they do any more probable (Hauser 2002).
Enter, here, the supplemental Wordstar-on-the-wall argument - though more as a substitute than a supplement. It does not so much take up where the Chinese room argument leaves off as take over the whole burden: to show that the brain's computational power isn't in
the brain in the objective way that
gravitational power is in the earth; that computation, unlike
gravitation, is "in the eye of the beholder." Along
these lines, in response to Chalmers, Searle complains,
[as explanations of] consciousness, "functional organization" and "information" are [nonstarters] because as he uses them, they have no causal explanatory power. To the [extent that] you make the function and the information specific, they exist only [relative] to observers and interpreters. (p. 176)
Chalmers has since replied:
[T]his claim is quite
false. Searle has made it a number of times, generally without any
substantive supporting argument. I argue in Chapter 9 of the book,
and in more detail in my papers "A
Computational Foundation for the Study of Cognition" and "Does a
Rock Implement Every Finite-State Automaton?" that the relevant
notions can be made perfectly precise with objective criteria, and are
therefore not at all observer-relative. If a given system has a given
functional organization, implements a given computation, and therefore
realizes certain information, it does so as a matter of objective fact.
Searle does not address these arguments at all. ("On `Consciousness and the Philosophers'") Where Chalmers finds Searle's "response" an "odd combination of mistakes, misrepresentations, and unargued gut reactions," Dennett complains similarly of the
unresponsiveness of Searle's "response" to him:
[H]e trots out the
Chinese Room yet one more time and has the audacity to ask, "Now why
does Dennett not face the actual argument as I have stated it?
Why does he not tell us which of the three premises he rejects in the
Chinese Room Argument?" Well, because I have already done so, in great
detail, in several of the articles he has never deigned to
answer. For instance in "Fast
Thinking" (way back in The Intentional Stance, 1987) I explicitly quoted
his entire three premise argument and showed exactly why all three
of them are false, when given the interpretation they need for the
argument to go through! Why didn't I repeat that 1987 article in
my 1991 book? Because, unlike Searle, I had gone on to other
things. I did, however, cite my 1987 article prominently in a
footnote (p. 436), and noted that Searle's only response to it had been
to declare, without argument, that the points offered there were
irrelevant. The pattern continues; now he both ignores that
challenge and goes on to misrepresent the further criticism of the
Chinese Room that
I offered in the book under review ... . (p. 117)
Elsewhere, Copeland 1993 makes a careful, cogent
case that Searle's would-be thought experimental counterexemplification
of Computationalism is invalid. Copeland, like Chalmers,
complains that the supplemental "Wordstar-on-the-wall argument" is
"simply mistaken" (pp. 136-7). Searle has never - to my knowledge
- replied to Copeland. The pattern continues.
With so many weighty unmet
criticisms against it, the least that can be said is that the Chinese
Room Argument is hardly "simple and decisive." Simply
understood, the argument is simply invalid (cf., Copeland 1993, Hauser 1997a); and issues about what
things are "by themselves ... sufficient to guarantee" are not simple. Whether it can further be fairly said of
the Chinese Room Argument that "just about anyone who knows anything
about the field has dismissed it long ago" as
"full of well-concealed fallacies," as Dennett says (p. 116), depends
on how you count experts. I, for one, have dismissed it
and do find it full of fallacies (Hauser 1993, 1997a); though the argument still has
defenders (cf., Bringsjord 1992,
Harnad 1991). It can, I
think fairly, be said, that the Chinese room argument is a potent
conversation starter, and has been a fruitful discussion piece.
Discussion of the argument has raised, and is helping to clarify,
a number of broader issues concerning AI and computationalism.
It can also, I think fairly, be said that Searle's arguments pose
no clear and presently unmet challenge to claims of AI or computationalism, much less "proof" against them, as Searle claims (p. 228).
Searle, J. (2002), "Twenty One
Years in the Chinese Room" in J. Preston & M. Bishop (eds.), Views
Into the Chinese Room: New Essays on Searle and Artificial
Intelligence (Oxford: Clarendon Press), 51-69.
Searle begs off responding to the many detailed
arguments presented in this volume and elsewhere, having "already
responded to more criticisms of the Chinese room argument," he says,
"than to all of the criticisms of all of the other controversial
philosophical theses" he has ever advanced. For all that, Searle
retains "confidence that the basic argument is sound" because he has
"not seen anything to shake its fundamental thesis" that "the purely
formal or abstract or syntactical processes of the implemented computer
program could not by themselves be sufficient to guarantee the
presence of mental content or semantic content of the sort that
is essential to human cognition" (p. 51). Note, the "fundamental
thesis" being urged here to support the argument is the very conclusion
the argument is supposed to
support (cf., CONCLUSION 1 of Searle
1984a)! Thus Searle ignores Copeland
2002's logical reply (see also Copeland
1993). Programming is conceptually insufficient to
guarantee cognition, Searle observes, "because of the
distinction between syntax and semantics" (compare premise 2 of Searle
1984a). It's causally insufficient "because the
program is defined independently of the physics of its implementation
(compare the "Wordstar on the wall" argument
of Searle 1997 and Searle
1992, ch. 9). Consequently, "Strong AI" attempts either to
equate thought with computation or to explain thought in terms of
computation are only comprehensible as misguided efforts inspired by
mistaken beliefs "that the investigation of consciousness and
intentionality, phenomena which are inherently subjective and mental,
beyond the reach of an objective science" (p. 60). "Once we
recognize the existence of an ontologically subjective domain, then
there is no obstacle to having an epistemically objective science of
that domain" (p. 66), and with dawning recognition of this, we are
now seeing "an inexorable paradigm shift taking place: we are moving
from computational cognitive science to cognitive neuroscience."
Others: Commentaries
Original "Open Peer Commentaries" (complete)
Abelson, Robert P. (1980), "Searle's argument is just a set of Chinese symbols", Behavioral and Brain Sciences 3:424-425.
Searle's complaints about "intentionality" raise interesting worries about the evidentiary
between machines' representations and the (putative) facts
represented and concerning their lack of appreciation of the
for ... falsification" (p. 425) of their representations in
particular. Nevertheless, "Searle has not made convincing his
case for the fundamental essentiality of intentionality in
understanding" (p. 425). Hence, "we might well be humble and give
the computer the benefit of the doubt when and if it performs as well
as we do" (p. 424).
Block, Ned (1980), "What
intuitions about homunculi don't show", Behavioral and Brain Sciences 3.
The crucial issue with regard to
the imagined homunculus in the room is "whether the homunculus falls in
the same natural kind ... as our intentional processes. If
so, then the homunculus head does think in a reasonable sense of
the term" (p. 425); commonsense based intuitions not withstanding.
Furthermore, "the burden of proof lies with Searle to show that the
intuition that the cognitive homunculi head has no intentionality (an
intuition that I and many others do not share) is not due to doctrinal
hostility to the symbol-manipulation account of intentionality."
Bridgeman, Bruce (1980), "Brains + programs = minds", Behavioral and Brain Sciences 3:427-428.
Searle thinks "we somehow
introspect an intentionality that cannot be assigned to machines" (p.
427), but "human intelligence is not as qualitatively different from
machine states as it might seem to an introspectionist" (p. 427).
"Searle may well be right that present programs (as in Schank and
Abelson 1977) do not instantiate intentionality according to his
definition. The issue is not whether present programs do
this but whether it is possible in principle to build machines that
make plans and achieve goals. Searle has given us no evidence
that this is not possible" (pp. 427-8): "an adequately designed machine
could include intentionality as an emergent property even though
individual parts (transistors, neurons, or whatever) have none."
Danto, Arthur C. (1980), "The use and mention of terms and the simulation of linguistic
understanding", Behavioral and Brain Sciences 3:428.
Danto would "recast Searle's
thesis in logical terms", in terms of the U-properties of words (use
properties, e.g., meaning: distinguishable only by those able
to use the words) and the M-properties (mention properties, e.g., shape:
distinguishable even to those unable to use the words). This
recasting "must force [Searle's] opponents either to concede machines
do not understand" on the "evidence that in fact the machine operates
pretty much by pattern recognition" and "Schank's machines, restricted
to M-properties, cannot think in the languages they simulate thinking in"; or else for them "to abandon the essentially behaviorist
theory of meaning for mental predicates" they cling to, since "an
M-specified simulation can be given of any U-performance, however
protracted and intricate"
and if we "ruthlessly define" U-terms in M-terms "then we cannot any
longer, as Schank and Abelson wish to do, explain outward
behavior with such concepts as understanding."
Dennett, Daniel (1980), "The milk of human intentionality", Behavioral and Brain Sciences 3:428-430.
Searle's argument is "sophistry" - "tricks with mirrors that give his case a certain
spurious plausibility": "Searle relies almost entirely on ill-gotten
gains: favorable intuitions generated by misleadingly presented thought
experiments." In particular Searle's revisions to the experiment
in response to the robot reply and systems reply taken together present "alternatives so outlandishly unrealizable as to caution us not to
trust our gut reactions in any case." "Told in detail the
doubly modified story suggests either that there are two people, one of
whom understands Chinese, inhabiting one body, or that one
has, in effect, been engulfed within another person, a person who
understands Chinese" (cf., Cole 1991a).
On Searle's view "the
`right' input-output relations are symptomatic but not conclusive or
criterial evidence of intentionality: the proof of the pudding is in
the presence of some (entirely unspecified) causal properties that are internal to the operation of the brain" (p. 429). Since Searle "can't
really view intentionality as a marvelous mental fluid", his "concern
with the internal properties of
control systems appears to be a misconceived attempt to capture the
interior point of view of a conscious agent" (p.
429). Searle can't see "how any mere computer, chopping away
at a formal program could harbor such a point of view" because "he is
looking too deep" into "the synapse filled jungles of
the brain" (p. 430). "It is not at that level of description that
a proper subject of consciousness will be found" but rather at the
systems level: the systems reply is "a step in the right direction" and
that is "away from [Searle's] updated version of élan vital" (p. 430).
Eccles, John C. (1980), "A dualist-interactionist
perspective", Behavioral and Brain Sciences 3:430-431.
Though Searle asserts that "the
basis of his critical evaluation of AI is dependent on" (p. 430) the
proposition, "`Intentionality in human beings (and animals) is a
product of causal features of the brain" this unsupported invocation of
"a dogma of the psychoneural identity theory" (431) does not
figure crucially in his arguments against strong AI. Thus Eccles
finds, "Most of Searle's criticisms are acceptable for dualist
interactionism"; and he agrees with Searle, "It is high time that
Strong AI was discredited."
Fodor, J. A. (1980), "Searle on what only brains can do", Behavioral and Brain Sciences 3.
Fodor agrees, "Searle is
that instantiating the same program that the brain does is not, in and
itself, a sufficient condition for having those propositional attitudes
characteristic of the organism that has the brain" but finds "Searle's
treatment of the robot reply ... quite
unconvincing": "All that Searle's example shows is that the kind of
causal linkage he imagines - one that is, in effect, mediated by a man
sitting in the head of a robot - is, unsurprisingly, not the right
kind. Though we "don't know how to say what the right kind of
causal linkages [to endow syntax with semantics] are, nevertheless,
no clue as to why ... the biochemistry is important for intentionality
and, prima facie, the idea that what counts is how the organism is
connected to the world seems far more plausible." Furthermore,
is "empirical evidence for believing that `manipulation of symbols'
is involved in mental processes"; evidence deriving "from the
considerable success of work in linguistics, psychology, and AI that
has been grounded in that assumption."
Haugeland, John (1980), "Programs, causal powers, and intentionality", Behavioral and Brain Sciences 3.
In the first place, Searle's
suggestion "that only objects (made of the stuff) with `the right
causal powers' can have intentionality" is "incompatible with the main
argument of his paper": whatever causal powers are supposed to
cause intentionality "a superfast person - whom we might as well call `Searle's
demon'" - might take over these powers presumably (as
per the thought experiment) also without understanding; showing these (biochemical or
whatever) factors to be insufficient for intentionality too!
Dismissing the demon argument (and with it Searle's thought experiment) Haugeland characterizes the
central issue as "what differentiates original from derivative intentionality" - the "intentionality that a thing (system,
state, or process) has `in its own right'" - from intentionality
"that is `borrowed from' or `conferred by' something else."
"What Searle objects to is the thesis, held by many, that good-enough
AI systems have (or will eventually have) original
intentionality." It is a plausible claim that what distinguishes
systems whose states have original intentionality is that these states
are "semantically active" through being "embodied in a `system' that
channels' for them to interact with the world" like thought, and unlike
text. "It is this plausible claim that underlies the thesis that
(sufficiently developed) AI systems could actually be [intelligent] and have original intentionality. For a case can
surely be made that their `representations' are semantically active
(or, at least, that they would be if the system were built into a
robot)" (cf., the robot reply).
Still, Haugeland sympathizes with
Searle's denial that good-enough AI systems have (or will eventually
have) original intentionality. Not for Searle's demon-based
reason - that no matter how much semantically appropriate interactivity
a program had it wouldn't count as semantics (since the
demon might have the same). Rather, for a much more "nitty-gritty
empirical" reason: Haugeland doubts whether programming can in fact capture or impart the appropriate type and degree of system-world
interactivity. Again, not because, if there were such a
program, it still wouldn't suffice (as Searle argues), "but
because there's no such program": none is or ever will be
good-enough. Speculation aside, at least, "whether there
is such a program, and if not, why not, are ... the important questions."
Hofstadter, Douglas R.
(1980), "Reductionism and Religion", Behavioral and Brain Sciences
Searle's argument is a "religious
diatribe against AI masquerading as a serious scientific
argument." Like Hofstadter himself, Searle "has deep difficulty
in seeing how mind, soul, `I,' can come out of brain, cells, atoms";
but while claiming to accept this fact of nature, Searle will not accept the consequence that, since all physical processes "are formal, that is, rule governed," "`intentionality' ... is an outcome of formal
processes." Searle's thought experiment provides no real evidence
to the contrary because "the initial situation, which sounds plausible
enough, is in fact highly unrealistic", especially as concerns time
scale. This is fatal to the experiment since "any
time some phenomenon is looked at on a scale a million times different
from its familiar scale, it doesn't seem the same!" Thus, "what
Searle is doing" is "inviting you to identify with a nonhuman which he
lightly passes off as human, and by so doing he invites you to
participate in a great fallacy."
Libet, B. (1980), "Mental
phenomena and behavior", Behavioral and Brain Sciences 3:434.
Though Searle's thought
experiment "shows, in a masterful and convincing manner, that the
behavior of the appropriately programmed computer could transpire in
the absence of a cognitive mental state" Libet believes "it is also
possible to establish the proposition by means of an argument based on
simple formal logic. In general, where "systems A and B are known
to be different, it is an error in logic to assume that because systems
A and B both have property X, they must both also have property
Y". From this, Libet urges, it follows that "no behavior of a
regardless of how successful it may be in simulating human behavior, is
ever by itself sufficient evidence of any mental state." While
he concurs with Searle's diagnosis of "why so many people have
believed that computer programs do impart a kind of mental process or
state ot the computer" - it's due to their "residual behaviorism or
operationalism" underwriting "willingness to accept input-output
patterns as sufficient for postulating ... mental states" - Libet here
proposes a cure more radical even than Searle. Libet deems
Searle's admission (in response to the combination
reply) that it would be rational to attribute
intentionality to "a robot whose behavior was indistinguishable over a
large range from human behavior ... pending some reason not to"
[my emphasis] - too concessive. "On the
basis of my argument," Libet asserts, one would not have to know that
the robot had a formal program (or whatever) that accounts for its
in order not to have to attribute intentionality to it. All we
need to know is that the robot's internal control apparatus is not made
in the same way and out of the same stuff as is the human brain."
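Libet's logical schema can be put compactly (my regimentation and notation, not Libet's):

```latex
% My rendering of Libet's point (notation mine; \therefore needs amssymb):
% A, B: systems; X: a shared (behavioral) property; Y: a further (mental) property
A \neq B,\quad X(A),\quad X(B),\quad Y(A)\ \not\therefore\ Y(B)
```

That is, sharing X gives, by itself, no deductive warrant for projecting Y from A to B when A and B are known to differ.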
Lycan, William G. (1980), "The functionalist reply (Ohio State)", Behavioral and Brain Sciences 3.
Searle's counterexample (among
others) effectively refutes behaviorism, the view
that "if an organism or device D passes the Turing test, in the
sense of systematically manifesting all the same outward
behavioral dispositions that a normal human does, then D has all the same sorts of contentful or intentional states that humans do." But Searle's would-be counterexamples have no such
force as advertised against functionalism, "a more
species-chauvinistic view" according to which "D's manifesting
all the same sorts of behavioral dispositions we do does not alone
suffice for D's having intentional states: it is necessary in
addition that D produce the behavior from stimuli in roughly
the way that
we do," i.e., that D's "inner procedures" and
"inner functional organization" should be "not unlike ours."
Lycan accepts Searle's judgment that neither Searle nor the room-Searle
system nor the room-Searle-robot system understands; but this is not
at all prejudicial to functionalism, he maintains, for the simple reason that the imagined systems "are pretty obviously not functionally [equivalent] at the relevant level to human beings who do understand Chinese."
Lycan pitches the relevant level fairly low and expresses "hopes for a
sophisticated version of the `brain simulator' (or the
`combinationmachine') that Searle illustrates with his plumbing
Besides agreeing with Searle's
intuitions about the imagined systems (except the "combination
machine") Lycan endorses a theoretical point that Searle's
subsequent presentations have come more and more prominently to feature
(cf., Searle 1984a, Searle 1990a). Lycan puts it
thus: "A purely formally or syntactically characterized element has no
meaning or content in itself, obviously, and no amount of mindless
syntactic manipulation of it will endow it with any." Lycan
further agrees that this "shows that no computer has or could
have intentional states merely in virtue of performing
syntactic operations on formally characterized elements. But
that does not suffice to prove that no computer can have intentional
states at all," as Searle seems to think. Our
brain states do not have the contents they do just
in virtue of having their purely formal properties either" (my
"the [semantic] content of a mental representation is not determined
within its owner's head (Putnam 1975a;
Fodor 1980[b]): rather it is
determined in part by the objects in the environment that actually
figure in the representation's etiology and in part by social and
contextual factors of other sorts." (Searle 1983 tries mightily - and, in my opinion, fails miserably - to counter such "semantic externalism".)
Given his considerable agreement
with Searle's intuitions and principles, perhaps not unsurprisingly, in
the end, Lycan concludes less with a bang than a whimper that "nothing
Searle has said impugns the thesis that if a sophisticated future
computer not only replicated human functional organization but harbored
its inner representations as a result of the right sort
of causal history and had also been nurtured with a favorable social
setting, we might correctly ascribe intentional states to it."
McCarthy, John (1980), "Beliefs, machines, and theories", Behavioral and Brain Sciences 3.
Searle's "dismissal of the idea that thermostats may be ascribed belief,"
McCarthy urges, "is based on a misunderstanding. It is not
a pantheistic notion that all machinery including telephones,
light switches, and calculators believe. Belief may usefully
be ascribed only to systems about which someone's knowledge can
best be expressed by ascribing beliefs that satisfy axioms [definitive
of belief] such as those in McCarthy (1979). Thermostats are
sometimes such systems. Telling a child, `If you hold the candle
under the thermostat, you will fool it into thinking the room is
too hot, and it will turn off the furnace' makes proper use of the
child's repertoire of mental concepts."
In the case
of the Chinese room, McCarthy maintains "that the system understands
Chinese" if "certain other conditions are met": i.e., on the condition
that someone's knowledge of this system can best be expressed
by ascribing states that satisfy axioms definitive of understanding.
Marshall, John C. (1980), "Artificial intelligence - the real thing?", Behavioral and Brain Sciences 3.
Professing himself incredulous that anyone at present could actually believe computers "literally have cognitive states," Marshall points out that
programming might endow systems with intelligence without providing a
theory or explanation of that intelligence. Furthermore Searle is
misguided in his attempts to belabor "the everyday use of mental
vocabulary." "Searle writes, `The study starts with such facts as
that humans have beliefs, while thermostats, telephones, and adding
machines don't'": Marshall replies, "perhaps it does start there, but
that is no reason to suppose it must finish there": indeed the
"groping" pursuit of "levels of description" revealing "striking
resemblances between [seemingly] disparate phenomena" is the way of
all science; and "to see beyond appearances to a level at which there
are profound similarities between animals and artifacts" (my
emphasis) is the way the mechanistic scientific enterprise must proceed in psychology as in biology more generally. It is "Searle, not
the [cognitive] theoretician, who doesn't really take the enterprise
seriously." His unseriousness is especially evident in the
cavalier way he deals
with - or rather fails to deal with - the other minds problem.
Maxwell, Grover (1980), "Intentionality: Hardware, not software", Behavioral and Brain Sciences 3.
Agreeing that "Searle makes exactly the right central points and supports
them with exactly the right arguments" Maxwell explores "some
implications of his results for the overall mind-body problem."
Assuming that "intentional states are genuinely mental in the
what-is-it-like-to-be-a-bat? sense" - i.e., accepting Searle's
later-named thesis of "ontological
subjectivity" - Maxwell finds the argument weighs heavily against
eliminativism and reveals functionalism to be "just another variety"
thereof. The argument's "main thrust seems compatible with
interactionism, with epiphenomenalism, and with at least some versions
of the identity thesis." Maxwell sketches his own version of the identity thesis according to which mental events are part of "the hardware of `thinking machines'" and "such hardware must somehow be got
into any machine we build" before it would be thinking. "Be all
this as it may," he concludes, "Searle has shown the total futility of
the strong AI route to genuine artificial intelligence."
Menzel, E. W. Jr. (1980), "Is the pen mightier than the computer?", Behavioral and Brain Sciences 3.
By "convention if nothing else, in AI one must ordinarily assume, until
proven otherwise, that one's subject has no more mentality than a
rock: whereas in the area of natural intelligence one can often
get away with the opposite assumption" in other respects "the
problems of inferring mental capacities are very much the same in
the two areas." Here, "the Turing test (or the many counterparts to the
test which are the mainstay of comparative psychology)" seeks
"to devise a clear set of rules for determining the status of subjects
of any species." But "Searle simply refuses to play such games"
and consequently "does not ... provide us with any decision rules for
the remaining (and most interesting) undecided cases." His
"discussion of `the brain' and `certain brain processes' in this
connection is not only vague" but would "displace and complicate the
problems it purports to solve": "their relevance is not made clear,"
and "the problem of
deciding where the brain leaves off" - or, more generally, where the locus of cognition is - "is not as easy as it sounds."
"Einstein," Menzel notes, "used to say `My pencil is more intelligent
than I am'": pencil equipped brains acquire mental abilities in virtue
being so (among other ways) equipped and "it is only if one confuses
and past, and internal and external happenings with each other, and
considers them a single `thing,' that `thinking' or even the causal
power behind thought can be allocate to a single `place' or `entity'."
Minsky, Marvin (1980), "Decentralized minds", Behavioral and Brain Sciences 3:439-440.
Considering "a mind so split into two parts that one merely executes some causal housekeeping for the other," Minsky writes, "I should suppose that each part - the Chinese rule computer and its host - would then have its own separate phenomenologies - perhaps along different time scales. No wonder the host can't `understand' Chinese very fluently" (cf., Cole 1991a). Searle's argument,
couched as it is in "traditional ideas inadequate to this tremendously
difficult enterprise" could hardly be decisive, especially in the face
of the fact that "computationalism is the principal
source of the new machines and programs that have produced for us
the first imitations, however limited and shabby, of mindlike activity."
Natsoulas, Thomas (1980), "The primary source of intentionality", Behavioral and Brain Sciences 3:440-441.
Natsoulas shares Searle's belief that "the level of description that computer
programs exemplify is not one adequate to the explanation of
mind" as well as his emphasis on the qualitative or phenomenal
content of perception (in particular) - "the qualitative
being thereness of objects and scenes" - being something over and
above the informational content. The remaining question
for both concerns the explanatory
gap between physiology and phenomenology, or "what is the `form of realization' of our visual [and other] experiences that Searle is [referring to] when he attributes them to us."
Puccetti, Roland (1980), "The chess room: further demythologizing of strong AI", Behavioral
and Brain Sciences 3:441-442.
On the "grounds he has staked out, which are considerable," Puccetti deems Searle to be "completely victorious"; but Puccetti wants "to lift the
sights of his argument and train them on a still larger, very tempting
target. To this end he devises a Chinese-room-like scenario
involving having "an intelligent human from a chess-free culture"
follow the instructions of a "chess playing" program: since he
"hasn't the foggiest idea of what he's doing," Puccetti concludes,
"[s]uch operations, by themselves, cannot , then, constitute
understanding of the game, no matter how intelligently played." Chess
playing computers "do not have the intentionality towards the chess
moves they make
that midget humans had in the hoaxes of yesteryear. They simply know not what they do."
Pylyshyn, Zenon W. (1980), "The `causal power' of machines", Behavioral and Brain Sciences 3.
Since Searle insists that causal powers of the implementing medium, under and beneath the powers that make it an implementation, are crucial for intentionality, for Searle "the relation of equivalence with respect to causal powers is a refinement of the relation of equivalence with respect to function": this has the consequence that "if more
and more of the cells in your brain were replaced by integrated circuit
chips programmed in such a way as to keep the input-output function
of each unit identical to that of the unit being
replaced, you would in all likelihood just keep right on speaking
as you are doing now except that you would eventually stop meaning anything by it" (cf., zombies).
Furthermore the "metaphors and appeals to intuition" Searle advances
"in support of this rather astonishing view" are opaque and
unconvincing. "But what is the right kind of stuff?
Pylyshyn asks. "Is
it cell assemblies, individual neurons, protoplasm, protein molecules,
atoms of carbon and hydrogen, elementary particles? Let Searle
name the level, and it can be simulated perfectly well in `the wrong
kind of stuff'." Indeed, "it's obvious from Searle's own argument that
the nature of the stuff cannot be what is relevant, since the
monolingual English speaker who has memorized the formal rules is
supposed to be
an example of a system made of the right stuff and yet it
allegedly still lacks the relevant intentionality." "What
is frequently neglected in discussions of intentionality," Pylyshyn
concludes, "is that we cannot state with any degree of precision what
it is that entitles us to claim that people refer ... and
therefore that arguments against the intentionality of computers," such
as Searle's, "typically reduce to `argument from ignorance'."
Rachlin, Howard (1980), "The behaviorist reply (Stony Brook)", Behavioral and Brain Sciences 3.
finds it "easy to agree with the negative point Searle makes about mind
and AI" - "that the mind can never be a computer program." But
Searle's "positive point ... that the mind is the same thing as the
brain ... is just as clearly false as the strong AI position that he
criticizes." The "combination robot example" -"essentially a
behavioral example" - illustrates Rachlin's point. "Searle says
`If the robot looks and behaves sufficiently like us, then we would
suppose, until proven otherwise, that it must have mental
states like ours'" (Rachlin's emphasis). "But proof otherwise,"
Rachlin insists, "can only come from one place -
the robot's subsequent behavior": Searle's willingness "to abandon the
assumption of intentionality (in a robot) as soon as he discovers that
a computer was running it after all" is "a mask for ignorance." Suppose that,
contrary to anyone's expectations, all of the functional properties of
the human brain were discovered. Then the "human robot" would be
unmasked, and we might as well abandon the assumption
of intentionality for humans too.
But we should
not so abandon it. "It is only the behaviorist, it seems, who is
able to preserve terms such as thought, intentionality,
and the like (as patterns of behavior)." The "Donovan's brain
reply (Hollywood)" shows the utter absurdity of identifying mind with
brain. Let Donovan's brain be "placed inside a computer console with
the familiar input-output machinery," taking the place of the CPU and
being "connected to the machinery by a series of interface
mechanisms." "This `robot' meets Searle's criterion for a
thinking machine - indeed it is an ideal thinking machine from his
point of view" - but it would be no less "ridiculous to say" Donovan's
brain was thinking in processing the input-output than to say the
original computer was thinking in so doing. Indeed it would
probably be even more ridiculous since a "brain designed to interact
with a body, will surely do no better (and probably a lot worse) at
operating the interface equipment than a standard computer mechanism
for such equipment."
Ringle, Martin (1980), "Mysticism as a philosophy of artificial intelligence", Behavioral
and Brain Sciences 3:444-445.
On the most salient interpretation, "the term `causal powers' refers to the
capacities of protoplasmic neurons to produce phenomenal states such as
felt sensations, pains, and the like." "But even if we accept
Searle's account of intentionality" as dependent on phenomenal
consciousness, the assumption made by his argument - that things of
"inorganic physical composition" like silicon chips, "are categorically
incapable of causing felt sensations" - "still seems to be [unwarranted. The]
fact that mental phenomena such as felt sensations have been,
historically speaking, confined to protoplasmic organisms in no way
demonstrates that such phenomena could not arise in a nonprotoplasmic
system. Such an assertion is on a par with a claim (made in [an earlier
era]) that only organic creatures such as birds or insects could fly."
Since Searle "never explains what sort of biological phenomenon it is, nor does he
ever give us a reason to believe there is a property inherent in
protoplasmic neural matter that could not, in principle, be replicated
in an alternative physical substrate," even in silicon chips, "[o]ne
can only conclude that the knowledge of the necessary connection
between intentionality and protoplasmic embodiment is obtained through
some sort of mystical revelation."
Rorty, Richard (1980), "Searle and the special powers of the brain", Behavioral and Brain
claim "`that actual human mental phenomena might be dependent on actual
physical-chemical properties of actual human brains' ... seems just a
device for insuring that the secret powers of the brain will move
further and further back out of sight every time a new
model of brain functioning is proposed. For Searle can tell
us that any such model is merely a discovery of formal patterns, and
the `mental content' has still escaped us." "If Searle's present
pre-Wittgensteinian attitude gains currency," Rorty fears, "the good
work of Ryle and Putnam will be undone and `the mental' will
regain its numinous Cartesian glow"; but this, he predicts, "will
boomerang in favor of AI. `Cognitive scientists' will insist that
only lots more simulation and money will shed light upon these
deep `philosophical' mysteries."
Schank, Roger C. (1980), "Understanding Searle", Behavioral and Brain Sciences 3:446-447.
Searle is "certainly right" in denying that the Script Applier Mechanism program
(SAM: Schank & Abelson 1977) can understand and consequently he is also
right in denying that SAM "explains the human ability to understand":
"Our programs at this stage are partial and incomplete. They
cannot be said to be truly understanding. Because of
this they cannot be anything more than partial explanations of human
abilities." Still, Searle is "quite wrong" in his assertion "that
our programs will never be able to understand or explain human
abilities" since these programs "have provided successful embodiments
of theories that were later tested on human subjects": "our notion
of a script (Schank & Abelson 1977) is very much an explanation of human abilities."
Sloman, Aaron and Monica
Croucher (1980), "How to turn an information processor into an
understander," Behavioral and Brain Sciences 3:447-448.
Sloman and Croucher combine
elements of robot and systems replies. In their view a system having a computational
architecture or form capable of intelligent sensorimotor
functioning in relation to things is "required before the familiar
mental processes can occur," e.g., mental processes such as beliefs and
desires about such things. "Searle's thought experiment
... does not involve operations linked into an appropriate system in an
appropriate way." Anticipating Searle's reply - that "whatever
the computational architecture ... he will always be able to repeat his
thought experiment to show that a purely formal symbol manipulating
system with that structure would not necessarily have
motives, beliefs, or percepts" for "he would execute all the programs
himself (at least in principle) without having any of the alleged
desires, beliefs, perceptions, emotions, or whatever" - Sloman &
Croucher respond, "Searle is assuming that he is a final authority on
such questions [as] what is going on in his mental activities" and
"that it is impossible for another mind to be based on his mental
processes without his knowing"; and this assumption is
unwarranted. Sloman and Croucher hypothesize "that if he really
does faithfully execute all the program, providing suitable time
sharing between parallel subsystems where necessary, then a
collection of mental processes will occur of whose nature he will be
[unaware] if all he thinks he is doing is manipulating meaningless symbols" (cf., Cole 1991a).
Smythe, William E. (1980), "Simulation
games", Behavioral and Brain Sciences 3:448-449.
Since "intentional states are, by
definition, `directed at' objects and states of affairs in the world"
and "this relation is not part of the computational account
of mental states" this "casts considerable doubt on whether any
purely computational theory of intentionality is possible." While
Searle's thought experiment "may not firmly establish that
computational systems lack intentionality ... it at least undermines
one powerful tacit motivation for supposing that they have it"
deriving from the fact that the "symbols of most AI and cognitive
systems are rarely the kind of meaningless tokens that Searle's
game requires." "Rather, they are often externalized in forms that
carry a good deal of surplus meaning to the user, over and above their
procedural identity in the system itself, as pictorial and linguistic
inscriptions, for example." "An important virtue of Searle's
argument is that it specifies how to play the simulation game
correctly" such that "the procedural realization of the symbols" is all
Walter, Donald O. (1980), "The thermostat and the philosophy professor", Behavioral and Brain
Sciences 3: 449.
For Searle "a program is formal" whereas "`intentionality'" is "radically
different" and "not definable in terms of ... form but of
content. Searle merely, "asserts this repeatedly, without making
anything explicit of this vital alternative": such explication is owed
before Searle's argument can be credited.
Wilensky, Robert (1980), "Computers, cognition and philosophy", Behavioral and Brain Sciences 3.
In the Chinese room scenario we
are misled into identifying the two systems by the implementing system
being "so much more powerful than it need be. That is, the
homunculus is a full-fledged understander, operating at a small
percentage of its capacity to push around some symbols. If we
replace the man by a device that is capable of performing only these
operations, the temptation to view the systems as identical greatly
diminishes" (cf., Copeland 1993, Cole 1991a). Furthermore,
Wilensky observes, "it seems to me that Searle's argument has nothing
to do with intentionality at all. What causes difficulty in
attributing intentional states to the machines is the fact that most of
these states have a subjective nature as well"; so, "Searle's
argument has nothing to do with intentionality per se, and sheds no
light on the nature of intentional states or on the kinds of mechanisms
capable of having them" (cf., Searle 1997).
Adam, Alison (2002), "Cyborgs in the Chinese Room:
Boundaries Transgressed and Boundaries Blurred" in
J. Preston & M. Bishop (eds), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence
(Oxford: Clarendon Press), 319-337.
"Searle's desire to hold of to intentionality" as a uniquely human
prerogative "through all the increasingly elaborate versions of his
Chinese Room thought-experiment is a last-ditch defense of "one of the
last refuges of enlightenment thinking, the uniqueness of the human
animal" (p. 319). On the other hand, "the 'machines on top' view
of the roboticists does not mount a real challenge to the boundary
question, it merely reverses the roles" (p. 335). Adam challenges
the boundary question itself, advocating in its stead "the lessening of dualisms and
blurring of boundaries" such as we see in "actor-network theory
(ANT) and cyborg feminism." Such would "offer an alternative
reading of the human-machine boundary which acknowledges its cultural
aspects, getting away from both polarized 'for or against' arguments
and 'doom and gloom' futuristic scenarios" (p. 331). "ANT is part
of a new style of research in science and technology studies fermenting
over the last twenty-five years or so" which "involves looking at the
process of creating scientific and technical knowledge" sociologically,
"in terms of a network of actors or actants, where power is located
the network ... and may equally reside with non-humans as with humans
(Callon 1986, Latour 1992)," wherein "humans and non-humans are to be
treated symmetrically in our descriptions of the world, especially with
regard to the agency
of nonhumans" (pp. 331-2). Cyborg feminism observes, in "our reliance
spectacles, hearing aids, heart pacemakers, dentures, dental crowns,
joints, not to mention computers, faxes, modems, mobile phones, and
that "we are all cyborgs, 'fabricated hybrids of machine and organism'
(Hardaway 1991: 150)" and challenges us "to walk away from troubling
and the boundaries they set up" - whether between human-machine
man-woman - "to embrace the deliberate blendings and the ambiguities
throw up." Alternatives such as cyborg feminism and ANT "offer
prospect of blurring the old boundaries so that machines may be further
accommodated into our culture in ways which we find comfortable rather
threatening" (p. 334-5).
Aleksander, Igor (2002), "Neural
Depictions of 'World' and 'Self': Bringing Computational Understanding
to the Chinese Room" in J. Preston & M. Bishop (eds), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence
(Oxford: Clarendon Press), 250-268.
Aleksander maintains, contrary to Searle (1980, 1999b), that "neurocomputation
can be intentional" (p. 265) and conscious (p. 266) regardless of
whether it's biologically or artificially sustained. Due to the fact
that a "neural system" creates "a new, rich way of representing reality
which is not symbolic" and there being "no need to translate these
[neural representations] into meaning-deficient symbols that require
further definition," Searle's example wouldn't apply to such a system
1990's Connectionist Reply). The "'aboutness'" of
(substructures in) such a network would, furthermore, be intrinsic to
the network as an "emergent " property (pp. 250-251: original
emphasis) thereof; the property of being (an) "ego-centered
world-representing" (p. 266) neural activity. Ego-centered
world-depictions "may recreate pictures but more generally they encode
anything which is pertinent to the generators of those pictures in the
world and the organism's relationship to such objects." An
understanding of "the word 'cup'," for instance, in such a system
"would encode my [the system's] entire experience of what 'cup' means
to me [or the system]!" (p. 263) - not just "an image of a cup," but
"depictions in the motor areas of how I might grip the cup, how I might
fill it, how I might drink from it, and how it might break if dropped,"
and so on. Such a "rich way of representing reality," Aleksander
maintains, fully "encodes the aboutness" regarding "a cup" (p. 263) at
issue regardless of whether the network is biologically or artificially
sustained, and regardless of whether the network is externally
"grounded" (Harnad 1990) in causal relations
with actual cups (p. 257: cf., the Robot Reply).
Bishop, Mark (2002), "Dancing With Pixies: Strong
Artificial Intelligence and Panpsychism" in J. Preston & M. Bishop
(eds), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence
(Oxford: Clarendon Press), 360-378.
Bishop sees the difference "between genuinely following a rule and
merely acting in accordance with it" (citing Wittgenstein 1953,
§§207-8, 232) as underpinning Searle's Chinese room argument.
The Chinese room, as Bishop sees it, rhetorically "asks 'Does the
appropriately programmed computer follow the rules of (i.e. understand)
Chinese when it generates 'correct' responses to questions asked about
a story, or is it merely that its behavior is correctly described by
those rules?" (p. 363); and (rightly Bishop would say) it encourages
the latter answer. In a related vein, Bishop "outlines a
reductio-style argument" further pressing Searle's claim "that
syntax is not intrinsic to physics" targeted "against the notion
that a suitably programmed computer qua performing computation
can ever instantiate genuine phenomenal states" (p.360: see Searle
1992). Bishop's argument - like Searle's Wordstar-on-the-wall argument
that every (sufficiently large) surface instantiates every
(sufficiently small) program, and Hilary Putnam's (1988) argument "that
'every open [physical] system implements every finite state automaton
(FSA)'" (on which Bishop's argument is based) - would show
computational properties to be unfit for causal or explanatory
employment due to their observer-relativity and consequent ubiquity, if
it were successful. Bishop would avoid the usual trouble
for such arguments - that their post hoc "mappings" of automata
onto states of physical systems fail to capture relevant
counterfactuals (what would have happened if input or machine state had been different)
of the automaton - by "relaxing the requirement that the physical
instantiates the full combinatorial structure of a program with general
to the relatively trivial requirement that it just instantiate the
state transitions for a given execution trace" (p. 373). Bishop
such relaxation on the grounds that, while "input sensitive
reasoning may or may not be a necessary property of any system which it
claimed understands a language (and hence recognizes the string
however it does not constitute a necessary condition of any system that
phenomenal states" (my emphases). This is shown, Bishop
by a variation on David Chalmers' "Fading Qualia Argument" (Chalmers 1996, p. 255). Bishop's
variant shows, he thinks, that "in the context of FSA behavior with
defined, counterfactuals cannot be necessary for phenomenal experience" (p.
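COMMENT: Bishop's relaxation invites a toy illustration (mine, not
Bishop's). Once counterfactual structure is dropped, any sequence of
distinct physical states at least as long as a given execution trace can
be mapped onto that trace after the fact, in the spirit of Putnam 1988.
A minimal Python sketch, where the state labels are hypothetical
placeholders:

def post_hoc_mapping(fsa_trace, physical_states):
    # Pair the i-th physical state with the i-th FSA state in the trace.
    # Any sequence of distinct physical states at least as long as the
    # trace will do; the "implementation" is read off after the fact.
    assert len(physical_states) >= len(fsa_trace)
    return dict(zip(physical_states, fsa_trace))

# One run of a parity-checking FSA on input 1, 0, 1:
trace = ["even", "odd", "odd", "even"]
# Successive states of some physical system (placeholder labels):
wall = ["config_t0", "config_t1", "config_t2", "config_t3"]
print(post_hoc_mapping(trace, wall))
# Under the mapping the wall "implements" this one trace, though nothing
# in it fixes what would have happened on any other input.

Note that the mapping answers for exactly one run; its silence about
counterfactual inputs is just the usual objection Bishop's
trace-relative requirement is meant to sidestep.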
Block, Ned (2002), "Searle's Arguments Against
Cognitive Science" in J. Preston & M. Bishop
(eds), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence
(Oxford: Clarendon Press), 70-79.
Searle's thought experiment, "derived from the Chinese nation
thought experiment of Block (1978)" (p. 70), when taken as an
argument that the Chinese system has no thoughts (as Searle
takes it), fails: "Searle uses the fact that you are not aware of the
Chinese system's thoughts as an argument that it has no thoughts" when,
to the contrary, "real cases of multiple personality disorder are often
cases in which one personality is unaware of the others" (p. 74:
compare Cole 1991a). Such
examples are more effective when taken as directed against the claim "that the
Chinese system is a phenomenally conscious system," as Block took his Chinese
nation experiment to be directed, and which Block sees "as the
argumentative heart" of Searle's position. Since "Searle has
argued independently of the Chinese room (Searle
1992, ch. 7) that intentionality requires consciousness," Block
notes, "[this] doctrine, if correct, can be used to shore up the Chinese Room
argument" (p. 74). Even so shored up, however, Searle's argument
would further depend "on an adventurous empirical claim ... that the
scientific essence of thought is chemical rather than
computational" and, Block
asks, "Why should we believe him on this empirical question,
rather than the scientists who study the matter?" (p. 76).
Turning, then, to Wordstar on the wall, Block allows "Searle is right that
whether something is a computer and what computer it is is in part up
to us," e.g., "any physical device that can be interpreted as an
inclusive OR gate can also be interpreted as an AND gate."
Still, contrary to Wordstar on the wall, it "is not totally up to
us" (original emphasis), e.g., "[a]n inclusive OR gate cannot be
interpreted as an exclusive OR gate." Despite there being "a great deal
of freedom as to how to interpret a device," "there are also very
important restrictions on this freedom, and that is what makes it a
substantive claim that
the brain is a computer of a certain sort" (p. 78). (Cf., Block 1980.)
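COMMENT: Block's two claims about interpretive freedom can be checked
mechanically. A minimal sketch (my example, not Block's): reading the
same physical device with 0 and 1 swapped turns an inclusive OR gate
into an AND gate, but no relabeling of states yields an exclusive OR
gate.

from itertools import product

def table(gate):
    # Truth table of a two-input gate over inputs 00, 01, 10, 11.
    return tuple(gate(a, b) for a, b in product((0, 1), repeat=2))

def OR(a, b): return a | b
def AND(a, b): return a & b
def XOR(a, b): return a ^ b

def relabel(gate):
    # The same device reinterpreted with 0 and 1 swapped on inputs
    # and output alike.
    return lambda a, b: 1 - gate(1 - a, 1 - b)

print(table(relabel(OR)) == table(AND))  # True: OR, reread, is AND
print(table(relabel(OR)) == table(XOR))  # False
# No reinterpretation can do better: OR outputs 1 on three of its four
# input patterns, XOR on only two, and relabelings merely permute
# patterns and flip outputs, preserving that 3-to-1 split.

This is just the restriction Block points to: interpretation is partly,
but not "totally," up to us.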
Bringsjord, Selmer (1992), What Robots Can and Can't Be, Kluwer.
In Chapter 5, "Searle,"
a variant of John Searle's Chinese room experiment involving an
idiot-savant "Jonah" who "automatically, swiftly, without conscious
can "reduce high-level computer programs (in, say, PROLOG and LISP) to
super-austere language that drives a Register machine (or Turing
and subsequently "can use his incredible powers of mental imagery to
a Register machine, and to visualize this machine running the program
results from his reduction" (p. 185). The variant is designed to be
and robot-reply-proof, building in Searle's wonted changes -
of the program (against the systems reply) and added sensorimotor
(to counter the robot reply) - from the outset. Bringsjord then
three further objections - the Churchlands’ (1990) connectionist reply,
Cole’s (1991a) multiple-personality
and Rapaport’s (1990) process reply - and offers rebuttals. (cf., Hauser 1997b).
Bringsjord, Selmer & Ron Noel (2002), "Real Robots and the Missing Thought-Experiment
in the Chinese Room Debate" in J. Preston & M. Bishop (eds), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence
(Oxford: Clarendon Press), 144-166.
Contrary to those who urge that progress, especially in
robotics, will soon consign would-be a priori disproofs of AI,
such as Searle's, to the dustbin of history, Bringsjord and Noel offer
reflections on robots of their own that, they contend, actually
strengthen Searle's disproof by closing a gap in Searle's argument. The gap (first noted by Dennett
1980 and since exploited by Harnad
1991 & 2002) is the thought
experiment's seeming vulnerability to a combined systems-robot reply. The gap-closing experiment Bringsjord and Noel propose imagines
Searle to be monitoring and operating his own body as a robot (R)
by handworking robot-control program PR; now R
will as-if-see-u ("u" for understandingly) while "clearly
Searle sees-u absolutely nothing," which closes the loophole
Harnad would exploit. (Cf., the robotic-systems-like multiple
personality replies of Cole 1991a
and Block 2002.)
COMMENT: The trouble with the "missing thought experiment,"
as I see it, is that in it Searle's body including his brain
would be R, not Searle; the person formerly known as "Searle"
has been evicted from his former bodily habitation. Consequently,
the systems reply (understood as making Copeland
2002's "point about entailment") still applies; and, furthermore,
whatever first person authority Searle-the-experimenter originally had
to pronounce on the mental states of Searle-the-subject is lost now that the
subject is no longer Searle. (LH)
Chalmers, D. (1996), The
Conscious Mind: In Search of a Fundamental Theory, Oxford
University Press, pp. 322-328.
The Chinese room argument is characterized by Chalmers as an "internal objection" (p. 314) to "Strong AI". Where external objections
- e.g., H. Dreyfus (1979), H. Dreyfus & S. Dreyfus (1986) - allege
the inability of computers to do many
of the things humans do, internal objections, like the Chinese
room, argue that it wouldn't be thinking (or even evidence of thinking)
anyhow, even if they did. Though Searle's "original [1980a] version directs the argument
against machine intentionality rather than machine consciousness,"
Chalmers says, "All the same, it is fairly clear that consciousness
is at the root of the matter" (p. 322). At the systems reply, Chalmers thinks, "the argument
reaches an impasse": an impasse broken, Chalmers maintains, by
"dancing qualia" proof (online)
that "any system that has the same functional organization at a fine
enough grain will have qualitatively identical conscious experiences" (p. 249: cf., Searle 1980a,
the brain simulator reply).
Churchland, Paul, and
Patricia Smith Churchland (1990), "Could a Machine Think?", Scientific
American 262(1, January): 32-39.
The Churchlands point up what they see
as the "question-begging character of Searle's axiom" that "Syntax
[by] itself is neither constitutive of nor sufficient for semantics" (Searle 1990a, p. 27). Noting its
similarity to the conclusion "Programs are neither constitutive of
nor sufficient for minds," the axiom, they complain, "is already
carrying 90 percent of the weight of this almost identical conclusion"
which "is why Searle's thought experiment is devoted to shoring up
axiom 3 specifically" (p.34). The experiment's failure in this regard
is shown by imagining an analogous "refutation" of the electromagnetic
theory of light involving a man producing electromagnetic waves by
waving a bar magnet about in a dark room; observing the failure of the
magnet waving to illuminate the room; and concluding that
electromagnetic waves "are neither constitutive of nor sufficient
for light" (p.35). The intuited "semantic darkness" in the Chinese Room no more disconfirms the
computational theory of mind than the observed darkness in the Luminous
Room disconfirms the electromagnetic theory of light. Still, the
Churchlands, like Searle, "reject the Turing test as a sufficient
condition for conscious intelligence" and agree with him that "it is
also very important how the input-output function is achieved; it
is important that the right sorts of things be going on inside the
artificial machine"; but they base their claims "on the specific
behavioral failures of classical [serial symbol manipulating] machines
and on the specific virtues of [parallel connectionist] machines with a
more brainlike architecture" (p.37). The brainlike behavioral virtues
of such machines - e.g., fault tolerance, processing speed, and near
instantaneous data retrieval (p.36) - suggest, contrary to Searle's
"common-sense intuitions," that "a neurally grounded theory of meaning"
(p.37) will confirm the claims of future "nonbiological but massively
parallel" machines to true (semantics laden) artificial intelligence
(p.37) - the Connectionist Reply (I call it). Searle's (1990a) "Chinese gym" version of the
experiment - targeting connectionism - seems "far less responsive or
compelling than his first [version of the experiment]" (p.37). First,
"it is irrelevant that no unit in his system understands Chinese since
... no neuron in my brain understands English." Then there is the
heightened implausibility of the scenario: a true brain simulation
"will require the entire human populations of over 10,000 earths" (p.
Cole, David (1991a), "Artificial Intelligence and
Personal Identity", Synthese 88:399-417.
"Searle's `Chinese Room' argument," Cole allows, "shows that no computer
will ever understand English or any other natural language."
Drawing on "considerations raised by John Locke and his successors
(Grice, Quinton, Parfit, Perry and Lewis) in discussion of personal
identity," Cole contends Searle's result "is consistent with the
computer's causing a new entity to exist (a) that is not identical with
the computer, but (b) that exists solely in virtue of the machine's
computational activity, and (c) that does understand
English." "This line of reasoning," Cole continues, "
reveals the abstractness of the entity that understands, and so the
irrelevance of the fact that the hardware itself does not understand."
"Thus," he concludes, "Searle's argument fails completely to show any
limitations on the present or potential capabilities of AI" (online abstract).
Copeland, B. J. (1993), Artificial
Intelligence: A Philosophical Introduction, Blackwell, pp. 121-139 & pp. 225-230.
6, titled "The Strange Case of the Chinese Room," Copeland
undertakes "careful and cogent refutation" (p. 126) of Searle's
argument, pursuing the systems reply.
This reply, Copeland thinks, reveals the basic "logical flaw in
Searle's argument" (p. 126). The Chinese room argument invites us
to infer the absence of a property (understanding) in the whole
(system) from lack of understanding in one part (the man); and
this is invalid. The argument commits the fallacy of composition.
But Searle "believes he has shown the systems reply to be entirely
in error" (p. 126: my emphasis)! Consequently, Copeland,
proposes to "take Searle's objections one by one" to "show that
none of them work" (p.126). He identifies and carefully examines
four lines of Searlean resistance to the systems reply, debunking each
(I think successfully).
Having demolished Searle's
supporting argument Copeland proceeds to discuss Searle's thesis that "there is no way the system can get from the syntax to the
semantics." In this connection, Copeland imagines a
souped-up, robot-ensconced descendant of SAM - Turbo Sam - trained up
until he "interacts with the world as adeptly as we do, even writes
poetry." Whether to count Turbo Sam as understanding
(among other things) his own poetry amounts to "a decision on whether
or not to extend to an artefact terms and categories that we currently
apply only to each other and our
biological cousins"; and "if we are ever confronted with a robot like
Turbo Sam we ought to say it thinks" (p. 132: my
emphasis). "Given the purpose for which we apply the concept of a
thinking thing," Copeland thinks, "the contrary decision would be
impossible to justify" (p. 132). The real issue, as Copeland sees
it, is "whether a device
that works by [symbol manipulation] ... can be made to behave [as]
I have described Turbo Sam as behaving" (p. 132): The Chinese room
argument is a failed attempt to settle this empirical question by a
priori philosophical argument.
The first Searlean line of resistance portrays the systems reply as simply,
intuitively, preposterous. As Searle has it, the idea that
somehow "the conjunction of that person and bits of paper"
might understand Chinese is ridiculous (Searle
1980a, p. 419).
Copeland agrees "it does sound silly to say the man-plus-rulebook
understands Chinese even while it is simultaneously true that
the man doesn't understand" (p. 126); but to understand why it
sounds silly is to see that the apparent silliness does not embarrass
the systems reply. First, since the fundamental issue concerns
computational systems in general, the inclusion of a man in
the room is an inessential detail "apt to produce something akin to
tunnel vision": "one has to struggle not to regard the man in the room
as the only possible locus of Chinese-understanding" (p. 126).
Insofar as it depends upon this inessential detail in the thought
experimental setup, the "pull towards Searle's conclusion" is
"spurious" (p. 126). The second reason the systems reply sounds
silly in this particular case (of the Chinese room) is that "the wider system Searle
has described is itself profoundly silly. No way a man could
handwork a program capable of passing a Chinese Turing test" (p.
126). Since the intuitive preposterousness Searle alleges against
the systems reply is so largely an artifact of the "built-in absurdity
of Searle's scenario" the systems reply is scarcely
impugned. "It isn't because the systems reply is at fault
that it sounds absurd to say that the system [Searle envisages] ...
may understand Chinese" (p. 126); rather it's due to the absurdity and
inessential features of the system envisaged.
Searle alleges that the systems reply "begs the question by insisting
without argument that the system understands Chinese" (Searle 1980a, p. 419). Not
so. In challenging the validity of Searle's inference
from the man's not understanding to the system's not understanding,
Copeland reminds us, he in no way assumes that the system
understands. In fact in the case of the system Searle
actually envisages - modeled on Schank and Abelson's "Script Applier
Mechanism" - Copeland thinks we know this is false!
He cites Schank's own confession that "No program we have written can
be said to truly understand" (p.128) in this connection.
Copeland next considers the rejoinder Searle himself fronts - the
"swallow-up stratagem," as Weiss 1990 calls it: "let the individual internalize all the elements of the
system" (Searle 1980a, p.
419). By this stratagem Searle would scotch the systems reply, as
Copeland puts it, "by retelling the story so there is no `wider
system'" (p. 128). The trouble is that, thus revised, the
argument would infer absence of a property (understanding) in the part (the room-in-the-man) from its absence in the whole (man). This too is invalid. Where the original version
commits a fallacy of composition the revision substitutes a fallacy of division;
to no avail, needless to say.
Finally, Copeland considers Searle's
insistence, against the systems reply, that there is "no way the system can get from the syntax to the semantics" (Searle 1984a, p. 34: my emphasis).
Just as "I as the central processing unit [in the Chinese room
scenario] have no way of figuring out what any of these symbols means,"
Searle explains, "neither does the system" (Searle 1984a, p. 34). Here, as
Copeland points out, it is Searle who begs the question: "The Chinese
room argument is supposed to prove Searle's thesis that mere
symbol manipulation cannot produce understanding, yet Searle has just
tried to use this thesis to defend the Chinese room argument
against the systems reply" (p. 130)
The concluding section of Chapter
6 first debunks Searle's "biological objection" as fatally dependent on
the discredited Chinese room argument for support of its crucial
contention that it "is not possible to endow a device with the same
[thought causing] powers as the human brain by programming it"
(p. 134), then goes on to dispute Searle's contention that "for any
object there is some description under which that object is a digital
computer" (Searle 1990c, p.
This "Wordstar-on-the-wall argument" - which would trivialize claims of
AI, if true - is, itself, off the wall. Searle is "simply
mistaken in his belief that the `textbook definition of computation'
implies that his wall is implementing Wordstar" (pp.
136-7). Granting "that the movements of molecules [in the
wall] can be described in such a way that they are `isomorphic' with a
sequence of bit manipulations carried out by a machine running
Wordstar" (p. 137); still, this is
not all there is to implementing Wordstar. The right counterfactuals
must also hold (under the same scheme of description); and
they don't in the case of the wall. Consequently, Searle
fails to make out his claim that "every object has a description under
which it is a universal symbol system." There is, Copeland
asserts, "in fact every reason to believe that the class of such
objects is rather narrow; and it is an empirical issue whether the
a member of this class" (p. 137).
Chapter 10, titled "Parallel
Distributed Processing" (PDP), takes up the cudgel against Searle's (1990a) Chinese gym variant of his
argument, a variant targeting PDP and Connectionism. Here, amidst
much nicely nuanced discussion, Copeland makes a starkly obvious
central point: "the new [Chinese gym] version of the Chinese room
commits exactly the same fallacy [of composition] as the old [version]" (p. 226).
Copeland, B. J. (2002), "The Chinese Room from a
Logical Point of View" in J. Preston & M. Bishop (eds), Views into the Chinese Room: New Essays on
Searle and Artificial Intelligence (Oxford: Clarendon Press), 109-122.
Copeland distinguishes four versions of the argument
and finds them all "unsatisfactory" (p. 109): the "vanilla version" and "outdoors version" (of Searle 1980a) go against
traditional symbol-processing AI; the "simulator" and "gymnasium" versions (Searle 1990a) go against connectionism. While Copeland's reasons for rejecting Searle's arguments are
crucially system invoking, Copeland stresses the importance of
distinguishing his "logical reply to the vanilla
from "what Searle calls the Systems Reply" (p.
111). The Systems Reply (as
Searle describes it) is just the question-begging assertion that "the system
does understand" (Searle 1980a, p. 419: my emphases).
Copeland's "logical reply" merely asserts that the would-be conclusion,
"the system doesn't understand," doesn't logically follow from
the premise that "the human clerk in the room doesn't
understand." (This "point about entailment" (p.
111) I believe is what advocates of "the systems reply" are most
charitably understood to have been making all along: granting Clerk
wouldn't understand, still, possibly, the system
would.) The "outdoors version" in which Clerk has memorized the
program with its tables, etc., and does the lookups, etc. in his head
suffers from the
opposite logical flaw: the (internalized) system's lack of
understanding logically does not follow from Clerk's lack of
understanding; or, rather, it only follows assuming the "Part-Of principle" (p. 112) that if any part
understands then the whole understands, together with "Searle's
Thesis" that if one sincerely disavows understanding then one
really doesn't understand. Both the Part-Of principle (implicated
here) and Searle's Incorrigibility Thesis (implicated in [each]
version of the argument) are "left totally unsupported by Searle" (p.
112), and both are dubious. The
gymnasium version would directly counterinstance
connectionism by replacing the serial
processing Clerk with a gymnasium full of clerks working in parallel,
but here again, the logical reply forfends: "The fallacy involved in
[arguing] from part to whole is even more glaring here than in the original
version" (p. 116). The simulator version
would indirectly counterinstance connectionism, envisaging "a
connectionist network," N, "that is said to understand Chinese"
simulated by Clerk handworking a program "'computationally equivalent'
to N" (p. 114). Since Clerk would not understand by
virtue of his serial computations, Searle concludes, neither does N
understand by virtue of its parallel computations. Here,
again, Clerk's nonunderstanding fails to imply Room's nonunderstanding
- the "logical reply" still applies - and, additionally, the simulation
version commits "the simulation fallacy" of inferring
possession of a property by the thing simulated (N) from
possession of that property by the simulation (Room). Also, notably, there are networks comprising "O-machines"
(as described by Turing 1938) "that cannot be simulated by a universal
Turing machine" (p. 116). Besides undercutting the simulation
version of the argument, this means, more generally that "even if some
version of the argument were sound, the argument could not possibly
establish ... that whatever is 'purely formal' or 'syntactical' is
neither constitutive of nor sufficient for mind'" since O-machine
procedures, while purely formal, are not handworkable (as in the Chinese room).
Coulter, Jeff & Wes Sharrock (2002), "The Hinterland of the Chinese
Room" in J. Preston & M. Bishop (eds), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence
(Oxford: Clarendon Press), 181-200.
"If computation requires intelligence, and if computation can
be done on machines, then [Turing] thought, since machines can do
computations they must possess intelligence" (p. 184). However,
"adaptation of Turing's method" of reducing "complex calculations" to
"very simple instructions that those devoid of mathematical abilities
could follow," instead, "ensures that calculations can be carried out
without any intelligence at all": "[t]he Chinese Room dramatizes this
point with respect to the
simulation of linguistic production" (pp. 184-5). While finding
this "considerable merit in the Chinese room analogy," (p. 191) Coulter
and Sharrock, nevertheless, reject Searle's underlying "dualistic
according to which the meanings are [consciously] 'attached' to the
by a speaker's 'interpretation', which interpretive work is located in
the 'mind' of the speaker" (p. 190-1): given this
"appeals to the deliverances of physics and biology do not work out
for Searle's (philosophical) project" - "to bring 'mind' within the
of the natural sciences" - since "physics and biology, as ordinarily,
empirically, undertaken by professional practitioners, make no
to 'consciousness' nor to 'mind'." To rightly insist that
"'consciousness' is not a free-standing notion, but is (generally) a
relational one, that 'consciousness' is (generally) 'consciousness of'"
is to rightfully resist "the view that 'consciousness' is any
sort of (discretely identifiable) phenomenon" that might
somehow be "'realized in' and 'caused by' hitherto undiscovered
neurophysiological processes" (p. 197-8). Proper elucidation of
"consciousness" and the rest of "our 'mental vocabulary," accordingly,
"lead us not into the interiors of our skulls, speculatively construed,
but into a richer appreciation of the complex ways in which our 'mental
lives' are inextricably bound up with the rest of our lives as we lead
them in society with others" (p. 199).
Daniel (1987), "Fast Thinking," Behavioral and Brain Sciences
That "a program - any program by itself - is not sufficient for semantics" is
Searle's stated conclusion. Dennett observes that it
is "obvious (and irrelevant)" that "no computer program `by itself'"
- as a "mere sequence of symbols" or even "lying unimplemented on
the shelf" - "could `produce intentionality'" (p. 324-325).
Only the claim "that no concretely implemented running computer program
could `produce intentionality'" is "a challenge to AI" (p. 325);
so this is how the argument's conclusion must be construed to be of
interest. When the argument's premises are reconstrued
as needed to support this conclusion, however, they are, at best, dubious.
That programs "are purely formal (i.e., syntactical)" - apropos program runs - is false.
If details of `embodiment' are included in the
specification of a program, and are considered essential to it, then
[the] program is not a purely formal object at all ... and without some details of embodiment being fixed - by the internal semantics of the
language in which the machine is ultimately written - a program is not
even a syntactic object, but just a pattern of marks inert as
wallpaper. (pp. 336-337)
That syntax "is neither equivalent to nor sufficient for semantics" - apropos
program runs - is a
dubious prediction. More likely,
embodied, running syntax -
the `right program' on a suitably fast machine - is sufficient
for derived intentionality, and that is all the intentionality
there is. (p. 336)
have mental contents" - even this -
is a dubious proposition if content is "viewed, as Searle can now be
seen to require, as a property to which the subject has conscious
privileged access" (p. 337). Searle is required to so
view it since his case "depends on the `first-person point
of view' of the fellow in the room"; so, "that is the crux for
Searle: consciousness, not `semantics'" (p. 335).
Harnad, Stevan (1991), "Other bodies, other minds: a machine incarnation of an old
philosophical problem", Minds and Machines 1:5-25.
Harnad endorses Searle's Chinese Room Experiment as a reason for preferring
his proposed Total Turing Test (TTT) to Turing's original "pen pal"
test (TT). By "calling for both linguistic and robotic
capacity," Harnad contends, TTT is rendered "immune to
Searle's Chinese Room Argument" (p. 49) because "mere sensory
transduction can foil Searle's argument": Searle in the room must [implement]
"the internal activities of the machine ... without displaying the
critical mental function in question" yet, if "he is being the device's sensors ... then he would in fact be seeing!" (p.
50). Though thwarted by transduction, Harnad thinks that as
an "argument against the TT and symbol manipulation" the Chinese
room has been "underestimated" (p. 49). The Chinese room, in
Harnad's estimation, adequately shows "that symbol manipulation is not
all there is to mental functions and that the linguistic version of
the Turing Test just isn't strong enough, because linguistic
communication could in principle (though perhaps not in practice) be no
[more than] mindless symbol manipulation" (p. 50). "AI's
favored `systems reply'" is a "hand-waving" resort to "sci-fi
fantasies," and the Churchland's (1990) "luminous room" rests on a false analogy.
Harnad sees that the Chinese Room Experiment is not, in the first
place, about intentionality (as advertised), but about consciousness therein/thereof: "if there weren't
something it was like [i.e., a conscious or subjective experience]
to be in a state that is about something" or having intentionality
"then the difference between "real" and "as-if" intentionality would
vanish completely" (p. 53 n. 3), vitiating the experiment. Acknowledging this more forthrightly than Searle (1999 ), Harnad faces the Other-Minds Problem arising from such close linkage
of consciousness to true ("intrinsic") mentality as Harnad insists on,
in agreement with Searle (cf., Searle 1992).
Your "own private experience" being the sole test of whether your mentality
intrinsic, on this view, it seems there "is in fact, no evidence for
me that anyone else but me has a mind" (p. 45). Remarkably,
Harnad accepts this: no behavioral (or otherwise
public) test provides any evidence of (genuine intrinsic)
mentation "at all, at least no scientific evidence" (p. 46). Regrettably, he never explains how to reconcile this contention
(that no public test provides any evidence of true [i.e.,
private] mentation) with his contention that TTT (a
public test itself) is a better empirical test than TT. (Hauser 1993b replies to this article.)
Harnad, Stevan (2002), "Minds, Machines, and Searle 2:
What's Right and Wrong about the Chinese Room Argument" in J. Preston & M. Bishop
(eds), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence
(Oxford: Clarendon Press), 294-307.
Harnad remarks that among those who have commented on or
otherwise considered it, "the overwhelming majority ... think the CRA
is dead wrong" and confesses his "impression that, apart from myself,
the only ones who profess to accept the validity of the CRA seem to be
those who are equally persuaded by ... 'Granny Objections' - the kinds
of soft-headed friends that do even more mischief to one's case than
one's foes" (pp.
296-7). For this, he holds "Searle is partly to blame" for his
unclarity about (1) "What on earth is 'Strong AI'?" (p. 297) and (2)
"synonymy of the 'conscious' and the 'mental' is at the heart of the
CRA" (p.302). Seen as arguing that conscious mental
are not just computational, however,
Harnad maintains, while "CRA is not a proof; yet it remains the most
plausible prediction based on what we know" (p. 302). He deems
empirically "decisive variant" to be the rejoinder where we "assume
Searle has memorized all the symbols" so that "Searle himself would be
all there was to the system" (p. 301): against this, "Systematists" are
resort to "ad hoc speculations" - "that, as result of memorizing and
manipulating very many meaningless symbols, Chinese-understanding would
either consciously in Searle, or, multiple-personality-style, in
conscious Chinese-understanding entity inside his head of which Searle
was unaware" - that are wildly implausible (pp. 301-302).
"there is also a sense in which the Systems Reply is right, for
the CRA shows that cognition cannot be all just computational,
it certainly does not show that it can't be computational at
all"; and CRA, moreover, is foiled by the addition of sensorimotor
capacities to the system (cf., Harnad
1991), "nor would it work against a hybrid
computational/noncomputational one" (p. 303). Consequently,
"Searle was also over-reaching in concluding that the CRA redirects
our line of inquiry from computation to brain function" (cf., Searle 2002): short of that, "there are
still plenty of degrees of freedom in both hybrid and non-computational
approaches to reverse-engineering cognition." Among such noncomputational
or hybrid approaches, what are "now called 'embodied cognition' and
'robotics'," including Harnad's own approach "of grounding symbol [systems]
in the sensorimotor (T3) world with neural nets" (p. 304), seem promising.
Haugeland, John (2002), "Syntax, Semantics, Physics" in J. Preston & M. Bishop
(eds), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence
(Oxford: Clarendon Press), 379-392.
argument "is a modus tollens" (p. 379): arguing, in effect, if
the system understood then so would Searle; but Searle
wouldn't understand (as the thought experiment shows); therefore the
system doesn't understand. The truth of the "if ... then ..." premise
is suspect, however. In the original version, Searle "simply slides
from characterizing himself as a part of the system - namely
its central processor - to speaking of himself as if he were the whole":
this "obvious [part-whole] fallacy" is what "'the systems reply'" (p.
380) points out. Searle's revision
of the scenario against the systems reply - wherein he would "internalize the whole system by simply memorizing all its code and data, then
carrying out the operations in his head" (p. 380: original emphasis) -
commits the converse whole-part or "level-of-description" fallacy:
inferring the internalized system's nonunderstanding from Searle's
nonunderstanding would be "like inferring
from the fact that some of the internal processes in an engine [have]
the feature of rotating, that the engine itself is rotating" (p. 383).
To look closely, here, is to see that "[w]hat we are to imagine in the
internalization fantasy is something like a patient with multiple
personality disorder" (p. 380: cf., Cole
1991a, Block 2002), and Searle's
supporting argumentation - to the effect that whatever is in the
part is in the whole (cf., Copeland 2002) - "equivocates on the word 'in'" (p. 381) between a property
possession sense (in which understanding is supposed not to
be in Searle), and a containment
sense (in which the internalized processing is supposed to be
in Searle). "In sum, the Chinese Room Argument fails, due to a part-whole
a level-of-description fallacy, a fallacy of equivocation, or some
of the three" (p. 382). Furthermore, since "serious AI is nothing other
a theoretical proposal as to the genus of the relevant causal powers [namely,
computational], plus a concrete research program for homing in on the [species],"
Searle's "observation that syntax by itself (without causal [powers])
is insufficient for semantics is, though true, entirely beside the
point" (p. 388: cf., Hauser 2002). The Wordstar on the Wall argument to the effect that
every (sufficiently complex) object implements every (sufficiently
simple) program (cf., Searle 1990c:
27, Searle 1992: 208-9) would come to Searle's rescue here,
but is based on "several
deep misconceptions about syntax and computation" (p. 389).
"for quite different reasons," Haugeland shares Searle's doubt that "AI
any hope at all of homing in on the causal powers that are prerequisite
genuine intelligence and semantics" (p. 388). (Cf., Haugeland 1980.)
Hauser, Larry (1993a), Searle's
Chinese Box: The Chinese Room Argument and Artificial Intelligence,
Michigan State University (Doctoral Dissertation).
Searle's Chinese room argument is fallacious
(chap. 2). Furthermore, the supporting Chinese room thought
experiment is not robust (similar scenarios yield conflicting
intuitions), fails to generalize to other mental states (besides
understanding) as claimed, and depends for its credibility on a
dubious tender of epistemic privilege - the privilege to override
all external or "third person" evidence - to first person (dis)avowals
mental properties like understanding (chap. 3). Searle's
that everyday predications of mental terms to computers are
discountable as equivocal (figurative) "as-if" attributions is
unwarranted: standard ambiguity tests evidence the univocality of such
attributions (chap. 4). Searle's further would-be-supporting distinction
of intrinsic intentionality (ours) from as-if
intentionality (theirs) is untenable. It depends either on
dubious doctrines of objective intrinsicality according
to which meaning is literally in the head (chap. 5); or else it depends
on even more dubious doctrines of subjective intrinsicality according to which meaning is "in" consciousness (chap. 6).
Hauser, Larry (1993b),
Reaping the Whirlwind: Reply to Harnad's Other Bodies Other Minds. Minds
and Machines 3, pp. 219-238.
proposed "robotic upgrade" of Turing's Test (TT), from a test of
linguistic capacity alone to a Total Turing Test (TTT) of
linguistic and sensorimotor capacity - to protect against the
Chinese room experiment - conflicts with his claim
that no behavioral test provides even probable warrant for mental
attributions. The evidentiary impotence of behavior - on Harnad's
view - is due to the ineliminable consciousness of thought
(cf., Searle's 1990f "Connection
Principle") and there being "no evidence" ( Harnad 1991, p. 45) of consciousness besides "private experience" (Harnad 1991,
p. 52). I agree with Harnad that distinguishing real from "as if" thought on the basis of (presence or lack of) consciousness - thus
rejecting Turing or other behavioral testing as sufficient warrant for
mental attribution - has the skeptical consequence Harnad
accepts: "there is in fact no evidence for me
that anyone else but me has a mind" (Harnad
1991, p. 45). I disagree with his acceptance of
it! It would be better to give up the neo-Cartesian "faith" (Harnad 1991, p. 52) in private
conscious experience underlying Harnad's allegiance to Searle's
controversial Chinese Room Experiment than to give up all claim to know
others think. It would be better to allow that (passing) Turing's Test
evidences - even strongly evidences - thought.
While Harnad's allegiance to the Connection Principle causes him to overestimate
the force of Searle's argument against computationalism and against
Turing's test (TT), he is further mistaken in thinking his "robotic
upgrade" (TTT) confers any special immunity to Searle's thought
experiment. Visual transduction can be unconscious, as in
"blindsight," which will be "as-if seeing" by Harnad's and Searle's
lights. So, by these lights Searle can transduce
visual input without actually (i.e., consciously) seeing. "If the
critical mental function in question is not required to be
conscious (as I advocate), then TT and TTT are both immune to
Searle's example. If the critical mental function in question is required to be conscious (as Harnad advocates), then both TT and TTT
[are] vulnerable to Searle's example, perhaps" (p. 229).
Larry (1997a), "Searle's Chinese Box: Debunking the Chinese
Room Argument", Minds and Machines 7: 199-226.
Searle's original 1980a
presentation suborns a fallacy: Strong AI or Weak AI;
not Strong AI (by the Chinese room experiment); therefore, Weak
AI. This equivocates on "Strong AI" between "thought is essentially
computation" (Computationalism), and "computers actually (or
someday will) think" (AI Proper). The experiment targets
Computationalism ... but Weak AI (they simulate) is logically opposed
to AI Proper (they think), not to Computationalism. Taken
as targeting AI Proper, the Chinese room is a false dichotomy wrapped
in an equivocation. Searle's invocation of "causal powers
(at least) equivalent to those of brains" in this connection (against
AI Proper) is similarly equivocal. Furthermore, Searle's
advertised "derivation from axioms" targeting Computationalism is,
itself, unsound. Simply construed it's simply invalid and
unsimply construed (as invoking modalities and second-order
[quantification]) - since program runs (what's at issue) are not purely syntactic (as Searle's first "axiom" asserts they are) - it
makes a false assumption.
Hauser, Larry (2002), "Nixin' goes to China",
Preston & M. Bishop (eds.), Views Into the
Chinese Room: New Essays on Searle and Artificial Intelligence (Oxford:
Clarendon Press), 123-143 .
Computationalism holds `the essence of the mental is the operation of
a physical symbol system' (Newell 1979 as cited by Searle 1980a, p. 421: my
emphasis). Computationalism identifies minds with processes or
(perhaps even more concretely) with implementations, not with programs "by themselves" (Searle
1997, p. 209). But substituting "processes" or
"implementations" for "programs" in "programs are formal (syntactic)"
falsifies the premise: processes or implementations are not
purely syntactic but incorporate elements of dynamism (at least)
besides. In turn, substituting "running syntax" or "implemented
syntax" for "syntax" in "syntax is not sufficient for
semantics" makes it impossible to maintain the conceit that this is "a
conceptual truth that we knew all along" (Searle
1988, p. 214). The resulting premise is clearly an empirical
hypothesis in need of empirical
support: support the Chinese room thought experiment is inadequate
to provide. The point of experiments being to adjudicate between
competing hypotheses, to tender overriding epistemic privileges
to the first person (as Searle does) fatally prejudices the
experiment. Further, contrary to Searle's failed thought experiment,
there is ample evidence from real experiments - e.g.,
intelligent findings and decisions of actual computers running
existing programs - to suggest that processing does in fact suffice for intentionality. Searle's would-be distinction between
genuine attributions of "intrinsic intentionality" (to us)
and figurative attributions of "as-if" intentionality (to them) is
too facile to impugn this evidence.
Penrose, Roger (2002), "Consciousness, Computation,
and the Chinese Room" in J. Preston & M. Bishop (eds), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence
(Oxford: Clarendon Press), 226-249.
Penrose finds the Chinese room example's demonstration "that
the mere carrying out of a successful algorithm does not itself
imply that any understanding has taken place" (p. 229) "rather
convincing" - especially "when there is just a single person carrying
out the algorithm" and "where we restrict attention to the case of an
algorithm which is sufficiently uncomplicated for a person to actually
carry it out in less than a lifetime" - though it falls short of
"rigorously establishing" (p. 230) this point against "Strong
AI." Penrose also agrees that Strong AI implies "an extreme form
of dualism," "as Searle has pointed out":
another "very serious" difficulty (p. 231). Still, Penrose thinks
Searle's dogmatic claims that "biological objects (brains) can have
'intentionality' and 'semantics'" while "electronic ones cannot" "does
not ... point the way towards any helpful scientific theory of mind"
(p. 232). This is principally due to Searle's acceptance of the "Weak
AI" notion that "physical action can be simulated computationally"
(p. 226) which - given Searle's rightful rejection of Strong AI -
makes "awareness" (p. 226) and "consciousness" (p. 233) objectively
indiscernible. "Science, after all, is concerned with objectively
discernible fact" (p. 233). In
this regard, Penrose judges, his own hypothesis that there is a
"non-computational ingredient ... upon which the biological action of
our conscious brains depends" (p. 238) - as evidenced by humans'
abilities to solve uncomputable mathematical problems, e.g. the halting
problem - to be scientifically preferable. Penrose
then proceeds to attempt to clarify and defend this alternative against
Searle's 1997 criticisms.
Pinker, Steven (1997), How the Mind Works, W. W. Norton & Co., New
York, pp. 93-95.
Searle appeals to his Chinese room example, as Pinker
tells it, to argue this: "Intentionality, consciousness, and other
phenomena are caused not by information processing ... but by the
physical-chemical properties of actual human brains" (p. 94). Pinker
replies that "brain tumors, the brains of mice, and neural tissues kept
alive in a dish don't understand, but their physical chemical properties
are the same as the ones of our brains." They don't understand because
"these hunks of neural tissue are not arranged into patterns of
connectivity that carry out the right information processing" (p. 95: cf., Sharvy 1985). Pinker endorses
Paul & Patricia Churchland's (1990)
electromagnetic room thought experiment as a refutation of Searle's (1990a) Chinese Gym variation on the
Chinese room (a variant aiming to show that connectionist networks and
parallel processing don't suffice for semantics or understanding).
Preston, John & Bishop, Mark (2002) Views into the Chinese Room: New Essays on Searle and Artificial Intelligence
(Oxford: Clarendon Press).
Preston, John (2002), "Introduction," in J. Preston & M. Bishop (eds.), Views Into the Chinese Room: New Essays on
Searle and Artificial Intelligence (Oxford: Clarendon Press), 1-50.
- Introduction, John Preston, 1-50
- Twenty-One Years in the Chinese Room, John Searle, 51-69
- Searle's Arguments Against Cognitive Science,
Ned Block, 70-79
- Understanding, Orientations, and Objectivity,
Terry Winograd, 80-94
- A Chinese Room that Understands, Herbert
A. Simon & Stuart Eisenstadt, 95-108
- The Chinese Room from a Logical Point of View,
B. Jack Copeland, 109-123
- Nixin' Goes to China, Larry Hauser, 123-143
- Real Robots and the Missing
Thought-Experiment in the Chinese Room Debate, Selmer Bringsjord & Ron Noel, 144-166
- Wittgenstein's Anticipation of the Chinese Room,
Diane Proudfoot, 167-180
- The Hinterland of the Chinese Room,
Jeff Coulter & Wes Sharrock, 181-200
- Searle's Misunderstandings of Functionalism and Strong AI, Georges Rey, 201-225
- Consciousness, Computation, and the Chinese Room,
Roger Penrose, 226-249
- Neural Depictions of 'World' and 'Self':
Bringing Computational Understanding to the Chinese Room, Igor Aleksander,
- Do Virtual Actions Avoid the Chinese Room?,
John G. Taylor, 269-293
- Minds, Machines, and Searle 2: What's Right and
Wrong about the Chinese Room Argument, Stevan Harnad, 294-307
- Alien Encounters, Kevin Warwick, 308-318
- Cyborgs in the Chinese Room: Boundaries Transgressed
and Boundaries Blurred, Alison Adam, 319-337
- Change in the Rules: Computers, Dynamical
Systems, and Searle, Michael Wheeler, 338-359
- Dancing With Pixies: Strong Artificial
Intelligence and Panpsychism, Mark Bishop,
- Syntax, Semantics, Physics, John Haugeland,
Preston's Introduction sets the Chinese room argument (CRA)
in historical and theoretical perspective and observes (p. 26), "The beauty, as well as the
import, of the CRA, is its close proximity not just
to the Turing Test scenario, but also to the original explanation of a
Turing machine" and, significantly, the CRA "abstracts from the human
computer in much the same way (by ignoring limitations
of speed, memory, and reliability, for example)" as Turing's
original (1937) explanation of his
machine. Still, Preston notes, despite there being "little
agreement about exactly how the argument goes wrong"
(p. 47: my emphasis), there is "a sort of consensus among
cognitive scientists to the effect that the CRA is and has been shown
to be bankrupt." Indeed, Preston notes, "[s]ome prominent
philosophers of mind declined to contribute" to this volume "on the
grounds that the project would give further exposure to a woefully
flawed bit of philosophizing," and even "some who have contributed to
the volume [including your humble annotator (Hauser 2002)]
think of the CRA not just as flawed, but as pernicious and wholly
undeserving of its fame." (p. 46-47).
Proudfoot, Diane, "Wittgenstein's Anticipation of the
Chinese Room" in J. Preston & M. Bishop (eds), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence
(Oxford: Clarendon Press), 167-180.
"Wittgenstein used the notion of the living reading-machine, as Searle
did that of the man in the Chinese Room, to test the thesis that symbol
manipulation is sufficient for understanding," and "Wittgenstein's
view," like Searle's. "is that reading-machines, living or not, do not
(as a matter of fact) genuinely read, calculate, and so on" (p. 169).
Nevertheless, "Wittgenstein did not side with Searle. His
arguments provide compelling objections not only to the Chinese Room
Argument but also to the model of the mind that appears to underlie it"
since, for Wittgenstein, "whatever item in consciousness might on
occasion accompany understanding, it cannot be a guarantor of
understanding" (p. 170) or the lack thereof. Indeed, Searle's
seeming picture of meanings as a distinctive conscious experiential
processes accompanying language use is precisely the picture that
Wittgenstein most resolutely opposes. Wittgenstein "denied that
understanding, thinking, intending, meaning, and so on consist in any sort of process" (p. 176: my emphasis), whether symbol manipulative or experiential. Rather, on Wittgenstein's view, "a symbol
manipulator S understands only if S has a particular history (one which involves learning and training)" and "in addition, S
must participate in a particular social environment (one which
includes normative constraints and further uses of the symbols)."
It is because "[n]either the man in the Chinese Room nor
Wittgenstein's living reading machine satisfies these requirements" that, in
fact, "neither understands," but "[t]here is nothing in Wittgenstein's
externalist conditions on understanding that in principle prevents a 'reading machine', living or otherwise, from coming to
understand" (pp. 177-8: my emphasis).
Rapaport, William J.
(1990), "Computer Processes and Virtual Persons: Comments on Cole's
`Artificial Intelligence and Personal Identity'", Technical
Report 90-13 (Buffalo: SUNY Buffalo Department of Computer Science).
Rapaport seeks "to clarify and extend the issues" raised by Cole
1991a, arguing "that, in Searle's celebrated Chinese-Room Argument,
Searle-in-the-room does understand Chinese, in spite of his claims to
the contrary. He does this in the sense that he is executing a
computer `process' that can be said to understand Chinese" (online).
Rey, Georges (2002), "Searle's Misunderstandings of
Functionalism and Strong AI" in J. Preston & M. Bishop
(eds), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence
(Oxford: Clarendon Press), 201-225.
Searle's Chinese room argument poses no sustainable challenge
to the coherence of "what many have called the
'computational-representational theory of thought' (CRTT)" as "a possible explanation of mental phenomena." Since CRTT "is committed neither to
the 'Turing test' nor to a conversation manual theory of language," the
original room example
"is not remotely relevant to CRTT" (p. 203-4). "Consequently, in
modifying Searle's example to accommodate CRTT one has to imagine
someone following a whole set of programs - but, really, given
that we have every reason to think many of them run simultaneously 'in
parallel', quite a number of people, each one running one of
them, with whatever 'master control' (if any) exerted by a normal human
being" (p. 208). Imagine a machine with the requisite complexity
"set loose upon the world," and "once one imagines this machine in
place of the far too simple Chinese Room, it's awfully hard to share Searle's
scepticism" (p. 210). "The important point is that, from an explanatory point of view, its intelligent behavior would be subject to
of the same systematic decomposition, laws, and counterfactuals as a
Chinese speaker." Given "that it is on the basis of such ...
we are ordinarily justified in attributing contentful states to human
this would be "good reason to ... regard the machines states as
intentional" (p. 211). Further "worrying at this point about the
of the man/men in the room would seem to involve a blatant fallacy of
as "was, of course, precisely the complaint of those who defended the
'Systems Reply': such a reply only seems implausible "if the activity
in the room is quite as simple as Searle's original example suggests" (p. 213). Searle's argument to the effect that syntax is not intrinsic
to physics because
any arbitrary program maps onto any (sufficiently large) wall (WordStar on the wall) "ignores a number of
constraints any reasonable version of CRTT places upon its application
to the real
world" - e.g., that mappings must provide "explanations of
counter-factual-supporting regularities," and cohere with "molecular and anchored" analyses of appurtenant "subsystems
and relations" - which "regularities are simply not available in the
case of an arbitrary wall" (p. 216). As for "the
reasonable proposal being pursued by CRTT ... that states of a person
have semantic properties by virtue of their computational organization and their causal relations to the world," Rey observes, "Searle has little
to say about the relation of any of this rather large body of work to
Strong AI, except to be shortly dismissive of it." Such accounts,
according to Searle, "'will leave out the subjectivity of mental
content'" and so "'still not get at the essence'" thereof (p. 219).
Where Searle deploys the Connection Principle or first-person
perspectives against CRTT, however, "he needs - yet again - to
consider its substantial resources for dealing with them" and with
puzzling problems about intentionality (e.g., about the "aspectual
shape" or "intensionality (with an s)" thereof). Since Searle's view
provides no comparable resources of
its own, Rey avers, "Searle's (1992: 1) own insistence on materialism,
while perfectly correct, is simply inadequate" (p. 222).
Simon, Herbert A. & Stuart Eisenstadt (2002), "A Chinese Room that
Understands" in J. Preston & M. Bishop (eds), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence
(Oxford: Clarendon Press), 95-108.
Simon and Eisenstadt contend the judgment that "a computer can be
programmed (and has been programmed) to understand natural language" is
warranted under "the same criteria for the presence of understanding in
the case of computer behavior as ... in the case of human behavior" (p.
95). This "Empirical Strong AI thesis (SAI-E)" is distinct "from
Logical Strong AI (SAI-L), the thesis Searle refutes, and which asserts
that 'an appropriately programmed digital computer ... would thereby
necessarily have a mind' (Searle 1999a:
115)" (p. 95: my elision). Since it seems we distinguish "rote
responses from responses with understanding" by the fact that "the
later involve intensions in some essential way" (p. 100); following
Carnap's (1955) suggestion that "the intension of a predicate can be
a robot just as well as for a human speaker" behaviorally "and even
more completely if the internal structure of the robot is sufficiently
known to predict how it will function under various conditions"; Simon
and Eisenstadt maintain that both external (behavioral) and internal
(structural) evidence support the claims that the responses of three
specimen systems - EPAM (Elementary Perceiver and Memorizer), ZBIE (a
language-learning program), and the (image & descriptive associative) CaMeRa -
involve intensions. "On the basis of this evidence, and a large body
of other evidence provided by programs that solve difficult problems,
programs that make scientific discoveries and rediscoveries," etc.,
the "case for Strong AI in its empirical version, SAI-E, is overwhelming" (p.
107), and the target of Searle's argument, SAI-L, is uninteresting.
Taylor, John G. (2002), "Do Virtual Actions Avoid the
Chinese Room?" in J. Preston & M. Bishop (eds), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence
(Oxford: Clarendon Press), 269-293.
The Chinese room argument "gave a powerful rebuff" to the approach
which claims that the mind is a program of the computational variety
(p. 267), and "the strength of Searle's arguments" should point "those
interested in neural network models of the brain, and especially of its
consciousness" to "look at the very nature of semantics" (p. 270) in
the brain. Since brains cause consciousness (cf., Searle 1980a) and, "as Searle [1997:
202] states 'The brain is a machine'" (p. 290), understanding how the
brain does it would seem to be the first step, if we are ever "to create
conscious machines." To this end Taylor pursues the hypothesis of
"semantics as virtual actions" of/in "the frontal lobes" (p. 287).
Virtual actions (roughly speaking) are would-be incipient
acts which Taylor identifies with "actions made in parallel with the
auditory code of the word" in "the original learning of the word": such
a "movement pattern," subsequently having been "learnt to be
inhibited," remains "still present,
even if ineffective": "it is the relic of movement, which still provides
constraints for future processing" (p. 285). Taylor suggests such
semantics (however implemented) avoid Searle's argument: since "frontal
semantic activations involve possibly contradictory virtual actions" it
would "not be possible to construct a logically consistent set of rules
to describe these parallel activations" (p. 289). Here I worry: if
Peter zigs while Paula zags, that's not contradictory; neither is it
contradictory if I simultaneously clench my left fist and relax my
right; and, where there are simultaneous "contradictory virtual
actions," these will be actions of different neural committees (so to
speak). So, where's the contradiction? Perhaps in the vicinity of
such concerns, Taylor
worries, "Are there little slaves scurrying back and forth in the model
I have presented?" (cf., Searle 1990a's
Chinese Gymnasium); to which, Taylor replies "decidedly not." Since
the proposed "manner in which meaning is brought into the total neural
coding of language involves grounding the representations by actions on
the objects of relevance," "the 'virtual actions' meaning attached
to the symbols of these external objects has no arbitrariness at all";
and "[t]here are no homunculi working with arbitrary rules" (p. 287-8:
cf., the Robot Reply).
Warwick, Kevin (2002), "Alien Encounters" in J. Preston & M. Bishop
(eds), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence
(Oxford: Clarendon Press), 308-318.
"[Human] biases lie at the heart of all such refutations" of
the possibility of "computer-controlled conscious robots" as Searle's
Chinese room argument (p. 309). Contrary to all such arguments, it is
"reasonable," Warwick believes, "to expect that consciousness, based
upon a complex network of processing cells, will be different between
species [and even machines] whose processing cells are organized
differently" (p. 313) but still to suspect that it exists in
these different forms. "Autonomous robots" (p. 315) show that machines
too can be individuals, "doing things in their own individual way, partly
due to their program, partly due to experience" and "[a] corollary of
this is that, if they have the appropriate physical capabilities, machines,
perhaps, can not only learn to communicate with each other
but can also learn to relate syntax to semantics, grounded by their
own experience of physically sensed entities" (p. 315: cf., the Robot Reply). "Restriction of life and
consciousness to biological systems is a purely cultural stance" (p.
316) since "[i]n
the past many cultures, and indeed a number of 'non-Western' cultures
at the present time" are not so restrictive, "bestowing [life,
intelligence, and consciousness], in a broad sense to all things" (p.
316). "R]ecent cybernetic research with brain implants" even
"suggests that semantic concepts, for humans and machines alike, can be
passed directly into a human brain from a machine brain and vice versa (Warwick 1998)" (p. 317).
Wheeler, Michael (2002), "Change in the Rules:
Computers, Dynamical Systems and Searle" in J. Preston & M. Bishop
(eds), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence
(Oxford: Clarendon Press), 338-359.
"[T]he emerging dynamical systems approach to cognition
(henceforth DSC)" - "the view that '[natural] cognitive systems
are dynamical systems and are best viewed from the perspective of
dynamics' (van Gelder & Port 1995:
5)" (p. 338: original emphasis) - seconds the denial "that computation
is sufficient for mind" with which Searle's Chinese room argument
concludes. Furthermore, Searle's would-be positive account of
states of mind as "what Searle calls 'causally emergent system
features'" (p. 352) of "systems of neurons in the same way that
solidity and liquidity are emergent features of systems of molecules"
(Searle 1992, p. 112) invites explication on DSC lines. "In
fact," Wheeler suggests, "the term 'causally emergent system feature'
is simply another name for the kind of property that, as the outcome of
self-organization I have glossed as 'new structured order'" (p. 353): flocking (as of birds) provides a striking example. But, thus explicated,
the parallel that Searle "draws between mental states and other less
controversial causally emergent phenomena such as solidity, and we now
can add flocking" seems to be at odds with the Chinese Room Argument.
While the causal emergence of mind, "if DSC is right, will
involve all sorts of fine-grained temporal
complexities" (p. 54) that are extracomputational; still, "any
such dynamical systems account, couched in terms of geometric
structures such as state spaces, trajectories, attractors, and phase
portraits, remains a purely formal story" (p. 347: original
emphasis); "what the Chinese room actually establishes, if successful,
is the more general claim that no purely formal process could ever
constitute a mind"; hence, "the anti-formalist conclusion of the
Chinese Room Argument appears to be contradicted by the very concrete
account of mind that Searle himself develops" (p. 356),
since that account needs elaboration on DSC (i.e., formal) lines.
Winograd, Terry (2002) "Understanding, Orientations, and
Objectivity" in J. Preston & M. Bishop
(eds), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence
(Oxford: Clarendon Press), 80-94.
Winograd argues that "the question Searle poses, 'Does the
computer understand' is a meaningless question when it is stripped of
an appropriate context of utterance" (p. 80) - specifically, the
context of "attitudes and role relations that can have implications for
social interaction" (p. 84). Consequently, whether or not to
understand" is "not an objective question but a choice (not individual
but within a linguistic community) of what we choose to approach as
'autonomous'" (p. 84). Nevertheless, "whether something is
or understands "is not simply an idle matter of opinion" (p. 84) and
since "AI claims are couched in terms of 'doing what a person does',"
where "the background of expectation is ... based on the background of
full human abilities" (p. 85), here, the choice is clear: "no existing
AI program understands" (p. 85). In the case of envisaged
robotic, brain-simulative, and other systems imagined by the various "Replies" that Searle (1980a) attempts to counter,
however, the choices are less clear than Searle (in rebutting these
replies) allows. Insofar as further discussion of Searle's example and arguments
delves "into questions about topics such as autonomy, potential for
and change of action, and assumptions about boundaries of autonomous
systems" (p. 90) it is apt to remain fruitful: "fine-tuning definitions
and concocting clever gedankenexperiments, on the other
hand, "can become self-indulgent and detached from real concerns" (p.
Unannotated & Further References
Anderson, David. 1987. Is the Chinese room the real thing? Philosophy 62:
Block, Ned. 1978. Troubles
with Functionalism. In C. W. Savage, ed., Perception and Cognition: Issues
in the Foundations of Psychology, Minnesota Studies in the Philosophy of
Science, Vol. 9, 261-325. Minneapolis: University of Minnesota Press.
Boden, Margaret A. 1988. Escaping from the Chinese
room. In The philosophy of artificial intelligence, ed.
Margaret Boden, 89-104. New York: Oxford University Press. Originally
appeared as Chapter 8 of Boden, Computer models of the mind.
Cambridge University Press: Cambridge (1988).
Callon, M. 1986. Some Elements of a
Sociology of Translation: Domestication of Scallops and the Fishermen
of St. Brieuc Bay. In Power, Action, and Belief: A New
Sociology of Knowledge, ed. J. Law, 196-229. London:
Routledge & Kegan Paul.
Cam, Phillip. 1990. Searle on Strong AI. Australasian
Journal of Philosophy 68: 103-108.
Carleton, Lawrence. 1984. Programs, Language
Understanding, and Searle. Synthese 59: 219-230.
Carnap, Rudolf. 1955. Meaning and Synonymy
in Natural Languages. Philosophical Studies 7: 33-47.
Chalmers, David. Absent Qualia, Fading
Qualia, Dancing Qualia.
----. A Computational Foundation for the Study of Cognition.
----. Does a Rock Implement Every Finite-State Automaton?
Cole, David. 1984. Thought and Thought Experiments. Philosophical
Studies 45: 431-444.
----. 1991. Artificial minds: Cam on Searle. Australasian Journal of Philosophy 69:
Dennett, Daniel. 1991. Consciousness
Explained. Boston: Little, Brown.
Descartes, René. 1637. Discourse on method. Trans. John Cottingham, Robert Stoothoff and
Dugald Murdoch. In The philosophical writings of Descartes,
Vol. I, 109-151. New York: Cambridge University Press.
Fodor, J. A.
1980b. Methodological solipsism considered as a research strategy
in cognitive science. Behavioral and Brain Sciences 3:
Fisher, John A. 1988. The wrong stuff: Chinese rooms
and the nature of understanding. Philosophical Investigations
Gregory, Richard. 1987. In defense of artificial
intelligence - a reply to John Searle. In Mindwaves, ed. Colin
Blakemore and Susan Greenfield, 235-244. Oxford: Basil Blackwell.
Haraway, D. 1991. A Cyborg Manifesto:
Science, Technology and Socialist-Feminism in the Late Twentieth
Century. Socialist Review 80 (1985): 65-107. Reprinted in Haraway's Simians, Cyborgs, and Women: The
Reinvention of Nature (London, Free Association Books), 149-81.
Harman, Gilbert. 1990. Intentionality: Some
distinctions. Behavioral and Brain Sciences 13: 607-608.
Harnad, Stevan. 1982. Consciousness: An afterthought. Cognition
and Brain Theory 5: 29-47.
-----. 1989a. Minds, machines
and Searle. Journal of Experimental and Theoretical Artificial
Intelligence 1: 5-25.
-----. 1989b. Editorial
commentary on Libet. Behavioral and Brain Sciences 12: 183.
-----. 1990. The symbol grounding problem. Physica D 42: 335-346.
Hauser, Larry. 1992. Act, aim, and unscientific
explanation. Philosophical Investigations 15: 313-323.
-----. 1993b. The sense of "thinking." Minds and Machines 3: 12-21.
-----. 1993c. Why isn't my pocket calculator a thinking thing? Minds and Machines 3: 3-10.
-----. 1997b. Review of Selmer Bringsjord's What
Robots Can and Can't Be. Minds and Machines 7: 433-438.
Hayes, P. J. 1982. Introduction. In Proceedings of
the Cognitive Curricula Conference, vol. 2, ed. P. J. Hayes and M.
M. Lucas. Rochester, NY: University of Rochester.
Hayes, Patrick, Stevan Harnad, Donald Perlis, and Ned
Block. 1992. Virtual symposium on virtual mind. Minds and
Machines 2: 217-238.
Jackson, Frank. 1982. "Epiphenomenal qualia." Philosophical Quarterly 32:127-136.
Jacquette, Dale. 1989. Adventures in the Chinese room.
Philosophy and Phenomenological Research XLIX: 606-623.
Latour, Bruno. 1992. Where are the Missing
Masses? The Sociology of a Few Mundane Artifacts. In Shaping
Technology / Building Society: Studies in Sociotechnical Change,
225-258. Cambridge, MA: MIT Press.
Lyons, William. 1985. On Searle's "solution" to the
mind-body problem. Philosophical Studies 48: 291-294.
----. 1986. The Disappearance of Introspection.
Cambridge, MA: MIT Press.
MacQueen, Kenneth G. 1990. Not a trivial consequence. Behavioral
and Brain Sciences 13:193-194.
Maloney, Christopher. 1987. The right stuff. Synthese.
McCarthy, John. 1979. Ascribing mental qualities to machines. In Philosophical
Perspectives in Artificial Intelligence, ed. M. Ringle. Atlantic Highlands, NJ: Humanities Press.
Mill, J. S. 1889. An Examination
of Sir William Hamilton's Philosophy (6th ed.). Longmans Green.
Nagel, Thomas. 1974. What
is it like to be a bat? Philosophical Review 83: 435-450.
Nagel, Thomas. 1986. The
View from Nowhere. Oxford: Oxford University Press.
Penrose, Roger. 1994. Shadows of the Mind: A Search for
the Missing Science of Consciousness. Oxford: Oxford University Press.
Polger, Thomas. Zombies. In A Field Guide to
the Philosophy of Mind, ed. M. Nani and M. Marrafa.
Port, R. F. and van Gelder, T. (eds.) 1995. Mind as Motion:
Explorations in the Dynamics of Cognition. Cambridge, MA: MIT Press.
Puccetti, Roland. 1980. The chess room: further demythologizing of strong AI. Behavioral
and Brain Sciences 3: 441-442.
Putnam, Hilary. 1975a. The meaning
of `meaning'. In Mind, Language and Reality. Cambridge:
Cambridge University Press.
Putnam, Hilary. 1983. Models and
Reality. In Realism and Reason: Philosophical Papers, Volume 3.
Cambridge: Cambridge University Press.
Putnam, Hilary. 1988. Representation and Reality. Cambridge, MA: MIT Press.
Rapaport, William J. 1986. Searle's experiments with
thought. Philosophy of Science 53: 271-279.
-----. 1988. Syntactic
semantics: foundations of computational natural language understanding.
In Aspects of artificial intelligence, ed. James H. Fetzer,
81-131. Dordrecht, Netherlands: Kluwer.
Russell, Bertrand. 1948. Human Knowledge:
Its Scope and Limits. New York: Simon & Schuster.
Savitt, Steven F. 1982. Searle's demon and the brain
simulator. Behavioral and Brain Sciences 5(2): 342-343.
Schank, Roger C., and Robert P. Abelson. 1977. Scripts, plans, goals, and
understanding. Hillsdale, NJ: Lawrence Erlbaum Press.
Schank, Roger C. 1977. Natural language, philosophy,
and artificial intelligence. In Philosophical perspectives in
artificial intelligence, 196-224. Brighton, Sussex: Harvester Press.
Searle, John R., and Daniel Dennett. 1982. The myth of
the computer. New York Review of Books 57 (July 24): 56-57.
-----. 1990. "The emperor's
new mind": An exchange. New York Review of Books , XXXVII (June
Searle, John R. 1971a. Speech acts. New York:
Cambridge University Press.
-----. 1971b. What is a
speech act? In The philosophy of language, ed. John Searle.
Oxford: Oxford University Press.
----. 1975a. Speech acts and recent linguistics.
In Expression and meaning, 162-179. Cambridge: Cambridge University Press.
-----. 1975b. Indirect speech
acts. In Expression and meaning, 30-57. Cambridge: Cambridge University Press.
-----. 1975c. A taxonomy of
illocutionary acts. In Expression and meaning, 1-29. Cambridge:
Cambridge University Press.
-----. 1977. Reiterating the
differences: A reply to Derrida. Glyph 2: 198-208. Johns Hopkins.
-----. 1978. Literal meaning.
In Expression and meaning, 117-136. Cambridge: Cambridge University Press.
----. 1979a. What is an intentional state? Mind
-----. 1979b. Intentionality
and the use of language. In Meaning and use, ed. A.
Margalit, 181-197. Dordrecht, Netherlands: D. Reidel Publishing Co.
----. 1979c. The intentionality of intention and
action. Inquiry 22: 253-280.
-----. 1979d. Metaphor. In Expression
and meaning, 76-116. Cambridge: Cambridge University Press.
-----. 1979e. Referential and
attributive. In Expression and meaning, 137-161. Cambridge:
Cambridge University Press.
-----. 1979f. Expression
and meaning. Cambridge: Cambridge University Press.
-----. 1980c. Analytic
philosophy and mental phenomena. In Midwest studies in philosophy,
vol. 5, 405-423. Minneapolis: University of Minnesota Press.
-----. 1980d. The background
of meaning. In Speech act theory and pragmatics, ed.
J. R. Searle, F. Kiefer and M. Bierwisch, 221-232. Dordrecht,
Netherlands: D. Reidel Publishing Co.
----. 1982. The Chinese room revisited. Behavioral
and Brain Sciences 5: 345-348.
-----. 1983. Intentionality: an essay in the philosophy of mind.
New York: Cambridge University Press.
-----. 1984b. Intentionality and its place in nature. Synthese 61: 3-16.
-----. 1985. Patterns,
symbols and understanding. Behavioral and Brain Sciences 8:
-----. 1986. Meaning,
communication and representation. In Philosophical grounds of
rationality, ed. R. Grandy et al., 209-226.
----. 1987. Indeterminacy, empiricism, and the
first person. Journal of Philosophy LXXXIV: 123-146.
-----. 1988. Minds and brains
without programs. In Mindwaves, ed. Colin Blakemore and Susan
Greenfield, 209-233. Oxford: Basil Blackwell.
-----. 1989a. Reply to Jacquette. Philosophy and Phenomenological Research
-----. 1989b. Consciousness,
unconsciousness, and intentionality. Philosophical Topics,
-----. 1989c. How performatives work. Linguistics
and Philosophy 12: 535-558.
-----. 1990b. Consciousness,
unconsciousness and intentionality. In Propositional attitudes: the
role of content in logic, language, and mind, ed. C. A. Anderson and
J. Owens, 269-284. Stanford, CA: Center for the Study of Language and Information.
-----. 1990c. Is the brain a digital computer? Proceedings of the
American Philosophical Association 64: 21-37.
-----. 1990d. Foreword to
Amichai Kronfeld's Reference and computation. In Reference
and computation, by Amichai Kronfeld, xii-xviii. Cambridge:
Cambridge University Press.
----. 1990e. The causal powers of the brain. Behavioral
and Brain Sciences 13:164.
-----. 1990f. Consciousness,
explanatory inversion, and cognitive science. Behavioral and Brain
Sciences 13: 585-596.
-----. 1990g. Who is
computing with the brain? Behavioral and Brain Sciences 13:
-----. 1991a. Meaning,
intentionality and speech acts. In John Searle and his critics,
ed. Ernest Lepore and Robert Van Gulick, 81-102. Cambridge, MA: Basil Blackwell.
----. 1991b. The mind-body problem. In John
Searle and his critics, ed. Ernest Lepore and Robert Van Gulick,
141-147. Cambridge, MA: Basil Blackwell.
----. 1991c. Perception and the satisfactions of
intentionality. In John Searle and his critics, ed. Ernest
Lepore and Robert Van Gulick, 181-192. Cambridge, MA: Basil Blackwell.
----. 1991d. Reference and intentionality. In John
Searle and his critics, ed. Ernest Lepore and Robert Van Gulick,
227-241. Cambridge, MA: Basil Blackwell.
----. 1991e. The background of intentionality and
action. In John Searle and his critics, ed. Ernest Lepore and
Robert Van Gulick, 289-299. Cambridge, MA: Basil Blackwell.
----. 1991f. Explanation in the social sciences.
In John Searle and his critics, ed. Ernest Lepore and Robert
Van Gulick, 335-342. Cambridge, MA: Basil Blackwell.
----. 1999a. Chinese
room argument. In The MIT Encyclopedia of the Cognitive
Sciences, ed. R. A. Wilson and F.C. Keil, 115-116. Cambridge,
MA: MIT Press.
----. 1999b. Mind,
Language, and Society: Philosophy in the Real World. London:
Weidenfeld & Nicolson.
Searle, J. R., J. McCarthy, H. Dreyfus, M. Minsky, and S. Papert. 1984. Has
artificial intelligence research illuminated human thinking? Annals
of the New York Academy of Sciences 426: 138-160.
Searle, J. R., K. O. Apel, W. P. Alston, et al. 1991. John
Searle and his critics. Ed. Ernest Lepore and Robert
Van Gulick. Cambridge, MA: Basil Blackwell.
Sharvy, Richard. 1985. It ain't the meat it's the motion. Inquiry 26:
Stipp, David. 1991. Does that computer have something
on its mind? Wall Street Journal, Tuesday, March 19, A20.
Thornton, Stephen. "Solipsism and Other
Minds" entry in the Internet
Encyclopedia of Philosophy.
Turing, Alan M. 1936-7. On computable numbers with an application to
the Entscheidungsproblem. In The Undecidable, ed. Martin
Davis, 116-154. New York: Raven Press, 1965. Originally published
in Proceedings of the London Mathematical Society, ser.
2, vol. 42 (1936-7), pp. 230-265; corrections ibid., vol. 43
(1937), pp. 544-546.
-----. 1950. Computing machinery and intelligence. Mind LIX: 433-460.
Warwick, K. 1998. In the Mind of the Machine. London: Arrow.
Weiss, Thomas. 1990. Closing the Chinese room. Ratio
(New Series) III: 165-181.
Weizenbaum, Joseph. 1966. ELIZA - a computer
program for the study of natural language communication between man and
machine. Communications of the Association for Computing Machinery 9: 36-45.
-----. 1976. Computer power and human reason. San Francisco: W. H. Freeman.
Wilks, Yorick. 1982. Searle's straw man. Behavioral
and Brain Sciences 5(2):344-345.
Zalta, Edward N. 1999. Gottlob Frege. Stanford Encyclopedia of Philosophy.