Searle's Chinese Box: The Chinese Room Argument and Artificial Intelligence by Larry Hauser


Chapter Four:

When prospecting for intuitions, we should prefer a field which is not too much trodden into bogs by traditional philosophy, for in that case even "ordinary" language will often have become infected with the jargon of extinct theories, and our own prejudices too, as the upholders and imbibers of theoretical views, will be too readily, and often insensibly, engaged. (Austin 1957, p.384)

I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. (Turing 1950, p.452)

1. Functionalism and AI

If you base your conviction that computers can think (AIP) on Turing machine functionalist principles and if such principles allow (as they seem) that almost anything instantiates any number of partial Turing test passing programs (if instantiating programs in the anemic mathematical sense is supposed to cause things to have, or constitute their having, mental properties), then the "brutally simple" part of Searle's Chinese room argument strikes a telling blow against your acceptance of AIP by undermining its functionalist basis. And, in fact, functionalism is beleaguered -- not only by Searle's argument, but from other sources as well. Besides long-standing qualia-based objections (e.g., the inverted spectrum) such as Block (1978) and others have pressed, a charge of "provincialism," or "anthropocentrism," similar to the one functionalist criticism levels at the mind-brain identity theory, can be lodged against functionalism itself. Much as functionalism accuses the identity theory of sinning against the principle of the multiple realizability (MR) of the same procedures in different hardware, functionalism -- or at least many functionalists -- can be accused of sinning against the multiple realizability of the same task by different procedures. If the functionalist requires strong (procedural) equivalence or similarity of computational processes (or Martian would-be thought processes) to human thought processes before attribution of mental properties like ours is warranted -- and it's difficult to see how else the additional procedural constraint over and above weak (input-output) equivalence is to be specified -- this seems scarcely less prejudicial to the computer's (or Martian's) chances of admission to the club of thinking things than the identity theorist's requirement of hardware equivalence. Again, though different hardware may support the same procedures, there is no guarantee that it will.
It seems unlikely, for example, that the eyes and connected neurological apparatus in houseflies support anything like human visual processing algorithms, yet it seems not unlikely that houseflies see. Once again, as with Searle's "brutally simple" argument, this "provincialism" -- or "chauvinism" or "species chauvinism" (Block 1978) -- objection, perhaps, does not so much make a knock-down argument against functionalism as define a research imperative for functionalism: to provide a characterization of procedural equivalence stronger than weak (input-output) equivalence yet weak enough not to be objectionably provincial. No one presently knows how to do this.

Perhaps the most lethal line of attack on Turing machine functionalism (and, perhaps, on functionalism generally) is that mounted in Hilary Putnam's (1975; 1988, chaps. 1-5) and other "externalist" criticisms of functionalist doctrine. Putnam's disavowal of the doctrine he, perhaps more than anyone, was responsible for elaborating -- the Turing machine functionalist hypothesis that minds are Turing machines (that mental states are Turing machine states, mental processes Turing machine operations, etc.) -- seems especially telling. Many find Putnam's reasons for rejecting the doctrine he fathered troubling, and perhaps compelling. Drawing on his Twin Earth thought experiment and related counterexamples to show that "We cannot individuate concepts and beliefs without reference to the environment" (Putnam 1988, p.72), Putnam concludes:{1}

Meanings aren't "in the head". The upshot of our discussion for the philosophy of mind is that propositional attitudes, as philosophers call them -- that is, such things as believing that snow is white and feeling that a certain cat is on the mat -- are not "states" of the human brain and nervous system considered in isolation from the human and nonhuman environment. A fortiori they are not "functional states" -- that is, states definable in terms of parameters which would enter into a software description of the organism. Functionalism, construed as the thesis that propositional attitudes are just computational states of the brain, cannot be correct. (Putnam 1988, p.72)

Unlike the provincialism objection (which might be met if functionalism can provide an adequate account of strong equivalence) or the strengthened Searlean "brutally simple" argument{2} (which could be countered by an account of program implementation, if such were provided, that doesn't have the consequence that everything implements virtually any program you please), Putnam's externalist criticism -- if sound -- seems immediately fatal to the identification of folk psychological states and processes with computational states and processes.{3}

If the thesis of artificial intelligence (AIP) is as dependent on functionalism for its plausibility and support as Searle pretends{4} and so many of its cognitive scientific advocates have obligingly seemed to suppose, then AI is on the ropes along with functionalism; and, no doubt, such troubles with functionalism as just surveyed underlie the odd recent spate of breast-beating about "big problems" (Doyle 1988, p.20), "methodological malaise" (Bundy and Ohlsson 1990, p.143), and "Kuhnian crisis" (Partridge and Wilks 1990, p.363) in AI. I say "odd" because despite "the not so well kept secret ... that AI is internally in a paradigmatic mess" (Chandrasekaran 1990, p.14), which has given "old hands at artificial intelligence ... recently to deploring the state of research in the field" (Doyle 1988, p.19) -- despite the deplorable state of the Turing machine functionalist theory -- the state of the art of making computers do intelligent things might fairly be described as flourishing.

2. Evidence of AI

Contrary to misgivings about AI deriving from the vexed state of Turing machine functionalist theory, and attendant, rather, on AI's admirable and continuing string of practical achievements, there seems to be ample evidence of a much more homely, empirical sort for the proposition that machines (such as digital computers) can think or have mental properties -- evidence that is immune to the critical thrust of Searle's anti-functionalist argument (construed as in BS1A), to Putnam's externalist (Twin Earth based) objections to functionalism, and to other arguments against functionalism as well. I agree with Searle that AIP -- whether computers can think -- is an empirical question. But contrary to Searle, I submit that the empirical evidence already supports the claim that computing machines can think, because it evidences the claim that they already do. The most compelling way to argue for the possibility of artificial intelligence is from the many intelligent artifacts already in existence: the best evidence for the claim that computers can think or have mental properties is that, on many occasions, very many of them do. Indeed, if we couch the issue in terms of isolable mental attributes (as Searle in the Chinese room argument and experiment, as we have seen, couches it), then there is every reason to think that computers considerably less powerful than Turing imagined -- "having a memory capacity of 10^9" and "able to play the imitation game so well that the average interrogator will have no more than 70% chance of making the right identification after five minutes of questioning" (Turing 1950, p.442) -- genuinely possess mental properties or thought processes already. Even such "simple" devices as our pocket calculators, which after all add, subtract, multiply, and divide (in short, calculate), now seem to have mental properties and to be thinking things.

If we turn from vagaries of cognitive scientific or speculative metaphysical theories about minds to unreflective practical judgments, to our "unthinking" attributions of mental properties to machines such as calculators and computers, the evidence for SAIP (that some computers actually think already) seems imposing enough. We say, "my calculator adds and subtracts, divides and multiplies, but can't extract square roots"; "DOS recognizes the dir command but not the die command"; "Deep Thought considers more possible continuations of play than its human opponents"; etc. Properties we take to be mental when we ascribe them to humans and other animals get ascribed to programmed machines on virtually every page of every computer or software manual. This being so, there really is a brutally simple argument -- any number of them, in fact -- for SAIP and hence (since what's actual is possible), for AIP by way of SAIP. Arguments like the following (cf. Hauser 1993a; 1993b):

My pocket calculator calculates.
Calculating is thinking.
My pocket calculator thinks.

DOS recognizes certain commands.
Recognition is a mental process.
DOS has mental processes.

Deep Thought considers different continuations of play.
Considering is thinking.
Deep Thought thinks.

Such arguments, of course, can be multiplied ad libitum for all the acts of all those devices we are unreflectively inclined (if not practically compelled) to describe in mental terms.

Call the view that accepts SAIP on the basis of the prima facie evidence of specific mental properties had by specific devices, and accepts AIP because SAIP entails AIP, "naive AI." An advocate of naive AI, like me, holds that some computers can think on the grounds that some computers (e.g., my pocket calculator and my laptop personal computer) already evidently do; and against naive AI -- AIP held for these reasons -- Searle's Chinese room argument seems unavailing. Since Searle has often made it clear that he regards it as "just a plain (testable, empirical) fact about the world that it contains certain biological systems, specifically human and certain animal brains, that are capable of causing mental phenomena with intentional or semantic content" (Searle 1982b, p.57); and since he likewise has insisted, "It is an empirical question whether any given machine has causal powers equivalent to the brain" (Searle 1980b, p.452); why shouldn't we accept evidence that existing machines do exhibit mental properties -- evidence we find compelling enough to lead us to predicate specific mental properties of particular machines under various circumstances -- at face value?

3. Allegedly Equivocal "As-if Intentionality"

Now against naive AI and the sort of humble arguments just canvassed it may be urged that "calculates," "recognizes," and the like -- mental predicates which we seem prereflectively inclined (if not practically compelled) to ascribe to computers -- are being used equivocally when so ascribed. Searle notes,

Since appropriately programmed computers can have input-output patterns similar to those of human beings, we are tempted to postulate mental states in the computer similar to human mental states. (Searle, 1980a, p.423)

Then he proceeds immediately to admonish us "to overcome this impulse" (Searle 1980a, p.423). To overcome this impulse, according to Searle, is to recognize that though "We often attribute `understanding' and other cognitive predicates by metaphor and analogy to cars, adding machines, and other artifacts," nonetheless, since the predication is merely figurative, "nothing is proved by such attributions" (Searle 1980a, p.419). Thus against such simple "proofs" of AI -- of the present reality (and a fortiori the possibility) of artificial intelligence -- as the syllogisms set out in the preceding section, it would be Searle's contention, I take it, that such "proofs" are invalidated by their equivocation on the mental predicates they invoke.

According to Searle, when "I say that my pocket calculator adds and subtracts, but does not divide," by such usage "I am not ascribing any intrinsic mental phenomena ... to my pocket calculator" (Searle 1980a, p.407). Similarly, Searle asserts, "There is an enormous difference between attributing a `desire' to a chess-playing computer to castle on the strong side and my saying I have a desire to drink a glass of cold beer" (Searle 1980c, p.418). Perhaps, if we cannot entirely "overcome this impulse" to make mental predications of machines such as computers, we can nonetheless vaccinate ourselves against drawing the wrong philosophical conclusions by recognizing such usage as systematically ambiguous between "ascriptions of intrinsic mental phenomena" such as we make to ourselves and certain other animals, on the one hand, and "observer relative" mental ascriptions, on the other (Searle 1980c, p.407). "In the computer case" our mental attributions are "just a useful shorthand for describing how the system functions" (Searle 1980c, p.418). "In my own case, I am stating facts about intrinsic mental phenomena" (Searle 1980c, p.418).{5}

4. Occam's Razor and Aristotle's Admonition

How are we to assess such claims? Before considering Searle's explication of the ambiguity he alleges between "intrinsic" and "observer-relative" predications of mental terms, I ask what is required of any such account for it to succeed. What is required to make charges of ambiguity or equivocation stick? At this point, I claim, two noteworthy principles should guide our deliberations. The first of these is one that Searle, in another context, enunciates and invokes himself:

... an ordinary application of Occam's razor places the onus of proof on those who wish to claim that these sentences are ambiguous. One does not multiply meanings beyond necessity. (Searle 1975b, p. 40)

Similarly, Joe Hanna advises,

we are not allowed to multiply meanings arbitrarily. Though the sky and a person's mood may be blue in different senses, surely the sky is blue today in the same sense (but perhaps not to the same degree) as it was yesterday, and one person's mood is blue in the same sense as another's. (Hanna 1968, p.39)

and in the same vein, Saul Kripke cautions,

It is very much the lazy man's approach to philosophy to posit ambiguities when in trouble. If we face a putative counterexample to our favorite philosophical thesis, it is always open for us to protest that some key term is being used in a special sense, different from its use in the thesis. We may be right, but the ease of the move should counsel a policy of caution. Do not posit an ambiguity unless you are really forced to, unless there are really compelling theoretical or intuitive grounds to suppose that an ambiguity really is present. (Kripke 1977, p.268)

This seems undeniably sound advice; and I take it that this means, in the present context, that we should take our everyday unreflective mental attributions to computers, and perhaps some other inanimate things, at face value unless or until Searle (or whoever) makes a convincing case against so taking them. Absent a clear exposition of the ambiguity being invoked, the claims at issue remain credible. The "burden of proof" on claims of ambiguity generally, which Searle himself acknowledges and accepts, plainly falls, in the present context, on Searle himself.

But there is a second methodological principle which, I think, also needs to be acknowledged with regard to psychological theories and such accounts of behavior or conduct as they provide; a principle voiced by Aristotle's admonition:

among statements about conduct ... those which are particular are more genuine, since conduct has to do with individual cases, and our statements must harmonize with the facts in these cases. (Nichomachean Ethics II:6, 1107a28-31)

While Searle never explicitly draws any such moral from remarks such as the following -- in which he surveys the history of overarching theories of minds (their natures and properties) and finds this history to consist, mainly, in unsupported (and often insupportable) metaphysical speculation -- perhaps he should. He says,

Some of the traditional answers [to the ontological question of what minds or mental states such as beliefs are] are that a belief is a modification of a Cartesian ego, Humean ideas floating around in the mind, causal dispositions to behave in certain ways, or a functional state of a system. I happen to think that all these answers are false. (Searle 1983, p.14)

Amplifying this, and deploring the dearth of well-founded knowledge and the pretensions of overarching theories about the nature and causes of mental phenomena, Searle complains of

how little we know of the functioning of the human brain, and how much the pretensions of certain theories depend on this ignorance.... Many of the claims made about the mind in various disciplines ranging from Freudian psychology to artificial intelligence depend on this sort of ignorance. Such claims live in the holes in our knowledge. (Searle 1984a, p.8-9)

Yet Searle's own metaphysical pronouncements concerning the nature and properties of minds, e.g., his recently elaborated "Connection Principle," which claims,

The ascription of an unconscious intentional phenomenon to a system implies the phenomenon is in principle accessible to consciousness. (Searle 1990f, p.586)

seem just as speculative and empirically unsupported (if not insupportable) as those he criticizes.

I suppose that, taken speculatively, such a claim as Searle's Connection Principle, which speculates that all mental phenomena must be potentially conscious, has as much (and no more) right to live in the holes of our knowledge as the alternative speculative metaphysical hypotheses about the essential nature of mind (e.g., Hume's, or behaviorism's, or functionalism's) Searle mentions.{6} I also suppose (pace the preceding "application of Occam's razor") that the grip on life of such metaphysical speculations is too tenuous to allow them to warrant discounting whole classes of particular judgments about things' mental properties as ambiguous, as Searle's distinction between mere "as-if" and real "intrinsic" intentionality attempts to do.

Perhaps Wittgenstein's advice to give preference in our philosophical deliberations to "language at work" over "language on holiday" (Wittgenstein 1958, §38) is a kind of general acknowledgment of the same methodological principle Aristotle invokes with regard to "statements about conduct" (hence, about individuals' motives, beliefs, or other mental properties). Searle's "Connection Principle" is holiday talk about the mental: statements like "Deep Thought considers alternate continuations of play" and "DOS recognizes the dir command" instance the mental vocabulary at work, describing, explaining, and enabling us to predict. The Aristotelian cum Wittgensteinian principle -- call it the "principle of naiveté" -- suggests it would be a mistake to discount the latter on the basis of anything like the former. This too seems sound advice.

5. Naiveté Revisited

The naiveté principle can be further supported by consideration of the background or network of concepts, or "theory" -- what Paul Churchland and others have called "folk psychology" -- to which our leading question "Can machines (such as digital computers) think?" and our particular attributions of mental properties (e.g., recognition, consideration, etc.) to particular devices belong. If the theory or network of concepts to which this question and these claims belong is folk psychology, then it seems that the pronouncements of the folk (particularly their working pronouncements) should be credited in these connections; much as one credits the working pronouncements of mathematicians when doing philosophy of mathematics; as one credits the working pronouncements of biologists in doing the philosophy of biology; etc. However little sympathy you have with "ordinary language" arguments or appeals to "ordinary usage" in general,{7} it seems entirely warranted here where the question is a folk psychological one. And it is surely a misunderstanding -- though one which those who call themselves "ordinary language philosophers" have perhaps invited -- to think that "ordinary usage" means the usage of the person on the street (as opposed to the specialized usage of scientists, e.g.). The appropriate contrast is rather the one we have canvassed already, between "ordinary" working use of language (including scientific usage) and idle (metaphysical?) "language on holiday" (cf. Wittgenstein 1958, §38; §132). Regardless of that (Wittgenstein exegesis aside) it happens that in the case of folk psychology, the "experts" are the folk -- here "expert" use and everyday use coincide.

Perhaps it might still be objected -- granting the folksiness of folk psychology -- that, nevertheless, the question "Can machines think?" should not turn so much on present usage as on the scientific fruitfulness or utility of proposed usages, supporting whatever ultimate verdict about the mental powers of machines they might support. It might be urged, in this spirit, that the touchstone of our inquiry should be whatever best supports "attempts to develop a scientifically respectable psychology" (Goel 1991, p.131); that we should accept such conclusions as would best allow us to "develop and systematize [folk psychological explanation] until it meets accepted standards of scientific explanation" (Goel 1991, p.131); and that we might even be led by this scientific methodological imperative to reject this whole folk psychological apparatus "as a superstitious remnant of our dark untutored past" (Goel 1991, p.131). Here I should say: 1) that acceptance of the principle of naiveté is not at all inconsistent with (though it does not require) accepting this scientific methodological imperative{8}; and 2) that Searlean doctrines like the "Connection Principle" offer little prospect of contributing to attempts to "develop and systematize" folk psychology "until it meets the standards of ... scientific explanation" anyway. The invitation "to regress to the Cartesian vantage point" (Dennett 1987, p.336) that Searle's positive suggestions amount to is more likely to be counterproductive than productive of scientific development. Considerations of scientific fruitfulness and utility are unlikely, I submit, to favor deemphasis of "the external point of view" (Searle 1980a, p.418) in favor of "the first person point of view" (Searle 1980b, p.451) Searle advocates.

6. "As-if Intentionality" Again

Even if, strictly speaking, Searle's Chinese Room Argument -- being directed against Turing machine functionalism and not directly against AI (AIP or SAIP) -- lacks direct force against naive AI, Searle claims that his Chinese Room Experiment, besides supporting or illustrating the premise that syntax does not suffice for semantics, also motivates and licenses a distinction between cases of real "intrinsic intentionality, which are cases of actual mental states" and cases of mere "as-if intentionality" such as we attribute when we make "observer relative ascriptions of intentionality, which are ways people have of speaking about entities figuring in our activities but lacking intrinsic intentionality" (Searle 1980b, p.451). The Chinese Room Experiment motivates and licenses this distinction, Searle thinks, by showing "that it is both conceptually and empirically possible to have human capacities in some realm [e.g., possible to have Chinese text processing capabilities indistinguishable from a native Chinese speaker's] without having any intentionality at all" (Searle 1980a, p.423). Searle in the Chinese Room acts just as if he understands Chinese without really understanding:

from the external point of view -- that is from the point of view of somebody outside the room in which I am locked -- my answers to the questions are indistinguishable from those of native Chinese speakers. Nobody just looking at my answers can tell that I don't speak a word of Chinese. (Searle 1980a, p.418)

I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing. (Searle 1980a, p.418)

Now, "For the same reasons," Searle goes on, "Schank's computer [running SAM{9}, a program which answers questions and produces paraphrases of certain stories in English] understands nothing of any stories, whether in Chinese, English, or whatever" (Searle 1980a, p.418).

The point "that it is conceptually and empirically possible to have human capacities in some realm without having any intentionality at all" (Searle 1980a, p.423) -- that something can act in all respects as if it has some particular mental property or ability and yet still not have it -- is, Searle maintains, completely general; and once we grasp this point "we should be able to overcome this impulse to postulate mental states" (Searle 1980a, p.423) of machines which behave as if they have them. Even if we can't entirely overcome the impulse to speak this way; even if attributions of "calculation," "consideration," "recognition," and the like to computers "gives predictive power we can get by no other method" (Dennett 1981, p.23); we can still "overcome the impulse" to think we speak literally in so speaking. Though "We often attribute `understanding' and other cognitive predicates by metaphor and analogy to cars, adding machines, and other artifacts," nonetheless, Searle maintains, "nothing is proved by such attributions" (Searle 1980a, p.419). We can acknowledge the body of usage -- attributing mental predicates to computers -- to which naive AI appeals, Searle holds, and still maintain "that in the literal sense the programmed computer understands what the car and the adding machine understand, namely, exactly nothing" (Searle 1980a, p.419). Likewise (as for "understanding") for every other predication of mental terms of computers: when we say "Calculators calculate," or "Deep Thought considers alternate continuations," or "DOS recognizes commands," Searle would say that in the literal sense our calculators don't really calculate, Deep Thought doesn't really consider continuations, and DOS doesn't really recognize commands. All these, according to Searle, should be discounted as metaphorical "observer relative ascriptions" of "as-if" intentionality.

I have two initial worries about this. First, these supposedly observer relative attributions of "as-if" mental properties or intentionality are not much like other predications we recognize as metaphorical. When a woman says "my date was an octopus," she is saying her date resembled an octopus in some respects (it was as if he had eight hands) but not in all, or even many, respects. When someone compares discussing the Chinese room argument in undergraduate classes to "mud wrestling," she is not saying that discussing the Chinese Room Argument in undergraduate courses is like mud wrestling in all (or even many) respects, but only in quite limited respects (e.g., in respect of being messy, combative, and perhaps not very uplifting or illuminating).{10} It's an odd sort of "metaphorical" attribution where the subject of the attribution is supposed to be in every discernible relevant respect just like the things to which we literally attribute the predicates in question -- yet this is how it is with these supposedly metaphorical "observer relative attributions" of "as-if intentionality" to computers. Perhaps Searle is using "metaphorical" metaphorically.

My second worry arises if you respond -- as I believe Searle would respond -- to this first worry as follows: It's not that the as-if cognitive systems (e.g., Searle in the room, or digital computers) to which we make observer relative attributions resemble genuine cognitive systems in every relevant respect (which is absurd), nor even in every discernible relevant respect (which may be slightly less absurd), but just in every relevant respect that's discernible from the external point of view (which is not absurd). There is a difference, which is discernible, "from the point of view of the agent, from my point of view" (Searle 1990a, p.420 [my italics]). And, of course, the well-known difficulty with this is that it's only discernible from my point of view: so, if sense determines reference (as Searle holds), and if what licenses or determines my referring these mental properties to you is the same publicly observable evidence (behavioral evidence) that licenses or determines my ("observer relative," "as-if") application of the same mental predicates to computers, then it seems I attribute mental properties to you, unlike myself, in the same sense (on the same basis) as I attribute them to computers. It seems my attributions to you, on Searle's account, are just "analogical," "as-if," "observer-relative" attributions also. This is the infamous other minds problem Searle has, until recently, steadfastly stonewalled (see, e.g., 1980a, p.421-422; 1990g, p.640).{11}

I proceed, now, to a detailed consideration of my first complaint: "observer relative" attributions of "as-if" intentionality do not, on their face, seem metaphorical at all; they seem like unequivocal attributions of the mental properties they attribute.

7. Marks of the Metaphorical

Naive AI argues, e.g., that pocket calculators calculate (minor premise), that calculating is thinking (major premise), and hence, that calculators think; and the Searlean response is that the argument is invalidated by its equivocation on its middle term, "calculates" (i.e., it commits the "Fallacy of Four Terms"). According to Searle, in the ("metaphorical" or "analogical," "as-if," "observer relative") sense in which "calculates" is truly ascribable to my pocket calculator (call him "Cal" for the sake of argument), calculating isn't thinking (which falsifies the major premise); and in the ("literal," "intrinsic") sense in which calculating really is thinking, it's not true that Cal calculates (falsifying the minor premise). So, if Searle is right, the naive argument is unsound on either attempted validating reconstruction, and does not stand. The crucial point at issue -- what needs to be shown for the "four terms" objection to the naive argument to be sustained -- is that "calculates" is being used equivocally here: that I use the word "calculates" with a different sense or a different meaning when I say, "Cal calculates," than when I say, "I calculate" or (one hopes, apropos of my second misgiving, above) when I say, "You calculate."

Well, all metaphorical usage, I presume, is ambiguous, but not all ambiguous usage is metaphorical -- so while I speak loosely here, following Searle I think, of "marks of the metaphorical," what really concerns us is whether the usages at stake are ambiguous (whether or not the ambiguity is due to their being metaphorical). In this connection, I observe that the tests we will consider in this section are general tests for ambiguity. The argument of this section is that, according to various tests for ambiguity which Searle himself acknowledges, it's not clear the usages in question are ambiguous. Indeed, I should go further and say that, by these tests, they're clearly not; but whether or not the considerations to be adduced in this section establish this second, stronger verdict, they clearly establish the first. The charge of ambiguity is not sustainable on the basis of any of the standard tests that Searle himself acknowledges; nor is it sustainable on the basis of any other test that I know of.

Questions of ambiguity are dicier than is generally appreciated: even though "philosophers perennially argue for ambiguities on the basis of a difference in understanding alone" (Zwicky & Sadock 1975, p.4), nevertheless,{12}

It will not do, of course, to argue that a sentence is ambiguous by characterizing the difference between two understandings. (Zwicky & Sadock, p.3)

A difference in understanding is a necessary, but not a sufficient, condition for ambiguity. (Zwicky & Sadock, p.4)

Given that this necessary condition of a difference in understanding is met, the choice is between ambiguity, "several underlying syntactic (or semantic) representations" (Zwicky & Sadock, p.2) and lack of specification, "a single representation corresponding to different states of affairs" (Zwicky & Sadock, p.2).{13} To illustrate this second notion, and the contrast with ambiguity, consider the example sentence (Zwicky & Sadock, p.2) below:

My sister is the Ruritanian secretary of state.

This sentence, it may be observed,

is unspecified (general, indefinite, unmarked, indeterminate, vague, neutral) with respect to whether my sister is older or younger than I am, whether she acceded to her post recently or some time ago, whether the post is hers by birth or by merit, whether it has an indefinite tenure or will cease at some specific future time, whether she is right-handed or left-handed, and so on. (Zwicky & Sadock, p. 2-3)

Yet it shouldn't be said that this sentence is

many ways ambiguous just because we can perceive many distinct classes of contexts in which it would be appropriate, or because we can indicate many understandings with paraphrases. (Zwicky & Sadock, p.4)

Offhand, it seems, the difference between my understanding of "considers alternative continuations of play" in "Deep Thought considers alternative continuations of play" and "Karpov considers alternative continuations" is more like the difference between these various understandings of "My sister is the Ruritanian Secretary of State" than the difference between the disparate understandings of such clearly ambiguous sentences as "They saw her duck" and "He cooked her goose" (Zwicky and Sadock, p.3).

The disparate understandings of the object phrase "her duck" in "They saw her duck" -- i.e., "a certain sort of bird belonging to the woman and a certain kind of action performed by the woman" (Zwicky and Sadock, p.4) -- do not seem to be "the sort of thing that languages could plausibly fail to specify" (Zwicky & Sadock, p.4). In such cases, where "lack of specification is implausible" (Zwicky and Sadock, p.4), "the burden of proof falls on anyone who insists that [the] sentences ... are unspecified rather than ambiguous" (Zwicky & Sadock, p.4). On the other hand, sentences like "My sister is the Ruritanian Secretary of State," despite being "unspecified with respect to some distinction" (Zwicky & Sadock, p.4), and indeed any number of them, nevertheless "have otherwise quite similar understandings" (Zwicky and Sadock, p.4); the distinctions are all "the sort of thing[s] that languages could plausibly fail to specify" (Zwicky and Sadock, p.4); and in such cases judgments of ambiguity are, perhaps strongly, contraindicated.

Since it seems that every sentence satisfies the necessary condition for ambiguity of having some different understandings, perhaps we might strengthen our necessary condition so as to actually effect some exclusions, as follows: a plausibly specified difference in understanding is a necessary, but not a sufficient, condition for ambiguity.{14} On this strengthened criterion "My sister is the Ruritanian Secretary of State" perhaps even fails to meet the necessary condition for ambiguity; but there will still be a class of cases -- e.g., the difference in understanding as between "cut" as in "cut the cake" (with a spatula), "cut the lawn" (with a lawn mower), and "cut the cloth" (with scissors) -- where this necessary condition is met, and yet we may still dispute whether the usages are ambiguous. Elsewhere (not in connection with the claim that ascriptions of mental terms to computers are just ambiguous "as-if" attributions) Searle (1980d) himself proposes several tests for distinguishing true ambiguity from mere lack of specification in questionable cases.
Searle asks us to consider "the following sequence of rather ordinary English sentences, all containing the word `cut'" (Searle 1980d, p.221):

1. Bill cut the grass.
2. The barber cut Tom's hair.
3. Sally cut the cake.
4. I cut my skin.
5. The tailor cut the cloth.
6. Sam cut two classes last week.
7. The President cut the salaries of the employees.
8. The Raiders cut the roster to 45.
9. Bob can't cut the mustard.
10. Cut the cackle!
11. Cut it out!

Searle takes it as "more or less intuitively obvious" (Searle 1980d, p.221) that "the occurrence of the word `cut' in the utterances of 1-5 is literal" (1980d, p.221) and that "the sense or senses in which `cut' would be used in the utterances of 6-8," on the other hand, "is a figurative extension of the literal meaning in 1-5" (Searle 1980d, p.222). In 9-11 "the occurrences of the word `cut' are clearly in idioms" -- these will not concern us. The main problem which Searle considers is how to justify the distinction between the first group (1-5) and the second group (6-8) "if someone wanted to deny it" (Searle, 1980d, p.222).

Searle proposes that the distinction between the literal use of "cut" in 1-5 and its figurative employment in 6-8 can be made out by four different tests.{15} They are:

Priority or Asymmetrical Dependence of Understanding: "A person who doesn't understand 6-8, but still understands 1-5, understands the literal meaning of the word "cut"; whereas a person who does not understand 1-5 does not understand that literal meaning; and we are inclined to say he couldn't fully understand the meaning of "cut" in 6-8 if he didn't understand the meaning in 1-5." (Searle 1980d, p.222)

Translation: "in general, 1-5 translate easily into other languages; 6-11 do not." (Searle 1980d, p.222)

Conjunction Reduction: "certain sorts of conjunction reductions will work for 1-5 that will not work for the next group. For example,

12. General Electric has just announced the development of a new cutting machine that can cut grass, hair, cakes, skin, and cloth.

But if I add to this, after the word "cloth", the expression, from 6-8, "classes, salaries, and rosters", the sentence becomes at best a bad joke and at worst a category mistake. (Searle 1980d, p.222)

Comparative Formation: the fact that we can form some comparatives such as, "Bill cut more off the grass than the barber did off Tom's hair", is further evidence that we are not dealing with ambiguity as it is traditionally conceived. (Searle 1980d, p.224)

Now let us put our problematic class of mental predications of machines to these tests Searle proposes, to see how well they warrant the judgment that such usages are figurative like the uses of "cut" in 6-8.{16}

First, as regards Translation: I believe it is manifestly false that statements like "Deep Thought considers possible continuations of play," "DOS recognizes the dir command," "My pocket calculator calculates that the square root of 2 is 1.4142135," and the like, are particularly difficult to translate into other languages. Computer manuals, which (as already noted) help themselves generously to locutions of this sort, are published in (and translated to and from) English, Japanese, French, German, etc. By the translation test, it seems attributions of mental properties to computers, as in computer manuals, are literal and not figurative.

Next consider Comparative Formation: the fact that we can form comparatives such as "Deep Thought considers more continuations to a greater depth than its human opponent" and, "My pocket calculator extracts square roots more quickly and accurately than I" argues as strongly that "we are not dealing with ambiguity as it is traditionally conceived" (Searle 1980d, p.224) here as it argued that we were not using "cut" ambiguously when we spoke of Bill cutting the lawn and the barber cutting Tom's hair because we could form the comparison "Bill cut more off the lawn than the barber off Tom's hair" (without making a "bad joke" or a category mistake).

As for Conjunction Reduction -- here again the test seems to support the verdict that the usages at issue are literal and not figurative. Consider the sentence, "Karpov considered the possible continuation QxR check." If I add "and so did Deep Thought," there is no zeugma: the resulting sentence is neither a joke nor a category mistake. The Comparative Formation and Conjunction Reduction tests, you might say, are tests that enable us to "hear" ambiguity as zeugma or punning ("a bad joke"). Thus the humorous impression made by a conjunction reduction like "She came home in a flood of tears and a sedan chair" reveals an ambiguity of "in" between the sentential context, "She came home in a flood of tears" and the sentential context "She came home in a sedan chair." All of Ryle's examples of category mistakes (Ryle 1949, pp.16f) -- e.g., someone who, after having been shown the Library and the Administration Building and all the rest of the campus and buildings proceeds to complain of not having been shown the University, as promised -- have this "bad joke" quality also. Unlike "GE's new cutting machine cuts cloth and salaries," I hear no punning or play on words (or otherwise have any intuition of semantic anomaly) in "Karpov considered the continuation QxR check, and so did Deep Thought."{17}

Which leaves only Priority or Asymmetrical Dependence of Understanding: this test yields no firm verdict against the literalness of the attributions in question either. Where the other three tests appeal beyond our original intuitions, either to empirical facts of translatability or to special comparative or conjunctive contexts where ambiguities can be heard as zeugma or punning, the understanding test does not seem to extend our original intuitions. Thus, if your original intuition, like Searle's, is that mental predications of computers are figurative and equivocal, your intuition will be that someone who didn't know how
to make such predications of people but only knew how to ascribe these predicates to machines would "not understand the literal meaning" of the predicates, and you might incline to think that someone who hadn't first learned how to make such predications of people "couldn't fully understand" their meaning applied to computers either. If you don't share Searle's original intuition, you won't share these other "intuitions" either. The other tests have a degree of theory neutrality our "intuitions" about priority and asymmetrical dependence of understanding seem to lack.

Searle himself presents the Understanding Test as continuous with his initial intuitive demarcation of literal, figurative, and idiomatic uses (Searle 1980d, p.221-222), and seems to admit its indecisiveness in this connection. The other three tests are what come in when the distinction between the literal uses and the figurative uses isn't so obvious as it is (as Searle points out) in the "cut" examples. The other three tests come in "if someone wanted to deny" (Searle 1980d, p.222) your initial intuitions (as I deny Searle's "intuitions" about the figurativeness of mental attributions to computers). And the verdict these tests render in this case -- in the case of such mental attributions as Searle would deem figurative (and equivocal), which I deem literal (and unequivocal) -- is unequivocal.

Again, it seems that the claim being made by the Understanding Test is not just that, as a matter of empirical fact, people learn (or even, as a matter of causal necessity, must learn) to apply the predicates in question to other people before they can apply them to machines. I suppose it is an incontestable matter of fact that people do learn to apply mental predicates to people before they come to apply them to machines. But to see that this is insufficient to show that the former attributions are literal and the latter merely figurative consider -- what I also suppose is an incontrovertible matter of fact -- that we learn to apply these predicates to people before we come to apply them to dogs and cats (for instance); yet most of us would allow (and Searle in particular allows) that many mental predications to dogs and cats are true literal predications. As a matter of fact, I suppose, people learn to apply the word "eat" to people before they learn to apply it to other animals, but no one would ever take this to argue "eat" is being used figuratively in "Cats eat birds."

Finally, there's the vagueness of Searle's claims that "a person who did not understand 1-5 does not understand that literal meaning" and, "he couldn't fully understand the meaning in 6-8 if he didn't understand the meaning in 1-5" (Searle 1980d, p.222). Surely someone needn't be conversant with all the different understandings or sets of truth conditions determined by all these uses (1-5) (and perhaps not with any of these particular uses or understandings) to rightly be said to understand the literal meaning of "cut" and to be able to literally and figuratively extend it.{18} Surely one can be a competent user of "cut" without knowing about lawn cutting and hair cutting and cake cutting -- I suppose most of us knew what "cut" means full well before we ever heard of diamond cutting. Similarly, people knew what "split" meant (and it meant the same as it does now) before they ever heard of splitting atoms; and I suppose many competent English speakers still haven't heard of splitting atoms, and most English speakers would be incompetent to recognize a case of atom splitting if they encountered one. Again, I suppose as a matter of empirical fact (and perhaps of nomological necessity) no one learns what it means to split atoms without having previously understood what it meant to split logs, poker pots, etc.; and, again, I suppose this has no tendency to show that "split" is used figuratively in talk of "splitting atoms."

I conclude that none of the four ambiguity tests Searle proposes warrants any judgment of figurativeness, or supports a charge of ambiguity, against mental attributions to computers. The Understanding Test is too theory dependent to serve as a check on (equally theory dependent) intuitions about figurativeness and literalness of different predications. The more theory neutral tests of Translation, Conjunction Reduction, and Comparative Formation, on the other hand, all tell in favor of the literalness and univocality of the predications in question.{19} If Searle's Chinese Room Experiment motivates and licenses a semantic distinction between "as-if" and "intrinsic" attributions of mental properties it will have to do so in opposition to, and not in agreement with, the semantic intuitions tapped by Conjunction Reduction and Comparative Formation tests.

8. The Bogey of Panpsychism

Besides "intuitions" evoked or "insights established" by his Chinese Room Experiment, Searle offers another motive for accepting the distinction between real "intrinsic" and mere "as-if" intentionality or mental attribution he proposes: avoidance of panpsychism. The proponent of naive AI (or any advocate of SAIP, I suppose), according to Searle, must be haunted by this specter: if we allow that computers literally have mental attributes, then why not water and wind? Since "relative to some purpose or other anything can be treated as if it were mental" (Searle 1989, p.198), Searle warns, we must distinguish literal ascriptions of "intrinsic" mental states to humans and various animals from metaphorical "as-if" attributions we make to computers because "the price for giving this distinction up would be that everything then becomes mental" (Searle 1989, p.198). Again: "If you deny the distinction then it turns out that everything in the universe has intentionality" (Searle 1990f, p.586).

I have two problems with this. The first is that it doesn't follow from the allowance that some computers sometimes have mental properties nor from the denial of Searle's proprietary notion of "as-if intentionality" that "everything becomes mental": it only follows from the first that the conjecture that "No inanimate things have mental properties" must be rejected. But it needn't follow from this either that there's no such thing as "as-if intentionality" in Searle's proprietary sense of the phrase (though, I think, there isn't); nor does it follow from this in turn (if one did reject Searle's proprietary distinction between real "intrinsic intentionality" and counterfeit "as-if intentionality" -- as I do) that all inanimate things have mental properties. Perhaps some computers do, but "doorknobs, bits of cloth, and shingles" (Searle 1990g, p.635) still don't. To my way of thinking, doorknobs, bits of cloth, and shingles don't even act as if they have mental properties, which is why we are not tempted to and would not find it predictively advantageous to attribute any mental properties to them: whereas, in the case of computers, we are tempted (if not compelled) and do find it predictively advantageous (even practically indispensable) to do so. My second misgiving is that even if the denial of a (semantic) distinction between "intrinsic" and "as-if" intentionality did entail panpsychism (which it doesn't) -- this would not be a fatal objection, because panpsychism (though contrary to received views in Christendom) is not the absurdity Searle (1990f, p.587) alleges it to be. I will address this second point, that the conclusion of Searle's "general reductio ad absurdum" of "any attempt to deny the distinction between intrinsic and as-if intentionality" (Searle 1990f, p.587) is not absurd, first.

Concerning the alleged absurdity of panpsychism -- I start by reiterating my aversion to arguing from sweeping metaphysical assumptions to more particular conclusions. The policy of allowing exalted metaphysical speculations to overturn more particular judgments of fact is generally suspect because grand metaphysical hypotheses are generally more dubious than the more particular judgments people want to use them to overrule. If particular judgments are supposed to confirm or disconfirm our metaphysical speculations, we cannot allow our speculative metaphysical convictions to systematically override our intuitions about particular cases. The principle of naiveté should lead us to reject the style of argumentation instanced by Searle's would-be reductio ad absurdum,


no less than the following would-be defense of AI:{20}


Since Searle does argue in this manner, however, I will say a word or two in defense of the tenability (the nonabsurdity, if not the truth) of panpsychism.

Consider the range of possible views concerning the extent or distribution of mental properties in the world: they range from eliminativism, the view that nothing has any mental properties; to solipsism, the view that nothing besides me has any; to the Cartesian view that all and only humans have mental properties (call it "anthropism"); to the view that all and only animals -- or perhaps just certain "higher" animals -- have any mental properties (which might be called "zoologism"); to the view that all and only living things have mental properties (which might be called "biologism"); to panpsychic views that all things have mental properties either collectively (pantheism), or distributively (animism). Among these views, it seems only eliminativism can plausibly be said to be absurd (or at any rate, to be impossible to consistently believe).{21} Perhaps solipsism can be discounted as a mere theoretical possibility which is practically impossible (for psychological and sociological reasons) to believe. All the rest would seem to be not only theoretically but practically live alternatives. Of course, Cartesian anthropism (perhaps formerly) or some form of zoologism (perhaps presently) are the prevalent persuasions in Christendom; but both historically and across cultures the more prevalent persuasion actually seems to be some version of panpsychism, whether pantheistic (as instanced, e.g., by Hinduism and Buddhism), or animistic (as instanced, e.g., by the beliefs of shamanistic, "primitive" religions).

Given the real possibility of panpsychism, I submit, particular attributions of mental properties -- e.g., to oak trees, computers, or collectives -- cannot be disallowed just because they are judged to be contrary to anthropism or zoologism, or to "our" (Cartesian or zoologistic) "intuitions." Appeals to such "intuitions" as Searle's putative intuitions that computers, like cars and thermostats (1980b), plants (1990g), stomachs (1990f), and "doorknobs, bits of chalk and shingles" (1990g), are "not conscious at all" (1990g, p.635) (and have no mental properties whatsoever) are highly suspect. The suspicion is that such "intuitions" are culturally provincial and theory driven (perhaps theology laden): better the particular judgments our naive attributions of mental properties to such things as collectives, plants, and computers reveal should inform our metaphysical speculations than that our metaphysical speculations should censor our particular judgments. If it's illicit to discount such naive attributions of mental properties by appeals to grand metaphysical doctrines (or their attendant "intuitions") -- if naive intuitions, instead, should go to confirm or disconfirm our metaphysical speculations -- then, as a matter of fact, I submit, our naive attributions to ferns (which, we think, sometimes need watering), and Credit Unions (which, we trust, know how much is in our accounts) as well as computers (which, we say, "calculate," "consider," "recognize," etc.), argue for something like panpsychism.

Consider the slippery slope (of life-forms and non-life-forms) from professor to paperweight. If you are concerned to draw a line to restrict the types of things to which mental properties can be attributed, I submit that you will be better advised to draw it, like Descartes, between professor and (infrahuman) primate than anywhere else. You will find no firmer foothold on this slope, and no better place for drawing a principled line between the minded and the mindless, than between humans and their nearest primate relations.{22} Searle's comparison of consciousness to "a rheostat" (Searle 1990g, p.635) such that "there can be an indefinite range of different degrees of consciousness," given the continuum of life-forms and nonlife-forms, seems apt: what seems less apt, or less motivated (considering this continuum) is his claim that consciousness "is an on/off switch" (Searle 1990g, p.635). Now I, for one, am not terribly concerned with drawing a line -- I am willing (consonant with my naive methodological persuasions) to say "mind is where you find it" (and we seem to find it in Credit Unions, oak trees, and computers -- but not everywhere). If this is panpsychism, it is weak panpsychism -- a kind of agnostic rejection of prior ontological restrictions on what kinds of things can have mental properties, or of prior restraints (semantic type restrictions) on the sort of things of which mental predicates are univocally predicable. This does not commit me to the metaphysical generalization that all things have mental properties (which we might call "strong panpsychism").

Most people in Christendom are, I realize, more averse to panpsychism than I am. So, it is fortunate that I needn't rest my criticism of Searle's attempted indirect proof of the distinction between "as-if" and "intrinsic" mentality on the preceding defense of (the plausibility of) panpsychism; because rejection of Searle's distinction has no such dreadful panpsychic consequences as he alleges. Perhaps the preceding remarks, styling my view as a kind of agnosticism -- so I do not commit myself to any generalizations about the mental properties of all inanimate things by allowing that some inanimate things (e.g., certain computers, running certain programs) have the particular mental properties they seem to evince (and which we naively attribute to them) -- take us part of this way already. Mostly, though, we need to be wary here of the dichotomy Searle proposes: either rule out the literal ascribability of any mental properties to any inanimate thing, or else almost any mental property will be literally ascribable to anything at all. Why would anyone suppose this? The reason I suspect is Searle's unremitting conflation of his proprietary distinction between "intrinsic" and "as if" (attributions
of) mental properties with the obvious distinction (which no one denies) between literal and figurative (or "metaphorical" or "analogical") attribution. Of course there is a difference between figurative and literal attribution of mental properties. There's a difference between figurative and literal attribution of all sorts of properties. To speak of "Jack Nicklaus' fluid backswing" involves figuratively attributing the physical property of fluidity to Nicklaus' backswing. Similarly, I suppose, to speak of a "storm raging" figuratively attributes the mental property of raging to the storm. This no more goes to show that "rage" is systematically ambiguous (that there's real intrinsic rage and mere "as-if" rage) than it goes to show that "fluidity" is systematically ambiguous (between "intrinsic" and "as-if" fluidity).{23}

Ordinarily, when I figuratively attribute some property, e.g., fluidity to Nicklaus' backswing, or rage to the storm, or malice to DOS (when it erases all my files), the object of my attribution resembles things that literally have the property in some respects -- the effortless continuity of Nicklaus' backswing, the aggravation and frustration DOS causes me by not obeying my intended instructions -- but not all respects. And I suppose there is no way to know in advance, no prior way to delimit what analogy some fertile imagination might perceive among things, however various, and to what things such an imagination might come to figuratively attribute whatever property. You can see how refusing to acknowledge a distinction between literal and figurative mental attribution threatens the conclusion that any mental property is going to be predicable of anything you can imagine. As Searle expresses it, "it would be up to any beholder to treat ... hurricanes as mental if he liked" (1980a, p.420), since "relative to some purpose or other almost anything can be treated as if it were mental" (Searle 1989b, p.198). Similarly, if I deny the difference between literal and figurative attributions of fluidity, this threatens the conclusion that anything anyone can imagine to be anything like a fluid will turn out to really be a fluid: panfluidity threatens. But panfluidity doesn't threaten, because we can and do distinguish figurative from literal uses of "fluid"; similarly panpsychism doesn't threaten, because we can and do distinguish figurative from literal uses of mental terms like "recognize," "calculate," and "consider." We can distinguish figurative from literal uses of mental terms quite independently of whether we accept or reject Searle's distinction between "as-if" and "intrinsic" intentionality because the literal/figurative and "intrinsic"/"as-if" distinctions are not the same distinction.

That Searle's distinction between "as-if" and "intrinsic" (attribution of) mental properties is not coextensive with our ordinary general distinction between figurative and literal (attribution of) mental properties is plain to see, also, from the consideration that, if we were to replace the latter (literal/figurative distinction) by the former ("as-if"/"intrinsic" distinction), it would abolish distinctions we presently
can and do make. Our attributions of mental predicates to computers don't all seem equally metaphorical. Rather, as with attributions to humans and animals also, we seem to distinguish a proper subset of mental attributions to computers as figurative. We think we discern a difference between saying "DOS recognizes the print command" and "DOS hates me"; only in the latter case do we immediately recognize that we are speaking figuratively. Searle must either dismiss our intuitions about this and say that these attributions are all equally (i.e., completely) figurative; or else (to account for our intuitions here), acknowledge that some of these usages (e.g., "DOS hates me") are more figurative than others.

Perhaps differences in degrees of figurativeness could be cashed out in terms of behavioral similarity: DOS acts very much like it recognizes the print command and only a little like it hates me. But however we do it, distinguishing degrees of figurativeness of mental attribution undermines the motive of avoiding panpsychism Searle offers for distinguishing "as if" and "intrinsic" (attributions of) mental properties as he does. If we already distinguish among mental attributions to inanimate things, between highly figurative and not so figurative attributions, on the basis of how much things act like they have the properties (or however we do it) -- less figurative being more literal -- we already seem to be distinguishing figurative from literal attribution. We can allow this without fear of indiscriminate proliferation of mental states to everything because differences in how much things act like they have specific mental properties cut across animate/inanimate or biologically-brained/brainless boundaries. Behavioral differences (or whatever differences) we already use to attribute (and refrain from attributing) specific mental states (e.g., recognition) to people and animals already suffice to prevent panpsychic proliferation without Searle's blanket distinction.

If Nicklaus's backswing resembled literally fluid things in every respect (relevant to fluidity), then his backswing would literally (and not just figuratively) be fluid; and if DOS's behavior resembled malice in every respect (relevant to maliciousness), then DOS would literally (not just figuratively) be malicious when it erased my files. But the Chinese Room -- or the man in it -- behaves in every respect as if it understands Chinese (according to Searle's Experiment); which seems drastically unlike standard cases of figurative attribution.

In this connection, I suppose it might be objected that I misrepresent the case by mistaking the Chinese room's (or Searle-in-the-room's) behaving in every respect as if it understands for its being in every respect like an understander. I suppose Searle will say the as-if understanding system of Searle-in-the-room is only like the intrinsic Chinese understander (e.g., a native Chinese speaker) "from the external point of view -- from the point of view of someone reading my answers" (Searle 1980a, p.418), but that it is utterly unlike an intrinsic Chinese understander "from the point of view of the agent, from my point of view" (Searle 1980a, p.420). In this vein Searle maintains that the insufficiency of behavioral evidence to warrant attributions of understanding (or any other mental property) is a main point of his Chinese Room Example. He maintains that CRE shows that two things could be behaviorally indistinguishable{24} and yet have different mental properties. According to Searle,

The example shows that there could be two "systems," both of which pass the Turing test, but only one of which understands; and it is no argument against this point to say that since they both pass the Turing Test they both understand since this argument fails to meet the argument that the system in me that understands English has a great deal more than the system in me that merely processes Chinese. In short, the systems reply [in presuming that something's behaving as if it understands is tantamount to its understanding] simply begs the question by insisting without argument that the system must understand Chinese. (Searle 1980a, p. 419)

To which I respond that if something acts exactly as if it understands, then it is Searle (who would deny it understands), and not the advocate of Turing's Test (who allows it does understand), who must produce arguments.

9. Conclusion and Prospectus

What the present chapter shows, I take it, is the following. First, that our actual semantic intuitions (as tapped, e.g., by Comparative Formation and Conjunction Reduction tests) do not support -- and rather strongly weigh against -- Searle's charge that mental predications of computers are invariably figurative and that such predications are all, consequently, equivocal. Second, that Searle's characterization of the systematic ambiguity he alleges between "as-if" and "intrinsic" (attribution of) mental properties as a distinction between figurative and literal attribution is confused and fails to support Searle's contention that denial of the former distinction seriously, if not unavoidably, threatens the panpsychic conclusion that everything is mental. It fails, in this connection, because one can deny the former ("as-if"/"intrinsic") distinction without denying the latter (figurative/literal) distinction; and this latter distinction is all we need to escape the threatened (strong) panpsychic conclusion. Naive semantic intuitions (e.g., those tapped by Comparative Formation and Conjunction Reduction tests) disconfirm the claim that we recognize a semantic distinction between "as-if" and "intrinsic" intentionality such as Searle proposes; and the metaphysical motive (to avoid panpsychic proliferation of mental states) Searle offers to show we should make such a semantic distinction (if we don't already) doesn't wash. If the distinction Searle proposes is defensible at all, it seems its defense will have to be scientific and theoretical.

If the scientific tack is taken in defense of the distinction at issue, then it will no longer be open to Searle to style himself as the defender of actual or ordinary usage against those who suggest that the development of cognitive science might

assimilate existing mental phenomena to some larger natural kind that would subsume both existing mental phenomena and other natural phenomena under a more general explanatory apparatus. (Searle 1980b, p.452)

According to the diagnosis to which the considerations just summarized have led, it must be Searle's task to scientifically distinguish the genuine intrinsic mental phenomena from mere "as-if" pseudo-mental phenomena among presently recognized mental phenomena (including, e.g., Deep Thought's consideration of alternate continuations, and our pocket calculators' calculations) under a more discriminating explanatory apparatus (than our present folk psychological apparatus). Searle will have to give up the claim that,

even if some future science comes up with a category that supersedes belief and thus enables us to place thermostats and people on a single continuum, this would not alter the fact that under our present concept of belief, people literally have beliefs and computers and thermostats don't. (Searle 1980b, p.452)

I don't know if thermostats have beliefs -- but if the preceding arguments are correct, judging from our working predications and the failure of ambiguity tests like conjunction reduction and comparative formation, it seems that "under our present concept" of aiming or seeking they aim or seek to keep temperatures at or above their set points; under our present use of the term "detect" they detect whether temperatures fall below or exceed their set points. Under our present concepts of seeking and detecting, it seems, people literally seek and detect things and so do thermostats and computers.{25} Perhaps Searlean cognitive scientific research will come up with categories that supersede aiming and intending and allow us not to place thermostats and people on a single continuum as we presently do; or rather (for this seems a more perspicuous formulation of the hope in question), perhaps cognitive scientific research will lead to the discovery that what we've mistakenly called "consideration" and "calculation" in computers isn't really of the same nature as human calculation (much as biological research led to the discovery that whales aren't fish) without any change or supersession of concepts.{26} If the arguments of this chapter are sound, some sort of scientific vindication of this sort seems Searle's last hope of providing an argument against naive AI, and hence against AI per se (i.e., contrary to AIP).

Endnotes
  1. This thought experiment (described more fully in Section 7 of the next chapter) involves imagining a substance XYZ that falls from the skies and fills the lakes and streams of a planet, Twin Earth, like ours in every other respect. Twin Earthians call XYZ "water." Now my twin and I both sincerely assert, "Water is wet" (I in English, he in Twin English). Even supposing we are in exactly similar neurological conditions when we say it, I will have asserted (and be believing) that H2O is wet, my twin that XYZ is wet. Since it is possible for us to have these different beliefs despite being in identical neurological conditions, beliefs "ain't in the head" (Putnam 1975, p. 227).
  2. Argument BS1A of Chapter 2, subsection 3.3, above.
  3. Hauser 1993c considers whether this appearance might be deceiving. I note that Putnam argues from (A) Intentional mental states aren't In the head, and (B) Computational states are In computers, to (C) Mental states aren't computational states. Conversely, Searle (1990c, 21-38) argues from (A') Intentional mental states are intrinsic (i.e., In the head), and (B') Computational states aren't intrinsic to (or supervenient on local physical properties of) computers, to (C) Mental states aren't computational states. This suggests it may be open to a nominal functionalism, consonant with Putnam's Twin Earth result, to hold A, B', and not C.
  4. I have already noted (in Chapter 2, above) how Searle's insistence on styling Turing machine functionalism (FUN) "Strong AI" misleadingly creates the impression that his Chinese room argument (CRA) against FUN somehow tells against AIP to boot: as an argument against AIP, CRA is an ignoratio elenchi. In the same obfuscatory vein, Searle writes, "When, for example, somebody feels comfortable with the idea that a computer would suddenly and miraculously have mental states just in virtue of running a certain sort of computer program, the underlying assumptions [of FUN/"Strong AI"] that make this view seem possible are seldom stated explicitly" (Searle 1992, p. 9) -- as if everyone's everyday mental attributions to machines were somehow underwritten by their (covert?) allegiance to this high-powered theory.
  5. Dretske (1985) contends, similarly, that our everyday attributions of mental properties to computers are systematically equivocal.
  6. As Cartesian egos are consciousnesses, I won't call the modifications of Cartesian egos view an alternative to Searle's hypothesis. They are the same hypothesis. Descartes even proposes the following "Connection Principle" of his own, maintaining, "there can be nothing in the mind, in so far as it is a thinking thing, of which it is not aware" and "we cannot have any thought of which we are not aware at the very moment when it is in us" (Descartes et al. 1642, p.171). The similarity of Searle's views to Descartes's in this connection is more striking and thoroughgoing than this brief quotation indicates. (See Chapter 6.)
  7. Barbara Abbott -- who has less sympathy with such appeals than I -- has voiced the objection to me that my "naive" arguments for AI merely appeal to ordinary usage.
  8. Though it would suggest a Baconian, bottom-up approach over a Newtonian, top-down one, I should say, to anyone who accepts this imperative. Hauser 1992, however, disputes the scientific nature of folk psychological "theory."
  9. "SAM" is an acronym for "Script Applier Mechanism" (Schank & Abelson 1977).
  10. The comparison is Carol Slater's.
  11. Chapter 6, Section 4, below, considers Searle's recently announced "solution to `the other minds problem'" (Searle 1992, p. 76).
  12. Following Zwicky and Sadock: "From here on the count noun understanding is a neutral term to cover both those elements of `meaning' (in a broad sense) that get coded in semantic representations and those that do not. Each understanding corresponds to a class of contexts in which the linguistic expression is appropriate -- though, of course, a class of contexts might correspond to several understandings, as in examples like Someone is renting the house (courtesy of Morgan [1972])" (Zwicky & Sadock 1975, p.3 n.9)
  13. Zwicky and Sadock note, "This second situation has been called GENERALITY (Chao, 1959:1; Quine, 1960:125-132; Bolinger, 1961, chapt. 2), VAGUENESS (Lakoff, 1970b); INDETERMINACY (Humberstone, 1972:140; Shopen, 1973); NONDETERMINATION (Weydt, 1973:578) and INDEFINITENESS OF REFERENCE (Weinreich, 1966:412), though NEUTRALITY, UNMARKEDNESS, and LACK OF SPECIFICATION would be equally good terms." (Zwicky & Sadock 1975, p.2)
  14. Roughly, the strengthened necessary condition of ambiguity is that there be plausibly semantically specified differences in understanding between the usages in question; and, roughly, a sufficient condition for ambiguity would be that there be not plausibly semantically unspecified differences of understanding between the usages.
  15. At least two of Searle's tests, conjunction reduction and comparative formation, are also found in Zwicky and Sadock, along with a number of tests not found in Searle. For my argumentative purposes here, I only consider the tests Searle himself proposes, though I believe none of the additional tests Zwicky and Sadock mention will distinguish a difference in sense (or ambiguity of usage) in the cases under consideration either -- e.g., between the sense of "considered" in "Deep Thought considered various continuations of play" and "Karpov considered various continuations." Rich Hall notes the similarity of the first "Asymmetrical dependence" test to proposals of Fodor (cf. Fodor 1990, chap. 4). This test finds its original source, perhaps, in a distinction proposed by Wittgenstein between "primary" and "secondary" uses of words. Wittgenstein, I note, declines to style secondary usages (distinguished by this test) he considers "metaphorical" (Wittgenstein 1958, bk. ii, p. 216).
  16. It will not do -- it is just question begging -- to say "it's a category mistake to attribute mental properties to inanimate things, because mental predicates are semantically marked as requiring animate subjects."
  17. "GE" is the registered trademark of General Electric.
  18. "The feature of this list which interests me for present purposes, and which I will try to explain is this. Though the occurrence of the word `cut' is literal in utterances of 1-5, and though the word is not ambiguous, it determines different sets of truth conditions for the different sentences. The sort of thing that constitutes cutting the grass is quite different from, e.g., the sort of thing that constitutes cutting a cake." (Searle 1980d, p.222-223) Compare Searle's notion of "different sets of truth conditions for the different sentences" with Zwicky and Sadock's notion of different understandings, each "corresponding to a class of contexts in which the linguistic expression is appropriate." (Zwicky & Sadock, 1975, p.3, n.9. See note 12 above)
  19. Of course some mental attributions to computers are figurative -- as some mental attributions to humans, I suppose, are also. If I say "DOS erased all my files because it hates me" that's metaphorical; when I say "DOS recognizes the `DIR' command" that's different.
  20. Thanks to Gene Cline for clarifying my thinking on this point. I submit it is just as ill advised to argue from the (presumed) falsity of panpsychism to the conclusion that computers don't have mental properties, as it is to argue from the (presumed) truth of panpsychism to the conclusion that they do.
  21. See, e.g., Baker (1987) for such an argument against eliminativism. See Churchland (1980, p.48) for a defense of eliminativism against such charges.
  22. Most of what Searle says suggests that the line between things having minds and mindless things corresponds, for him, to the line between things having (biological) brains and brainless things (Searle 1980a; 1982b, p.57). At any rate he plainly recognizes that frogs and dogs have mental properties (1979b, p.190), though he's not sure about grasshoppers and fleas (1990f, p.587).
  23. An as-if fluid (to pursue the analogy with "as-if intentionality") would resemble things which are literally fluids in every publicly discernible respect: note that this means that standard cases of figurative attributions, e.g. of fluidity to Nicklaus's backswing, are not "as-if" attributions; and "as-if" attributions are not standard cases of figurative attribution. See below.
  24. "Turing indistinguishable," is how Harnad (1991) puts it: not distinguishable by anything like a Turing test.
  25. Consider the conjunction reductions and comparative formations below. The first seems, perhaps, very slightly semantically anomalous to me. The second seems not so anomalous. The third and fourth seem not at all anomalous; and neither do the converses of the first and second, if we make "I" the subject of the reduced clause and "thermostat" the unreduced clause's subject. If these results are equivocal, so be it: I am not, after all, committed to the aims and detective powers of thermostats (nor to their lack thereof). I have only argued in this chapter that if the study of mind starts with our everyday working (e.g., predictive and explanatory) uses of such folk psychological terms as "aims" and "detects," then it is not clearly the case that "the study of mind starts [my emphasis] with such facts as that humans have beliefs, while thermostats, telephones, and adding machines don't" (Searle 1980a, p.420); and if we change "beliefs" to "aims" or "powers to detect things" -- if Searle's pronouncement is supposed to generalize to all folk psychological predications or all mental properties -- it is clearly not the case that the study of the mind starts with any such thing.
  26. If minds are natural kinds and natural kind terms are nondescriptional (cf. Abbott 1989).
