Searle's Chinese Box: The Chinese Room Argument and Artificial Intelligence by Larry Hauser



1. Intentionality and Intrinsicality | 2. Psychological Modes of Intentionality | 3. Intentional Content | 4. Intrinsic and Derived Intentionality | 5. Derived Intentionality as Normative | 6. Derived Artificial Intentionality | 7. Two Accounts of Objective Intrinsicality | 8. Putnam Against Intrinsic Intentionality | 9. Searle Against Putnam Concerning "Observer-relativity" | 10. Searle Against Putnam on Causal Indexical Dependence | 11. The Relevance of Putnam's Critique of Internalism | 12. Diehard Internalism | 13. The Derivation Problem | Endnotes
Chapter Five: OBJECTIVE INTRINSICALITY:
INTENTIONALITY IN THE BRAIN

We can imagine someone appealing to the identity theory to excuse his own free and uncritical recourse to mentalistic semantics. We can imagine him pleading that it is after all just a matter of physiology, even though no one knows quite how. This would be a sad irony indeed. (Quine 1975, p. 95)

1. Intentionality and Intrinsicality

According to Searle, a scientific theoretical basis for discounting the apparent mental properties of computers is to be found in computers' lack of what he calls "intrinsic intentionality" (Searle 1980b): cases of intrinsic intentionality "are cases of actual mental states" (Searle 1980b, p. 451); and "only intrinsic intentionality is mental" (Searle 1990g, p. 639). My next concern, then, is Searle's account of intrinsic intentionality and its scientific prospects and credentials. Meanwhile, in light of the preceding chapter, I take it that to deny, "in the grip of a philosophical theory" (Searle 1979b, p. 190 [my emphasis]), that pocket calculators calculate and that Deep Thought considers alternate continuations of play would be as objectionable as it would be "to deny that dogs and small children can, say, desire milk and bones respectively" (Searle 1979b, p. 190) in the grip of such a theory.

It is a disputed point, as I noted earlier (Chapter 2), whether all mental properties or states are intentional. I will simply stipulate here as before that the mental properties that concern us are intentional or representational mental properties: "propositional attitudes, as philosophers call them" (Putnam 1988, p. 72). Clearly intentionality is an essential (necessary) characteristic of every intentional mental phenomenon. Note that even given this restriction of our attention just to those mental states which are intentional, intentionality (semantic or representative content, or "aboutness," or reference) is still not a distinguishing feature in the sense of a sufficient condition: some extramental things such as utterances of English sentences and mathematical formulae have semantic or representative content. Searle acknowledges, "the notion of Intentionality-with-a-t{1} applies equally well to both mental states and to linguistic entities such as speech acts and sentences, not to mention maps, diagrams, laundry lists, pictures, and a host of other things." (Searle 1983, p. 26) What does seem to be a distinguishing feature of intentional mental phenomena strictly speaking (a necessary and sufficient condition for being an intentional mental phenomenon), on Searle's account, is their intrinsic intentionality.

What's "intrinsic?" Two possibilities suggest themselves here. The first is that what "intrinsic" means, when predicated of intentional mental states, is that they are physically contained by or objectively in the subjects to whom we truly ascribe them: call this "physical" or "objective" intrinsicality. The second possibility is that "intrinsic" here means being phenomenologically in a conscious subject or being subjective contents of such a mind's private field and stream of experiences: call this "mental" or "subjective" intrinsicality.{2} Evidently Searle means both. In this chapter I consider whether the intentional states we attribute to humans (and higher animals) are objectively and physically in their human (and higher animate) subjects in a way in which the intentional mental states we attribute to computers aren't objectively in them. In the next chapter I consider whether a scientifically salient difference between human (and higher animate) intentionality and the seeming intentionality of computers is to be found in the subjective intrinsicality -- in the consciousness or accessibility to consciousness -- of human and higher animate intentionality and the (presumed) unconsciousness of computers' artificial intentional states.

But before turning to the question of how, according to Searle, intentional states are supposed to be in the systems (i.e., the organisms, most especially their brains) that have them, and whether or not the intentional states we naively attribute to computers are in them in this way (whatever it is), perhaps I need to say a word about what is supposed to be in us when it's claimed (by Searle, among others) that intentional mental states are in us (or our brains). What does "intentionality" mean, and what are intentional mental states?

2. Psychological Modes of Intentionality

According to Searle, "Intentional states are entirely constituted by their Intentional content and psychological mode, both of which are in the head" (Searle 1983, p. 208). What Searle means by "psychological mode" is roughly what Descartes meant by "modes of thinking," or what Searle elsewhere (Searle 1984a, p. 60-61) refers to as "mental type": one of the types of propositional attitudes or intentional mental states answering to our everyday typology of belief, desire, hope, seeing, hearing, etc. What he means by "Intentional content" is what the belief, desire, etc. refers to or is about. Thinking of Intentional mental states as propositional attitudes, the content is the proposition, and the mode is the attitude. Though Searle's main concern is content (as mine will be also), let us first say a bit about mode.

Intentional mental states can share the same content while differing in mode. Thus I can believe that the cat is on the mat, hope that the cat is on the mat, want the cat to be on the mat, etc. Different species or modes of intentional mental states determine different satisfaction relations to their intentional objects (or to the states of affairs corresponding to their propositional contents) and fall into three categories, on Searle's account, according to the direction of fit of the intentional relation (e.g., wanting, believing, hoping) that they specify.

States or attitudes having a mind-to-world direction of fit, cognitive states such as believing, have truth values. "Beliefs like statements can be true or false" (Searle 1983, p. 8), and to say they have a mind-to-world direction of fit is to say that they are "supposed to match an independently existing world" (Searle 1983, p. 7) like "members of the assertive class of speech acts -- statements, descriptions, assertions, etc." (Searle 1983, p. 7). Intuitively we might say the idea of direction of fit is that of responsibility for fitting. If the statement is false, it is the fault of the statement (word-to-world direction of fit). (Searle 1983, p. 7) Similarly, if the belief is false, "It is the responsibility of the belief, so to speak, to match the world, and where the match fails, I repair the situation by changing the belief" (Searle 1983, p. 8).

States or attitudes having world-to-mind direction of fit, conative states or attitudes, such as desires and intentions, on the other hand, are like "members of the commissive class [of speech acts] -- promises, vows, pledges, etc." (Searle 1983, p. 7). These "are not supposed to match an independently existing reality but rather are supposed to bring about changes in the world so that the world matches the propositional content of the speech act" (Searle 1983, p. 7), or (in the case of Intentional mental states) the propositional content of the mental operation or state (Searle 1983, p. 7). Desires and intentions, like promises, vows, and pledges, "cannot be true or false, but can be complied with, fulfilled, or carried out" (Searle 1983, p. 8).

Lastly, just as there is a class of speech acts with no direction of fit, such as apologies and congratulations, so too there are "complex Intentional states," Searle maintains (Searle 1983, p. 9) -- states such as sorrow and pleasure -- which may embed or presuppose cognitive and conative attitudes (as "my sorrow contains a belief that I insulted you and a wish that I hadn't" (Searle 1983, p. 8)), but "don't ... have any direction of fit" (Searle 1983, p. 9) themselves.

How psychological modes are "in the head" (if they are) no one knows. One line of speculation has it that differences in psychological mode correspond to different areas or locations in the brain. The picture (as advocated, e.g., by Fodor 1975) is that contents, being propositional, are like sentences -- perhaps they are sentences of the language of thought, or "mentalese" as it has been called. So, wanting the cat on the mat and believing the cat is on the mat will consist in storing a token (of a mentalese sentence) signifying that the cat is on the mat in the "belief box" (Schiffer, 1987) when I believe it, in the "desire box" when I want it, etc. On this hypothesis, entertaining the same proposition in a different attitude or mode is, in effect, having the same sentence of mentalese "written" or stored at a different location in the brain. Fortunately, I think, we needn't worry about any of this or pursue such speculations further, since our main concern for now, following Searle, is with the content not the mode; the proposition, not the attitude.
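Though we are setting such speculation aside, the "box" picture just sketched might, for concreteness, be rendered as a toy data structure. This is a minimal sketch of my own, not Fodor's or Schiffer's actual proposal, and the mentalese notation is invented for illustration:

    # Toy "boxology": attitude (mode) = storage location; content = the stored sentence.
    class Mind:
        def __init__(self):
            self.belief_box = set()  # tokens stored here are believed
            self.desire_box = set()  # tokens stored here are desired

        def believe(self, mentalese_sentence):
            self.belief_box.add(mentalese_sentence)

        def want(self, mentalese_sentence):
            self.desire_box.add(mentalese_sentence)

    me = Mind()
    me.believe("ON(cat, mat)")  # believing that the cat is on the mat
    me.want("ON(cat, mat)")     # wanting the cat on the mat: same sentence, different "box"

The point of the toy is only this: on the picture in question, the content is the stored sentence, and the attitude is nothing but the place of storage.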

3. Intentional Content

The syntactic correlate of the semantic content of an intentional mental state or process of belief (for instance) is the sentence embedded in the that clause of the sentence attributing the belief. Thus, the content of the belief that I attribute to myself by asserting, "I believe that snow is white," is (the proposition expressed by) "snow is white." Similarly the content of the want I attribute to myself by asserting, "I want the cat [to be] on the mat" is the cat [being] on the mat.

Note how fitting even this last example -- wanting the cat [to be] on the mat -- into canonical propositional attitude form requires some fudging: not all attributions of intentional states have the surface grammar of propositional attitudes, and I suppose that putting such attributions into something like what I have just called "canonical propositional form" will seem to some to be forcing them onto a Procrustean bed. "I want beer," for instance, invokes no that clause containing a sentence corresponding to the content of my want. Does this signify that there are different types of wants -- that some wants (propositional attitudes, e.g., wanting the cat to be on the mat) are for (or directed at, or about) states of affairs corresponding to sentences (e.g., the cat's being on the mat), while others (objectual attitudes, e.g., wanting beer) are directed at objects or substances denoted by noun phrases (NPs)? Are there propositional or sentential conative attitudes on the one hand, and objectual or NP attitudes on the other?

Perhaps not. One argument for this -- for taking the embedded that clause form as canonical, and treating the intentional content of conative states as invariably propositional -- is that such attributions as "I want beer" seem incomplete. Attribution sketches, as it were. For what does it mean to say "I want beer" -- what would satisfy my want? Presumably not just that there be beer (that beer exists). Presumably what I want is to have beer (or, to be absolutely canonical, that I have beer), or to drink beer, or to purchase beer. Similarly, Searle argues "that the surface structure [of a sentence like "I want your house"] is misleading and that wanting [and similarly, wishing, desiring, etc.] is a propositional attitude" (Searle 1983, p. 30; cf. Searle 1979b, pp. 189-190). He argues thus via the following "simple syntactical argument" (Searle 1983, p. 30):

Consider the sentence

"I want your house next summer."

What does "next summer" modify? It can't be "want" for the sentence does not mean

"Next summer I want your house."

since it is perfectly consistent to say

"I now want your house next summer."

What the sentence must mean is

"I want that I have your house next summer."

and we can say that the adverbial phrase modifies the deep structure verb "have" or if we are reluctant to postulate such deep syntactic structures we can simply say that the semantic content of the sentence "I want your house" is: I want that I have your house. (Searle 1983, p. 30)

Still, Searle acknowledges, "not all Intentional states have an entire proposition as Intentional content" (Searle 1983, p. 7), citing love (as attributed in "Romeo loves Juliet") and hate (as attributed in "Bush hates broccoli") as genuinely non-propositional or objectual (conative) attitudes after all. Nonetheless, I will follow Searle in taking propositional attitudes as "central cases" (Searle 1983, p. 35). If this is an oversimplification it's one Searle himself makes.

The propositional semantic contents of mental states go to determine their conditions of satisfaction. What would satisfy my desire for the cat to be on the mat is "The cat is on the mat" being true, or the cat being on the mat; what satisfies your belief that snow is white is "Snow is white" being true, or snow being white; etc. What distinguishes intentional states of the same type or mode -- different beliefs, say -- is their content: what makes my belief that snow is white the belief it is (and differentiates it from my belief that a certain cat is on the mat, say) is that the one is about (i.e., is satisfied or made true by) snow being white, and the other is about a certain cat being on the mat.

Again, as with "psychological modes" or attitudes it is an unclear point among those who believe that intentional contents are "in the head" just what in the head or brain they are; and here again the language of thought hypothesis -- that the propositions or content are sentences of "mentalese" -- has fueled the most speculation. Certainly what has been called "good old fashioned AI" (GOFAI -- c.f., Partridge & Wilks 1990a) supposes this: that psychological states consist of having tokens (symbols) stored in specific locations, from "1"s and "0"s in the accumulator to dissertation chapters in RAM. Recent connectionist research has fueled doubts about this picture and talk about "distributed" and even "nonsymbolic" representation. Searle himself has expressed some guarded sympathy for speculations along such lines suggesting there are "advantages of parallel architecture for weak AI"{3} (Searle 1990a, p. 28; cf., Searle 1992, p. 246); but no matter. Whether computers' architectures are parallel or serial, and whether the representation is serial (sentence and stringlike) representation or distributed (unsentencelike or "nonsymbolic"), regardless of whether the brain's architecture is serial or parallel, whatever in our brains makes us recognize that white's king is in check and consider whether interposing the queen would be a good response, according to Searle, is somehow in our brains in a way that the corresponding representational states of Deep Thought (or a comparable parallel process program, if one could be devised) are not in the electric circuits of a computer running Deep Thought and would not be in circuits implementing Parallel Thought either (cf., Searle 1990a, p. 28). This is puzzling.

What's puzzling is this: surely the states of Deep Thought's circuitry that represent chess pieces and positions are in the computer in a perfectly ordinary sense of "in"; the "in" of "intrinsic" seems stronger than ordinary physical containment. Searle's thought here seems to be that one thing is intrinsic to another only if everything that makes it (the first thing) the kind of thing it is is physically contained in the second. "Intrinsic" in this sense, according to Harman, is "a technical term in philosophy" (Harman 1990, p. 607): "A feature of a thing is intrinsic to that thing if the thing has that feature purely by virtue of the way the thing is in itself and apart from its relations to other things." Intrinsic, in this sense, means something like "nonrelational." In this exacting sense of "in," I suppose, the dollar in your pocket isn't In your pocket either, since everything that goes to make it a dollar (such as the gold in Fort Knox) isn't in your pocket. If this is the sense of "intrinsic" Searle invokes, why would he invoke it? What is its scientific or philosophical import?

4. Intrinsic and Derived Intentionality

Searle insists (Searle 1990g, p. 639) that his notion of intrinsic intentionality is meant to contrast with "what I will call observer relative ascriptions of intentionality, which are ways people have of speaking about entities figuring in our activities but lacking intrinsic intentionality" (Searle 1980b, p. 451). Searle also tries to make a further distinction among observer relative attributions of intentionality between those such as "`Es regnet' means `it is raining'" (Searle 1984b, p. 4) which "are literal ascriptions of [derived] intentionality" (Searle 1984b, p. 5) "whose truth depends on the existence of some mental phenomenon" (Searle 1984b, p. 5) and "metaphorical ascription[s]" (Searle 1984b, p. 4) of as-if intentionality "which do not literally ascribe any intentionality at all, even though the point of the metaphorical ascription might depend on some intrinsic intentionality of human agents" (Searle 1984b, p. 5).

In these connections Searle (Searle 1984b, p. 3) offers the following examples:

A. Robert believes that Ronald Reagan is President.

B. Bill sees that it is snowing.

C. "Es regnet" means it's raining.

D. My car thermostat perceives changes in the engine temperature.

A and B, according to Searle, are literal ascriptions of intrinsic intentionality:

Both beliefs and visual experiences are intrinsic intentional phenomena in the minds/brains of agents; the ascription of these states and events is to be taken literally, not just as a manner of speaking, nor as shorthand for a statement describing some more complex set of events and relations going on outside the agents. (Searle 1984b, p. 4)

C, on the other hand,

literally ascribes intentionality, though the intentionality is not intrinsic to the sentence. That very sentence might have meant something else or nothing at all. To ascribe this form of intentionality to it is shorthand for some statement or statements to the effect that speakers of German use the sentence literally to mean one thing rather than another, and the intentionality of the sentence is derived from this more basic form of intentionality of speakers of German. (Searle 1984b, p. 4)

Finally, as for D,

In D, on the other hand, there is no literal ascription of intentionality at all because my car thermostat does not literally have any perceptions. D, unlike C, is a metaphorical ascription of intentionality; but, like C its point depends on some intrinsic intentionality of agents. We use car thermostats to regulate engine temperatures and therefore they must be able to respond to changes in temperature. Hence the metaphor; and hence its harmlessness, provided we don't confuse the analysis of A, B, and C, with that of D. (Searle 1984b, p. 4)

Now, considerations brought forward in the preceding chapter against Searle's attempt to style all ascriptions of intentional mental properties to machines metaphorical undercut the distinction Searle proposes between literal attributions of derived intentionality (as in C) and metaphorical attributions of as-if intentionality (as in D). But even if we disallow Searle's blanket dismissal of attributions of mental properties to computers as "metaphorical," he might still insist that the primary distinction, between intrinsic intentionality and observer-relative intentionality, remains. Why not grant that attributions of mental properties to computers literally attribute derived intentionality? Note that this "concession" -- if it were allowed -- would still enable us to argue on Searle's behalf along the following lines:

Intentional mental states are intrinsically intentional.
Computers' intentional states are derived, not intrinsic.
Therefore,
No computer has intentional mental states.

To make a case against AI, it would seem to be enough to classify C with D, as essentially distinct from A and B. Perhaps there is no need to further distinguish D from C as Searle tries to do.

In considering this line of attack against SAIP (and in defense of Searle), however, two cautions are in order against two shortcut ways with this objection that might make it appear easier than it is to make a case along these lines.

First, while computers clearly, in some sense, derive their intentionality from us, their programmers, builders, and users, it doesn't follow immediately from this that the intentionality of computers differs in kind from ours. I might catch the measles from you, yet we both have the same type of disease; intrinsic and derived are not two different strains of measles. Even if there were an essential difference between intrinsic and derived intentionality, it is not clear either that our intentionality is (wholly?) intrinsic, or that the artificial intentionality of computers is (wholly?) derived. We ourselves might (wholly or entirely) "catch" (the meaning of) our representational states. It seems not implausible to think (nor forlorn to hope) that some information-bearing states of us get invested with some meaning in homes and at school, by parents and teachers -- perhaps even in a way not essentially unlike the way we (the community) invest the states and operations of computers with meaning (to the extent that we do). On the other hand, the representational states of computers might have some intentionality not derived from us (their programmers, designers, and users), which would be intrinsic if "intrinsic" means "not derived externally from us, the relevant observers": and then again, as there are machine-written programs, computer-acquired data, programs that design hardware for particular applications, etc., it seems this "us" includes them.

Secondly, it won't do to say that "my pocket calculator adds and subtracts, but does not divide" (Searle 1980c, p. 407) is just shorthand for "I am able to use my calculator for addition and subtraction but not division" (Searle 1980c, p. 407), if this is supposed to mean that I, not the calculator, really do the calculating -- that I calculate with the calculator, but the calculator itself doesn't calculate. It can be true that my calculator can add and subtract independently of my knowing how to use it to do so (e.g., if I have lost the instructions); and the calculator can go on calculating while I sleep, or even if I die in the midst of its calculations. Nor will it do to conflate the way you use a calculator to calculate with the way you use a pencil, as this second would-be short way with the objection under consideration would do. I could use my pocket calculator in such a way that I remain the sole calculator -- e.g., by scratching out sums in the sand with it -- but when I use it to calculate in the usual way, when I use it as a calculator and not as a stylus, I don't just calculate with the calculator in this nondelegative way. Rather, the calculator calculates for me: it's less like using a pencil to do my calculating than like using an accountant: I needn't myself know how to add (or whatever), but only how to ask.

5. Derived Intentionality as Normative

Searle admonishes us to "Think hard ... what it would take to establish that hunk of metal on the wall over there had real beliefs ... with direction of fit, propositional content, and conditions of satisfaction" (Searle 1980a, p. 420). I will try.

Whether or not we are comfortable saying the thermostat perceives the engine temperature, it seems (states of) the thermostat represent the engine temperature -- much as "Es regnet" represents it to be raining. Moreover, just as assertions of "Es regnet" can misrepresent the weather (if you affirm it when it's not raining), so too can the thermostat misrepresent the engine temperature (when it's malfunctioning). This means that the thermostat's representation of the engine temperature -- some particular state of the thermostat's meaning the engine is too hot -- is not just a natural sign of the engine's temperature, having the sort of natural meaning (Grice's meaningN){4} which entails the existence of what it represents (as thunder means lightning, and smoke means fire). Rather, it shares the distinguishing characteristic of being potentially misrepresentative with "conventional" or non-natural meaning (Grice's meaningNN), with beliefs (e.g., that Ronald Reagan is President) and sentences (e.g., "Es regnet"). According to Grice (1957, p. 377), x meansN that y only if "x means that y" entails y -- as "The smoke from their chimney means there's a fire on their hearth" entails "There's a fire on their hearth." On the other hand, x meansNN that y if "x means that y" does not entail y -- as "`Es regnet' means it's raining" does not entail "It's raining."{5} Similarly -- switching from car thermostats to room thermostats -- "The curvature of the bimetallic element means the room is too hot" (on one reading) does not entail "The room is too hot." Call the kind of intentionality (corresponding to Grice's nonnatural meaning) normative intentionality. Obviously the derived intentionality of sentences (which can be falsely asserted), and of devices (even ones so simple as thermostats), can be normative.{6}
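Schematically -- in my notation, not Grice's, with "\models" read as "entails" -- the contrast is:

    \[
    x \text{ means}_{\mathrm{N}} \text{ that } y \;\Longrightarrow\; [(x \text{ means that } y) \models y]
    \]
    \[
    x \text{ means}_{\mathrm{NN}} \text{ that } y \;\Longrightarrow\; [(x \text{ means that } y) \not\models y]
    \]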

In Searle's terms, it may be observed that the thermostat's representation of the engine (or room) temperature, like the belief that Reagan is President or the assertion "Es regnet," has a representation-to-world direction of fit: natural significations (e.g., of fire by smoke), on the other hand, lack satisfaction conditions and directions of fit. And perhaps this is a more satisfactory mark of Grice's distinction -- since mental states and events such as knowledge and seeing, while they do (like natural significations) entail that what's known or seen is the case, also (unlike natural signs) have satisfaction conditions and directions of fit. It's just that (ascriptions of) seeing and knowing, unlike believing, "imply that the intentional phenomenon is satisfied" (Searle 1984b, p. 4). Note now -- considering a simple room thermostat with a manual set point and a bimetallic strip -- that thermostats (and a fortiori more sophisticated devices like computers) are capable of representations with world-to-representation directions of fit as well as representation-to-world directions of fit. In the case under consideration, the curvature of the bimetallic element represents the actual room temperature (with a representation-to-world direction of fit), while the position of the set point represents the sought room temperature (with a world-to-representation direction of fit). I conclude that whether or not we are willing to attribute beliefs "with the possibility of being strong beliefs or weak beliefs; nervous, anxious, or secure beliefs; dogmatic, rational, or superstitious beliefs"{7} to thermostats, we nonetheless seem able and willing (when not worried about saving our philosophical theses) to attribute to them representational states with satisfaction conditions and both world-to-representation and representation-to-world directions of fit. Similarly, we attribute both cognitive intentional states with mind-to-world direction of fit and conative intentional states with world-to-mind direction of fit to computers. When I say, "The computer detects the printer on line," I attribute to it a cognitive state with mind-to-world direction of fit, with the propositional content that the printer is on line, which is satisfied if the printer is on line. When I say, "The computer tried to initialize the printer" (when I sent it a print command but forgot to turn the printer on), I attribute to it a conative state with world-to-mind direction of fit, with the propositional content that the computer initialized the printer, which is satisfied if the computer succeeds in initializing the printer.
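To make the two directions of fit concrete, here is a minimal sketch -- a toy model of my own devising, not anything in Searle's text -- of a room thermostat with one representational state for each direction:

    # Toy thermostat: one representational state per direction of fit.
    class Thermostat:
        def __init__(self, set_point, bimetal_reading):
            self.set_point = set_point              # sought temperature: world-to-representation fit
            self.bimetal_reading = bimetal_reading  # registered temperature: representation-to-world fit

        def reading_satisfied(self, actual_temp):
            # Satisfied iff the reading matches the world; a mismatch is the
            # reading's "fault" -- a misrepresentation, to be corrected in the device.
            return self.bimetal_reading == actual_temp

        def set_point_satisfied(self, actual_temp):
            # Satisfied iff the world is brought to match the set point; a mismatch
            # calls for changing the world (heating or cooling), not the set point.
            return actual_temp == self.set_point

        def calls_for_heat(self):
            return self.bimetal_reading < self.set_point

    # E.g., set to 20 degrees in an 18-degree room: the reading is satisfied
    # (veridical), the set point is not, and the device acts to change the world.
    t = Thermostat(set_point=20.0, bimetal_reading=18.0)
    assert t.reading_satisfied(18.0) and not t.set_point_satisfied(18.0) and t.calls_for_heat()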

When I think hard about it, it seems that computers and even thermostats have normative representational states with propositional content, directions of fit, and conditions of satisfaction.

6. Derived Artificial Intentionality

If, "Intentionality is that property of ... states and events by which they are directed at or about objects and states of affairs" (Searle 1983, p. 1), then computers have intentionality. I count on there being states of (or events in) the computer at the Michigan State University Credit Union when it accesses data structures that represent the balances of funds in my accounts, e.g., in preparing my monthly statement that are about the amount of money in my account. People count on there being states of (or events in) the computer at their places of employment that represent the hours they've worked and their rates of pay. Deep Thought has and manipulates data structures that represent chess pieces and positions when it considers alternate continuations. And so on. And if it is advanced as a further distinguishing characteristic of the aboutness or representation relation that the intended or represented relata needn't even exist -- if it is noted that "I can have the [Intentional] state without the object or state of affairs that the Intentional state is directed at even existing at all" -- it is also noteworthy that the continuations Deep Thought considers don't exist (and most never will); the eventualities that get represented when a spreadsheet is used (to plan investment strategies, e.g.) don't yet exist (and most never will); etc. Computers, like us, it seems, can represent nonexistent and misrepresent existent objects and states of affairs.

We have already seen that the distinction Searle alleges between the actual but derived intentionality of linguistic signs and the merely metaphorical "as-if" intentionality of computational structures and operations breaks down: we are now seeing why it must -- unless the meaning (e.g., hours worked) of the data the data entry technician enters wondrously disappears in virtue of being typed into the computer and wondrously reappears when the computer prints out my paycheck. The genuine issue here seems not to be whether computers have intentional or representational states. It's whether the intentional states computers have are intrinsic, as Searle claims our intentional mental states are and the representational states of computers are not. Therein, according to Searle, lies the essential difference between us and computers.

There are even reasons internal to Searle's theory for abolishing the derived/as-if distinction among instances of "observer-relative" intentionality. Both attributions of "derived" and attributions of "as-if" intentionality are characterized by Searle as "shorthand for a statement describing some more complex set of events and relations going on outside the agents" (Searle 1984b, p. 4). In particular, both are supposed to be shorthand for statements describing how human beings use the marks or sounds (in the case of the derived intentionality of sentences and other linguistic constructions) or the devices (in the case of the "as-if" intentionality of thermostats, computers, and their ilk) for their own human purposes (Searle 1980c, p. 407). When I say, "My pocket calculator adds and subtracts but does not divide," as when I say, "The expression `il pleut' in French means `it is raining'," according to Searle, my assertions are "roughly speaking, shorthand for saying such things as that by convention people who speak French use the sentence `il pleut' to mean `it is raining' and that I am able to use my calculator for addition and subtraction but not division" (Searle 1980c, p. 407). Ultimately, on the account Searle proposes, "Such statements are in part shorthand statements for the intrinsic mental phenomena of French speakers or users of pocket calculators" (Searle 1980c, p. 407). Just as Searle would say that such meaning or intentionality as sentences and other linguistic expressions have is "in the minds of the observers (or users), not in the subject of the ascription" (Searle 1980c, p. 407), so also will he say that "such intentionality as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output" (Searle 1980a, p. 422).

Why not allow that computers have the intentionality they manifestly seem to have, and acknowledge that it's largely if not entirely derived from us, their designers, builders, programmers, and users, just as Searle allows that English sentences on the pages of books, and spoken language on the airwaves, have the meanings that we, their authors and understanders, give them? Of course, while we know that (some) books contain information, we are not at all tempted to say they know or recall the information they contain -- books don't do anything. They don't do anything to, or with, the representations they contain. Computers, on the other hand, do.{8}

7. Two Accounts of Objective Intrinsicality

Searle offers two different accounts of the objective intrinsicality of human (and higher animate) intentional states that he alleges as the essential difference between the real intentionality of humans and (certain higher) animals and the counterfeit, "as-if" (or, as we should now style it, "merely derivative") intentionality of computers. According to the first sort of account, to say that Karpov has intrinsic intentionality but Deep Thought doesn't is to say that when Karpov considers alternative continuations of play, his representational states and actions (some sort of brain states and processes, presumably) represent things (i.e., chess pieces and positions) to him, independently of their representing anything to anyone else; when Deep Thought considers alternate continuations of play, on the other hand, Deep Thought's representational states don't represent chess positions and pieces to Deep Thought at all, but only to its human observers, designers, and users. On this account, Searle contrasts "intrinsic" with "observer relative" (Searle 1980b, pp. 451-452; 1990g, p. 639). Here Searle also says he is not contrasting "intrinsic" with "relational"; and he maintains that thus understood (as contrasting with "observer relative," not "relational") "`intrinsic' ... is not a technical term" (Searle 1990g, p. 639). "On this usage," Searle explains,

the paper in front of me is intrinsically ink, but it is not intrinsically meaningful. Meaning is assigned to it by outside users and observers. (Searle 1990g, p. 639)

To hazard another "nontechnical" example of a nonsemantic observer-relative property: the little yellow flowers on the lawn are intrinsically dandelions, but not intrinsically weeds. Being a weed is "in the eye of the beholder" (Searle 1990g, p. 637) in a way (or to a degree) that being a dandelion is not.{9}

On the other sort of account of intrinsicality which Searle sometimes gives, as when he says "To say they are intrinsic is just to say that the states and events really exist ... in the agents" (Searle 1984, p. 4), however, it seems that "intrinsic" does contrast with "relational" and does bear roughly the technical philosophical sense Harman articulates. In this sense, "A feature of a thing is intrinsic to that thing if the thing has that feature by virtue of the way the thing is in itself and apart from its relations to other things" (Harman 1990, p. 607). Similarly, when Searle opposes Putnam's (1975) dictum that "Meaning [and as a consequence beliefs, desires, etc.] ain't in the head" with his own assertion that "Intentional states are entirely constituted by their Intentional content and their psychological mode, both of which are in the head" (Searle 1983, p. 208: my emphasis), once again "relational" seems the operative contrast, since Putnam's argument is designed to show that meaning is relational. Notice that being a dandelion (as opposed to being a weed) seems to depend entirely on the internal structural properties of the plant; so being a dandelion seems an intrinsic property in this second (nonrelational) sense also. Likewise, I suppose, for Searle's own example of the type being intrinsically ink: being ink depends entirely on the internal, chemical content of the type, and so being ink seems intrinsic in the nonrelational sense as well. How do we reconcile these accounts?

One thing that might be said here is this: while everything that's observer relative is relational, not everything relational is also observer relative. Being taller than average or being in the house, for instance, are relational without being observer relative. Thus when Searle maintains that Intentional content is "in the head" or "in the agents" he makes a stronger claim than when he claims that the Intentional content of the representational mental states of humans and (certain higher) animals is "not observer relative": it might not be observer relative and yet be relative to something else (e.g., to the actual referents to which it stands in certain causal relations).{10} Roughly, for our purposes, to say that intentional states are "intrinsic" in the stronger, nonrelational, sense is to say they're neither observer-relative (relative to the understanding or practices of other speakers or the linguistic community){11} nor relative to the nonhuman environment (in particular, relative to the actual nature of the referents). Secondly, it seems Searle is going to need to employ "intrinsic" in this stronger sense (which opposes it to "relational" and not just "observer relative") to maintain his "monist interactionist" account of intrinsic intentional mental phenomena. According to this account such mental phenomena as beliefs and desires are "caused by the behaviour of certain elements in the brain" and also "realized in the structure made of these elements" (Searle 1984a, p. 28). As it is Searle's positive account of intentionality that the discussion in this chapter is leading up to, and as that positive account seems committed to this stronger understanding of "intrinsic" (as nonrelational), it seems (his reply to Harman notwithstanding) we need to hold Searle to this stronger account. "Intrinsic" means "nonrelational."

The scientific or methodological purport of this technical philosophical notion of intrinsicality is associated with mechanism: the requirement of intrinsicality seems designed to be a kind of mechanistic constraint on any acceptable account of intentionality. As applied to intentional mental features the requirement is that the features should be determined by, or supervene{12} on, the internal states of the nervous system. This assumption of methodological solipsism (cf., Putnam 1975) is one both Searle and his functionalist and cognitivist adversaries (see, e.g., Fodor 1980) regard as a requirement on an acceptable (scientific) theory of meaning or (mental or linguistic) representation: cognitivists suppose that intentionality or meaning supervenes on the computational states of the brain, and Searle insists that intentionality or meaning supervenes on noncomputational properties or states of the nervous system (on its specific chemical properties Searle suggests) instead or besides. Note this requirement is not a specifically materialistic constraint. It's possible to hold that intentionality supervenes on the mental states of individuals and that mental states are immaterial and yet be a methodological solipsist, as Descartes was. Conversely, one can deny that intentionality supervenes on the internal physical or material states of individuals, and at the same time (materialistically) hold that it supervenes on physical or material states (including states external to the individual), as Putnam does. Methodological solipsism (MS) expresses a mechanistic requirement not a materialistic one{13}: to the internalist proponent of MS it may seem that externalist refusals of this requirement (e.g., Putnam's) countenance action at a distance{14}. Yet despite the enviable record of mechanism as a research commitment in biology, methodological solipsist commitments, nowadays, seem very questionable in semantics.

8. Putnam Against Intrinsic Intentionality

Hilary Putnam argues, on the basis of his famous Twin Earth thought experiment and related examples, that meanings "just ain't in the head" (Putnam 1975, p. 227). He argues thus in explicit opposition to what he describes as the "assumption of methodological solipsism" of "traditional philosophers" (Putnam 1975, p. 220). "This assumption" of methodological solipsism, as Putnam styles it, "is the assumption that no psychological state, properly so called, presupposes the existence of any individual other than the subject to whom that state is ascribed" (Putnam 1975, p. 220). In terms of the preceding discussion, methodological solipsism is the assumption that psychological states are nonrelational or intrinsic (in the strong sense) -- an assumption, as we have just seen, that Searle shares. It's to Putnam's contention that meanings and consequently meaningful or intentional mental states "aren't in the head" (Putnam 1988, p. 72) that Searle explicitly opposes his claim that "all intentional [mental] states are entirely constituted by their Intentional content and psychological mode both of which are in the head" (Searle 1983, p. 208). If Putnam is right, it seems, there is no hope for Searle's attempt to distinguish the representational states or feats of computers from those of humans on the basis of the objective intrinsicality of the latter. If Putnam is right, our intentional states aren't intrinsic to us either. Moreover, if Putnam's examples and supporting arguments do show that meanings aren't "in the head" (in the sense of being determined by or supervening on what is in the head), and if being a natural kind is being a kind whose identity conditions are microstructural{15}, then minds, or intentional mental states, are not plausible candidates for being natural kinds. This may undercut any attempt to provide scientific warrant for dismissing mental attributions to computers, and not just Searle's.

Here is the most famous of Putnam's examples. Imagine there is a chemical compound XYZ, a colorless, tasteless, thirst-quenching liquid that falls from the skies and fills the lakes and oceans of Twin Earth, a faraway planet identical to our Earth in every respect but one: the colorless, tasteless, thirst-quenching liquid that fills Twin Earth's rivers, etc., is not H2O but XYZ (a different chemical). In particular, imagine that I have a twin on Twin Earth whose brain (or mind) is in the same state as mine -- and we both sincerely affirm (I in English, he in Twin English), "Water is wet." But we have different beliefs: I believe H2O is wet, and he believes XYZ is wet. So, beliefs aren't In the head. The moral is that just as the need to add (the semantic element of) truth to belief for it to be knowledge is fatal to the idea that knowledge is In the head, so is the need to add (the semantic element of) reference to whatever is in the head to make it the belief it is -- whether about XYZ or H2O -- fatal to the idea that belief is In the head. And since it's generally true for all the propositional attitudes or intentional mental states that part of what makes them what they are is what they're about, i.e., their referents, this point generalizes to all intentional mental states: they're not In the head. They're not in the head in the same sense that knowledge isn't in the head: what is In the head -- the neurophysiological properties of the believer (wanter, hoper, etc.) -- does not suffice to determine what the belief is about (which makes it the belief it is), because facts about things besides the physiology of believers (e.g., their environments or environmental histories) -- whether there's H2O or XYZ hereabouts -- are necessary conditions for their having the beliefs (about H2O or about XYZ) that they have.{16}

Putnam summarizes the preceding attack on methodological solipsism as showing that meaning (what determines reference or extension) is not wholly determined by what's in the head "because extension is, in part, determined indexically." By this he means, "The extension of our terms depends upon the actual nature of the particular things that serve as paradigms" (Putnam 1975, p. 245) -- on Earth paradigmatic water (what is rightly called "water" in English) is H2O, and on Twin Earth paradigmatic water (what is rightly called "water" in Twin English) is XYZ. If this is correct then intentional mental states are not intrinsic (nonrelational): what I believe when I believe water is wet depends on environmental (extramental or extracranial) facts about the hidden structure of water; what I want when I want a cat on the mat depends on the hidden structure of cats; etc.

In addition to such environmental or indexical determinants, Putnam also maintains that "extension is, in general, determined socially." The actual nature of the paradigms "is not, in general, fully known to the speaker" (or the thinker), either because it hasn't yet been discovered (and so is not known to any speaker), or because it may simply not be known to me though others know it full well. Even if I myself "cannot tell an elm from a beech tree," still, "the extension of `elm' in my idiolect is the same as the extension of `elm' in anyone else's, viz. the set of all elm trees" (Putnam 1975, p. 226), because the term "is subject to the division of linguistic labor" (Putnam 1975, p. 229), in accordance with which "the average speaker who acquires it [the meaning of the term] does not acquire anything that fixes its extension" (Putnam 1975, p. 229). Rather, "it is only the sociolinguistic state of the collective linguistic body that fixes the extension" (Putnam 1975, p. 229). If Putnam is also right about this social determination of meaning, it seems that intentional mental states will fail to be intrinsic even in the weak sense of not being observer relative, since their content (at least insofar as it is linguistically mediated) will depend on the psychological states of others besides myself (the speaker or thinker). Thus, the content of my thought that elms are deciduous is not wholly determined by me or what's in my head, but is partly determined by others (especially botanists) and what's in their heads.

9. Searle Against Putnam Concerning "Observer-relativity"

Searle responds to Putnam's attack, among other externalist attacks, on what Putnam calls the "assumption of methodological solipsism," an assumption at the heart of Searle's research program. It's a program Searle styles a kind of "`Fregean' ... biological naturalism" (Searle 1983, p. 230), according to which meaning "is some mental state in the head of the speaker -- the mental state of grasping a concept or simply having a certain Intentional content" (Searle 1983, p. 198). It is a research program which Searle himself admits "Frege would have found utterly foreign [in its biological naturalism]" (Searle 1983, p. 230), and which he admits to be one that at the time of his writing (1983) "the most influential theories of meaning and reference reject[ed]" (Searle 1983, p. 198) -- one that at the time of this writing (1993) the most influential theories of meaning and reference still reject. These "most influential theories of meaning and reference," by Searle's account:

suggest a picture of reference and meaning in which the speaker's internal Intentional content is insufficient to determine what he is referring to, either in his thoughts or in his utterances. They share the view that in order to account for the relations between words and the world we need to introduce (for some?, for all? cases) external, contextual, nonconceptual, causal relations between the utterance of expressions and the features of the world that the utterance is about. (Searle 1983, p. 199)

"If these views are correct," Searle acknowledges -- if "the various arguments about the causal theory of reference [have] shown that these mental entities `in the head' are insufficient to show how language and mind refer to things" (Searle 1983 p. 14) "the account I have given of Intentionality must be mistaken" (Searle 1983, p. 199).

Searle characterizes "Putnam's strategy" as follows:

to try to construct intuitively plausible cases where the same psychological state will determine different extensions. If type-identical psychological states can determine different extensions, then there must be more to the determination of extension than psychological states, and the traditional [internalist] view is, therefore, false. (Searle 1983, p. 200)

Then, Searle notes, "Putnam offers two independent arguments to show how the same psychological state can determine different extensions" (Searle 1983, p. 201). The first argument, or line of argument, invokes what Putnam calls "the linguistic division of labor," according to which some members of any linguistic community "have more expertise in applying certain terms than others" (Searle 1983, p. 201), and what determines the meaning and fixes the reference of words -- even in my idiolect! -- is as much (or more) their expertise as mine. Thus, according to Searle's exposition of Putnam's example:

insofar as there is any concept attaching to the words "beech" and "elm" for me, they are pretty much the same concept. In both cases I have the concept of a big deciduous tree growing in the Eastern part of the United States. Therefore, according to Putnam, in my idiolect the concept or "intension" is the same, but the extension is clearly different. "Beech" denotes beech trees and "elm" denotes elm trees: same psychological state, different extensions. (Searle 1983, p. 201)

Searle's first response to this argument is just to affirm "that the theory that meaning determines reference can hardly be refuted by considering cases of speakers who don't even know the meaning or know it only imperfectly" (Searle 1983, p. 201). This response is puzzling on its face, and when unpuzzled, I think, quite lame.

Searle's response is puzzling from the very outset: Putnam's line is not to deny that meaning determines reference -- he explicitly accepts this! In fact he uses it as a premise in arguing that meanings aren't in the head. Putnam's argument is that meaning determines reference, but what's in the head doesn't determine reference, so meanings aren't in the head. Then, hard upon misstating Putnam's views, Searle seems to misstate his own: he seems to forsake his own theory to respond to Putnam not as a "Fregean" biological naturalist should, but as a genuine Platonistic, antipsychologistic (hence antibiologistic) Fregean, noting, "As traditionally conceived, an intension or Fregean Sinn is an abstract entity which may be more or less imperfectly grasped by individual speakers" (Searle 1983, p. 201). And now his complaint against Putnam's "elm" and "beech" example is that "it does not show that intension does not determine extension to show that some speaker might not have grasped the intension, or grasped it only imperfectly; for such a speaker hasn't got the relevant extension either" (Searle 1983, p. 201). But the whole point of the fable is that Putnam, despite not knowing the difference between a beech and an elm, has got the relevant extensions -- by "beech" he refers to beeches (the same trees the experts who have fully "grasped the intension" call "beeches"), and by "elm" he refers to elms (the same trees the experts call "elms"). And since Putnam's got the extension -- since meaning or sense determines reference (for both Searle and Putnam) -- it's evidently not the case (or needs to be shown by Searle, if his reply is to be effective) that Putnam "does not know the meaning of the word [`elm' or `beech']" (Searle 1983, p. 201).

I will try to unpuzzle this. The two curious features of Searle's response -- 1) its misrepresentation of Putnam's views as opposed to the idea that reference is determined by meaning (intension), when what Putnam actually opposes is the view that reference is "determined by a psychological state" (Putnam 1975, p. 222 [my emphasis]), and 2) its misrepresentation of Searle's own view (wherein it differs from Frege's) -- seem related. If we understand Putnam as arguing against the idea that Fregean senses (abstract entities, not psychological states) determine reference, then Putnam's argument is indeed so far beside the point that "any defender of the traditional view would not be worried by this argument" (Searle 1983, p. 201). Indeed, since Fregean senses aren't in the head either, it will not argue against the claim that Fregean sense determines reference to show that what's in the head doesn't suffice to determine reference; though it will still argue against the claim that the individual speaker's more or less imperfect psychological grasp of a Fregean sense determines reference in the individual speaker's idiolect (if psychological grasp is supposed to be an internal state of the speaker).

Now we are in a position to illuminate Searle's curious contention that another way of putting the point that "The thesis that meaning determines reference can hardly be refuted by considering cases of speakers who don't even know the meaning" (Searle 1983, p. 201) is to say "the notions of intension and extension are not defined relative to idiolects" (Searle 1983, p. 201). This is true enough of Fregean Sinn as the determinant of public extension, but it is not true at all of Searlean intensions -- for Searle's way with Frege is, in effect, to eliminate the abstract sense grasped while retaining the individual psychological state or act or way of "grasping" as determinative of extension (Searle 1983, p. 197f). Perhaps no "defender of the traditional [Fregean] view would be worried by this argument,"{17} but a defender of Searle's "Fregean" view certainly should be worried by this argument. On Searle's view the intensions that determine extension are the psychological states of individual speakers; and they determine extension, in the first instance, just in that individual speaker's idiolect. Other speakers' references, in their idiolects, are likewise, on Searle's account, determined by their individual psychological states. Thus on Searle's account intension and extension are defined, in the first instance, "relative to idiolects" and then derivatively (insofar as psychological states of various individual speakers are type identical, and thus determine identical extensions) are defined relative to dialects and languages.

I really don't believe any proponent of the Putnamian attack on anything like the preceding Searlean picture would be worried by Searle's defense of an attenuated{18} version of the traditional Fregean view against such attack.

Searle also attempts a second reply to the "beech"/"elm" counterexample; but this second argument -- that it is incoherent for Putnam to assume both (1) that his concept answering to "elm" and his concept answering to "beech" are identical and (2) that the extension of "elm" in his idiolect is not identical with the extension of "beech" in his idiolect -- fares no better than the first. The incoherence alleged is that Putnam only knows (2) to be true because he knows (3) "that elms and beeches are two different species of trees"; but (3) "states conceptual knowledge" if anything does; so Putnam's concept answering to "elm" is not identical to his concept answering to "beech" after all, which contradicts his original assumption. This fails for three reasons.

First, it is not clear that "if such knowledge [as (3) states] is not conceptual knowledge, nothing is" (Searle 1983, p. 202). It's not conceptual knowledge but empirical knowledge that beeches are not elms and hence that "beech" and "elm" are not codesignative expressions -- unless my knowledge that renates are all cordates (which seems like nonconceptual knowledge, if anything is) is also conceptual knowledge. The objection to Searle here is that unless informed by empirical knowledge the principle he seems to invoke is going to produce wrong concepts. There seems as much conceptual warrant a priori for saying filberts are not hazels as for saying elms are not beeches; but filberts are hazels!

Second, if the concept answering to "elm" includes the knowledge that elms are not beeches, it seems it will also have to include the knowledge that elms are not oaks, or hazels, etc.; it will have to enumerate all the other deciduous trees that elms are not, and this will not be a tractable concept whose psychological deployment might plausibly be thought to explain how I determine what "elm" refers to. And this need for the concept to include an enumeration of all the deciduous trees that elms are not will be only the tip of the unwieldy iceberg the concept now becomes; for it was only because the genus "deciduous tree" eliminated ships and shoes and sealing wax at one fell swoop that our concept of an elm didn't also have to include not being a frigate and not being a penny loafer. But now the same trouble is going to repeat itself concerning the concepts answering to "deciduous" and "tree"; i.e., the concept "tree" is going to have to include not being a lichen or a fungus, etc.

Third, even if there were some principled way of stopping the envisaged conceptual exclusion process short of having every concept explicitly exclude every other species of each genus occurring in its analysis -- allowing that my concept answering to "elm" need only explicitly exclude beeches, and my concept answering to "beech" need only explicitly exclude elms -- look what we have. My concept answering to "elm" will now be that of a big deciduous tree growing in the Eastern U.S. that isn't a beech, and my concept answering to "beech" will now be that of a big deciduous tree growing in the Eastern U.S. that isn't an elm. This is not going to help me distinguish elms from beeches any better than before; nor do these definitions seem to embody additional conceptual knowledge beyond that included in the initial concept of a big deciduous tree growing in the Eastern U.S. It seems the difference between the augmented definitions and these originals (in my idiolect) will be only notational and not at all notional; so, contrary to the incoherence Searle alleges against Putnam, Putnam's augmented concepts answering to "beech" and "elm" remain (notionally) the same after all.

10. Searle Against Putnam on Causal Indexical Dependence

As for Putnam's second, more influential, Twin Earth example, the problem it poses is that Oscar's and twin Oscar's type-identical physiological states determine different extensions of "water." When Oscar says "water" or thinks about what he means by "water," he means (or thinks about) H2O; when twin Oscar says "water" or thinks about what he means by "water," he means (or thinks about) XYZ. This is contrary to the assumption of methodological solipsism, which holds that meaning is wholly determined by, or supervenient on, internal or psychological states of the speaker or thinker.
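The internalist thesis at stake can be put schematically. (This formalization is mine, not Putnam's or Searle's; "Int" and "Ext" are merely convenient labels.) Writing Int(S) for the total internal -- physiological or psychological -- state of a speaker S, and Ext_S(w) for the extension of a word w in S's idiolect, methodological solipsism requires, for every word w:

\[
\forall S\,\forall S'\;\bigl[\,\mathrm{Int}(S)=\mathrm{Int}(S')\ \rightarrow\ \mathrm{Ext}_{S}(w)=\mathrm{Ext}_{S'}(w)\,\bigr]
\]

Twin Earth is the counterexample: by hypothesis,

\[
\mathrm{Int}(\mathit{Oscar})=\mathrm{Int}(\mathit{twin\ Oscar}),\qquad
\mathrm{Ext}_{\mathit{Oscar}}(\text{"water"})=\mathrm{H_2O}\ \neq\ \mathrm{XYZ}=\mathrm{Ext}_{\mathit{twin\ Oscar}}(\text{"water"})
\]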

Searle responds as follows:

On the account of intentionality presented in this book the answer to that problem is simple. Though they have type-identical visual experiences in the situation where "water" is for each indexically identified, they do not have type identical Intentional contents. On the contrary their Intentional contents can be different because each Intentional content is causally self referential.... The indexical definitions given by Jones on earth of "water" can be analyzed as follows: "water" is defined indexically as whatever is identical in structure with the stuff causing this visual experience, whatever that structure is. And the analysis for twin Jones on twin earth is: "water" is defined indexically as whatever is identical in structure with the stuff causing this visual experience, whatever that structure is. Thus, in each case, we have type-identical experiences, type-identical utterances, but in fact in each case something different is meant. That is, in each case the conditions of satisfaction established by the mental content (in the head) is different because of the causal self-referentiality of perceptual experiences. (Searle 1983, pp. 207-208)

While on Putnam's account,

My `ostensive definition' of `water' has the following empirical presupposition: that the body of liquid I am pointing to bears a certain sameness relation (say x is the same liquid as y, or x is the same_L as y) to most of the stuff that I and other members of my linguistic community have on other occasions called `water'. (Putnam 1975, p. 225)

On Searle's account, Putnam's ostensive or indexical definition's reference to "most of the stuff that I and others of my linguistic community have on other occasions called `water'" is replaced by "the stuff causing this visual experience." On Putnam's account the difference in the extension of "water" as Oscar means it (referring to H2O) and as twin Oscar means it (referring to XYZ) is determined by differences in their communities and environments. On Searle's account the difference is determined by the numerical difference between Jones's and twin Jones's visual experiences. Searle's account does have the consequence that "in making indexical definitions, different speakers [e.g., Jones and twin Jones] can mean something different [as Jones means H2O and twin Jones XYZ] because their Intentional contents are self-referential to the token Intentional experiences" (Searle 1983, p. 208). But there is a numerical difference between Jones's and Smith's type-identical visual experiences of water on this earth too: the worry that arises -- and that Searle anticipates -- is that his account will "have the consequence that different speakers on earth must mean something different by `water'" (Searle 1983, p. 208). Perhaps Smith and Jones are having type-identical visual experiences, but Smith is viewing a glass of gin and Jones a glass of water. Smith's visual experience might even be an hallucination. So in Jones's idiolect, "water" refers to water if water causes Jones's indexically identified experience; and in Smith's idiolect "water" refers to gin if gin causes Smith's experience, or to thin air (or the LSD Smith took?) if Smith is having an hallucination.

The difference between Putnam's and Searle's accounts can be brought out by noting that on Putnam's account Smith has not succeeded in ostensively defining "water" when he points at the glass of gin, because gin doesn't bear the same_L relation "to most of the stuff that I and other speakers in my linguistic community have called `water'": the empirical presupposition is not satisfied. On Searle's account, Smith must succeed in ostensively defining "water" when he points at the glass of gin, or even if he points at thin air (if he's having an hallucination), because the empirical presupposition has dropped out or (equally troublesomely) become completely subjective -- gin or thin air (or the LSD?) is the cause of Smith's visual experience. Furthermore, a Searlean indexical definition is one that no one can give to anyone else -- since no one else is having this particular visual experience of Smith's, how is anyone else supposed to grasp the indexical definition of "water" as whatever bears the relation same_L to whatever causes this experience?
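The two definitional schemata can be set side by side. (This is my schematic paraphrase of the passages quoted above, writing same_L for Putnam's sameness-of-liquid relation and same_structure for Searle's "identical in structure.") For a speaker S:

\[
\text{(Putnam)}\quad \text{"water" applies to } x \ \leftrightarrow\ \mathit{same}_{L}\bigl(x,\ \text{most of what } S\text{'s community has called "water"}\bigr)
\]

\[
\text{(Searle)}\quad \text{"water" applies to } x \ \leftrightarrow\ \mathit{same}_{\mathrm{structure}}\bigl(x,\ \text{the cause of } \mathit{this}\ \mathrm{Vis\_exp}\bigr)
\]

Putnam's right-hand side mentions the community and its past usage, so the definition can fail when its empirical presupposition fails; Searle's mentions only a token experience no one else can have, so Smith's pointing at gin (or at thin air) still yields a "successful" definition, fixing "water" on whatever caused his experience.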

Searle's attempt to head off this last difficulty is, again -- as were his attempts to answer the "beech"/"elm" counterexample -- odd on its face, and inadequate on close consideration.

Searle's first response to this difficulty is to note:

Most people do not go around baptizing natural kinds; they just intend to use the words to mean and refer to whatever the community at large, including the experts, use the words to mean and refer to. (Searle 1983, p. 208)

This response -- if this were all the response Searle were to give -- just seems to concede Putnam's account of the role of the linguistic division of labor and Putnam's conclusion that what I mean is not fully determined by what's in my head, but depends on what's in the head of other speakers (particularly the experts). If this response is supposed to be part of a defense against Putnam's externalist conclusions (and not the surrender it seems), the weight of that defense must rest on Searle's discussion of the cases "when there are such public baptisms" (Searle 1983, p. 208), in which case, Searle maintains, such baptisms "would normally involve on the part of the participants shared visual and other experiences" (Searle 1983, p. 208). The trouble here is how to conjure up public baptisms conferring shared reference out of private baptisms within one individual's experience. What Searle calls "shared visual and other experiences" are supposed to do this.

Searle gives the following example of a "shared visual experience":

Suppose, for example, you and I are both looking at the same object, e.g., a painting, and discussing it. Now, from my point of view, I am not just seeing a painting, rather I am seeing it as part of our seeing it. And the shared aspect of the experience involves something more than just that I believe that you and I are seeing the same thing; but the seeing itself must make reference to that belief, since if the belief is false then something in the content of my experience is unsatisfied: I am not seeing what I took myself to be seeing. (Searle 1983, p. 71)

How is the experience "shared"? I have my visual experience of the painting and you have yours; and indeed the real sharedness of the experience does involve something more than just that I believe that you and I are seeing the same thing -- it involves there being a you, and your seeing, and there being a painting we both see. It seems Searle's account of "indexical definition" makes essential appeal to both the referent and other members of the linguistic community -- the two things his account was supposed to enable us to do without.

The crucial thing here -- what's supposed to conjure up public agreement out of my (our?) private ostension -- is that "the content of my visual experience makes reference to the content of a belief about what you are seeing," i.e., to the belief that "there is a particular painting that you are seeing that I am seeing too" (Searle 1983, p. 71). What's crucial is 1) that your existence and the object's existence are not entailed by their being made reference to in the content of my visual experience (consistent with methodological solipsism), and 2) that whatever's in your head determines the reference of your visual experience to the very same object to which what's in my head determines the reference of my visual experience. The incoherence of Searle's attempt to meet both these demands simultaneously does little to dispel the thought that it's impossible to do so.

After admitting, "I am not sure how, or even if, the various complexities [of shared experience] can be represented in the notation we have been using so far," Searle offers the following "analysis" of one "very simple sort of case ... where the content of my visual experience makes reference to the content of a belief about what you are seeing" (Searle 1983, p. 71):

An example, stated in ordinary English, would be a case in which "I believe there is a particular painting that you are seeing and I am seeing it too". Here the "it" within the scope of the "see" is within the scope of the quantifier which in turn is within the scope of the "believe," even though the "see" is not within the scope of the "believe". The sentence doesn't say that I believe I see it, it says I see it. (Searle 1983, p. 71)

"Using square brackets for the scope of the Intentional verbs and round brackets for the quantifiers and allowing the two to cross over" (1983, p. 71: my emphasis) Searle gives us the following (sic):

Bel[(E!x)(you are seeing x] & Vis_exp[µx and the fact that µx is causing this Vis_exp]) (Searle 1983, p. 71)

Besides the incoherence of "allowing the two to cross over," the analysis is contrary to Searle's expressed view that "the content of the visual experience [that there is a yellow station wagon there], like the content of the belief [that there is a yellow station wagon there], is always equivalent to a whole proposition" (Searle 1983, p. 40). The conjunctive content clause of "Vis_exp" here is not a sentence, since neither of the conjuncts is a sentence: neither aspect-denoting phrases such as "µx," nor "the fact that ..." phrases, are sentences.{19}

Perhaps the second difficulty, the nonpropositional content of the "Vis_exp" clause, is due to carelessness. Perhaps the content of "Vis_exp" can be more carefully expressed as a sentence if we can rewrite, e.g., "Vis_exp^i[the yellow left hand side aspect of the station wagon & the fact that the yellow left hand side aspect is causing this visual experience]" as "Vis_exp^i[here is the yellow left hand side of the station wagon & this yellow left hand side of the station wagon is causing my visual experience]". An objection to this (as a reconstruction of Searle's intended meaning) would be that it identifies the aspect with the intentional object, contrary to Searle's insistence that "the aspect ... is not itself the intentional object"; and it squares ill with the idea that the famous duck-rabbit picture has two different aspects, with different causal powers (to make us see it as a duck or a rabbit) under its different aspects! (Searle 1983, p. 52){20} But I will pass over this. It is the ill-formedness of Searle's analysis that most merits closer attention in connection with the problem of getting public meanings out of private visual experiences.

I suggest the following as well-formed expressions of what Searle is trying to say. Assuming an analysis of "µx" as having sentences for its substitution instances, as just suggested, perhaps Searle intends his analysis to mean

Bel^i[(∃x)(you are seeing x & x = y)] & Vis_exp^i[µy & I am having this Vis_exp because µy]

Or maybe he intends it to mean

Bel^i[(∃x)(you are seeing x)] & Vis_exp^i[µy & I'm having this Vis_exp because µy & x = y]

Or -- so long as Searle is allowing us to visually experience causal relations -- why not just allow that I visually experience your seeing too, and render it:

Vis_exp^i[µx & I'm having this Vis_exp because µx & you are seeing x]

It really doesn't matter which of these renderings we choose: perhaps it is a corollary of Searle's doctrine that "Intentional states do not neatly individuate"{21} (Searle 1983, p. 21) that there is nothing to choose between them. The trouble in each of these cases -- with regard to Searle's project of deriving public meanings from private experiences -- is the same. In order to get public determination of reference from the content of my visual experience it is not enough that I believe that you see what I see, or that I visually experience that you are seeing. "Believe," unlike "know," is not a factive verb (Bel^i[p] does not entail p); and the same goes for "visually experience" as opposed to "see." Just as my believing I used to share my toys with my brother (unlike my knowing it) doesn't entail that I did share them, neither does my belief that you are seeing x and x = y (where y is what I'm visually experiencing) entail that we are sharing a visual experience. Neither does my visually experiencing (unlike my seeing) that you are seeing this x (which I'm visually experiencing) entail that we are sharing a visual experience. So long as "you see x" and "x = y" are in the scope of the nonfactive "Bel" or "Vis_exp" operators, Searle's analysis fails to establish any "fact that you and I are having a shared visual experience" (Searle 1983, p. 71) as required for the indexical definition of water as "whatever is identical in structure with the stuff causing this visual experience" (Searle 1983, p. 208): it fails to establish a communicable sense that publicly determines the extension of "water" in our dialect and not just in my idiolect. Searle's careless use of (the factive verb) "see" in place of (the nonfactive verb phrase) "visually experience" -- as when he remarks that "in the above formulation I see it under the aspect µ, I assume you see the same object, but I don't have to assume you see it under µ" (Searle 1983, p. 71) -- is carelessness concerning exactly the crucial point of the object's or your experience's actual existence. The grammatical ill-formedness of Searle's would-be analysis, it seems to me, similarly obscures this crucial point.
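The factivity point can be put in the notation of these reconstructions (the summary formulation is mine, not Searle's), writing ⊨ for entailment:

\[
\mathrm{Know}^{i}[p]\models p,\qquad \mathrm{See}^{i}[p]\models p,\qquad
\mathrm{Bel}^{i}[p]\not\models p,\qquad \mathrm{Vis\_exp}^{i}[p]\not\models p
\]

and hence, in particular,

\[
\mathrm{Bel}^{i}[\,\text{you are seeing } x \wedge x = y\,]\ \not\models\ \text{you are seeing } y
\]

Nothing within the scope of the nonfactive operators can yield the fact of joint seeing that a public indexical definition requires.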

11. The Relevance of Putnam's Critique of Internalism

Perhaps one might think to answer, on Searle's behalf, that these criticisms concerning the inability of anything like Searle's proposed indexical definitions to determine public meanings capable of fixing extensions in shared dialects are beside the point. One might think that restriction to private meanings in idiolects is not objectionable when it comes to the meaning of mental states of individuals. Similarly, I suppose, one might object that Putnam's argument that linguistic meanings aren't in the head is simply irrelevant to the issue of whether psychological meanings (i.e., the contents of beliefs, desires, etc.) are in the head, and it is after all this that is at issue when we are discussing whether the mental properties or representational states of computers are intrinsic like ours. Such a sanguine response to the preceding criticisms seems ill-advised for two reasons.

First, while it seems probable that some of our intentional mental states and processes (e.g., thirst) are not linguistically mediated, other mental states and processes (e.g., wondering whether Searle has produced a sound argument against the thesis of artificial intelligence) plainly do seem language dependent and linguistically mediated. Perhaps we don't always think by mentally tokening natural language expressions (perhaps we sometimes think in mentalese, mental pictures, or what have you), but surely we sometimes think in our natural languages.{22} And when we do, if we really think (despite the non-intrinsic, causally and communally derived meaning of the symbols we manipulate), it bodes ill for the attempt to deny that computers really think because of the causal and communal sources of the meanings of the symbols they manipulate -- especially if the place of computers in the causal nexus of the linguistic community and its environment is essentially not unlike ours.{23} Surely when we think discursively in English (whether out loud, or on paper, or sotto voce, in private soliloquy), we really think. I should think that, if anything is paradigmatically thinking, such discursive thought is at least as paradigmatic as thirst (Searle 1989a, p. 707). With specific reference to Searle's Chinese room example, suppose we accept the following proposal of Putnam:

We shall speak of someone as having acquired the word `tiger' if he is able to use it in such a way that (1) his use passes muster (i.e., people don't say of him such things as `he doesn't know what a tiger is', `he doesn't know the meaning of the word "tiger"', etc.); and (2) his total way of being situated in the world and his linguistic community is such that the socially determined extension of the word `tiger' in his idiolect is the set of tigers. (Putnam 1975, p. 247)

Then (1) is true ex hypothesi: as directed against Turing's test, Searle's example presumes that the would-be understander -- a computer or the Chinese room -- responds exactly as if it understands. Then (2), contrary to Searle, seems virtually guaranteed by the fact that the symbols computers process do, by and large, derive their meanings from us, their designers, users, and programmers.

Second, in the absence of the biological account of our intrinsic intentionality Searle believes must be forthcoming (but doesn't bring forth), it seems the main scientific reason for positing intrinsic intentionality, on Searle's view, is the necessity of positing it in order to explain the derived intentionality of public representations such as natural language expressions. If there's little hope of Searle's account providing such an explanation of the derived intentionality of public language, then there is little scientific point to Searle's invocation of "intrinsic intentionality."

12. Diehard Internalism

The foregoing discussion shows that Searle's reply to Putnam fails. I will not address the question of the possibility of some such account, consistent with methodological solipsist (or mechanist) strictures, succeeding.{24} I believe that Putnam's argument shows no such internalist account as Searle's "biological naturalism" proposes can succeed; but I will not rest my case on just this generic objection. Putnam's general arguments make little impression on someone utterly convinced (as many besides Searle are) that it has to be the case that "meanings are precisely in the head" (Searle 1983, p. 200) because "there is nowhere else for them to be" (Searle 1983, p. 200). Many still share Searle's "basic assumption" that

The brain is all we have for the purpose of representing the world to ourselves and everything we can use must be inside the brain. Each of our beliefs must be possible for a being who is a brain in a vat because each of us is precisely a brain in a vat; the vat is a skull and the `messages' coming in are coming in by way of impacts on the nervous system. (Searle 1983, p. 230)

It has seemed, and still seems, to many besides Searle that "there is nowhere else for them to be" if meanings are to be scientifically accountable; that there's nowhere else for them to be if there is to be any hope of squaring folk psychological explanations of actions, invoking intentional states of mind (e.g., beliefs and desires) of agents, with explanations of bodily movements in terms of the neurophysiological causal chains giving rise to them. The thinking is that the states and processes folk psychological explanations invoke have to be intrinsic to us, or In our nervous systems, if "the question of the relation of mental states to brain states" (Churchland 1988, p. 61) or the question "of how an old theory (folk psychology) is going to be related to a new theory (matured neuroscience) which threatens to displace it" (Churchland 1988, p. 61) is not going to yield the conclusion that folk psychological explanation and the attribution of mental properties therein "is simply too confused and inaccurate to win survival through intertheoretic reduction" (Churchland 1988, p. 61) or (if not by reduction) by supervenience. If mechanical explanation does require that whatever features of things it explains be In, or intrinsic to, the mechanism that has the features to be explained, and if mechanical explanation is the only scientific game in town, then any scientifically adequate account of intentional mental states must regard them as intrinsic to the things (e.g., organisms or their nervous systems) that have them. Thus both cognitivists such as Fodor and eliminativists such as Stich (1983) have supposed that only mechanistic accountability can legitimate the explanatory posits of folk psychology. Fodor argues, in effect, that since these posits work they must be legitimate, and since they're legitimate they must be capable of mechanistic legitimization. Stich argues, in effect, that since these posits are incapable of mechanistic legitimization (because they're not In the organism or mechanism) they must be at least scientifically (and, if you add a generous measure of scientism to this line of thought, as Stich and Churchland do, utterly) illegitimate.

I believe this mechanistic counterattack on externalism can be resisted on several fronts. It can be doubted whether mechanical accountability is the sine qua non of scientific accountability; and it can be doubted whether intrinsicality is a necessary condition of mechanistic accountability:

In fact, it's quite unclear whether anything is really intrinsic to an object. A stomach is defined by its function in digesting food; so, whether an organ is a stomach may depend in part on certain functional relations to external objects. The color of an object may depend on relations to potential observers and so not be an intrinsic feature of the object. Whether the mass of an object depends on its relations to other objects is a deep question of physics. (Harman 1990, p. 607)

However, I do not plan to resist it here -- let the issue between internalism and externalism remain in doubt. Besides such generic objections to internalism as those (stemming from Putnam) just considered, there are specific objections to Searle's theory precisely as a mechanistic, internalist account. Even if it were the case, as Searle and others believe, that some internalist account must be correct (on pain, perhaps, of either being driven to eliminativism about the mental or allowing our account of meaning to involve something like causal action at a distance by the referent or the mental states of other members of the linguistic community on the content of mental states of the speaker) there are good reasons for disparaging the mechanistic scientific prospects and credentials of Searle's account.

Once again the argument takes an ad hominem turn. Presuming that meaning or intentionality is scientifically accountable, and that the scientific account must show how meaning or content is determined by (or supervenes on) neurophysiology, I will consider the merits of Searle's proposals for answering "the problem of meaning in its most general form" (Searle 1983, p. 27), which he characterizes as "the problem of how do we get from the physics to the semantics" (Searle 1983, p. 27). This general problem, along with Searle's program for addressing it, subdivides into two parts. The first is to explain the derived meaning or intentionality of conventional signs (such as written or spoken sentences of natural languages) in terms of the meaning or intentionality of the mental states of the human speakers (or writers) and listeners (or readers) who produce or understand them. This derivation problem, as Searle puts it, is "How does the mind impose intentionality on entities that are not intrinsically Intentional, on entities such as sounds and marks that are, construed in one way, just physical phenomena in the world like any other?" (Searle 1983, p. 27). The second problem -- the foundational problem -- concerns how we get from the physics to the intrinsic semantics of intentional mental states in the first place, since neurons and their states, no less than marks and sounds, "are, construed in one way, just physical phenomena in the world like any other" also. Searle's "`Fregean' ... biological naturalism" (Searle 1983, p. 230) is inadequate both in its attempted Gricean reconstruction of the "derived" meaning of conventional signs and in its proposed "monist interactionist" (1980b, p. 454) account of the neurophysiological causation of the "intrinsic" intentionality of mental states. While I cannot show that no Gricean solution to the derivation problem is possible, I believe it can be seen in short compass that Searle's treatment of the derivation problem fails, and why any attempt to derive linguistic meaning from consciousness -- as Searle does -- is doomed, by Putnamian considerations, from the start. This is undertaken in the next and concluding section of the present chapter. Demonstration of the inadequacy of Searle's appeal to consciousness to solve the foundational problem and, in so doing, to differentiate genuine "intrinsic intentionality" from counterfeit "as-if" intentionality, is undertaken in the next (and concluding) chapter.

13. The Derivation Problem

According to Searle, "The mind imposes Intentionality on the production of sounds, marks, etc., by imposing the conditions of satisfaction of the mental state" expressed by and comprising the sincerity condition of the speech act "on the production of the physical phenomena" (Searle 1983, p. 164). This, on its first face, is the picture. The sentence "The sun is shining brightly," e.g., uttered by me, expresses a belief (that the sun is shining brightly), which is the sincerity condition of the utterance. If I don't believe that the sun is shining brightly my utterance of "The sun is shining brightly" is insincere: on one understanding of the word "mean," in such a case, I say it without really meaning it. Crudely put the idea is that my words get their meanings from the mental states they express, which comprise their sincerity conditions, by a kind of contagion. My assertion, "The sun is shining," is supposed to "catch" its meaning, as it were, from my belief that the sun is shining which has that meaning intrinsically. Here, we at least have a causal model of how the "imposition" of meaning by mind "on entities such as sounds and marks that are, construed in one way, just physical phenomena in the world like any other" (Searle 1983, p. 27) is supposed to work. The trouble with this causal model, however, is not just that it's crude, but that it just doesn't work. If my sincere utterance of "The sun is shining" is supposed to catch its meaning from the belief that the sun is shining (which it expresses), if I say the words without believing them (i.e., believing it is raining) then they ought not to mean what they do; I should not be able to use these words to mislead you about the state of the weather (as I can). On this account one cannot tell a lie! Since one can tell a lie, obviously, the crude causal account of how "the mind imposes intentionality on entities that are not intrinsically Intentional" is obviously flawed. Searle's awareness of this difficulty leads to a modification of the crude causal (contagion) account. The resulting account though clearly less crude, is not so clearly causal.

Searle's solution to the misrepresentation problem -- the problem of how it is possible to tell a lie -- on which the crude contagion account founders is to distinguish

a double level of intentionality in the performance of the speech act. There is first of all the Intentional state expressed, but then secondly there is the intention, in the ordinary not technical sense of that word, with which the utterance is made. Now it is this second Intentional state, that is the intention with which the act is performed, that bestows the Intentionality on the physical phenomena. (Searle 1983, p. 27)

Again,

There is a double level of Intentionality in the performance of the speech act, a level of the psychological state expressed in the performance of the act and a level of the intention with which the act is performed which makes it the act that it is. Let us call these respectively the "sincerity condition" and the "meaning condition." (Searle 1983, p. 164)

On this account, "entities which are not intrinsically Intentional can be made Intentional by, so to speak, intentionally decreeing them to be so" (Searle 1983, p. 175). There are both external and internal problems with this complication (imposing a second level of Intentionality) of the crude causal (contagion) account.

The external problem first: though the second level of intentionality solves (at what cost, is the internal problem) the problem of how deliberate misrepresentation is possible -- I lie by intentionally imposing the conditions of satisfaction of a belief I don't actually have (e.g., that the sun is shining) on my utterance (e.g., "The sun is shining") -- there remains a problem for such a view of accounting for inadvertent misrepresentation. How is this possible? Suppose I utter the words, "The Dean is a great philanderer," with the intention of informing you of the Dean's interest in stamp collecting.{25} What I meant by "great philanderer" was "great philatelist" -- but my words meant something else! If speaker meaning is supposed to determine word meaning this seems impossible: where before (on the crude causal account) I could not tell a lie, now (on the more complicated "double level" account) I cannot unintentionally misspeak, as in the preceding example. But I can so misspeak. So much the worse for Searle's "double level" account or, indeed, I suspect, for any Gricean attempt to make speaker meaning determine word meaning.
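Schematically (this rendering of the difficulty is mine, not Searle's): the Gricean picture identifies what an utterance u means with what its speaker means by it,

\[
\mathrm{Meaning}(u)\;=\;\mathrm{SpeakerMeaning}(u)
\]

whereas the misspeaking case splits the two:

\[
\mathrm{SpeakerMeaning}(\text{"great philanderer"})=\textit{great philatelist},\qquad
\mathrm{Meaning}(\text{"great philanderer"})=\textit{great philanderer}
\]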

The preceding difficulty seems fatal to the Gricean program. Note, here, that simply augmenting the appeal to speaker meaning with an appeal to hearer meaning won't get us far. I can misrepresent the Dean as being a sexual adventurer despite intending to represent him as being a stamp collector all by myself -- e.g., if I write it in my diary. It seems that to account for the possibility of such misrepresentation one has to appeal at least to what English speakers (other than myself, or myself on a later occasion, e.g., upon reading the diary entry) would understand by the words, i.e., beyond speakers' and hearers' occurrent intentions and other psychological states (of meaning and understanding) to their dispositions. Unless Searle were to accept a dispositional analysis of intending -- which he does not -- such appeal to dispositions is contrary to Searle's analysis; neither is it clear such dispositions will be requisitely internal. Worse yet -- beyond this need to appeal to hearer and speaker dispositions -- we are also apt to have to make appeal to rules or conventions of usage which are clearly external to individual speakers' minds or brains. Notably, Searle himself, in his earlier work on speech acts, is compelled, in attempting to account for literal meaning, to resort to just such an invocation of convention or rules. He writes,

In our analysis of illocutionary acts [asserting, questioning, requesting, etc.] we must capture both the intentional and the conventional aspects and especially the relationship between them. In the performance of an illocutionary act the speaker intends to produce a certain effect by means of getting the hearer to recognize his intention to produce that effect, and furthermore, if he is using words literally, he intends this recognition to be achieved in virtue of the fact that the rules for using the expressions he utters associate the expression with the production of that effect. (Searle 1971, p. 46: my emphasis)

Note that if this appeal to rules is intended to account for one's really "using words literally" and not just an account of seeming to oneself to be using them so, they must be real shared or external conventions of usage. The case of inadvertent misrepresentation makes this clear. When I say "The Dean is a great philanderer" I intend you to recognize that the Dean is interested in stamps "in virtue of the fact that the rules for using the expressions" I utter "associate the expression with the production of that effect"; yet the words I said don't literally (or otherwise) conventionally mean that. Since misspeaking involves the divergence of my private "convention" (associating "philandering" with stamp collecting) from the socially established convention (associating "philandering" with sexual adventuring) it would seem -- contrary to Gricean internalism -- that any adequate account of misspeaking will have to advert to public external intersubjective rules themselves, not just my private, internal, subjective beliefs about and intentions concerning such rules as I might imagine. So too, I take it, any adequate account of literal meaning needs to advert to actual shared conventions themselves, not to individuals' private impressions of there being such conventions.{26}

This brings us to the internal problem, or problems, with Searle's account. Much as his need to advert to rules or conventions undermines the internalism of the account, Searle's recourse to a second level of intentionality -- the meaning intention -- undermines the causal character of the initial ("contagion") account. On its face, the interposition of this second level of intentionality seems merely a complication of the original account: on the "double level" account, it is still the conditions of satisfaction of the sincerity condition -- in the case of assertion, e.g., of the belief expressed -- that get "bestowed" or "conferred" or "imposed" on the utterance, somehow. On its face, the meaning intention is merely the intermediary by means of which these satisfaction conditions get communicated. On its face, the meaning intention merely represents a complication of the chain of transmission hypothesized by the crude causal account: much as malaria gets transmitted from one person to another indirectly, via mosquito, satisfaction conditions get transmitted from beliefs to assertions via meaning intentions. But there's something wrong with this picture. My catching a malarial infection from you -- however indirectly, however convoluted the means of transmission -- entails your actually having (or having had) the infection: in general, X cannot cause Y unless there is X. But a brief look at Searle's "complication" of the original crude causal (contagion) account shows that this general principle -- that causes must exist -- is violated! According to the "double level" account,

When, for example, I make the statement that it is raining, I both express the belief that it is raining and perform an intentional act of stating that it is raining. Furthermore, the conditions of satisfaction of the mental state expressed in the performance of the speech act are identical with the conditions of satisfaction of the speech act itself. .... The fact that the conditions of satisfaction of the expressed Intentional state and the conditions of satisfaction of the speech act are identical suggests that the key to the problem of meaning is to see that in the performance of the speech act the mind intentionally imposes the same conditions of satisfaction on the physical expression of the expressed mental state, as the mental state has itself. The mind imposes Intentionality on the production of sounds, marks, etc., by imposing the conditions of satisfaction of the mental state [expressed] on the production of the physical phenomenon. (Searle 1983, p. 164)

The trouble with this is that, when the speech act is insincere -- when I assert what I don't believe -- "I impose Intentionality on my utterances by intentionally conferring on them conditions of satisfaction which are the conditions of satisfaction of certain psychological states" (Searle 1983, p. 28) that I don't have. (Indeed -- lest one object that the mosquito can transmit to me an infection you once had, when you were bitten, but do not presently have, at the time when I'm bitten -- a psychological state I may never have had: it is not a necessary condition on my being able to falsely assert P that I ever believed P.) "He cursed because he was angry." Here we might also say, "His cursing expressed his anger": here, it seems, "express" connotes causation. If he was not really angry we might say, keeping this causal sense of "express" in mind, "He was feigning anger, not expressing it." But if there is a sense in which his curses still express (his?) anger when he's not angry but merely feigning it, there is no longer any causal force to such talk of "expression"; nor is there any causal force to Searle's talk of "expression" in his "double level" take on the derivation problem. We are told that "the intention ... bestows Intentionality on the physical phenomena," and that "mind imposes Intentionality on entities that are not intrinsically Intentional by conferring the conditions of satisfaction of the expressed psychological state upon the external physical entity" (Searle 1983, p. 27: my emphasis). Again, "Entities which are not intrinsically intentional can be made Intentional by, so to speak, intentionally decreeing them to be so" (Searle 1983, p. 175: my emphasis). Such talk of "bestowal," "conferral," and "intentional imposition" by "decree" is magical on its face and -- once it becomes clear that Searle's talk of "expression" lacks any causal force, that the "bestowal" is not, as it might appear, a causal communication of something (meaning or conditions of satisfaction) from one occurrent phenomenon (an occurrent belief, say) to another (the utterance of an assertion) -- seems magical to the core.

Whether any solution to the derivation problem is possible on Gricean principles is doubtful; certainly Searle has not provided one. Thus the suggestion that there must be such a thing as intrinsic Intentionality -- because there are certain things, e.g., utterances of English sentences, that clearly are Intentional but are not intrinsically so, and there is no other explanation of how these things could come to be Intentional except by having this Intentionality communicated to them from something else (psychological states, presumably) that is intrinsically intentional -- involves, in the first place, the false (or at least unproven) supposition that there is a Gricean, internalist explanation. There is no such explanation extant and no prospect, that I am familiar with, of such an explanation being forthcoming. It is for good reason that "the most influential theories of reference and meaning reject a Fregean or internalist analysis" (Searle 1983, p. 198): "Traditional [internalist] semantic theory leaves out only two contributions to the determination of extension -- the contribution of society and the contribution of the real world!" (Putnam 1975, p. 245). Searle's own attempt to dispense with the contribution of society and the real world in his attempted solution to the derivation problem does nothing to allay such doubts about the prospects of traditional internalist analyses and presuppositions as have led to their rejection by "the most influential theories of reference and meaning" now going -- rather, I think, the manifest failure of Searle's attempt strongly reinforces them.

Endnotes

  1. Here (1983) and in other works from the mid-eighties Searle uses the convention of capitalization to distinguish Intentionality in the technical sense of semantic aboutness or reference from garden variety intention (as in "I intended to pay the rent, but I forgot"). "Intentionality is directedness; intending to do something is just one kind of Intentionality among others" (Searle 1983, p. 3). He also sometimes uses the "-with-a-t" suffix to distinguish the property of aboutness or reference (Intentionality-with-a-t) from the property of non-extensionality or referential opacity (intensionality-with-an-s) where he thinks we are likely to confuse them due to the fact that "some sentences about Intentionality-with-a-t are intensional-with-an-s" (Searle 1983, p. 24).
  2. In the case of certain "garden variety intentions" -- our "freely willed" choices in particular -- subjective intrinsicality also involves, perhaps, their being in a special sense morally in us (as their sole or first causes). This is touched on in the concluding chapter, below.
  3. "Weak AI" according to Searle treats the computer as merely a tool "that enables us to formulate and test hypotheses in a more rigorous and precise fashion" whether these hypotheses are about mental processes, or meteorological processes, or whatever. According to "weak AI" just as a computer modeling the meteorological processes of precipitation is not really raining a computer modeling thought processes is not really thinking.
  4. Grice 1957.
  5. Grice's example -- "That remark, `Smith couldn't get on without his trouble and strife,' meant Smith found his wife indispensable" (1957, p.378) can be true even if it's also true that "in fact Smith deserted her seven years ago." (1957, p.378) -- provides a fuller parallel. (Lest it be thought that the difference between "Es regnet"'s meaning it's raining and the smoke from their chimney's meaning there's fire on their hearth hinged on the fact that meaning attribution is to a sign token in the first case and a sign type in the second.)
  6. I choose the designation "normative" over "non-natural" as tending less to prejudge the question of the possibility of giving a naturalistic account of linguistic and mentalistic meaning. Rich Hall suggests that "semantic" might be an apter designation (personal communication).
  7. If all of this is required for belief, then thermostats don't have beliefs. But note that if the possibility of being dogmatic and superstitious is necessary for representations to be beliefs, it seems that infrahuman animals can't be said to have beliefs either.
  8. Rapaport (1988; 1993) suggests a difference in some ways like the one Searle tries to draw between intrinsic and observer-relative intentionality hinging on whether a subject does anything with the representations it harbors: books don't, but running computer programs do. It should be noted here that among the things that computers not all too inhumanly do is "generate" and "understand," as computational linguists put it, tokens of natural language sentences: it seems computers are among the authors and understanders just mentioned.
  9. It's not clear that Searle's "nontechnical" distinction holds up to close consideration. I suppose there are many plants in the average farmer's field that the farmer has never explicitly had reason to consider, much less to dub "a weed." It seems the account of observer relativity has to go dispositional (and technical) -- whether something's a weed (for the farmer) depends not on what he does say or think about it but on what he would say or think about it if he took explicit notice of it. If this tack is taken, Searle's account of "observer-relative" properties (and hence his characterization of intrinsic properties as not observer-relative) will inherit the sorts of problems he urges (see, e.g., Searle 1980c) against behavioristic dispositional analyses of mental properties. If this tack is not taken -- if Searle insists he just means "intrinsic" and "observer relative" in their ordinary everyday nontechnical senses -- it is difficult to see how this distinction will underwrite a scientific theoretical distinction between the intentionality of humans and computers.
  10. As would be the case if Putnam were right about the indexical dependence but wrong about the social dependence of meaning.
  11. E.g., relative to what Putnam (1975) calls the "linguistic division of labor".
  12. An effect is said to "supervene" on a system if the internal states of the system causally suffice, by themselves, to produce the effect; i.e., whenever the system (or any like system) is in the same (type) state the same (type) effect will be produced. To show an effect not to be supervenient on the states of an isolated system, consequently, it is enough to show the effect can vary independently of the internal states of the system in the sense that the effect can change without the internal state of the system changing, or that two like systems might be in identical (type) internal states and yet differ with regard to the effect or feature.
  13. Perhaps even dualistic versions of the principle are mechanistic in spirit. Ryle (1949, chap. 1) persuasively styles Cartesian minds not just ghosts in the machine but ghostly mechanisms, as it were, themselves.
  14. I was made aware of the issue of action at a distance here by Herb Hendry.
  15. Such that things of the same natural kind have "the same general hidden structure (the same `essence' so to speak)" (Putnam 1975, p. 235): so what makes water water is having the internal structural properties indicated by its chemical formula (being H2O); and what makes tigers tigers is having certain internal structural properties (i.e., anatomical and genetic properties); etc.
  16. Though in the context of the present critique of the notion that mental states are objectively intrinsic I only press the point, in the text, that individuals' intentional properties fail to supervene on their physiological properties, note that the same point can be made mutatis mutandis (as Putnam 1975 points out) concerning the phenomenological properties of individuals also. See Chapter 6, Section 1, below, for further discussion.
  17. Again, insofar as they restrict their defense to the determination of public reference by abstract Fregean Sinn.
  18. Leaving out the idea that the individual's grasp of the sense determines reference in their idiolect.
  19. This second infelicity of Searle's analysis was pointed out to me by Barbara Abbott.
  20. I suspect Searle's notion of an aspect is an illegitimate meld of what is literally an aspect, i.e., a side (or the front or the back), or what is a property of the object (e.g., being yellow), with what is a property of the observer (e.g., having a side view or a front view, or attending to the color). I take minor liberties with Searle's notation (e.g., adding the superscript "i" here and below, and replacing "E!" with "∃" below), which do not, I take it, change the sense of the original: the problem here is to reformulate the original so as to have some sense Searle might plausibly be thought to have intended.
  21. "How many beliefs do I have exactly?", Searle notes, is a question to which "There is no definite answer" (Searle 1983, p.21).
  22. Even proponents of proprietary language of thought hypotheses must acknowledge this, as Abbott points out (Abbott & Hauser 1992).
  23. It seems in some regards computers have even become our linguistic community's experts -- e.g., with regard to the extension of "prime number greater than a billion."
  24. Though I believe the most favored stratagem of diehard internalism -- a kind of chastened internalist strategy, which only presumes to explain "narrow content" (the remainder that actually is in the head once all the external determinants of reference are subtracted) -- amounts to changing the subject. "Narrow content" strikes me as an oxymoron.
  25. I owe this example, I believe, to Michael McKinsey.
  26. As in our earlier discussion of shared visual experience, it seems here that the rules must really be publicly shared -- contrary to the would-be internalism of Searle's account -- not merely intended by me as shared or believed by me to be shared, if they are to do the explanatory work called for. Echoes, here, of Wittgenstein's discussion of "private language" (Wittgenstein 1958, §§201ff).

