Searle's Chinese Box: The Chinese Room Argument and Artificial Intelligence by Larry Hauser 

Chapter Six:
SUBJECTIVE INTRINSICALITY: AS-IF DUALISM

Suppose everyone had a box with something in it: we call it a "beetle". No one can look into anyone else's box, and everyone says he knows what a beetle is only by looking at his beetle. -- Here it would be quite possible for everyone to have something different in his box. One might even imagine such a thing constantly changing. .... The thing in the box has no place in the language-game at all; not even as a something; for the box might even be empty. -- No, one can `divide through' by the thing in the box; it cancels out, whatever it is. (Wittgenstein 1958, §293)

1. Introduction

In scouting Searle's suggestion that an essential difference between the thoughtlike performances of computers and genuine biological thought is to be found in the intrinsic intentionality of biological thought, I am considering alternate ways of understanding "intrinsic," corresponding, roughly, to "in the head" (objective intrinsicality) and "in consciousness" (subjective intrinsicality). Searle holds that genuine intentional states of organisms are distinguished from the merely as-if-intentional states of artifacts both by being objectively In (supervenient on local physical properties of) the organisms that have them as they aren't objectively In computers that only seem to have them and by being subjectively In (supervenient on phenomenological properties of) the organisms like ourselves that have them as they aren't subjectively In (because there are no phenomenological properties of) computers that only seem to have them.{1} How slight are the prospects of appeal to objective intrinsicality for providing a salient differentia warranting scientific claims of an essential difference between animate and artificial Intentionality was the theme of the preceding chapter. The claim of the present chapter is that the prospects for underwriting such an essential differentiation by appeal to subjective intrinsicality can be no better, and are in fact much worse.

In arguing against the claim that objective intrinsicality marks an essential difference between our intentionality and the apparent intentionality of computers I noted that in the exacting sense of "in" that figures in this discussion it seems that our intentional states are no more In us than the apparent intentional states of computers are In them. On the other hand, if such "externalism" about meaning and intentional mental states can successfully be resisted, is there any reason to suppose the mental properties we naively attribute to computers aren't likewise, despite their similar seeming social and environmental dependence, In them too? If we accept Searle's conviction that it must be the case that the meanings of our thoughts "are precisely in the head" because "there's nowhere else for them to be" (Searle 1983, p.200) what precludes our saying that it must be the case that the meanings of intentional states attributed to computers are precisely in their circuits because there's nowhere else for them to be? It seems no less true that their circuits are all they have for the purpose of representing the world to themselves than it seems true of us that "the brain is all we have for representing the world to ourselves" (Searle 1983, p.230). (Really, it seems true of neither of us. They have, e.g., diskettes. We have, e.g., paper and pencil.)

At this juncture Searle's response is that the difference between them and us is that computers don't represent anything to themselves by the states of their circuits at all: if states of their circuitry mean anything it is only to us, their users and programmers; the symbols a computer processes "don't have any interpretations as far as the computer is concerned" (Searle 1980a, p. 423: my emphasis). Just as the complaint that computers lack intentionality boiled down to the complaint that they lack intrinsic intentionality, so now the intrinsicality that the representational states of computers are supposed to lack boils down to subjective intrinsicality. The reason computers don't represent anything to themselves by the symbols they token, Searle thinks, is that there aren't any selves there for the representation to be to. In the case of real, intrinsic, mental states, according to Searle, "There is always a `first person,' an `I,' that has these mental states." (Searle 1992, p. 20). In the case of computers there is no "I," no "first person"; there's "nobody home" (Harnad 1991, p. 52). Ergo, in the case of computers, there are no intrinsic mental states, external, third person appearances to the contrary notwithstanding. Who, or what, exactly, is this "I"? What are we to understand by the subjective intrinsicality or essential subjectivity of the mental?

Much as we were able to distinguish two sorts of intrinsicality at issue -- objective and subjective -- subjective intrinsicality itself splits (less cleanly, perhaps) into two varieties I will call "agential" and "phenomenological." Agential subjectivity has to do with us (unlike computers) being authors of our own mental (including overt intentional) acts. Phenomenological subjectivity has to do with mental phenomena being in consciousnesses or consisting of qualia or experiences (being made, so to speak, of the right "phenomenological stuff"). The agential understanding of subjective intrinsicality resonates with Searle's Gricean theory of linguistic meaning on which our utterances and inscriptions are invested with meaning by speakers' psychological acts of meaning them -- what makes "Fido" mean Fido for me (in my idiolect) is, for Searle, my intending it or willing it to mean Fido. On this view semantic Intentionality derives from volitional intentionality (i.e. from acts of will or garden variety intendings).{2} Thus understood, computers' reputed lack of subjectively intrinsic Intentionality comes to this: computers (unlike us) don't themselves intend anything by tokening the symbols they do. In our case we ourselves invest the symbols we process with meaning: these tokenings of ours are intrinsically meaningful to us or for us because we ourselves bestow their meanings on them. On this agential understanding of subjective intrinsicality our Intentional mental states are subjectively intrinsic (and computers' "as if" Intentional mental states are not) because we create their meanings. They are in us or, more aptly perhaps, from us as their originative source. According to this understanding, to say that we (or our mental states) are intrinsically Intentional means that we ourselves are the first-causes of their meaning what they do. The phenomenological understanding of intrinsic intentionality, on the other hand, resonates somewhat differently -- resonates rather with what substance dualists such as Descartes, property dualists such as Nagel, and phenomenologists such as Sartre and Husserl say about consciousness. On this phenomenological understanding of "subjective intrinsicality" the subjective intrinsicality of my Intentional mental states consists of their being conscious or "in" consciousness: what disqualifies the apparent mental abilities of computers from counting as true mental abilities (and what qualifies ours as such) is that computers aren't aware of their representational states representing anything and we are; they don't consciously experience the meanings of their representational states as we do ours.

The question that arises now is whether considerations of phenomenological or agential intrinsicality provide sufficient warrant for discounting naive attributions of mental properties to computers as merely figurative "as if" attributions. Here conclusions reached in Chapter Four, above, constrain the sort of warrant required. The predictive utility and prima facie univocality of such naive attributions mean it is not open to Searle to style himself the defender of ordinary usage against revisionary cognitive scientific proposals that would "assimilate existing mental phenomena to some larger natural kind" (Searle 1980b, p. 452) including, besides our conscious calculations and considerations of alternatives, the (presumably unconscious) calculations performed by pocket calculators, considerations of alternatives by chess programs, etc. Since we already do assimilate these phenomena unambiguously (as standard ambiguity tests show) under existing mental concepts to indisputably good predictive effect, the remaining hope for disqualifying the calculations of pocket calculators and considerations of alternative continuations of games by chess programs from being bona fide cases of calculation and consideration would be for cognitive scientific research to discover that what we mistakenly call "calculation" and "considering" in computers isn't really of the same nature as human calculation and considering; much as zoological research discovered that whales (appearances to the contrary) really aren't fish.

Since evidence of radical freedom or agential intrinsicality is generally supposed by libertarians (cf. Campbell 1957; Nagel 1987) to be phenomenological, as Searle also supposes{3} --

The first thing to notice about our conception of human freedom is that it is essentially tied to consciousness. We only attribute freedom to conscious beings. If, for example somebody built a robot which we believed to be totally unconscious, we would never feel any inclination to call it free. Even if we found its behavior random and unpredictable, we would not say that it was acting freely in the sense that we think of ourselves as acting freely. If on the other hand somebody built a robot that we became convinced had consciousness, in the same sense that we do, then it would at least be an open question whether or not that robot had freedom of the will. (Searle 1984a, p. 94).
-- I focus my discussion on subjective intrinsicality phenomenologically construed. The issue of the scientific utility of subjective intrinsicality then becomes a question of the scientific import of consciousness and the perspicuity of what Searle (1990f) calls the "Connection Principle" (CP):
(CP) Ascription of an unconscious intentional phenomenon to a system implies that the phenomenon is in principle accessible to consciousness. (Searle 1990f, p. 586).
Since the broader thesis of ontological subjectivity (phenomenologically construed) is, in effect, that "ascription of [any] ... intentional phenomenon to a system implies it is in principle accessible to consciousness" or holds that "the only occurrent reality of the mental as mental is consciousness" or that all mental phenomena are essentially modifications or modes or "modalities" (Searle 1992, p. 168) of consciousness -- and since conscious mental states are uncontroversially conscious -- CP is ontological subjectivity's controversial part. If Searle's grant of epistemic privileges (to override all external, behavioral evidence) to how it seems to the agent from the "first person point of view" in his Chinese room experiment "invites us to regress to the Cartesian vantage point" (Dennett 1987b, p. 336), CP is an engraved invitation. There is no doubt, as I will show, that this principle is Cartesian: the only doubt concerns whether Searle's embrace of it is regressive, i.e., whether Searle's adoption of the "Cartesian vantage point" suffers from anomalies this "vantage point" is known to be heir to or whether Searle has managed to overcome these anomalies, as he claims.

The most telling anomalies besetting consciousness-based psychological research programs such as Searle's and Descartes', I take it, are three. First, the problem of how conscious phenomena (modes of consciousness) and physical phenomena (modifications of matter) are related or causally interact: the mind-body problem. Second, the problem of how we can know anything of the mental properties of others (or even that others have mental properties) if modes of consciousness are what mental phenomena (instantiations of mental properties) essentially are and no one can directly experience anyone else's consciousness: the other minds problem. Third, the problem -- if modes of consciousness are what mental phenomena essentially are, and each of us has direct epistemically privileged introspective access to every modification of our own consciousness -- of why psychology isn't easy: the introspection problem. Why, if Descartes was right about the essentially conscious nature of mind, was he not also right (as the history of psychology since seems to confirm he was not) in thinking "I can achieve an easier and more evident perception of my own mind than of anything else" (Descartes 1642, p. 22-23)? Searle (1992) claims to solve all these problems. If so, Searle's "rediscovery" of consciousness enjoys the benefits of dualism (such as they are) without its costs.

It should be noted that one benefit Searle's Cartesian identification of thought with conscious experience does not have (despite Searle's seeming assurance that it does) is that of solving the "symbol grounding problem" (Harnad 1990), i.e., the problem of Intentionality, of where meaning comes from, or what makes referring expressions refer as they do. Just as Searle's Chinese room example can be viewed as driving home the point that syntax doesn't suffice for semantics, so Putnam's (1975) Twin Earth examples can be viewed as driving home the point that neither neurophysiology (what's in the head) nor phenomenology (what's in consciousness) suffices either. Imagine me and my twin on Twin Earth to be in qualitatively identical phenomenological states, having qualitatively identical conscious experiences of wetness (among other things) in our baths. Imagine each of us saying to ourselves or thinking to ourselves -- I in English, he in Twin English -- "Water is wet." Despite the qualitative identity of our conscious experiences, my thought is about H2O and my twin's thought is about XYZ. Phenomenology does not suffice for semantics any more than neurophysiology (or programming) suffices!

In terms of our dialectical situation, the fact that subjective intrinsicality has Twin Earth troubles just as objective intrinsicality does means that the prospects for warranting claims of an essential difference between animate and artificial Intentionality by appeal to subjective intrinsicality are at least as bad as the prospects for such a scientific differentiation by appeal to objective intrinsicality. The rest of this chapter argues that the prospects for subjective intrinsicality in these regards are, in fact, much worse. Not only does Searle's manner of appeal to consciousness in these connections fail to solve Descartes' problems, it even manages, in significant ways, to aggravate them. Searle's views would return us to the Cartesian vantage point, yet offer no help against the besetting anomalies of Cartesianism: interaction problems, other minds problems, and the introspection problem. Hence, nothing along such lines as Searle proposes holds much promise of underwriting a scientifically perspicuous distinction between intrinsic biological and derived artifactual intentionality that would warrant dismissing our naive attributions of intentional states to computers as false.

2. As-if Dualism

Since Searle protests (I think, too much) that he is not a dualist and explicitly denies employing any "Cartesian apparatus" (Searle 1992, p. xii) or "Cartesian paraphernalia" (Searle 1987, p. 146), it is necessary to dispute these disavowals. The present section aims to show that Searle deploys most of the Cartesian apparatus he officially deplores. He deploys enough Cartesian apparatus, in particular, to occasion the traditional Cartesian difficulties about mind-body interaction, about the knowability of other minds, and about introspection. Since it is fitting that Searle, who so famously advocates the thesis of "as-if intentionality," should otherwise talk just like a dualist, except for explicitly disavowing dualism, I grant this pretense. Let him be an as-if dualist. It will be fitting, also, to begin our survey of the as-if dualistic doctrines Searle espouses with the notion of as-if intentionality itself. This apparatus too is Cartesian.

2.1 As-if Intentionality

Much as Searle is troubled by and concerned to explain away the apparent Intentional mental states and abilities of machines, Descartes was notoriously troubled about the apparent mental states and abilities of nonhuman animals and concerned to explain these away. To explain away such appearances of animal intelligence as the following --
A dog, which you [Descartes] will not allow to possess a mind like yours, certainly makes a similar sort of judgment when it recognizes the master who can exist under all these forms [in different postures, differently dressed, etc.], even though ... he does not keep the same proportions or always appear under the same form. (Gassendi: Descartes et al. 1642, p. 191)
You [Descartes] may say that the soul has the power of preventing a man from both fleeing and advancing. But the principle of cognition does just this in the case of an animal: a dog, despite his fear of threats and blows, may rush forward to grab a morsel it has seen -- and a man often does just the same sort of thing! (Gassendi: Descartes et al. 1642, p. 188)
For at first it seems incredible that it can come about without the assistance of any soul, that the light reflected from the body of a wolf into the eyes of a sheep ... should spread the animal spirits throughout the nerves in the manner necessary to precipitate the sheep's flight. (Arnauld: Descartes et al. 1642, p. 144)
-- Descartes invokes a notion of as-if intentionality or (as Descartes calls it) "corporeal imagination."
The corporeal imagination ... can change [corporeal images] in various ways, form them into new [images] and by distributing the animal spirits to the muscles, make the parts of the body move in as many different ways as the parts of our bodies can move without being guided by the will, and in a manner which is just as appropriate to the objects of the senses and the internal passions. (Descartes 1637, p. 139)
This [functioning of the corporeal imagination] enables us to understand how the movements of all the other animals come about, even though we refuse to allow they have any awareness of things, but merely grant them a purely corporeal imagination. (Descartes 1628, p. 42)
I pull the dog's tail and it yelps. It doesn't experience anything, according to Descartes; a fortiori it doesn't really experience pain. It just acts like it. It's just a case, so to speak, of as-if pain. "Besides causing our souls to have various sensations," Descartes holds, "various movements in the brain can also act without the soul, causing the spirits to make their way to certain muscles rather than others" (Descartes 1649, p. 333). All the actions and reactions of infrahuman animals, according to Descartes, are of this merely mechanical "as-if" sort, without accompaniment of conscious experience. Similarly, Searle maintains,
In the sense [involving awareness of things] in which people "process information" when they reflect, say, on problems in arithmetic or when they read and answer questions about stories, the programmed computer does not do "information processing." Rather what it does is manipulate formal symbols. (Searle 1980a, p. 423)
Consequently, "We could have identical behavior in two different systems, one of which is conscious and the other totally unconscious" (Searle 1992, p. 71). Exchanging "corporeal images" for "formal symbols" and vice versa in the passages from Descartes and Searle above scarcely alters the sense of either. The apparatus of "as-if intentionality" Searle invokes -- though invoked to dismiss the apparent mental abilities of computers rather than those of infrahuman animals -- seems thoroughly Cartesian.

2.2 Ontological Subjectivity: Connection Principle and Cartesian Ego

According to Searle, "once you have lost the distinction between a system's really having mental states and merely acting as if it had mental states, then you lose sight of an essential feature of the mental, that its ontology is essentially a first-person ontology" (Searle 1992, p. 17). "A moments reflection on one's own subjective states," Searle thinks, suffices to show the falsity of this "basic metaphysical presupposition: Reality is objective" (Searle 1992, p. 16). By "objective" Searle means publicly observable: the supposition Searle disputes is "that if something is real, it must be equally accessible to all competent observers" (Searle 1992, p. 16). Shades of the cogito: reflection on my own subjective states apprises me of the remarkable fact that these are known to me (and me only) in a different way than external reality and the minds of others are known to me (via the senses). Traditionally, this knowledge of one's mental (subjective, conscious) states was supposed to be more direct (as Searle agrees) and more reliable (Searle demurs, perhaps) than our knowledge of external (objective, physical) reality. The directness in question comes to my conscious experiences providing the "epistemic bases" (Searle 1992, p. 122) for my knowledge of external reality (my knowledge of the external world being mediated by my conscious experiences or "inner" mental representations of it) but my awareness of my subjective conscious experiences themselves being immediate, having no epistemic bases besides the experiences themselves. Just as Descartes takes the epistemic difference between my direct incorrigible knowledge of my own mind and my indirect corrigible knowledge of external reality to mark a metaphysical difference between the (mental and physical) facts or objects known, Searle insists, "The sense in which I am now using the word `subjective' refers to an ontological category, not to an epistemic mode" (Searle 1992, p. 94); "the actual ontology of mental states is a first-person ontology" (Searle 1992, p. 16); "the mind consists of qualia, so to speak, right down to the ground" (Searle 1992, p. 20).

Besides following Descartes in taking "subjectivity" to name a special realm (of phenomena or events, if not substances), Searle also follows Descartes in his two-fold characterization of this realm. On the one hand, he characterizes the subjective in terms, so to speak, of its owner or container: "There is always a `first person,' an `I,' that has these mental states" (Searle 1992, p. 20). "Beliefs, desires, etc., are always somebody's beliefs and desires" (Searle 1992, p. 17). Alternately, Searle characterizes the subjective realm in terms, so to speak, of its contents: it "consists of qualia" (Searle 1992, p. 20), where an individual quale (e.g., a particular visual experience) "is a concrete conscious event" (Searle 1992, p. 225). According to this second characterization "desires and pains" and the like "are conscious experiences" (Searle 1992, p. 63): the "only occurrent reality of the mental as mental is consciousness" (Searle 1992, p. 187) which "comes ... in a variety of modalities: perception, emotion, thought, pains, etc." (Searle 1992, p. 168). Paralleling Searle's first characterization of the subjective in terms of its first person containment, Descartes writes,

Is it not one and the same `I' who is now doubting almost everything, who nonetheless understands some things, who affirms that this one thing ["I exist"] is true, denies everything else, desires to know more, is unwilling to be deceived, imagines many things even involuntarily, is aware of many things which apparently come from the senses? .... Which of these activities is distinct from my thinking? Which of them can be said to be separate from myself? The fact that it is I who am doubting and understanding and willing is so evident that I see no way of making it any clearer. But it is also the case that the `I' who imagines is the same `I'. For even if, as I have supposed, none of the objects of imagination are real, the power of imagination is something which really exists and is part of my thinking. Lastly, it is also the same `I' who has sensory perceptions, or is aware of bodily things as it were through the senses. (Descartes 1642, p. 19)
Paralleling Searle's qualitative content characterization of subjectivity, Descartes holds:
As to the fact that there can be nothing in the mind, in so far as it is a thinking thing, of which it is not aware, this seems to me to be self-evident. For there is nothing that we can understand to be in the mind, regarded in this way, that is not a thought or dependent on a thought. If it were not a thought or dependent on a thought it would not belong to the mind qua thinking thing; and we cannot have any thought of which we are not aware at the very moment when it is in us. (Descartes et al. 1642, p.171)
Again, according to Descartes, "...there can be nothing within me of which I am not in some way aware" (Descartes et al. 1642, p.77). Searle's characterization of the mental as essentially conscious, like his characterization of the mental as necessarily possessed by "a `first person' an `I'," is thoroughly Cartesian.

Searle's main substantive attempt to distance himself from Descartes in these connections -- by insisting, "Consciousness is not a `stuff,' it is a feature or property of the brain in the sense, for example, that liquidity is a feature of water" (Searle 1992, p. 105) -- seems infelicitous for two reasons. First, Descartes does not hold that consciousness is a "stuff" either: the ontology of thought, for Descartes, seems essentially an ontology of acts or events, not of objects or (ectoplasmic) stuff. A second and related point is that if Searle means to resist Descartes' reification of the "I" -- his identification of the first person he is with a consciousness -- he does so at his own theoretical peril. Identification of the thinking subject (Cartesian ego) with a particular field and stream of conscious experiences, for Descartes, reconciles the content and container characterizations of the mental: contents of streams of consciousness are themselves conscious events (qualia) and conscious events, for Descartes, exist only as parts or modes of such consciousnesses. If Searle tries to identify this "I" or "first person" with something more corporeal -- with a particular human organism or more narrowly with a particular human organism's brain -- the content and container characterizations of the mental realm come troublesomely apart. Many phenomena occurring in human brains are neither conscious nor even potentially conscious.

2.3 Privileged Access

I have already noted that Searle's Chinese room experiment seems to depend on granting the appearance of not understanding -- how things seem to the person in the room from their first-person point of view -- the privilege of overriding all behavioral evidence of understanding, that is, all appearances to others from their third-person points of view. The first-person ontology of the mental has, for Searle as for Descartes, the epistemic corollary "that the first-person point of view is primary" (Searle 1992, p. 20). The subjectivity of the mental means that while we each have "direct access" (Searle 1992, p. 74: my emphasis) to our own mental phenomena "because of their intrinsic subjectivity" we only "have indirect methods of getting at the same empirical facts" (Searle 1992, p. 73) in others: "empirical phenomena that are intrinsically subjective [are] therefore inaccessible to direct third-person tests" (Searle 1992, p. 75). What the subjectivity of the mental means epistemically, for Searle, is that "just as I have a special relation to my conscious states, which is not like my relation to other people's conscious states, so they in turn have a relation to their conscious states, which is not like my relation to their conscious states" (Searle 1992, p. 94-95). Objective empirical facts are "equally accessible epistemically to all competent observers," subjective empirical facts "are not equally accessible to all competent observers" (Searle 1992, p. 72). Still, Searle insists, "This is not an argument for `privileged access' because there is no privilege and no access" (Searle 1992, p. 95, note 5). How so?

Privileged access, traditionally conceived, has two folds: it's direct (immediate and noninferential) and it's incorrigible (incapable of being overridden). Searle attempts to distance himself from both these facets of the traditional conception with equivocal success. With regard to the directness of thinkers' access to their own thoughts, Searle attempts to distance his conception of this immediacy or directness from the way, he thinks, it was traditionally mistakenly conceived as a kind of inner perception, or "introspection." With regard to the incorrigibility Descartes alleges and tradition accepted on behalf of our awareness of our own conscious mental states, Searle writes, "from the fact of subjective ontology it does not follow that one cannot be mistaken about one's own mental states" (Searle 1992, p. 145). However, it is not clear why on Searle's view it should not follow from the directness of our access that our knowledge of our own mental states is incorrigible, as Descartes thought. Neither is it clear whether some lesser epistemic privilege than complete incorrigibility -- say the privilege of overriding all third-person evidence -- is not privilege enough still to give rise to traditional Cartesian anomalies about knowledge of other minds and the difficulty of psychology.

Descartes thinks it follows from the directness of one's awareness of one's own mental state that such awareness is incorrigible for the following reasons. Everything else one knows -- about the external world, mathematical objects, and even God -- one knows, as it were, indirectly. Knowledge of all mind external things, Descartes reasons, is mediated by their phenomenological appearances to me or my mental representations of them. But representations can misrepresent and appearances can be deceiving. Thus, my knowledge of external things is corrigible: about all these things it is possible that I'm mistaken (cf. Descartes 1642, Meditation 1). But in the case of my own mental states there is no mediating representation, so no possibility of misrepresentation: hence, "I can achieve an easier and more evident perception of my own mind than of anything else." Even an incorrigible perception: just as "I am, I exist, is necessarily true whenever it is conceived in my mind" -- since what this "I" is is a thinking (i.e., believing, desiring, understanding, etc.) thing -- so are "I believe p," "I want q," "I understand r," etc., necessarily true whenever they are conceived in my mind. Since Searle follows Descartes in supposing conscious experiences are the "epistemic bases" of everything else we know, and likewise agrees that "in the case of consciousness, its reality is the appearance" (Searle 1992, p. 122), how does he resist Descartes' conclusion that our awareness of our own mental states is incorrigible? He says, "it does not follow that one cannot be mistaken about one's mental states. All that follows is that the standard models of mistake, models based on the appearance-reality distinction, don't work for the existence and characterization of mental states" (Searle 1992, p. 145). Consideration of Searle's alternative models of mistake -- based on self-deception, misinterpretation, and inattention -- finds them wanting, though (see Section 5, below).

But suppose Searle can successfully resist Descartes' conclusion -- suppose he can, despite his allegiance to the directness doctrine and denial of any appearance-reality distinction with regard to one's own mental states, consistently maintain that one's knowledge of one's own mental states is, nevertheless, corrigible. How corrigible is it? If appearances from the first-person point of view of not understanding are supposed to be capable of overriding all public appearances of understanding to the contrary in the Chinese room experiment, as it seems, the answer would seem to be, "Hardly at all." On the other hand Searle recently allows, "that it often happens that someone else is in a better position than we are to determine whether or not we are really, for example, jealous, angry, or feeling generous" (Searle 1992, p. 145). If he allows this, I submit that it might happen (e.g., in Chinese rooms) that someone else is in a better position than oneself to determine whether or not one is really, for example, understanding Chinese. Even if Searle can resist extrapolation of this downgrading of the primacy he previously accorded the first-person point of view in the case of understanding, he undermines his claim that "the same [Chinese room counterexample] would apply to any Turing machine simulation of human mental phenomena" (Searle 1980a, p. 417). On Searle's recently expressed views the same counterexample would not apply to Turing machine simulations of jealousy, anger, and generosity, for example.

Confronted with the challenge that his views in these connections are indistinguishable from Descartes' and, as such, too regressive to merit serious consideration, Searle invokes what I call the "deny-the-name maneuver." That is, he claims that there is no such thing as such-and-suches on the ground that the characteristic expressions used by philosophers to designate such-and-suches imbed a false picture or misleading analogy. Thus, he maintains, there is no such thing as introspection (so, of course, he is not advocating anything like introspectionism) on the grounds that "the model of specting intro requires a distinction between the object spected and the specting of it, and we cannot make this distinction for conscious states" (Searle 1992, p. 144). He maintains there is no such thing as privileged access (so, of course, he is not invoking any such discredited Cartesian apparatus) on the grounds that "the metaphor of a private inner space [which I access] breaks down when we understand that there isn't anything like a space into which I can enter, because I cannot make the necessary distinctions between the three elements of my self, the act of entering, and the space into which I am supposed to enter" (Searle 1992, p. 98). But deny-the-name is a misbegotten argumentative maneuver. It would be most heartening to eliminativists if one could argue cogently in this manner: one might argue there's no such thing as understanding because there's nothing to stand under and nothing capable of standing there (minds or brains don't have legs); there's no such thing as apprehension because there's nothing in minds to be seized and nothing capable of seizing it (minds or brains don't have hands); etc. One might also argue that there's no such thing as phenomenological intrinsicality because a phenomenological self is not literally a container in which thoughts could literally be contained. But one can't argue soundly in this manner. Searle himself observes, "Another rhetorical device for disguising the implausible is to give the commonsense view a name and then deny it by name and not by content" (Searle 1992, p. 4). Searle calls this the "`give-it-a-name' maneuver" (Searle 1992, p. 5: my emphasis). The deny-the-name maneuver Searle himself uses is just a variant of this: a rhetorical device for disguising one's commitment to philosophical views "thought long ago discredited" by objecting to their names without rejecting their content.

2.4 Summary and Prospectus

With the Cartesian provenance of so many central Searlean doctrines not in doubt, it is obvious that Searle's "rediscovery of the mind" is an attempt to reclaim the Cartesian vantage point or "first-person point of view" for psychology. The question remaining is whether this attempt is regressive or whether, on the contrary, Searle's rediscovery overcomes the difficulties Descartes' original discovery begot; difficulties which led, in the earlier part of this century, to wholesale and precipitous abandonment by psychologists of the "point of view" Searle claims to "rediscover." Do Searle's proposals -- his "simple solution" (Searle 1992, p. 1) to the mind-body problem; his "solution to the `other minds problem'" (Searle 1992, p. 78); his attempt to explain how, despite the "in principle" accessibility of all mental states to consciousness and the circumstance that "the appearance [of mental states from the first-person point of view] is their reality" (Searle 1992, p. 122), such appearances can be deceiving -- really solve anything? I think Searle's endeavors in these connections do not succeed. The characteristic anomalies of dualism remain as anomalous as ever on Searle's as-if dualistic views.

3. The Mind-Body Problem and Searle's "Solution"{4}

The mind-body problem, as Searle puts it, concerns whether "we will be left with a class of entities [or phenomena] that lies outside the realm of serious science and with an impossible problem of relating these entities [or phenomena] to the real world of physical objects": Searle recognizes this as a signal "incoherence of Cartesian dualism" (Searle 1983, p.263). It is my contention that Searle's attempted "monist interactionist" (Searle 1980b, p. 455) solution to the mind-body problem -- a solution he nowadays terms "biological naturalism" -- fails.{5} Either "monist interaction" underwrites reduction of the mental to the physical, in which case it's no more, or less, successful at solving the problem than central state identity theories (e.g., Smart 1959; Armstrong 1968), which is what Searle's would be on this understanding; or else it underwrites a functionalist-style token-token quasi-reduction, in which case Searle's account is no more, or less, successful at solving the mind-body problem than standard functionalist accounts (e.g., Cummins 1989; Pylyshyn 1984; Fodor 1974), which avail themselves of such token identifications (which Searle's account would be on this understanding); or else it underwrites neither reduction nor quasi-reduction, in which case Searle's account is no more successful at solving the mind-body problem than Descartes' (from which Searle's account is scarcely distinguishable on this as-if dualistic interpretation).

The view Searle advances "toward `solving the mind-body problem'" (Searle 1984, p.17) and the "picture" he believes "will eventually lead to a resolution of the dilemma" (Searle 1983, p.264) posed by "the causal efficacy of the mental" (Searle 1983, p.265) is one "according to which mental states are both caused by the operations of the brain and realized in the structure of the brain (and the rest of the central nervous system)" (Searle 1983, p. 265). This account is supposed to render the mental -- in particular the liability of minds to be affected by physical causes and the power of the mind to produce other mental and physical effects -- physically and even mechanically explicable without having the consequence that "intrinsic mental phenomena" can be "reduced to something else or eliminated by some kind of redefinition" (Searle 1983, p.262) or even quasi-reduced by token-token identifications.

How is this supposed to work, this "approach that does not commit one to the view that there is some class of mental entities lying outside the physical world altogether, and yet does not deny the real existence and causal efficacy of the mental" (Searle 1983, p.263)? To make it work, Searle faces an apparent dilemma:

If the specifically mental aspects of mental states and events function causally ... then the causal relation is totally mysterious and occult; if, on the other hand, you employ the familiar notion of causation according to which the aspects of events which are causally relevant are those described by causal laws, and according to which all causal laws are physical laws, then there can't be any causal efficacy to the mental aspects of mental states. At most there would be a class of physical events which satisfy some mental descriptions, but those descriptions are not the descriptions under which the events instantiate causal laws, and therefore they do not pick out causal aspects of the events. Either you have dualism and an unintelligible account of causation or you have an intelligible account of causation and abandon the idea of the causal efficacy of the mental in favor of some version of the identity thesis with an attendant epiphenomenalism of the mental aspects of psycho-physical events. (Searle 1983, p.264-265)
Yet, Searle alleges, there is really no great mystery about mind-body interaction, for there are many "completely trivial and familiar examples of these same sorts of relations" (Searle 1983, p.265). The mental, he holds, is caused by and realized in the physical (specifically, in the neurophysiology of the brain) just as "the liquid properties of water are caused by the molecular behavior [of the water molecules]" and likewise "realized in the collection of molecules" (Searle 1983, p.265). The general point, Searle claims, is "that two phenomena can be related both by causation and realization provided that they are so at different levels of description" (Searle 1983, p. 266). "Nothing," Searle claims, "is more common in nature than for surface features of a phenomenon to be both caused by and realized in a micro-structure, and those are exactly the relationships that are exhibited by the relation of mind to brain" (Searle 1984, p. 22-23). On this account mental properties are global or surface properties (macro-properties) of brains or nervous systems just as liquidity is a global property of collections of water molecules, elasticity a global property of the collections of molecules in tires, transparency a global property of the collection of molecules in the window glass, etc. (Searle 1984, p. 20): "mental phenomena are just [high level] features of the brain (and perhaps the rest of the central nervous system)" (Searle 1984, p.19). The seeming mystery of the supposed causal interaction of mind and brain
only seems so [mysterious] if we think of the mental and the physical as naming two ontological categories, two mutually exclusive classes of things, mental things and physical things, as if we lived in two worlds, a mental world and a physical world. But if we think of ourselves as living in one world which contains mental things in the sense in which it contains liquid things and solid things, then there are no metaphysical obstacles to a causal account of such things. My beliefs and desires, my thirsts and visual experiences, are real causal features of my brain, as much as the solidity of the table I work at and the liquidity of the water I drink are causal features of tables and water. (Searle 1983, p.271)
Searle's "solution" to the mind-body interaction problem is that "because mental states are features of the brain, they have two levels of description -- a higher level in mental terms, and a lower level in physiological terms. The very same causal powers of the system can be described at either level" (Searle 1984, p.26).

It seems the "solution" to the mind-brain interaction problem being depicted here is that there is only one thing, the brain, describable in mental and physical terms. Given "the widely held view that an ideal causal account of the world must always make reference to (strict) causal laws and those laws must always be stated in physical terms" (Searle 1983, p.271), how are we to avoid the conclusion that the real causal story is just the microstructural one? How are we to avoid the conclusion that Searle has come to "abandon the idea of the causal efficacy of the mental in favor of some version of the identity thesis with an attendant epiphenomenalism of the mental aspects of psycho-physical events"? (Searle 1983, p.265) The "solution to the mind-body problem" Searle advocates, while in general agreement with proposals of Fodor (1974, 1989) and other proponents of autonomous causal levels of reality limned by autonomous special sciences, differs from standard nonreductivist proposals in one key particular. "Token-token identities" of higher with lower level property instantiations (phenomena or events), for standard nonreductivism, tie the various levels of reality together; yet in the case of consciousness and it's neurophysiological bases, Searle refuses to countenance "ontological reduction" (Searle 1992, chap. 5) via token identities.

3.1 Realization and Levels of Causation: A Picture

The "picture" Searle believes "will eventually lead to a resolution of the dilemma" (Searle 1983, p.264) posed by "the causal efficacy of the mental," is a picture "according to which mental states are both caused by the operations of the brain and realized in the structure of the brain (and the rest of the central nervous system)" (Searle 1983, p.265). This picture, Searle maintains, "has been available to any educated person since serious work began on the brain nearly a century ago" and provides the "famous mind-body problem" with "a simple solution" (Searle 1992, p. 1) along the following lines:
Consciousness [thus mind in general] is a higher-level or "emergent" property of the brain in the utterly harmless sense of "higher-level" or "emergent" in which solidity is a higher level emergent property of H2O molecules when they are in a lattice structure (ice) and liquidity is similarly a higher-level emergent property of H2O molecules when they are, roughly speaking, rolling around on each other (water). (Searle 1992, p. 14)
Fleshed out, this is the picture (cf. Searle 1983, pp. 269-270):
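The diagram itself has not survived transcription. The following reconstruction is mine, not Searle's own figure, drawn from the covering laws cited in section 3.2 below: crosswise arrows represent lawlike succession at a level, upward arrows represent realization, and Cm and Cb are the auxiliary conditions under which the laws of succession hold.

    Figure 1 (reconstructed)

                Cm
        M ------------> M'        (x)((Mx & Cmx) -> M'x)
        ^               ^
        |               |         (x)(Bx -> Mx), (x)(B'x -> M'x)
        |       Cb      |
        B ------------> B'        (x)((Bx & Cbx) -> B'x)

Here M and M' are upper level (mental) properties, B and B' their lower level (neurophysiological) bases.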

This picture can be used to represent reductivist and standard nonreductivist positions as well as Searle's nonstandard nonreductivist position depending on how the realization relation is construed. According to reductionist views (type-type identity theories) the realization of some higher level property H by some lower level property L involves the identity (at least coextensiveness) of the properties: on this view, every instance of H must also be an instance of one and the same lower level property L. According to standard nonreductivism, although upper level mental properties (e.g., M) are not reducible to lower level physical (specifically neurophysiological) properties (e.g., B), the lower level "implements" or "realizes" the upper by virtue of identities obtaining between instances (tokens) of mental properties and instances (tokens) of neurophysiological properties; identities, in effect (cf. Fodor 1974), between individual mental and physical phenomena or events. (Below I use lower case letters to denote such individual property token(ing)s or events: letting m be an individual token of property M, b an individual token of property B, etc.) Searle (1992), like Nagel (1974; 1986) and Jackson (1982), denies such token-token identities. Yet Searle, unlike Nagel and Jackson, holds the above picture still provides a nondualistic solution to the mind-body problem on which the mind is fully real and causally efficacious solely on the strength of the causal realization of the mental by the neurophysiological, without the token identifications.{6}

Everyone agrees that where the higher level properties are reducible to lower level properties the above picture perspicuously accounts for the causal salience of the higher level. Causal laws and explanations citing reducible high level features or things can be considered just a shorthand way of talking about causal regularities and processes that really exist entirely at the lower level. But everyone (almost) also agrees that mental properties are not reducible in this way to lower level (neurophysiological) properties. Two questions, then, arise. (1) Does the standard nonreductivist construal of this picture underwrite a solution to the mind-body problem? If so: (2) Does Searle's further weakening of the realization relation from token-token identity to mere (?) causation preserve this solution?

3.2 Standard (Functionalist) Nonreductivism

Jaegwon Kim (1989) argues that the standard nonreductivist construal of this picture is methodologically vexed due to its violation of an independently warranted "principle of explanatory exclusion" (EE): "No event can be given more than one complete and independent explanation" (Kim 1989, p. 79). Kim argues such a multiplication of explanations "for one and the same event E, is an inherently unstable situation" (Kim 1989, p. 85): "If simplicity and unity of theory is our aim when we seek explanations, multiple explanations of a single phenomenon are self-defeating -- unless, that is, we are able to determine that their explanatory premises are related to one another in appropriate ways" (Kim 1989, p. 93). Everyone agrees, as already mentioned, that reduction is an appropriate way for the explanatory premises to be related. If H is identical with L -- if a universally quantified biconditional "bridge law" (x)(Hx <-> Lx) holds warranting claims of "type-type" identity between H-ness and L-hood, so that being H (e.g., ice) just is being L (a lattice structure of H2O molecules) -- there is no problem: the "two explanations differ only in the linguistic apparatus used in referring to, or picking out, the conditions and events that do the explaining; they are only descriptive variants of one another" and there is "only one explanation here, and not two" (Kim 1989, p. 87). But it is widely held that nonreductive supervenience -- where multiple universally quantified conditional "realization laws" (x)(L1x -> Hx), (x)(L2x -> Hx), etc., hold, so that having any of these lower level properties suffices for H-ness but no one among them is specifically necessary for H-ness, so that H is "multiply realized" by lower level properties L1, L2, etc. -- is also an appropriate interlevel relation. Mental properties, most everyone agrees, are multiply realizable. But multiple realization is problematic given EE.
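Displayed in the notation the text has been using (the schematization is mine, not Kim's), the two ways the explanatory premises might be appropriately related look like this:

    Reduction (type-type identity):
        (x)(Hx <-> Lx)
    -- one biconditional bridge law; being H just is being L, and the
       higher and lower level explanations are descriptive variants of
       a single explanation.

    Multiple realization:
        (x)(L1x -> Hx), (x)(L2x -> Hx), ..., (x)(Lnx -> Hx)
    -- each Li suffices for H but none is necessary for it; no
       biconditional bridge law is available, and the higher and lower
       level explanations remain, by EE's lights, two.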

To see the problem Kim alleges, consider the multiply realizable high level property of raising one's arm (an action type). On the two-level picture this act is subject to two different explanations. According to (mental explanation) E1 I raise my arm because I intend to: my arm raising is explained by being subsumed under the "covering laws" (x)(Bx -> Mx) and (x)((Mx & Cmx) -> M'x). According to (physiological explanation) E2 I raise my arm because I am in a certain neurological condition which causes physiological occurrences (efferent nerve activation, muscle flexings, etc.) which realize arm raising: my arm raising is explained by being subsumed under the covering laws (x)((Bx & Cbx) -> B'x) and (x)(B'x -> M'x). Standard nonreductivism cannot avoid this difficulty by citing token-token identities between particular "tokenings" (m, m', b, b') of mental and neurophysiological types (M, M', B, B') since it's nomological connections between properties that are expressed in the covering laws cited in explanations like E1 and E2: different properties, different laws; different laws, different explanations; token identities notwithstanding. If Kim (1989) is right about this being an unstable situation (contrary, e.g., to Fodor 1974, 1989), as I believe he is, standard (functionalist) nonreductive materialism -- as a would-be solution to the mind-body problem -- buys only half a loaf.
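Set out side by side (again, the schematization is mine), with a the agent, the two covering-law derivations terminate in one and the same explanandum, M'a, the arm raising:

    E1 (mental)                     E2 (physiological)
      Ba                              Ba & Cba
      (x)(Bx -> Mx)                   (x)((Bx & Cbx) -> B'x)
      so: Ma                          so: B'a
      Ma & Cma                        (x)(B'x -> M'x)
      (x)((Mx & Cmx) -> M'x)          so: M'a
      so: M'a

Two complete explanations of a single event, citing different laws: precisely the situation EE declares unstable.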

3.3 Searle's Nonstandard Nonreductivist Nonsolution

What then of Searle's nonstandard nonreductivist "solution"? In opposition to the standard nonreductivist token identity solution to the mind-body problem, Searle's own brand of nonreductive "materialism" denies that mental property instantiations and the neurophysiological property instantiations that "cause and realize" them are ever one and the same events. Rather than improving on the standard functionalist solution, Searle's denial of token identities renders his "solution" to the mind-body problem doubly accursed: not only is there explanatory (or epistemological) overdetermination but causal-metaphysical overdetermination also. There will be two distinct causal chains of events producing one and the same outcome: b will produce m' both via causal chain b -> b' -> m' and via causal chain b -> m -> m'. For every act we perform to be overdetermined by two distinct sufficient causes seems incredibly uneconomical and unlikely. Moreover, if b' and m both causally suffice for m' and b -> b' and m -> m' are distinct causal chains (m -> m' a procession of subjective conscious events and b -> b' a procession of objective neurological events), then if either chain were broken, the effect m' should still be brought about via the other. Most worrisomely, if nexus b -> b' were broken (by some violation of conditions among Cb), it is difficult to see, on Searle's account, why m shouldn't still suffice to cause m' via m -> m': my intention to raise my arm should still bring about the raising of my arm, independently of the usual physiological processes of efferent nervous impulses, muscle flexings, etc., as if by telekinesis! To block this result one must either assume ad hoc that the processions of mental and physical events, while distinct, always happen to march serendipitously along together (shades of preestablished harmony), or else deny the independent causal sufficiency of m for bringing about m' after all (shades of epiphenomenalism), or else allow that m causes m' via "downward causation" of b (shades of traditional interactionism). I will consider each of these alternatives in turn.
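The double trouble just described can be summarized schematically (the summary is mine):

    Overdetermination: two sufficient causal chains for m'
        (i)  b -> b' -> m'     (objective neurophysiological chain)
        (ii) b -> m  -> m'     (subjective phenomenological chain)

    Telekinesis: suppose the b -> b' nexus fails (some condition among
    Cb is violated) while chain (ii) remains intact.  Since m is
    supposed to causally suffice for m', the arm raising m' should come
    about anyway, without efferent impulses or muscle flexings.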

3.31 As-if Parallelism: Shades of Preestablished Harmony

Consider the first alternative -- that once the mental gets "given off" by the physical processes occurring in the brain, mental and physical processes always just happen, serendipitously, to run in tandem in a way reminiscent of traditional doctrines of psychophysical parallelism and preestablished harmony. Here is the picture (cf. Broad 1925, chap. 3):
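The picture is likewise missing from the text; reconstructed on the model of Figure 1 (the reconstruction, again, is mine), it would run roughly as follows, with the mental procession "given off" at the start and no crosswise causation between levels thereafter:

    m ------------> m'        (subjective phenomenological procession)
    ^
    |  (caused and realized)
    b ------------> b'        (objective neurophysiological procession)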

Parallelism, of course, is not a solution to the mind-body problem but a nonsolution (perhaps a dissolution) of it. How do minds and bodies interact? How do mental events cause physical effects? The parallelist "answer" is "They don't." There's no problem about how this mind-body interaction happens, for the parallelist, because it doesn't. Of course, the objection to this "solution" is that it seems too ad hoc -- why, pray tell, should these sequences of mental and physical events just happen to march serendipitously along like this? That God set it up that way -- preestablished harmony -- is not, I take it, an answer Searle would advance. Searle's "solution," it seems, needs preestablished harmony without the Harmonizer.

I imagine Searle might resist this accusation by denying that there's anything serendipitous about it -- the objective neurological procession and the subjective phenomenological procession don't just happen to run in parallel; they have to, because at each step (at every intervening time) the bottom-level neurophysiological events are causing the top-level conscious events. Nothing serendipitous about it at all. This won't work.

Here's why. The picture, now, is that each intervening neurophysiological event between b and b' causally suffices both for its phenomenological correlate and for the next succeeding neurophysiological event, which suffices for its phenomenological correlate, etc. Suppose there are n phenomenologically distinct events m1 ... mn intervening between m and m', each caused and realized by a corresponding neurophysiological event among b1 ... bn intervening between b and b'. Let our picture, then, be this:{7}
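The figure is again missing; the following reconstruction (mine) interpolates the n intervening events, with crosswise arrows for ordinary causal succession and upward arrows for simultaneous causal realization:

    Figure 2 (reconstructed)

    m -> m1 -> m2 -> ... -> mn -> m'     (phenomenological procession)
    ^    ^     ^            ^    ^
    |    |     |            |    |
    b -> b1 -> b2 -> ... -> bn -> b'     (neurophysiological procession)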

Notice, first, that adding these intervening events does nothing to assuage, but rather aggravates, overdetermination troubles here. On the original picture there were two causal chains, b -> m -> m' and b -> b' -> m', leading from b to m'; on this new picture there are 2+n: if n=1, b will bring about m' via b -> m -> m1 -> m', via b -> b1 -> m1 -> m', and via b -> b1 -> b' -> m'; etc.

As for serendipity, whatever else distinguishes the type of causation (r_causing) involved in realization from ordinary causation (o_causing), it is clear from the figures above that r_causes and their effects are simultaneous while o_causes precede their effects. This makes r_causing unconditional in a way, it seems, that o_causing is not. Since there is always some interval between an o_cause c and its effect e, it seems something could have intervened after c to prevent e, though ex hypothesi (since c did o_cause e) in this instance nothing did; in the case of r_causing, however, there is no interval in which anything can intervene. Recurring to our original diagram (Figure 1, above), we see a corresponding difference between the laws corresponding to the upward arrows (realization laws) and those corresponding to the crosswise arrows (laws of succession): the realization laws are unconditional, the laws of succession have conditions.

Now, what we are being asked to suppose by the defense of Searle we are considering is this: every violation of a neurophysiological condition Cb must cause (and realize?) some violation of some mental condition Cm or, thinking in terms of event causation (Figure 2), any intervening event that might have interrupted the procession of neurophysiological events b -> ... -> b' (but didn't) would also have to cause (and realize?) some interruption of the phenomenological procession, lest the bodily movement (e.g., my arm raising) be brought about despite the break in the physiological processes, as if by telekinesis. To get a sense of the serendipitousness of what is being proposed, consider the sorts of interrupting conditions or events (pertaining to subjective trains of thought) we would expect to encounter above (e.g., distractions, second thoughts, etc.) and the sorts of interrupting conditions or events (pertaining to objective chains of neurophysiological occurrences) we would expect to encounter below (e.g., electrical failures, chemical imbalances, etc.). The originally supposed correlation of neurophysiological with phenomenological occurrences, though speculative, was at least constrained -- the constant conjunction was speculated to hold between specified types of event (phenomenological and neurophysiological). But the sorts of things that might intervene -- the class of potential interrupting conditions -- are not similarly constrained.

Recall that both of the causal levels depicted in our diagram -- the neurophysiological and the mental -- are relatively high levels as compared, e.g., to the subatomic level (cf. Oppenheim & Putnam 1958): it is generally acknowledged that the higher the "level of description" the more exception-prone the empirical laws (cf. Fodor 1974). Only at the bottom micro-level do causes cause effects according to "strict deterministic laws" (Davidson 1970, p. 208); at higher macro-levels causes cause effects ceteris paribus. Now the thesis-saving supposition we are considering is not just that the theoretically specified phenomenological and neurophysiological events coincide, nor even that all the theoretically specifiable auxiliary conditions (Cms and Cbs) do, but that all the theoretically unspecified other things that might (or might not) be equal also coincide. The crucial coincidence that has to be assumed is this: other things (ceteris paribus conditions) are never not equal neurophysiologically without other things (ceteris paribus conditions) being not equal phenomenologically. Given the nonspecificity of ceteris paribus conditions, it is difficult to view these hypothetical correlations (of upper-level and lower-level violations and fulfillments of such conditions) as anything but adventitious. Where the hypothesized coincidence of phenomenology and neurophysiology proper is speculation, this hypothesized coincidence of other things being equal vis-à-vis phenomenology and vis-à-vis neurophysiology is wild speculation.

Not only is the hypothesis that violations and fulfillments of ceteris paribus conditions "above" and "below" coincide improbable; in another context Searle himself cites evidence to the contrary. The "famous case described by William James" (Searle 1983, p. 89) counterinstances this "parallelism" hypothesis.{8} The case, as Searle describes it, is this:

a patient with an anesthetized arm is ordered to raise it. The patient's eyes are closed and unknown to him his arm is held to prevent it from moving. When he opens his eyes he is surprised to discover that there was no arm movement. In such a case he has the experience of acting and that experience plainly has Intentionality; we can say of the patient that his experience is one of trying but failing to raise his arm. (Searle 1983, p. 89)
But notice, the patient's experience was not one of trying but failing -- the failure is not subjective, not, so to speak, in the scope of his experience, but objective. The patient's subjective experience is of trying and succeeding -- or else why, when he opens his eyes, would he be "surprised to discover that there was no arm movement" -- but objectively he fails. This counterexample to the parallelism hypothesis, moreover, is of the worst sort: a violation of the neurophysiological ceteris paribus conditions without a parallel violation of the phenomenological ones, i.e., the sort of violation that should lead us to expect the arm raising to come about independently of the usual physiological mechanisms, as if by telekinesis. If we can speak of afferent and efferent experiences -- roughly, of sensings (afferent) and willings and tryings (efferent) -- in this case the stream of efferent experiences (the willing and trying) gets completed (all the way to mn) but, contrary to the supposed causal sufficiency of the last experience (mn) for arm raising (m'), the patient's arm doesn't go up.{9}

There would seem only two ways to square Searle's picture with these facts: (1) acknowledge that the real causes of arm raising are neurophysiological and the phenomenological causing is illusory (the epiphenomenalist "solution"); or (2) maintain that the supposed or experienced phenomenological stream of experiences does cause the arm's rising, but causes it via downward causation of muscle flexings, etc. (the as-if interactionist "solution").

3.32 Shades of Epiphenomenalism

Here is the epiphenomenalist picture on which mental events are neurophysiologically caused, but have no further mental or physical effects:
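Roughly (a schematic rendering, presumably of what Figure 4 depicts): the physical chain runs along the bottom, each physical event gives off its mental correlate, and the mental events have no outgoing causal arrows at all:

\[
\begin{array}{ccc}
m & & m' \\
\Uparrow & & \Uparrow \\
b & \rightarrow & b'
\end{array}
\]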

Though epiphenomenalism is, perhaps, the least costly option for someone, like Searle, who rejects token identifications of mental and physical phenomena (cf. Jackson 1982), epiphenomenalism is generally accounted a cost (to be avoided if possible); it is, moreover, an option Searle rejects (see, e.g., Searle 1983, pp. 264-265; Searle 1992, p. 125), a cost he maintains his "solution" to the mind-body problem avoids. As for the generally acknowledged costliness of the epiphenomenalist option: since one "would prefer to keep both [one's] commonsense conceptions and [one's] scientific beliefs," and since "our conception of ourselves as ... agents is fundamental to our [commonsense] conception of ourselves" (Searle 1984a, p. 86), one would prefer not to "abandon the idea of the causal efficacy of the mental" (Searle 1983, p. 265) altogether. Moreover, besides its being antithetical to common sense, Kim accounts it a further cost of epiphenomenalism that it violates a plausible principle, culled from Samuel Alexander (Alexander 1927), which Kim calls "Alexander's dictum": "To be real is to have causal powers" (Kim 1992b, p. 17). If epiphenomenalism manages to save the phenomena of consciousness and Intentionality at all, it saves them for such a cloistered, causally quarantined existence as to make one wonder whether epiphenomenalism constitutes a phenomenological or "Intentional Realism that's worth having" (Fodor 1989, p. 51) at all. Alexander's dictum maintains that "at least for scientific purposes" (Fodor 1989, p. 51) it does not. In addition to these general costs, epiphenomenalism has specific costs for Searle. In particular, his plan for the derivation of the derived Intentionality of signs from the intrinsic Intentionality of speakers' meaning intentions requires the signs (e.g., utterances and inscriptions) to be causal expressions of these intentions.

Given the preceding, it would seem a signal virtue of Searle's "solution" if he is right in claiming that "[t]he picture that I have been suggesting ... according to which mental states are both caused by the operations of the brain and realized in the operations of the brain" (Searle 1983, p. 265) successfully avoids epiphenomenalism with its attendant costs. If the analysis I am proposing in the present section is sound, and the criticisms (in subsection 3.31, above) of the first (as-if parallelist) tack are well taken, this means the only construal yet open to someone like Searle who would resist token identifications of mental and physical property instantiations or phenomena is the one I call "as-if interactionism." There is considerable text to support the contention that this is the option Searle, for the most part, does adopt.

3.33 As-if Interactionism

The as-if interactionist construal of our basic picture (Figure 1) and our revised or extended event-causal picture (Figure 2) "allows for top-down causation (our minds, for example, affect our bodies)" (Searle 1984a, p. 94) and thereby allows us to replace the two parallel subjective mental and objective neurophysiological causal chains of the parallelist picture (which give rise to overdetermination troubles) with a single causal chain having both objective neurophysiological and subjective mental links (with no overdetermination troubles). This allowance also, as already indicated, avoids the difficulty raised by James's experiment -- that the subject's subjective procession of experiences is (1) evidently complete and (2) supposedly sufficient to cause arm raising, yet (3) fails to cause arm raising. The reason the subjective procession of phenomenological states fails to bring about arm raising when the arm is restrained, we can now say, is that mental causes bring about overt physical effects such as arm raising via downward causation of intermediate covert physical effects which would physically cause the overt behavior, except that in the present case there is a physical restraint -- "his arm is held" -- blocking the last transition. Much that Searle says suggests this as-if interactionist take on the picture(s) at issue, from his earlier confession that "I suppose this [solution to the mind-body problem] is `interactionism' and I guess it is also, in some sense, `monism'" (Searle 1980b, p. 456), to more recent interactionist-sounding assertions, like the following:
In the normal case ... mental states mediate the relationship between input stimuli and output behavior. (Searle 1992, p. 69)
Causally, consciousness serves to mediate the causal relations between input stimuli and output behavior; and from the evolutionary point of view, the conscious mind functions causally to control behavior. (Searle 1992, p. 69)
Not surprisingly, such conspicuously interactionist-sounding talk occasions conspicuously interactionist sorts of difficulties.

One crucial question -- if we are going to flesh out an interactionist version of our picture(s) -- is how this mental mediation is supposed to take place without violating "the causal closure of the physical domain" (Kim 1992b, p. 2): if subjective mental events (e.g., intentions) cause objective physical effects (e.g., arm raisings) without being part of the objective physical world, it will be the case that "there can be no complete theory of physical phenomena" since there will be "physical occurrences that cannot be explained by invoking physical antecedents and laws alone" (Kim 1992b, p. 1). There is also difficulty in saying "when" in the causal chain this mental mediation is supposed to occur. In this connection Kim notes "that given the simultaneity of the instances [of m and b] it is not possible to think of the M-instance as a temporally intermediate link in the causal chain [from b to b']" (Kim 1992b, p. 23: my emphasis). Perhaps this -- though suspicious -- doesn't absolutely preclude an interactionist construal of our picture, if we can think of the "realization" steps as somehow intermediate without being temporally so. If so, then different answers to our "where" question determine pictures of just two basic types, depending on the manner of mediation involved -- whether the mental mediation is supposed to be continuous or intermittent. Here are the pictures:
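Schematically (rough renderings, presumably of Figures 5a and 5b): on continuous mediation a single unbroken mental link spans the gap between stimulus and response; on intermittent mediation several mental links alternate with physical ones:

\[
\text{(a) continuous:}\quad b \;\rightarrow\; m \;\rightarrow\; b'
\qquad\qquad
\text{(b) intermittent:}\quad b \;\rightarrow\; m_1 \;\rightarrow\; b_1 \;\rightarrow\; m_2 \;\rightarrow\; \cdots \;\rightarrow\; b'
\]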

Since, in intermittent mediation, each of the several mental links in the causal chain is continuous so long as it lasts, problems with continuous interaction get inherited by the intermittent picture also. I begin with continuous mediation.

According to the hypothesis of continuous mediation there is just one continuous subjective mental link -- however long -- in the causal chain between physical stimulus and overt response: the mediation is continuously subjective and mental throughout. In the spirit of Searle's insistence that "the mind consists of qualia, so to speak, right down to the ground" (Searle 1992, p. 20) we here depict mental causation as consisting of qualia, so to speak, from start to finish. This depiction is vexed.

First, the gross picture just limned seems empirically dubious with regard both to what it puts in and to what it leaves out. It puts in a continuous stream of subjective experiences or qualia intervening between stimulus and response, where it has been forcefully and, I think, rightly observed (see, e.g., Wittgenstein 1958) that experience reveals no such continuous chain of qualia or experiences, because certain of the mental links in such putative chains (especially the Intentional ones, e.g., recognizing, intending, etc.) conspicuously lack any subjective experiential "feel" or qualitative experiential character. What is left out is obvious from the picture: left out is the whole chain of neurophysiological processes -- from optic nervous impulses to activity in the visual center of the brain, etc. -- that we know mediates the response. Here you might think to reply, "Well, your picture is just too crude. If you fill in the missing bottom with all the physiologically mediative processes we know of, you would get a different picture." But how is this "filling in" supposed to go? Either it will merely shorten the subjective link in the causal chain -- as with Descartes' story about the pineal gland -- in which case it's still the same picture (only shorter); or else it will insert several subjective links at intervals, in which case it becomes our intermittent picture (Figure 5b).

Subjectively, perhaps, this intermittent picture has some empirical advantages. Attentiveness to my own experiences in acting seems to me to reveal, not a continuous chain either of qualia or of conscious ratiocinations, but rather occasional flashes of qualia and interludes of conscious ratiocination. Objective empirical advantages, however, the intermittent picture lacks. For the same reason that Descartes' shortening of the subjective link is unavailing -- isn't there a physical causal chain from input to output in the pineal gland also? -- the intermittent picture is unavailing in this connection. Don't we believe (justifiably) that there are objective physical causal links at each juncture (from b to bi, from bi to b') in the intermittent picture also? Well, suppose we fill in the links so:
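Filling in an objective physical link at each juncture yields something like the following (presumably Figure 6): each intervening physical event bi both physically causes its physical successor and gives off its mental correlate mi, which is likewise credited with causing that same successor:

\[
b \;\rightarrow\; b_i \;\rightarrow\; b', \qquad b_i \;\Rightarrow\; m_i, \qquad m_i \;\rightarrow\; b'
\]

so that b' gets caused twice over: once from below (by bi) and once from above (by mi).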

Much that Searle says suggests this picture. For instance:

Since all of the surface features of the world are entirely caused by and realized in systems of micro-elements, the behaviour of the micro-elements is sufficient to determine everything that happens. Such a `bottom up' picture of the world allows for top-down causation (our minds, for example, can affect our bodies). But top-down causation only works because the top level is already caused by and realized in the bottom levels. (Searle 1984a, p. 94)
causal emergence, call it "emergent1," has to be distinguished from a much more adventurous conception, call it "emergent2." A feature F is emergent2 iff F is emergent1 and F has causal powers that cannot be explained by the causal interactions of [microelements] a, b, c... If consciousness were emergent2, then consciousness could cause things that could not be explained by the causal behavior of the neurons. The naive view here is that consciousness gets squirted out by the behavior of the neurons in the brain, but once it has been squirted out, it then has a life of its own. (Searle 1992, p. 112)
"It should be obvious," Searle adds, "that on my view consciousness is emergent1, but not emergent2" (Searle 1992, p. 112); but it is not obvious, as difficulties introduced by this latest revision of our picture reveal. Not only does this revision reintroduce overdetermination -- if you resist token identification of subjective mental with objective neurophysiological events it is as if the subjective were "squirted out" to take on a causal "life of its own" such that b' besides being caused by bi is getting independently caused by mi -- once subjectivity has been "squirted out" to take on this independent causal "life of its own" we have our as-if telekinesis problem too. If mi and biare each causally sufficient for b' then the causal action of one might be interrupted independently of the other. Once mi gets "squirted out" something might happen to break just the causal link bi²->b', in which case b' will still be caused via causal link mi²->b', as if by telekinesis.

3.34 Conclusion: Standard Reductivism and Searle's Nonsolution

In the light of the alternatives just canvassed -- as-if parallelism, epiphenomenalism, and as-if interactionism -- allowing m=b, identifying mental and physical property instantiations or events, after all, looks most attractive. Pace Kim's criticisms, standard nonreductivism may (as a solution to the mind-body problem) purchase only half a loaf; but half a loaf is better than none. Here, in Searle's own words, is how the token identity solution makes that purchase:{10}
corresponding to the description of the causal relations that go from the top to the bottom, there is another description of the same series of events where the causal relations bounce entirely along the bottom, that is, they are entirely a matter of neurons and neuron firings at synapses, etc. (Searle 1984a, p. 93)
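Crudely sketched (my rendering, not Searle's or Kim's): on the token-identity picture there is a single chain of events, each of which is identically both mental and physical, so the "two" causal chains are one chain under two descriptions:

\[
\underbrace{b}_{=\,m} \;\rightarrow\; \underbrace{b'}_{=\,m'}
\]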
Since, according to standard nonreductivism, there's only one procession of events, one causal chain, with two descriptions, this solves the overdetermination and as-if telekinesis problems. Since token mental events (beliefs, desires, etc.) really are causes (in virtue of their physical properties: cf. Davidson 1964), standard nonreductivism neatly avoids epiphenomenalism as well. The reason this may seem only a half-solution to the mind-body problem is that on this picture it seems (to me, as to Kim, as with Davidson's (1970) "anomalous monism") that we must say these subjective or mental events cause their successor events solely in virtue of their objective neurophysiological properties and not at all in virtue of their subjective phenomenological or Intentional ones; qua physical, as it were, and not qua mental. This is less, perhaps, in the way of a solution to the mind-body problem than many advocates of standard nonreductivism had hoped for, and less, in fact, than has been claimed on the token identity solution's behalf. It is claimed (e.g., by Fodor 1974; Fodor 1989) that the standard nonreductivist picture vindicates claims that the explanations couched in terms of psychological and other higher-level "special scientific" descriptions constitute autonomous levels of causal explanation whose higher-level predications pick out or characterize autonomous causal levels of reality. If Kim is right, considerations of explanatory exclusion militate strongly against such autonomy claims and favor the more conservative "anomalous monist" construal broached above. Searle's nonstandard nonreductivism not only shares the standard nonreductivist's troubles about explanatory exclusion but, by rejecting token identifications of mental with neurophysiological events, occasions more grievous troubles (overdetermination and as-if telekinesis); it occasions virtually all the incoherence of Cartesian dualism. Searle's nonstandard nonreductivism is a nonsolution.

4. The Other Minds Problem

Besides his longstanding claim to have solved the mind-body problem Searle more recently claims a "solution to the `other minds problem'" (Searle 1992, p. 77) also. Though his attempt to address the other minds problem is long overdue -- and is a vast improvement over Searle's previous (1980a-1990g) rhetorical stratagem of feigning amnesia (i.e., of stonewalling, denying there is any such problem) -- Searle's "solution" to this problem, like his "solution" to the mind-body problem, fails. Searle's proposed "solution" is neither new (being, essentially, a reiteration of the well-known argument from analogy) nor improved (providing no resources not already available to Descartes). Indeed, rather than improving on Descartes, Searle regresses in this connection not only to Cartesianism, but beyond it, by employing stronger as-ifness apparatus to exclude computers from the ranks of thinking things than Descartes employs to exclude infrahuman animals.

Searle is keen to point out that "except when doing philosophy, there is really no `problem' about other minds" (Searle 1992, p. 77). In real life we do know (I take it), not only that other people think (or have mental properties) but, often enough, roughly what they think (what mental properties they have). The other minds problem arises, as Searle notes, "when doing philosophy" (Searle 1992, p. 77); especially (Searle doesn't note) when doing philosophy from the Cartesian vantage point. It arises from the metaphysical doctrine Searle terms "ontological subjectivity," which holds that mentality is essentially consciousness -- "qualia right down to the ground" (Searle 1992, p. 20) -- with its epistemic corollary that "the first-person point of view is primary" (Searle 1992, p. 20). Whether we call this primacy "privileged access" or not, the upshot seems to be that "when we characterize [other] people by mental predicates we are ... making untestable inferences to ... ghostly processes occurring in streams of consciousness which we are debarred from visiting" (Ryle 1949, p. 51). The problem is that these "untestable inferences" seem insufficiently warranted to justify the robust belief in "other minds" we actually have.

The problem (for the Cartesian) is that "we do not have equal access to [the] empirical facts [of consciousness] because of their intrinsic subjectivity" (Searle 1992, p. 73): the "phenomena [being] intrinsically subjective [are] therefore inaccessible to direct third-person tests" (Searle 1992, p. 75). The solution required (of the Cartesian), then, is to show how "in general we have indirect methods of getting at the same empirical facts" (Searle 1992, p. 73); "to show that we can have indirect means of an objective, third-person, empirical kind for getting at empirical phenomena that are intrinsically subjective and therefore inaccessible to direct third-person tests" (Searle 1992, p. 75). Searle's especial difficulty in this connection derives from his allegiance to an as-ifness thesis -- "We could have identical behavior in two different systems, one of which is conscious and the other totally unconscious" (Searle 1992, p. 71) -- that seems to preclude our having any indirect means "of an objective, third person, empirical kind" for distinguishing genuine from the mere as-if mentality either.

Now, as a way out of the preceding difficulty, Searle proposes the following:

If you think for a moment about how we know that dogs and cats are conscious, and that computers and cars are not (and, by the way, there is no doubt that you and I both know these things), you will see that the basis of our certainty is not "behavior," but rather a certain causal conception of how the world works. One can see that dogs and cats are in certain important respects relevantly similar to us. Those are eyes, this is skin, these are ears, etc. The "behavior" only makes sense as the expression or manifestation of an underlying mental reality, because we can see the causal basis of the mental and thereby see the behavior as a manifestation of the mental. (Searle 1992, p. 22)
Where knowledge of other-minds is concerned, behavior by itself is of no interest to us; it is rather the combination of behavior with the knowledge of the causal underpinnings of behavior that form the basis of our knowledge. (Searle 1992, p. 22)
It isn't just because the dog behaves in a way that is appropriate to having conscious mental states but also because I can see that the causal basis of the behavior in the dog's physiology is relevantly like my own. It isn't just that the dog has a structure like my own and that he has behavior that is interpretable in ways analogous to the ways that I interpret my own. But rather it is the combination of these two facts that I can see that the behavior is appropriate and that it has the appropriate causation in the underlying physiology. (Searle 1992, p. 73)
"The methods" Searle proposes "rest," he avows, "on a rough-and-ready principle that we use elsewhere in science and in daily life: same causes-same effects, and similar causes similar effects" (Searle 1992, p. 75). "We can readily see in the case of other human beings that the causal bases of their experiences are virtually identical with the causal bases of our experiences" (Searle 1992, p. 75). Why won't this appeal to "causal bases" and "physiology ... relevantly like my own" work?

The idea is that since I know that I have mental states (from my own experience of them), and that other human beings are like me in relevant respects (e.g., they have brains), I have inductive warrant (by analogy with my own case) for believing other humans have mental states too. This argument from analogy, familiar to philosophers, is a standard Cartesian reply to the other minds problem, subject to familiar difficulties.{11} For starters, it involves inductive extrapolation from a single case (my own). If such an argument provides any inductive warrant for my belief that other people really have the mental properties their behavior or physiology seems to bespeak, it must be an exceedingly weak one: too weak a warrant, it seems, to justify the robust confidence (and justification) we have in attributing minds to others. Searle contends, "It is only because of the connection between behavior and the causal structure of other organisms that behavior is at all relevant to the discovery of mental states in others" (Searle 1992, p. 77); and the difficulty is, in the first place, how the inductive "connection" of mental states to behavior and physiology could be established for others, given that we can never "observe" the connection (are forever barred from experiencing the consciousness supposedly correlated with the physiology and behavior) in others. In the second place, according to Searle's picture of how "the appropriate causation in the underlying physiology" is supposed to work, "the behaviour of the micro-elements is sufficient to determine everything that happens" behaviorally, regardless of whether any consciousness is being given off at all. The basic proposal, here, for providing "indirect means of an objective, third-person, empirical kind for getting at empirical phenomena that are intrinsically subjective and therefore inaccessible to direct third-person tests" (Searle 1992, p. 75) (bells and whistles about analogy with one's own case added) is to "treat `have a mind' as an unobservable property or state of others [a state] whose existence we infer on the basis of indirect evidence in ... the same way that physicists infer the existence of their unobservables, such as quarks, gravitons, superstrings or the Big Bang" (Harnad 1991, p. 46). And the trouble is that

unlike the case of, say, superstrings, without whose existence a particular contemporary theory in physics simply would not work (i.e., it would not succeed in predicting and explaining the available evidence), in the case of the mind all possible empirical evidence (and any theory of brain function that explains it [my emphasis]) is just as compatible with the assumption that a candidate is merely behaving exactly as if it had a mind (but doesn't) as with the assumption that a candidate really has a mind. So unless we're prepared to be dualists (i.e., ready to believe that there are two kinds of "stuff" in the world -- physical and "mental" -- each with causal powers and laws of its own [Alcock (1987)]), "having a mind" can't do any independent theoretical work for us [cf. Fodor (1980)] in the way that physics unobservables do. Hence consciousness can be affirmed or denied on the basis of precisely the same evidence [Harnad (1982, 1989b)]. (Harnad 1991, p. 46)
The solipsistic predicament, once induced (by taking up the Cartesian vantage point), pertains to individuals, not to species or genera, phyla, or kingdoms.{12}

As for analogy and the other "bells and whistles" (e.g., stress on the need for convergence of physiological with behavioral evidence to support the requisite induction) of Searle's account, to see that they are such (bells and whistles), it is instructive to compare this Searlean apparatus with apparatus which, if acceptable, truly would avail in supporting the requisite induction: Descartes' apparatus. According to Descartes, while there can be as-if mentality (generally) and as-if intentionality (e.g., purposiveness) in particular, there cannot be as-if productivity or creativity (e.g., of speech). By insisting on the causal necessity of mental states for linguistic performance, Descartes ensures the inductive sufficiency of linguistic performance to evidence these mental states. Notice, now, that analogy with one's own case can (at best!) only establish the causal sufficiency of conscious experiences or thoughts (the right ones, of course) for intelligent seeming behavior. Searle is even officially committed to the thoughts or experiences not being necessary for any intelligent seeming behavior you name: the "behaviour of the micro-elements is sufficient to determine everything that happens" (Searle 1984a, p. 94). Searle's proposal seems, then, to be this: I know from my own case that certain conscious experiences or thoughts suffice for certain intelligent seeming behavior (answering questions about stories, say) in me; I know from neurophysiology that neurophysiological processes suffice for this behavior in people generally; thus, I am entitled to inductively infer that similar neurophysiological processes give off similar conscious experiences in others. Neurophysiology perhaps warrants the hypothesis that other people are "relevantly similar" to me with regard to the neurophysiological causes of their behavior, but what warrants the hypothesis of relevant similarity with regard to production of conscious experiences? We can, perhaps readily, see in the case of other human beings that the causal bases of their behavior are virtually identical with the causal bases of our behavior; but what Searle avers -- "We can readily see in the case of other human beings that the causal bases of their experiences are virtually identical with the causal bases of our experiences" (Searle 1992, p. 75) -- is just what we can't see (because we can't see the experiences) in the case of others.

Searle admits that his "solution to `the other minds problem' ... gives us sufficient but not necessary conditions for the correct ascription of mental phenomena to other beings" (Searle 1992, p. 76), though he adds, "I am quite confident that the table in front of me, the computer I use daily, the fountain pen I write with, and the tape-recorder I dictate into are quite unconscious, [although] I cannot prove they are unconscious" (Searle 1992, p. 77). Now, in the case of
the table, fountain pen, and tape-recorder, perhaps, he can be said to have evidence (lack of intelligent seeming behavior) against the hypothesis of consciousness. But in the case of the computer producing a Turing machine "simulation" of some intelligent human performance there is no behavioral evidence discountenancing the hypothesis of consciousness. Searle also maintains both that it is likely there are "a great variety of forms of neurophysiologies of consciousness" (Searle 1992, p. 75) and that there are varieties of conscious experience utterly unimaginable to us:

But now some recent research tells us that there are some birds that navigate by detecting the earth's magnetic field. .... Now, what is it like to feel a surge of magnetism? In this case, I do not have the faintest idea what it feels like for a bird, or for that matter, for a human to feel a surge of magnetism from the earth's magnetic field. (Searle 1992, p. 72-73)
In the light of this acknowledgment of the likelihood both of alien physical bases productive of the sorts of conscious experiences familiar to me from my own case and of there being radically alien sorts of conscious experiences, how does the difference between the carbon-based neurophysiological circuitry (etc.) of brains and the silicon-based circuitry of computers evidence the unconsciousness of computers? I am not saying computers are conscious: I'm saying that if you hold that being conscious is necessary for having intentional mental states (as Searle does), then the prima facie behavioral evidence of intentional mental states in machines should count, for you, as evidencing the consciousness of these machines, different causal bases notwithstanding. No reason is given for thinking that this difference -- between being a carbon-based and a silicon-based system -- is causally salient with regard to the production and nonproduction of conscious experience. Searle says,
I think it is empirically absurd to suppose that we could duplicate the causal powers of neurons entirely in silicon. But that is an empirical claim on my part. It is not something we could establish a priori. (Searle 1992, p. 66)
But the trouble is it is not something one could establish a posteriori either, on Searle's views. On Searle's views it seems "the only way to know is by being the other body" (Harnad 1991, p. 43 abstract), which one cannot do.

In the final analysis Searle's conviction that computers don't have the right stuff for consciousness (hence, for intentionality) is based on unsupported brute "intuition." Searle admits, as we've seen, that his "solution" to the other minds problem is only a half solution -- a solution, he claims, to the problem of how we know that other humans and certain nonhuman animals have consciousness and intentional mental states, but not to the problem of how we know (if we do) that other things (e.g., computers) that act as if they do don't. What Searle fails to recognize is that this lack of warrant for judgments of mindlessness in the negative case undermines the warrant his "solution" is supposed to provide for judgments of mindedness in the positive cases (e.g., of dogs and other people). How do I know that the respects in which the virtually identical causal bases of their intelligent seeming behavior differ from mine aren't precisely those respects which cause the side effect -- whether a mere side effect, as for epiphenomenalism (as in Figure 4), or an overdetermining side effect (as in Figure 6) -- of consciousness?

According to Bertrand Russell, the argument from analogy for other minds requires the following postulate:

If, whenever we can observe A and B are present or absent [e.g., in one's own case], we find that every case of B [e.g., answering questions about stories] has an A [e.g., a conscious experience of understanding] as a causal antecedent, then it is probable that most B's have A's as causal antecedents, even in cases where observation does not enable us to know whether A is present or not. (Russell 1948, p. 486)
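Put semi-formally (my gloss, not Russell's notation), the postulate licenses the inference:

\[
\big[\text{every observed } B \text{ has an } A \text{ as causal antecedent}\big] \;\Rightarrow\; \Pr\big(A\text{-antecedent} \mid B\big) \text{ is high, even where } A \text{ is unobservable.}
\]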
In terms of its applicability in the argument from analogy with one's own case to the existence of other minds, this postulate is vexed (1) because of the dubiousness of its empirical presupposition that every case of answering questions about stories (e.g.) is accompanied by a distinctive conscious experience of understanding in me, and (2) because the acceptability of the postulate itself varies, roughly, in proportion to the size, variety, and representativeness of the sampling of cases in which we can observe the subjective correlates, while the sampling here (comprised just of my experiences) seems deficient in all these respects. The analogical "solution" to the other minds problem is shaky enough on the supposition implicit in Russell's postulate that we somehow have evidence from our own case that the mediating conscious experiences are necessary for the behavioral effect -- "we find that every case of B has an A as a causal antecedent" -- i.e., shaky enough on the standard interactionist construal of the causal mediation (as in Figure 5). It can hardly help matters here to allow with Searle (as in Figure 6) that in every case, including the only case in which one can observe both the conscious experiences and the behavioral effect, "the behaviour of the micro-elements [would be] sufficient to determine everything that happens" regardless of whether it produced the conscious experiences or not. Insofar as Searle's additions to the argument from analogy are more than bells and whistles, they prove detrimental to it.

5. Introspectionism Revisited: Why Isn't Psychology Easy?

At the end of his Second Meditation, Descartes famously concludes,
I know plainly that I can achieve an easier and more evident perception of my own mind than of anything else. (Descartes 1642, pp. 22-23)
The subsequent history of psychology belies this confident expectation. The notorious inability of Introspectionists to agree even about the data introspectively revealed -- "as to whether auditory sensations have the quality of `extension,' whether intensity is an attribute that can be applied to color, whether there is a difference in `texture' between image and sensation and upon hundreds of other [questions] of like character" (Watson 1913, p. 274) -- comes especially to mind, but the whole history of psychology subsequent to Descartes' original "discovery of the mind" is suspicious in this regard. If the data of psychology are so far superior to the data of chemistry and physics as Descartes thought, why has the progress of psychology been so piddling by comparison to that of sciences based on data that are so much more "uncertain"? Searle's revisionist historical claim that psychology has been held back by its neglect of consciousness ever since Descartes -- that "the exclusion of consciousness from the natural world ... in the seventeenth century" by Descartes "more than anything else, more even than the sheer difficulty of studying consciousness with our available scientific tools has kept us from arriving at an understanding of consciousness" (Searle 1992, p. 93) -- is, frankly, amazing! It is simply not true that "since Descartes we have, for the most part, thought that consciousness was not an appropriate subject for a serious science or scientific philosophy of mind" (Searle 1990f, p. 585). Rather, from Descartes until, roughly, Watson's radical declaration that "the time seems to have come when psychology must discard all reference to consciousness; when it no longer needs to delude itself that it is making mental states the object of observation" (Watson 1913, p. 273), consciousness was thought, for the most part, the only appropriate subject for a serious science or scientific philosophy of mind! Not only does dualism not bar (but rather mandates) studying consciousness introspectively; neither does dualism bar the search for neurophysiological correlates (for the parallelist) or causes (for the interactionist) of consciousness. The main barrier to such study now, as always, is not theoretical (stemming from dualism), nor even technical (stemming from the crudity of our present scientific tools). The main barrier now, as it always has been, is the privacy and vagary or elusiveness of the conscious experiential "data" themselves.

Searle, to his credit, recognizes there is elusiveness where, according to Descartes, all should be pellucid. Searle writes:

According to the Cartesian tradition we have immediate and certain knowledge of our own conscious states, so the job [of investigating consciousness and discovering "the structure of consciousness" (Searle 1992, p. 127)] ought to be easy. But it is not. For example, I find it easy to describe the objects on the table in front of me, but how, separately and in addition, would I describe my conscious experience of those objects? (Searle 1992, p. 127)
Since Searle too (like Descartes) holds that we have immediate knowledge based on "direct access" (Searle 1992, p. 74) to our own conscious states, and consequently "that the standard models of mistake based on the appearance-reality distinction don't work for the existence or characterization of consciousness" (Searle 1992, p. 145), this still leaves him with the question,
how is it possible that we can be mistaken about our own mental states? What, so to speak, is the form of the mistake we make, if it is not the same as the appearance/reality mistakes we make about the world at large? (Searle 1992, p. 147)
Searle answers that "typical cases where one gives misdescriptions of one's own mental phenomena," which provide the requisite models for understanding how such mistakes are possible, "are [cases of] self-deception, misinterpretation, and inattention" (Searle 1992, p. 147: my emphases). I will not attempt to argue that no alternative (to the appearance/reality) model of mistake could be adequate to explain the difficulty of psychology on Searlean or Cartesian assumptions. I will, however, cite experimental evidence (from Nisbett & Wilson 1977) showing that the three models of error Searle specifically proposes are in fact inadequate to account for what I will call "introspective error." I also cite evidence (reviewed by Nisbett & Wilson 1977) suggesting, contrary to CP (Searle's Connection Principle), either "that we ... have no direct access to higher order mental processes such as those involved in evaluation, judgment, problem solving, and the initiation of behavior" (Nisbett & Wilson 1977, p. 231-232: my emphasis) or "that the accuracy of subjective reports is so poor as to suggest that any introspective access that may exist is not sufficient to produce generally correct or reliable reports" (Nisbett & Wilson 1977, p. 233). Finally, I argue that Searle's CP in effect mandates introspection (whether one calls it "introspection" or not) as the research method of choice in psychology; mandates revival of an introspectionist research program which, given the (at best) limited and empirically vexed nature of our direct introspective access to our own mental processes, is probably deservedly defunct.

Nisbett and Wilson focus initially on two social psychological experimental paradigms -- dissonance research and attribution studies -- whose results seem to challenge traditional assumptions about subjects' access to their own mental processes.

The central idea of insufficient justification or dissonance research is that behavior that is intrinsically undesirable will, when performed for inadequate extrinsic reasons, be seen as more attractive than when performed for adequate extrinsic reasons. (Nisbett & Wilson 1977, p. 233)
The central idea of attribution theory is that people strive to discover the causes of attitudinal, emotional, and behavioral responses (their own and others), and that the resulting causal attributions are a chief determinant of a host of additional attitudinal and behavioral effects. (Nisbett & Wilson 1977, p. 233)
The two experimental paradigms, as Nisbett and Wilson observe "share a common formal model":
Verbal stimuli in the form of instructions from the experimenter, together with the subject's appraisal of the stimulus situation, are the inputs to a fairly complicated cognitive process which results in a changed evaluation of the relevant stimuli and altered motivational state. These motivational changes are reflected in subsequent physiological and behavioral events. Thus: stimuli -> cognitive process -> evaluative and motivational state change -> behavior change. Following traditional assumptions about higher mental processes it has been tacitly assumed by investigators that the cognitive processes in question are for the most part verbal conscious ones. Thus the subject consciously decides how he feels about an object, and this evaluation determines his behavior toward it. As several writers (Bem, 1972; Nisbett & Valins, 1972; Storms & Nisbett 1970; Weick, 1966) have pointed out, there is a serious problem with this implicit assumption: Typically behavioral and physiological differences are obtained in the absence of verbally reported differences in evaluative or motive states. (Nisbett & Wilson 1977, p. 234)
Moreover, where subjects do report on their cognitive processes in such experimental situations
...the explanations that subjects offer for their behavior in insufficient-justification and attribution experiments are so removed from the processes that investigators presume to have occurred as to give grounds for considerable doubt that there is direct access to these processes. This doubt would remain, it should be noted, even if it were eventually to be shown that processes other than those posited by investigators were responsible for the results of these experiments. Whatever the inferential process, the experimental method makes it clear that something about the manipulated stimuli produces the differential results. Yet subjects do not refer to these critical stimuli in any way in their reports on their cognitive processes. (Nisbett & Wilson 1977, pp. 238-239: my italics)
In order to warrant the "substantial leap ... from research and anecdotal examples" such as those cited in their voluminous review of the insufficient justification and attribution experimental literature "to blanket assertions about higher order processes" (Nisbett & Wilson 1977, p. 232), Nisbett and Wilson undertook, in a series of experiments of their own, to control for features commonly present in insufficient-justification and attribution experimental setups which might be deemed responsible for the experimental effects reported. In order to rule out the hypothesis that the reported effects were artifacts of experimental situations which interfered with or impeded subjects' (otherwise reliable) direct introspective access to their cognitive processes, rather than being due to the nonexistence or general unreliability of such access, Nisbett and Wilson's experiments were "designed with several criteria in mind":
  1. The cognitive processes studied were of a routine sort that occur frequently in daily life. Deception was used minimally, and in only a few of the studies.
  2. Studies were designed to sample a wide variety of behavioral domains, including evaluations, judgments, choices, and predictions.
  3. Care was taken to establish that subjects were thoroughly cognizant of the existence of both the critical stimulus and their own responses.
  4. With two exceptions the critical stimuli were verbal in nature, thus reducing the possibility that subjects could be cognizant of the role of the critical stimulus but simply unable to describe it verbally.
  5. Most of the stimulus situations were designed to be as little ego-involving as possible so that subjects would not be motivated on grounds of social desirability or self-esteem maintenance to assert or deny the role of particular stimuli in influencing their responses. (Nisbett & Wilson 1977, p. 242)
Note that conditions 3, 4, and 5 control for Searle's three putative error-explaining factors: 3 controls for inattention, 4 controls for misdescription, and 5 controls for self-deceit. Yet, Nisbett and Wilson report, these "studies ... indicate ... that such introspective access as may exist is not sufficient to produce accurate reports about the role of critical stimuli in response to questions asked a few minutes or seconds after the stimuli have been processed and a response produced" (Nisbett & Wilson 1977, p. 246) even when Searlean factors are controlled for. These results directly contravene Searle's contention that mistakes about our own mental states can be explained either largely or entirely by appeal to "self-deception, misinterpretation and inattention" (Searle 1992, p. 145).

Additionally, Nisbett and Wilson's results, together with other studies they cite, seem strongly to indicate that actual ignorance of one's own mental states is much more pervasive and thoroughgoing than traditional views, such as Searle's, seem to allow. In addition to the insufficient justification and attribution experiments, Nisbett and Wilson cite

at least five other literatures bearing on the question of the ability of subjects to report accurately about the effects of stimuli on complex, inferential responses: (a) The learning-without-awareness literature, (b) the literature on subject ability to report accurately on the weights they assign to particular stimulus factors in complex judgment tasks (reviewed by Slovic & Lichtenstein, 1971), (c) some of the literature on subliminal perception, (d) the classic Maier (1931) work on stimuli influencing problem solving, and (e) work by Latané and Darley (1970) on awareness of the effect of the presence of other people on helping behavior. (Nisbett & Wilson 1977, p. 239)
The weight of evidence from these various literatures, together with the evidence from their own experiments and from insufficient justification and attribution studies initially cited, Nisbett and Wilson claim, supports not only the contention that subjects "are sometimes (a) unaware of the existence of a stimulus that importantly [and cognitively] influenced a response, (b) unaware of the existence of the response, and (c) unaware that the stimulus has affected the response" (Nisbett & Wilson 1977, p. 231: my emphasis) -- it supports the claim that we are often if not characteristically unaware of the entire panoply of "higher order mental processes such as those involved in evaluation, judgment, problem solving, and the initiation of behavior" (Nisbett & Wilson 1977, p. 231-232) including not only (but perhaps most obviously) "perceptual and memorial processes" but also "cognitive processes underlying complex behaviors such as judgment, choice, inference, and problem solving" (Nisbett & Wilson 1977, p. 232). Indeed, such "introspective error" effects are so pervasive and robust (reliably obtained across a variety of experimental designs) as to suggest,
When reporting on the effects of stimuli people may not interrogate a memory of the cognitive processes that operated on the stimuli; instead, they may base their reports on implicit, a priori, theories about the causal connection between stimulus and response. .... Subjective reports about higher mental processes are sometimes correct, but even the instances of correct report are not due to direct introspective awareness. Instead, they are due to the incidentally correct employment of a priori causal theories. (Nisbett & Wilson 1977, p. 233)
This suggests that "first-person reports about higher cognitive processes" are "neither more nor less accurate, in general, than the predictions about such processes made by observers" (Nisbett & Wilson 1977, p. 249); a hypothesis generally confirmed by the concurrence of first person reports with reports of observers. "As Bem (1967) put it ... if the reports of subjects do not differ from the reports of observers, then it is unnecessary to assume that the former are drawing on `a fount of privileged knowledge' (p. 186)" (Nisbett & Wilson p. 248). In this connection Nisbett and Bellows (1976) found "that subject and observer reports of factor utilization [in making judgments] were so strongly correlated for each of the judgments that it seems highly unlikely that subjects and observers could possibly have arrived at these reports by different means" (Nisbett & Wilson 1977, p. 250). Nisbett and Wilson conclude, "Such strong correspondence between subject and observer reports suggests that both groups produced these reports via the same route, namely by applying or generating similar causal theories" (Nisbett & Wilson, p. 251).

Nisbett and Wilson's hypothesis -- that one's first-person beliefs about one's own "higher order cognitive processes" are based on applications of "implicit, a priori, theories about the causal connection between stimulus and response" similar to those on which one's third-person beliefs about others' higher order cognitive processes are based -- though it enjoys considerable philosophical acceptance (cf. Churchland 1988, pp. 73-75), is still controversial. Searle, for one, insists, "we do not postulate beliefs and desires" in order "to account for anything" (Searle 1992, p. 59). Rather, Searle says, "I think it is obvious that beliefs and desires are experienced as such, and they are certainly not `postulated' to explain behavior, because they are not postulated at all" (Searle 1992, p. 61). Again, "We simply experience conscious beliefs and desires" (Searle 1992, p. 59). Now, given the evidence adduced by Nisbett and Wilson, and the inadequacy of Searle's attempt to account for such findings concerning the unreliability of our access to our own mental states and processes as Nisbett and Wilson adduce, it is certainly not obvious that we "simply experience" -- i.e., directly experience and consequently (except for occasional self-deceitful, misinterpretive, or inattentive lapses) reliably experience -- our own intentional mental states (e.g., of belief and desire) and processes (e.g., of inference).

It is noteworthy, here, that according to CP "the notion of an unconscious mental state implies accessibility to consciousness" (Searle 1992, p. 152); hence all mental states, on Searle's view, are simply experiencable; so it is difficult to see why the method of choice for psychology should not simply be to bring the unconscious to consciousness by removing (when necessary) the impediments of self-deceit, inattention, and misinterpretation. But it is also noteworthy that the so-called "special method[s] of investigating consciousness" (Searle 1992, p. 97) that introspectionists employed -- methods Searle officially deplores -- were expressly designed to guard one's direct "simple experience" of one's own mental phenomena against precisely the vagaries of inattention, and to guard one's self-reports (based on attentive simple experience) against the vagaries of misinterpretation. Indeed, there seems to be little more to the "special method of investigating consciousness, namely `introspection'" (Searle 1992, p. 97) that introspectionists propose than simply attentively experiencing -- just the method Searlean principles seem to mandate.

Searle allows, "In any conscious state, we can shift our attention to the state itself. I can focus my attention, for example, not on the scene in front of me, but on the experience of my seeing this very scene" (Searle 1992, p. 143). He furthermore insists, "the possibility of that shift of attention was present in the state itself" (Searle 1992, p. 143); but that shift of attention, I submit, is what introspectionists meant by "introspection."{13} Indeed, "it is not surprising that introspective psychology proved bankrupt" (Searle 1992, p. 97: my emphasis). It is not surprising for Nisbett and Wilson's reasons: the notion of being "simply experienced" or "experienced as such" (Searle 1992, p. 63) from the "first-person point of view" (Searle 1980b, p. 451) is so empirically vexed as to impugn at least the reliability and epistemic primacy, and perhaps the very existence of this alleged mode of private "simple" "direct experiential" "first-person access"; vexed regardless of how attentive, not self-deceived, etc. the direct experiencing is supposed to be; vexed regardless of whether you further construe this direct experiencing in the manner of Searle's straw introspectionist as "a special capacity, just like vision only less colorful, that we ... have to spect intro" (Searle 1992, p. 144) or not; vexed regardless of whether you refuse, with Searle, to call it "introspection" and guard against conceiving it as imbedding "a distinction between the object spected and the specting of it" (Searle 1992, p. 144) or not.

Insistence on the essential connection of cognition with consciousness, rooted in the Cartesian "concept of mind or soul as distinct from body" (Watson 1924, p. 3), was the guiding methodological principle of the introspectionist psychology against which Watson (I think fruitfully) rebelled and to which Searle (I think futilely) would return us. Searle's dogmatic assurance that "it is always possible for an agent to bring his Intentional states to consciousness" (Searle 1979a, p. 92) and his insistence "always ... on the first person point of view" (Searle 1980b, p. 451) seem to mandate introspection as the research method of choice for psychology: on Searle's views "you are forced to accept the first-person point of view as in some sense epistemically different from the point of view of the third person observer" (Searle 1987a, p. 145), and since, e.g., "performing speech acts -- and meaning things by utterances -- goes on at a level of first-person intentionality" (Searle 1987a, p. 145), it seems we must recognize, on Searle's views, not only the epistemological difference of the first-person point of view, not only its indispensability (such that "it is absolutely essential, at some point, to remind yourself of the first-person case" (Searle 1987a, p. 126)), but its epistemological priority and privilege. If anything like Searle's views were true, it seems that Introspectionism should have been more successful the first time around. The proven sterility of the research program to which Searle's views would return us speaks as loudly as anything, I think, against his overarching views and against their application to argue against artificial intelligence, as in the Chinese room experiment.

6. Conclusion: The Chinese Room Revisited

Though Searle's reliance on as-if dualistic identification of consciousness with thought generally (i.e., on CP) in his Chinese room experiment is hardly concealed -- indeed, it is a point on which Searle himself frequently and vehemently insists -- it has not, I think, been sufficiently appreciated. Suppose, for a moment, that we grant Searle this essential connection in the case of understanding: grant that understanding something entails either being actually conscious of it or being potentially conscious of it. Also grant that neither Searle in the room nor the room-Searle system (SIR) is conscious of the meanings of any of the Chinese questions asked or answers given in the Chinese room experiment. It follows that Searle (or SIR) doesn't understand -- but he (or it) still does something. What is it? Searle calls it "as-if understanding": you might also call it "blind understanding" or (to have a one-word designation) "blunderstanding." Blunderstanding is to garden variety understanding much as blindsight (see Weiskrantz 1986) is to garden variety seeing: just as, in the case of blindsight, there is behavior indicative of seeing in the absence of visual experiences (or awareness thereof, if these are separable) on the subject's part, so, in the case of as-if understanding, there is behavior indicative of understanding in the absence of experiences or awareness of understanding on the subject's part. Grant that blunderstanding isn't understanding and blindseeing isn't seeing; the following question still remains: Are blindsight and blunderstanding Intentional states in their own rights? It seems so! One blindsees that the cat is on the mat, blindsees that the mouse is in the house, etc.: blindsight seems a propositional attitude, different cases of blindseeing being differentiated by their propositional content. SIR blunderstands that it's customary to tip the server in a restaurant, blunderstands that hamburgers are food, etc.: blunderstanding seems a propositional attitude, different cases of blunderstanding being differentiated by their propositional content. In other words, if the question in the Chinese room is whether computers running SAM have any Intentional mental states or propositional attitudes, the answer now would seem to be, "Yes, blunderstanding."

Now, while it may be plausible to say that one has definite intuitions about "understanding" warranting (perhaps) a connection principle for understanding, it is not plausible to say this about "blunderstanding." No one has intuitions about "blunderstanding," I take it: "blunderstanding" is not a word in common parlance but one I just invented to describe these unusual unconscious-seeming understanding-like doings (of SIR or the computer running SAM). Neither, I submit, have we clear intuitions about these doings themselves (about the act or quality of blunderstanding) that might decide whether blunderstanding has to be (potentially) conscious to be real. Well, it really is blunderstanding, certainly: I dubbed it that. But is blunderstanding really Intentional or really mental? If you wish, with Searle, to answer in the negative, then -- since prima facie blunderstanding is Intentional and thus, seemingly, mental -- it seems you need a reason, some theoretical grounds for saying why, with regard to its Intentionality or mentality, the case of blunderstanding is not as it seems. Searle's reason? "Only a being that could have conscious intentional states could have intentional states at all, and every unconscious intentional state is at least potentially conscious" (Searle 1992, p. 132): CP's as-if dualistic identification of consciousness with thought. This seems to be what's needed; but it is not warranted!

Just how unwarranted the as-if dualistic Connection Principle on which Searle's case ultimately seems to depend really is has been the burden of my argument in the present chapter. I have sought to show how this principle leads us back into insoluble Cartesian difficulties about mind-body interaction, knowledge of other minds, and introspection (or why psychology is hard). To these considerations already developed, I now add two more objections to CP not yet canvassed. First -- despite the vagueness of the "in principle" in Searle's claim that "The ascription of an unconscious intentional phenomenon to a system implies that the phenomenon is in principle accessible to consciousness" (Searle 1990f, p. 586), of which several commentators (e.g., Block 1990; Chomsky 1990) complain -- there seems to be a clear counterexample to CP. Forgetting seems uncontroversially Intentional -- one forgets that the cat was on the mat, that the assignment was due at the beginning of class, etc. -- yet it seems plausible to say that forgetting is unconscious in principle. One can be aware of having forgotten, but to be consciously forgetting would seem no longer to be forgetting: rather, it would be remembering (or refreshing one's memory). Second, there is a substantial body of evidence contrary to a corollary of CP -- call it the "latency corollary" -- that Searle develops as follows:{14}

unconscious mental phenomena ... to the extent that they are genuinely intentional ... must in some sense preserve their aspectual shape even when unconscious, but the only sense we can give to the notion that they preserve their aspectual shape when unconscious is that they are possible contents of consciousness. (Searle 1992, p. 159-160)
"The concept of unconscious intentionality is thus that of a latency relative to its manifestation in consciousness (Searle 1992, p. 161): the corollary is that "if we think of the ontology of the unconscious in the way suggested -- as an occurrent neurophysiology capable of causing conscious states and events" the unconscious phenomenon does not "have aspectual shape ... right there and then" (Searle 1992, p. 169), while unconscious, "for the only occurrent reality of that shape is the shape of conscious thoughts" (Searle 1992, p. 171). The trouble with this is that many well established psychological phenomena seem explicable only on the hypothesis that unconscious mental states and processes do have intentionality (hence "aspectual shape") right there and then, for they are subject to intentional (semantically mediated) effects while unconscious. In an experiment cited by Nisbett and Wilson, for instance,
In order to test subject ability to report influences on their associative behavior, we had 81 male introductory psychology students memorize a list of word pairs. Some of these word pairs were intended to generate associative processes that would elicit certain target words in a word association test to be performed at a later point in the experiment. For example, subjects memorized the word pair "ocean-moon" with the expectation that when they were later asked to name a detergent they would be more likely to give the target "Tide" than would subjects who had not previously been exposed to the word pairs. (Nisbett & Wilson, p. 243)
The success of this manipulation (the expectation was confirmed) shows the semantic character of the memory trace and the associative processes right there and then, while unconscious. How else but by a semantic association could this correlation between the word pairs and the subsequent response be mediated? The associative link is precisely via the meaning of the earlier "ocean-moon" word pair and the later "Tide" response. There are many such examples of unconscious mechanisms -- those involved in subliminal advertising and classic Freudian mechanisms, for instance -- whose operations seem incontrovertibly semantic and thus at odds with the "latency" corollary of CP. (Similar points are made by Nelkin, forthcoming.)

These are very substantial theoretical costs, probably beyond bearing even if as-if dualistic recourse to consciousness also had very substantial theoretical benefits. Perhaps if consciousness explained Intentionality, as computation (alone) arguably cannot, that would be benefit enough to provide a powerful reason for adopting CP despite these costs. Searle's criticism of "Strong AI" -- of the Turing machine functionalist identification of thought with computation -- is, after all, that programming or computation (program execution) cannot explain (fails to determine, or does not suffice for) semantic content; but it is a rather well-established point that consciousness cannot explain (fails to determine, does not suffice for) semantics either.{15} Though Searle insists that "all of those great features that philosophers have thought of as special to the mind are ... dependent on consciousness: intentionality, rationality, free will (if there is such a thing), and mental causation" (Searle 1992, p. 227), he doesn't say how or why. Rather, with regard to the particular feature of Intentionality most at issue (though the same might equally be said of the other "great features" just noted), Searle admits, "The real gap in my account is ... that I do not explain the details of the relation between intentionality and consciousness" (Searle 1991c, p. 181). Given the severity of the problems Searle's appeal to consciousness as "the essence of the mental" (Searle 1991b, p. 144) occasions, and the paucity of benefits deriving therefrom, this gap is not just real. It's fatal. Fatal to the overarching as-if dualistic identification of thought with consciousness Searle embodies in his Connection Principle and, insofar as the salience of the Chinese room example depends on that principle, fatal to Searle's vaunted example.

Endnotes

  1. Here, as in Chapters 4 and 5 above, I adopt the device of using "In" with a capital "I" to stress the exacting technical sense of "in" at stake.
  2. I will use the convention (following Searle 1983) of distinguishing Intentionality (with a capital "I") in the technical sense of "aboutness" from garden variety intentions and intendings (small "i") where (as here) it is necessary to typographically distinguish them.
  3. Searle has misgivings about the existence of agential subjective intrinsicality (i.e., of radical or libertarian freedom) -- "Our conception of physical reality," Searle remarks, "simply does not allow for radical freedom" though nothing "will ever convince us that our behaviour is unfree" (Searle 1984a, p. 98).
  4. This whole section -- at least, the felicitous parts -- owes much to comments and suggestions of Rich Hall.
  5. Searle says, "to distinguish this view from many others in the field, I call it `biological naturalism'" (Searle 1992, p. 1). Since I believe Searle's original name for this view to be more apt than this new one -- both more descriptive and (with its whiff of paradox) more suggestive -- I continue to speak of "monist interaction."
  6. I believe the most coherent construal of Searle is as a property dualist (cf. Churchland 1988). He once seemed willing -- like Nagel (1974, 1986) and Jackson (1982), to whom his views show a striking resemblance -- to style himself such, insisting, "property dualism is compatible with complete physicalism, provided that we recognize that mental properties are just one kind of higher level physical property among others" (Searle 1984b, p.6). More recently, however, Searle complains, "I have, personally speaking, been accused of holding some crazy doctrine of `property dualism' ... even though I have never, implicitly or explicitly, endorsed any [such] views" (Searle 1992, p. 13).
  7. I note two complications here. First, once these causal realization relations are added, the picture no longer resembles parallelism but rather becomes a kind of epiphenomenalist/parallelist hybrid. (A monstrosity, in my opinion.) Secondly, I note the curious hybrid mental-physical character of m'. Though acts belong to the same "conceptual network" as beliefs, desires, reasons, etc. -- acts being the sort of explananda we think call for explanation in such mentalistic terms -- overt acts (e.g., arm raisings) have objective entailments (my raising my arm entails my arm's going up). Kim (1989), following Malcolm (1968), collapses the right-hand column of the diagram so there is only one explanandum event b'/m'. Since Searle is both committed to the mentality of acts -- "to say that a man is walking to the store or eating a meal is to attribute to him mental states which are no less mental than to say that he wants to get to the store or he believes that the stuff on his plate is food" (Searle 1979b, p. 196) -- and maintains the ontological irreducibility (i.e., token nonidentity) of mental and physical events (Searle 1992, chap. 5), it seems he would resist such collapse. If such a collapse is allowed only on the right, however, the same difficulties (about overdetermination, as-if telekinesis, etc.) still arise (see Kim 1989).
  8. The case is described in James 1950, pp. 489ff.
  9. If one objects that having one's arm held down would be a violation of ceteris paribus conditions "above" no less than below, this might be countered by modifying James's setup so that the impediment is achieved internally (by blocking efferent as well as afferent nervous impulses, say).
  10. Searle seems to waffle in his rejection of token-identity. In its original context, the passage here quoted seems to be advocating token identities between mental and physical events: if this is the line he takes, then his "solution" is not his but is rather the standard nonreductivist one typically identified with functionalism, which Searle ostensibly opposes. On the other hand, if Searle does insist that there are not just "two descriptions" -- i.e., if he insists that "subjectivity refers to an ontological category" (Searle 1992, p. 94) and that a perfect science of the brain "would still not lead to an ontological reduction of consciousness in the way that our present science can reduce heat, solidity, color, and sound" (Searle 1992, p. 116) -- then the proposal is distinctively his, but it's no kind of solution (not even a half-solution) to the mind-body problem.
  11. Russell (1948, pp. 482-486) expounds and defends the analogical argument. Churchland (1988, pp. 68-70), Fodor (1968, p. 131), and Malcolm (1962) make telling, and perhaps fatal, objections.
  12. "We can't write solipsism species-wide. We can't really argue that we humans have a peculiarly intimate way of knowing that all of us think and feel, while requiring with respect to ... possible extraterrestrial species, or [computers], or chimpanzees, et al., some additional and different (and maybe conveniently impossible) form of demonstration that they think and feel." (Leiber 1985, p.60-61)
  13. The Dictionary of Philosophy and Psychology defines "Introspection" as "Attention on the part of an individual to his own mental states and processes, as they occur, with a view of knowing more about them," adding the observation that some "writers speak of an `inner sense' or `inner perception'" but that this "suggested analogy to `outer sense' and `outer perception' is misleading" (Baldwin 1960, p. 567).
  14. "Aspects" or "aspectual shapes" here refer, roughly, to what Wittgenstein (1958, IIxi) spoke of in connection with seeing-as phenomena: e.g., seeing things as alike and seeing ambiguous pictures (e.g., Necker cubes, duck-rabbits) as one thing or the other. Seeing the duck-rabbit picture (Wittgenstein 1958, IIxi)as a duck would (in these terms) be seeing its duck aspect or seeing it "under" its duck aspect. These "aspects" are like Frege's (1892) "modes of presentation."
  15. Besides Putnam's 1975 Twin Earth arguments, a second persuasive line of argument for this is developed by Wittgenstein 1958, Kripke 1982, and Boghossian 1989. Millikan 1984 (pp. 89f) provides yet a third line of argument.
