Propositional Actitudes

Reply to Gunderson{1}

Larry Hauser:

First a clarification: by "full-blooded acts" I mean, roughly, the acts Aristotle styles voluntary or purposeful, a class of actions including deliberate acts supporting attributions of full legal and moral responsibility as a proper subset, but also including purposeful acts of infrahuman animals, children, acts done in the heat of passion, etc. Full-blooded acts are done "under descriptions" (Davidson 1963) or "aspects" (Searle 1989). I concede that computers don't act deliberately. I maintain that computers do nevertheless act voluntarily.

Despite doubts about the scientific-theoretic character and functions of folk psychology, I agree with Searle (and, I take it, Gunderson) that the question of whether computers have the mental properties we attribute to them "is an empirical question" (Searle 1980a, p. 422). I'll return to this.

Why I Am a Behaviorist and How

I am a behaviorist, though I do not consider myself radical. I neither believe that folk psychological talk can be reduced to talk about dispositions to make specifiable bodily movements under physically specifiable conditions nor that a scientific stimulus-response psychology (à la Skinner, say) can replace folk psychology: I am neither a "reductive behaviorist" nor an "eliminative behaviorist." I espouse behaviorism shorn of the operationalist and scientistic allegiances with which it has been historically associated in the United States. Such "British behaviorism" might be styled a kind of null hypothesis.{2} Alternative nonbehaviorist hypotheses all view something else besides requisite behavioral competencies as metaphysically constitutive of thought: dualism identifies specific mental states and processes with specific phenomenological states and processes; mind-brain identity theory identifies specific mental states and processes with specific neurophysiological states and processes; functionalism identifies mental states and processes with specific procedural states and processes (e.g., Turing machine states and state transitions). Behaviorism, on the other hand, doesn't metaphysically identify specific mental "states" and "processes" with any behavior-mediating states or processes. Note that the behaviorist needn't deny the existence or even the scientific interest of mechanisms mediating between perceptual inputs and behavioral outputs.{3} Nor need behaviorism deny the existence (though I, for one, remain skeptical of the scientific interest) of intervening experiences. Scientifically modest behaviorism is only committed to the claim that the physical, procedural, and phenomenological characteristics of these states and processes are not essential to their being the mental states or processes they seem to be.

As essentialist hypotheses about the nature of thought, identity theory, functionalism, and dualism undertake a certain burden of research: provide a theoretically fruitful and intuitively acceptable neurophysiological specification (mind-brain identity theory) or procedural specification (functionalism) or phenomenological specification (dualism) of what mental phenomena essentially are. Shorn of operationalist pretensions (promising to limn something like nominal essences where identity theory, functionalism, and dualism promise to discover real ones) such scientifically modest behaviorism as I advocate owes no such specification. On the other hand such modest nonreductive behaviorism (unlike reductive behaviorism) would not seem to offer much prospect of underwriting "the likelihood that the intentional sciences might eventually produce theories whose objectivity and reliability parallel those of the physical and biological sciences" (Fodor & Lepore 1992, p. 16). Unlike eliminativism, however, such nonreductive behaviorism (or at least this nonreductive behaviorist) would deny the scientistic imperative that regards such likelihood as required for any "Intentional Realism worth having" (Fodor 1990, p. 52).{4}

Identity theory, I take it, founders on its chauvinist implication that the "right stuff" for real thinking is literally human or carbon-based protoplasmic stuff. Whatever the demerits of the Turing test (Turing 1950), I take it Turing's test is a better test of thought than a test we might devise by replacing the questioner in Turing's setup with a specially trained "thought"-sniffing dog. Functionalism, perhaps less obviously, inclines to chauvinism of manner instead of matter. There is also more than one way to see a cat or seek a cat: no theoretical importance can attach to a way being ours. I am skeptical of the prospects for an exclusionary functionalist specification of the "right procedures" for producing comparable competencies that would not be objectionably chauvinistic, i.e., that would distinguish between unthoughtlike and thoughtlike ways of computing the same weakly equivalent functions by some other criterion of thoughtlikeness besides similarity to our way. Until such specification is forthcoming, we are warranted in accepting the "null hypothesis": by whatever manner, in whatever matter, competencies that inspire predications of "see," "seek," and the rest of the mental terminology of folk psychology are produced, so long as anything has the requisite competencies, it should be acknowledged to have the corresponding psychological properties. Behaviorism.

As for "the dreaded C word" (Gunderson 1994, p. 31) -- if you think consciousness is the essence of mind, that "the mind consists of qualia, so to speak, right down to the ground" (Searle 1992, p. 20), or that every mental phenomenon "is in principle accessible to consciousness" (Searle 1990a, p. 586) -- that's another story: a dualist (or at least dualistic) one.

As If Dualism

The difference between USAIs and MUSAIs according to Gunderson (or between genuine "intrinsic intentionality" and bogus "as-if intentionality" according to Searle) is that there's something that it's like to be a USAI for a USAI but nothing that it's like for a MUSAI (cf. Nagel 1974, 1986; Sartre 1956); with MUSAIs "no one's at home" (Gunderson 1994, p. 30); a MUSAI's "ontology ... is a completely third-person one" (p. 32). This is indeed "puzzling and opaque" (p. 31). One longstanding difficulty with this appeal to consciousness as the essence of the mental is that the essential privacy or subjectivity of consciousness makes it impossible to know which bodies besides one's own are conscious and, consequently, impossible to establish what properties of our bodies causally suffice for mind in us, much less what properties are causally necessary for mind in anything. The trouble is that in actual practice what we rely on when attributing thought to things is what they do; but Gunderson holds with Searle that nothing anything "could do would decide for us whether it should be classified one way or another" (p. 31) as a USAI whose mental properties are real and intrinsic or as a mere MUSAI with counterfeit "as if" mental properties. "Something further is needed to break the tie" (p. 31), Gunderson insists; and that something, according to Gunderson and Searle, is consciousness.

Now to this other minds problem which besets consciousness-based views such as Gunderson's and Searle's (and of course Descartes') I do have a solution -- I'm a behaviorist. No problem. I say nothing more is "needed to break the tie"; and it's a good thing! Nothing will do: not the matter by which the behavior is caused (as identity theory proposes), not the manner in which it's caused (as functionalism proposes), not whether the production "involves consciousness" (as Gunderson, alas, following Searle, following Descartes, proposes).{5} None of these hypotheses about the essence of the mental underwrites an unobjectionable distinction between USAIs and MUSAIs (Gunderson) or between "intrinsic" and mere "as if intentionality" (Searle 1980b).

Conclusion: Propositional Actitudes

I return to the empirical question: Do "internally propelled agent-like acting-under-aspects programmed robots (or Hauser Robots) really have minds?" (p. 2). I insist on understanding "have minds" to mean "have mental properties" so as to lead us not into the temptation of mistaking minds for immaterial things that people have or occult stuff that thoughts are made of. The empirical question as I understand it is whether such "Hauser Robots" as my pocket calculator and my MS-DOS computers really have the mental properties we all say they have.

Gunderson thinks, "there can be literal non-figurative examples of adding and computation" (seeking?, trying?, considering?) "which are nevertheless non-mental" (p. 30): I think not. I accept Searle's "mentalization" of these behaviors. It is important to recognize that many so-called "propositional attitudes" might as aptly be called "propositional actitudes." Some propositional attitude verbs seem to have behavioral glosses if not synonyms: e.g., "calculate" (a verb taking a sentential complement, like "believe" and "wish") means roughly the same as "do arithmetic." Other propositional "attitude" verbs (e.g. "seek") even seem, on their faces, more like descriptions of what's overt (behavior) than hypothetical posits of hidden neurophysiological or phenomenological springs. If what makes actions "full-blooded" is their susceptibility to explanation as means to ends or their subsumability under aims of the agent, then full-blooded acts do require "full-blooded minds" -- require that the agent have some conative properties (aiming, seeking, etc.) and cognitive properties (sensing, detecting, being aware, etc.), at least -- though, since infrahuman animals act full-bloodedly, fully human minds are not required.

Perhaps, then, we can understand Gunderson's thought that I am "wrong in thinking that those non-figurative actions of machines [e.g., my pocket calculator's calculating] also (thereby?) partake of the mental" (pp. 3-4) as claiming that calculators don't really aim to compute arithmetic functions and don't really detect keypresses. Alternately, perhaps, Gunderson could agree that looking ahead and considering alternative continuations of play are nondementalizable. If not (if Gunderson grants none of the preceding), then the claim that "there is nothing a Hauser Robot could do which would decide for us whether it should be classified one way or the other" threatens to have the corollary that whatever mental seeming things get done by machines are also (thereby?) dementalized. Surely this is question-beggingly tendentious. If Gunderson holds this, then, it seems, whether computers partake of the mental is not an empirical question for him after all: whatever intentional states or other seemingly mental properties machines made of silicon chips and copper circuitry (anything besides protoplasm?) partake of will ipso facto be shown to be non-mental in these instances.

Perhaps we can agree, then, that looking ahead and considering alternative continuations of play partake of the mental. At this point it is a mistake to think, "It's a tie: On your behavioristic hypothesis Deep Thought really does look ahead and consider alternative continuations; on Gunderson's and Searle's consciousness-based hypothesis it doesn't." It's not a tie: because we do say these things, and because, as Searle admits in other connections, "the onus of proof [is] on those who wish to claim that these sentences are ambiguous" (Searle 1975, p. 40). The track record of the consciousness-based approach inspires little confidence that it can bear any such onus. Prima facie (the empirical evidence is that) machines have the mental properties we say they do, which predict and explain their behavior. In the absence of any credible theories of the nature of mind or thinking to the contrary, such attributions deserve to be credited as true literal attributions. I agree with Searle that the solution to other minds problems is to "use your ingenuity, use any weapon at hand and stick with any weapon that works" (Searle 1990b, p. 640). Ingenuity suggests my pocket calculator displays "12" after having "7" "+" "5" "=" entered on its keypad because it adds seven and five and gets twelve; suggests my computer running DOS seeks to print my document when I issue a print command and tries to initialize the printer, checking first to see if there's a device on line; suggests Deep Thought aims to win at chess and looks ahead to consider and evaluate possible continuations of play to this end; etc. Attribution of mental properties in such cases "gives predictive power that we can get by no other method" (Dennett 1981, p. 23), and we have no adequate theoretical reasons -- especially not consciousness-based reasons -- to gainsay them.



Anscombe, G. E. M. (1963). Intention. Ithaca, NY: Cornell University Press.

Aristotle. Nicomachean ethics. Trans. D. Ross, in R. McKeon (ed.), The basic works of Aristotle. New York: Random House (1941), 935-1126.

Davidson, D. (1963). Actions, reasons, and causes. Essays on actions and events. New York: Oxford University Press (1980), 3-20.

Dennett, D. C. (1981). True believers. The intentional stance. Cambridge, MA: MIT Press (1987).

Fodor, J. A. and Lepore, E. (1992). Holism: a shopper's guide. Cambridge, MA: Basil Blackwell.

Fodor, J. A. (1990). A theory of content and other essays. Cambridge, MA: MIT Press.

Gunderson, K. (1994). Movements, actions, the internal, and Hauser robots. Behavior and Philosophy, 22, 1, 29-33. Presented at the Colloquium on Action Theory, American Philosophical Association Central Division, Louisville, KY, 25 April 1992.

Hauser, L. (1994). Acting, intending, and artificial intelligence. Behavior and Philosophy, 22, 1, 22-28. Presented at the Colloquium on Action Theory, American Philosophical Association Central Division, Louisville, KY, 25 April 1992.

Hauser, L. (1992). Act, aim, and unscientific explanation. Philosophical Investigations, 15, 10, 313-323.

Hempel, C. (1949). The logical analysis of psychology. Readings in philosophical analysis, ed. H. Feigl and W. Sellars. New York: Appleton-Century-Crofts.

Kim, J. (1984). Self-understanding and rationalizing explanations. Philosophia Naturalis, 21, 309-320.

Nagel, T. (1974). What is it like to be a bat? Philosophical Review, 83, 435-450.

Nagel, T. (1986). The view from nowhere. Oxford: Oxford University Press.

Place, U. T. (1992). Eliminative connectionism: its implications for a return to an empiricist/behaviorist linguistics. Behavior and Philosophy, 20, 1, 21-35.

Ryle, G. (1949). The concept of mind. New York: Barnes & Noble.

Sartre, J. P. (1956). Being and nothingness. Trans. H. Barnes. Secaucus, NJ: Citadel Press.

Searle, J. R. (1975). Indirect speech acts. Expression and meaning. Cambridge, Eng.: Cambridge University Press (1979), 30-57.

Searle, J. R. (1980a). Minds, brains, and programs. Behavioral and Brain Sciences, 3, 417-424.

Searle, J. R. (1980b). Intrinsic intentionality. Behavioral and Brain Sciences, 3, 450-457.

Searle, J. R. (1989). Consciousness, unconsciousness and intentionality. Philosophical Topics, 17, 1, 193-209.

Searle, J. R. (1990a). Consciousness, explanatory inversion, and cognitive science, Behavioral and Brain Sciences, 13, 585-596.

Searle, J. R. (1990b). Who is computing with the brain? Behavioral and Brain Sciences, 13, 632-640.

Searle, J. R. (1992). The rediscovery of the mind. Cambridge, MA: MIT Press.

Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433-460. Reprinted in M. A. Boden (ed.), The philosophy of artificial intelligence (Oxford: Oxford University Press, 1990).

Wittgenstein, L. (1958). Philosophical investigations. Trans. G. E. M. Anscombe. Oxford: Basil Blackwell.


1. References are to Gunderson 1994 unless otherwise indicated.

2. The British school of behaviorism typified by Wittgenstein (1958), Ryle (1949) and Anscombe (1963) seems to contrast with the American school typified by Skinner, Hempel (1949), and Quine in this respect.

3. I agree with U. T. Place that "to claim, as Skinner sometimes did, that the behavioral psychologist has no business concerning him or herself with such matters is plainly absurd" (Place 1992, p. 33).

4. Fodor's advocacy of functionalism as the "only game in town" that has a chance of forfending eliminativism by underwriting the aforementioned "likelihood" embeds, I think, a double mistake. It is a mistake, firstly, to stake psychology's claim to being scientific on this "likelihood." Secondly, it is a mistake to regard hypotheses in the philosophy of psychology as confirmed insofar as they sustain hopes that "the intentional sciences might eventually produce theories whose objectivity and reliability parallel those of the physical and biological sciences": confirmation derives from explaining appearances, not from sustaining hopes. In lieu of evidence that there is any such likelihood -- in view of the track record of attempts to produce psychological theories of the requisite objectivity and reliability -- preference would seem owed to philosophies of psychology (e.g., Kim 1984; Hauser 1992) that explain this manifest unlikelihood over those (e.g., Fodor's) that merely disdain it.

5. Three signal difficulties beset dualistic views such as Gunderson's, Searle's and Descartes's: other minds troubles such as we are here broaching, introspection troubles (why isn't psychology easy if we can just introspect our mental processes?), and mind-body interaction problems (how to account for the seeming causal efficacy of mind). Behaviorism, of course, avoids the first and second of these entirely. Whether behaviorism also solves the "mind-body problem" -- whether it adequately explains the causal efficacy or explains away the apparent causal efficacy of the mind -- seems to me a much dicier proposition (which strongly suggests that this is a different problem than the other minds problem). Though I believe behaviorism has resources deriving from the "various systematic conceptual connections [it posits] between thought and behavior" (Gunderson, n. 1: my emphasis) that enable it to cope with certain residual problems besetting functionalism (which denies the conceptual and insists on the causal character of these connections), I do not address this mental causation question in my original essay (Hauser 1994) or claim to produce any such "dramatic result" as a solution to this mind-body problem. Still, if indeed "there is no longer a mind body problem" (p. 32) on my behavioristic views, so much the better! This would hardly seem to be an objection to them.