Behavior and Philosophy, Vol. 23 (1995), pp. 42-47{1}

Doing Without Mentalese

Larry Hauser

Introduction

William Lycan (1993) defends an increasingly beleaguered language of thought hypothesis on the grounds that we need to posit a language (or languages) of thought to explain the productivity of human thinking.{2} Lycan's proposed "deductive argument for the representational theory of thinking" purports to be an inference to the only possible explanation: an argument not only that "no credible alternative picture of thinking has taken the field," but even that no other explanation consistent with physicalism is conceivable. Elaborating on Fodor elaborating on Chomsky, Lycan maintains that the "morphemes-plus-compositional-rules-theory" (p.406) is the only possible (physical) explanation of the productivity of thinking: "representationalism" is "as Fodor said in The Language of Thought . . . 'the only game in town'" (p.406). "How," Lycan rhetorically asks, "could the thing be otherwise, barring either magic or divine intervention?" (p.407).

"The production problem" of explaining what Chomsky calls "'the creative aspect of language use" has, on Chomsky's own formulation, two folds. In the first place, "in normal speech one does not merely repeat what one has heard but produces new linguistic forms -- often new in one's experience or even in the history of the language": speech is novel. Secondly, "there are no limits to such innovation" (Chomsky 1988, p. 5): linguistic competence is unbounded. Understand productivity, accordingly, to be unbounded novelty. The explanation of productivity that the morphemes-plus-compositional-rules-theory provides is likewise twofold. Novelty is explained by compositionality, the principle of phrase and sentence meaning which says that given the syntactic structure of the construction plus the meanings of the words in it, the meaning of the whole is fixed.{3} Unboundedness is explained by recursion in the rules that give the syntactic structures. A set of syntactic phrase structure rules is recursive if it allows a constituent to embed a constituent of the same type. The following set of rules, for example,

S -> NP VP

VP -> V (S)

NP -> Det N (PP)

PP -> Prep NP

allows noun phrases to embed noun phrases (e.g., [NP the house at [NP the back of [NP my property]]]) and sentences to embed sentences (e.g., [S Mary thinks that [S Fred is afraid that [S Melanie doesn't like him]]]). Such a set of rules generates an infinite number of structures: as the morphemes-plus-compositional-rules story goes, our unbounded linguistic competence derives from the infinite generative capacity of such recursive syntactic rules underlying our finite performances.
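
To make the recursion point concrete, here is a minimal sketch (mine, not Chomsky's or Lycan's; the Python rendering, the function name np, and the tiny lexicon are purely illustrative) of the NP -> Det N (PP) and PP -> Prep NP rules above, treated as a toy generator. Finitely many rules and words yield an unbounded supply of ever longer, ever newer noun phrases:

# Illustrative rendering of NP -> Det N (PP) and PP -> Prep NP as a toy generator.
DET_N = ["my property", "the back", "the house", "the garden"]  # hypothetical Det-N choices

def np(depth):
    """Expand NP, taking the optional PP (and hence embedding a further NP) 'depth' times."""
    head = DET_N[depth % len(DET_N)]           # NP -> Det N ...
    if depth == 0:
        return head                            # ... with the optional PP omitted
    return head + " of " + np(depth - 1)       # ... PP -> Prep NP ("of" as the Prep)

for d in range(4):
    print(np(d))
# my property
# the back of my property
# the house of the back of my property
# the garden of the house of the back of my property

Nothing in the rules caps the depth of embedding; any cap on the phrases actually produced reflects limits of memory, time, or patience, not of the rules themselves.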

"The Chomskyan argument from unbounded competence out of finite resources to recursive structure," hence to a language having that structure, is, as Lycan says "overwhelmingly persuasive" (p.407). If "representationalism" just means "to think is to . . . deploy a physically realized . . . representation" (p.404), I agree: insofar as thought is a process or succession of states,{4} it must be a representative procession (as thoughtfulness requires) of physical states (as physicalism requires). Furthermore, I agree that to be productive the representations in question must belong to recursive systems having compositional semantics. My disagreement begins with the further suggestion that for a subject (S) to think requires S to "harbor and deploy a physically realized mental representation" as "a state of S's central nervous system" (p.404); a state, moreover, that "has its content [and presumably its recursive compositionality] naturally rather than (in part) conventionally as public linguistic tokens do" (p.405).

Call having recursive compositionality and semantic content "naturally rather than by convention," or having them prior to and independently of (causal or etiological) association with public recursive compositional systems of representation, having these properties primordially. Lycan, like Fodor (1975, 1987), seems to think human nervous systems "harbor" states and mechanisms that are primordially (or "naturally" or "intrinsically") representational, compositional, and recursive; states and mechanisms that have their representational, compositional, and recursive characteristics prior to and independently of their association with public representational systems such as natural languages. This stronger proprietary language of thought hypothesis (PLT), positing a language (or languages) of thought besides natural and other public languages, is not (or at least not obviously) the only possible explanation of the productivity of thought or behavior consistent with physicalism. I won't deny that thinking involves "physical . . . representation" (p.406, my emphasis). I do, however, strongly suspect that it is something other than deployment of primordially recursive compositional systems of neurophysiological representations.{5}

The following gives reason to think and hope that PLT is not "the only game in town." Perhaps productivity is explicable on the alternative assumption that our languages of thought are external and that the thinking we do "in our heads" -- as much of it as is indubitably productive -- is an internalization of thinking aloud or on paper.

Seven Arguments Against Mentalese

(1) An argument from infrahumans, subpersonal systems, and feral humans. Among animals, only human thought and behavior, and among human beings only such behavior as mathematical and linguistic behavior, is indisputably productive. Not coincidentally, only human beings command the resources of natural and other public languages. My cat knows where his food bowl is and sometimes wants to go out. I don't suppose he has an unbounded repertoire of thoughts. Though he can recognize me, it seems, over an indefinite range of distances, in various poses and lights, on different days, etc., this is perhaps much as the thermostat can recognize it's too warm over an indefinite range of temperatures, at indefinitely many times, from indefinitely many locations, etc.: here "unboundedness" is due to the continuousness of the quantity represented and "novelty" is merely reflective of the indeterminacy of boundary conditions and the vagaries of circumstance: not a clear case of productivity.

How this argument from the coincidence of clearly productive behavior with deployment of overt recursive compositional symbol systems fares depends on what behavior besides linguistic behavior (including mathematical calculative behavior, e.g., as a subspecies) is inexplicable without recourse to recursive computations over compositional formalisms. Feral humans deprived of exposure to natural language, for instance, should exhibit little or no indisputably productive behavior such as would require a system of symbols manipulated according to recursive rules. On PLT, it seems, one should even expect the behavior of feral or language-deprived humans to be as productive as that of fully competent speakers. This seems unlikely.

Lycan even suggests that there are many languages of thought in a single human individual, that different modules (informationally encapsulated subsystems) of our total cognitive system should be expected to have their own proprietary languages{6} -- so there will not only be mentalese, the language of our central processing units, but also, perhaps, visualese (a proprietary language of the visual processing system), olfactorese (for the olfactory module), etc. I suspect no such subsystem's "thought" or behavior is clearly productive except, again, perhaps, in the "analog" way in which "unboundedness" just derives from the continuity of what's represented and "novelty" from the vagaries of circumstance.{7}

(2) The calculator argument. Although some very bright animals, e.g., dogs and monkeys, don't indisputably think and behave productively, and command no linguistic resources, some very dense machines, e.g., my pocket calculator, command (or are commanded by) overt recursive formalisms and not so coincidentally do indisputably behave (if not think) productively. A pocket calculator can calculate sums and products never before calculated by it and even, perchance, never before calculated. If its actual capabilities are limited, certain inputs and results being too large, e.g., for it to store, this is comparable to our own performance limits. There are grammatical sentences too long, given our limited life spans, attention spans, and memories, for us to comprehend. Such performance limits do not, according to Chomsky's way of thinking, impugn the boundless character of the underlying competence. The calculator that outputs garbage due, e.g., to the sum of two addends being too large for it to store, is, after all, incorrectly plusing (following an addition rule, but failing due to its memory limits), not correctly quusing (following the quaddition rule -- "if the sum of two addends is less than such and such, output the sum; otherwise output garbage" -- and succeeding).{8} Again, clearly productive behavioral competence strikingly coincides with command of (or by) an overt recursive formalism. The failure of what intuitively seem much smarter beings (monkeys and dogs) to clearly instance productivity, here, makes the coincidence all the more striking.
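
For concreteness, here is a minimal sketch (mine, not the text's; the storage limit and the "garbage" produced on overflow are arbitrary stand-ins). The two functions compute exactly the same input-output mapping; the difference lies wholly in which rule each is described as following, which is the point of the plus/quus contrast as deployed here:

LIMIT = 10**8   # hypothetical storage capacity of a toy eight-digit calculator

def plus(a, b):
    """Follows the addition rule; overflow past the storage limit yields garbage --
    a performance failure, not obedience to some other rule."""
    total = a + b
    return total if total < LIMIT else total % LIMIT   # truncated "garbage" on overflow

def quus(a, b):
    """Follows the quaddition rule -- output the sum below the limit, garbage otherwise --
    and so succeeds, by that rule's lights, exactly where plus fails."""
    total = a + b
    return total if total < LIMIT else total % LIMIT   # same outputs, different rule ascribed

Nothing in the outputs alone distinguishes the two; the text's point is that the calculator is nonetheless correctly described as plusing with performance limits, not as quusing without them.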

(3) The phenomenological argument. When I engage in clearly productive thought (e.g., reflecting on Lycan's arguments) or behavior (e.g., balancing my checkbook) my introspected experience is of a train of sentences, an interior discourse, in English. My present activity, writing down these arguments, seems to be a process of editing, extending and transcribing such an interior monologue. Similarly, when I do arithmetic "in my head," in my experience, I typically say to myself under my breath, as it were, in my mind's ear, "let's see, two and nine are eleven, carry the one. . . ."

(4) The Einstein objection. Einstein supposedly said, "My pencil is smarter than I am." Perhaps this is apocryphal, but the inner/outer, thought/behavior distinction is very murky, particularly when it comes to linguistic and mathematical behavior. When I balance my checkbook, the physical symbols I manipulate are actual written tokens of numerals; and when I behave and think philosophically, as in composing this essay, I manipulate external tokens of words that I view on the CRT display and generate with keystrokes. Arguably we do our best thinking aloud and on paper. We only do rough drafts in our heads.

(5) The argument from empirical accessibility. While no less physical than brain states and happenings, public utterances and inscriptions are more accessible to our observation and recognition than brain states. How do I know what you think except by hearing you and reading you? How do I know what I think except by reflecting (which I experience as interior discourse), by discussing, and writing? Though "brain writing" (being something physical) would have to be detectable and perhaps readable by third persons (maybe by using some sort of futuristic EEG machine that reads and decodes mentalese), in practice we observe public utterances and inscriptions. So, the hypothesis that our languages of thought are natural or public languages and the hypothesis that thought's language is other than any natural or public language are equally good physicalist doctrines, but the former is a better empirical doctrine.

(6) An argument from testability. A different advantage of the hypothesis that our languages of thought are natural and other overt languages (NLT) over the proprietary language of thought hypothesis (PLT) is that on the former we know what inscriptions to look for (of English, in my case), and are given some strong suggestions where to look (in the aural, vocal, and connected subsystems). If the task of discovering the languages of thought is some kind of decryption task (surely it must be), then, even if PLT is correct, the task of discovering and decrypting mentalese might still be hopeless. Decryption is hard enough when one knows something about the syntax and lexicon of the encrypted language. In investigating NLT we have both syntactic and lexical knowledge of the language (our natural language) and, perhaps, some inkling of what the thinking is about, to go on. PLT leaves us bereft of syntactic or lexical clues to guide our search. Compare the American practice, in World War II, of translating messages from English to Navaho before encryption to the German practice of directly enciphering German text. The American practice utterly frustrated Japanese decryption efforts, while Turing and his cohorts, with knowledge of German syntax and lexicon to guide them, succeeded, famously, in breaking the German Enigma code (Hodges 1983). With little to go on besides what the thinking is about, the decryption task PLT proposes looks hopeless. Even if the proprietary language of thought hypothesis were true, it might still be impossible for us ever positively and directly to confirm this by deciphering the brain's language.

(7) An argument from simplicity. Since mental operations on tokens of overt languages have to be posited to explain how we generate and understand utterances of English, mathematical formulae, etc., it would be most economical if the same also explained the productivity of human thinking and nonlinguistic behavior (if there is productive nonlinguistic behavior).

Concluding Remarks

I suggest, Watsonlike, that productive thinking (at least what we do "in our heads" and not on paper or aloud) is something like the sequential tokening of NL words, sentences, and discourse structures in our vocal, aural, and connected systems. If the task of discerning and decrypting these sequences, even with syntactic and lexical clues to guide it, is as considerable as I suspect, I surmise that such Watsonian speculation is far from disconfirmed. Furthermore, for methodological reasons, NLT is the hypothesis that should be pursued first. If my estimate of the difficulty of the cryptanalysis project proposed by PLT is accurate, I suspect the only hope for confirming this hypothesis would be to do so negatively or indirectly, like this: since only languagelike recursive systems can explain productivity, and thought is productive, there has to be a language or languages of thought; empirical research discountenances identifying these with any natural or other public languages; so, the hypothesis that we think in some proprietary language -- mentalese -- is confirmed. Even if PLT were thus indirectly confirmed by the Chomskyan morphemes-plus-compositional-rules-hypothesis being the only conceivable (or just the best) explanation of productivity in thought and NLT being disconfirmed by empirical research, however, PLT might still be a research-programmatic dud if the further decryption project it proposes is undoable. The "only game in town" argument might founder not because there are other viable physicalistic research programs, but because there are none -- because PLT itself is not viable. Fodor and his followers' conspicuous lack of success in discovering (much less deciphering) any inscriptions of mentalese, after all, is, first and foremost, why the proprietary language of thought hypothesis is beleaguered.{9} The NLT hypothesis gives behaviorism resources for explaining the productivity of thought and linguistic behavior in terms of the recursive rules and symbols of natural and other public languages. The hypothesis is that human thought is not primordially productive: it becomes so by manipulating and, insofar as the thinking is done "in our heads," internalizing the morphemes-plus-compositional-rules of public languages.


References

Bundy, A. (1990). What kind of field is AI? In Partridge, D., & Wilks, Y. (Eds.), The Foundations of Artificial Intelligence (pp. 215-222). Cambridge: Cambridge University Press.

Chandrasekaran, B. (1990). What kind of information processing is intelligence? In Partridge, D., & Wilks, Y. (Eds.), The Foundations of Artificial Intelligence (pp. 14-46). Cambridge: Cambridge University Press.

Chomsky, N. (1965). Aspects of the Theory of Syntax. Cambridge, MA: MIT Press.

Chomsky, N. (1988). Language and Problems of Knowledge. Cambridge, MA: MIT Press.

Dowty, D., Wall, R., & Peters, S. (1981). Introduction to Montague Semantics. Dordrecht, The Netherlands: Kluwer Academic Publishers.

Fodor, J. A. (1975). The Language of Thought. Cambridge, MA: Harvard University Press.

Fodor, J. A. (1980). Methodological solipsism considered as a research strategy in cognitive psychology. Behavioral and Brain Sciences, 3, 63-73.

Fodor, J. A. (1987). Why there still has to be a language of thought. In Psychosemantics (pp. 135-154). Cambridge, MA: MIT Press.

Kripke, S. (1982). Wittgenstein on Rules and Private Language. Cambridge: Cambridge University Press.

Lycan, W. G. (1987). Consciousness. Cambridge, MA: MIT Press/Bradford Books.

Lycan, W. G. (1993). A deductive argument for the representational theory of thinking. Mind and Language, 8, 404-422. An earlier version of this paper was presented by Lycan at the University of Rochester Conference on Belief and Belief Attribution: Rochester, NY, May 1991.

Newell, A., & Simon, H. A. (1976). Computer science as empirical inquiry: symbols and search. Communications of the ACM, 19(3), 113-126. Reprinted in Haugeland, J. (Ed.). (1981). Mind Design (pp. 363-372). Cambridge, MA: MIT Press.

Partridge, D. (1990). What's in an AI program? In Partridge, D., & Wilks, Y. (Eds.), The Foundations of Artificial Intelligence (pp. 112-118). Cambridge: Cambridge University Press.

Partridge, D., & Wilks, Y. (1987). Does AI have a methodology different from software engineering? AI Review, 1(2), 111-120. Reprinted in Partridge, D., & Wilks, Y. (Eds.). (1990). The Foundations of Artificial Intelligence. Cambridge: Cambridge University Press.

Putnam, H. (1975). The meaning of `meaning'. In Mind, Language and Reality: Philosophical Papers, Vol. 2 (pp. 215-271). New York: Cambridge University Press.

Putnam, H. (1988). Representation and Reality. Cambridge, MA: MIT Press.

Ryle, G. (1949). The Concept of Mind. New York: Barnes & Noble.

Watson, J. B. (1913). Psychology as the behaviorist views it. Psychological Review, 20, 158-177.

Wittgenstein, L. (1958). Philosophical Investigations (G. E. M. Anscombe, Trans.). Oxford: Basil Blackwell.

Notes

1. I am indebted to Bill Lycan for his comments on an earlier version of this paper and to Barbara Abbott for much fruitful discussion.

2. Lycan 1993 (p. 406). Page references are to this work unless otherwise indicated.

3. Here is another common statement (from Dowty, Wall & Peters 1981, p. 8): 'The meaning of the whole is a function of the meanings of the parts and their mode of combination.' Idioms, such as 'to put on the dog' (meaning to get all dressed up), are by definition exceptions to the principle of compositionality. Thanks to Barbara Abbott for this reference. The characterization in the text is also due to Abbott (personal communication).
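
As a minimal illustration of the principle (mine, not Dowty, Wall & Peters'; the word meanings assigned are purely illustrative stand-ins), the meaning of a toy phrase like "two and nine" can be computed from the meanings of its parts and their mode of combination:

# Toy compositional semantics: the meaning of the whole is fixed by the parts
# and their mode of combination (illustrative lexicon only).
LEXICON = {"two": 2, "nine": 9, "and": lambda x, y: x + y}

def meaning(phrase):
    """Compose the meaning of an 'X and Y' phrase from the meanings of X, 'and', and Y."""
    x, connective, y = phrase.split()
    return LEXICON[connective](LEXICON[x], LEXICON[y])

print(meaning("two and nine"))   # 11 -- determined by the parts and their combination

An idiom, by contrast, would be a phrase whose meaning could not be computed in this way from the meanings of its parts.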

4. It is an old point (see, e.g., Ryle 1949, Wittgenstein 1958) that the generic `think' and many if not most specific mental predicates (e.g., `believe' and `wish') are ill understood as descriptive of inner (phenomenological or neurophysiological) processes and states. The main philosophical interest of connectionism, I take it, is its provision of ways of specifying computational mechanisms underlying the regularities of behavior that propositional attitude psychology explains and predicts, without having to embody the propositions that propositional attitude psychology "quantifies over" as "syntactically" differentiated elements of a neurologically implemented language.

5. Lycan wishes to dissociate himself from Fodorian commitments to innateness (public discussion: Conference on Belief and Belief Attribution, University of Rochester, Rochester, N.Y., May 1991). If the language of thought hypothesis Lycan defends supposes a primordially representative, primordially recursive, and compositional LOT, I surmise he has not succeeded.

6. Lycan has suggested this in public discussion (University of Rochester Conference on Belief and Belief Attribution) and privately in correspondence. This also seems the hypothesis most consonant with "homunctionalism" (Lycan 1987, chap. 4).

7. The primary contrast intended here is between the analog (continuous) and the digital (discrete), not between literal and analogical or metaphoric . . . though one might also say such analog "productivity" as the thermostat's is merely analogous to genuine, discrete-state productivity (hence the scare quotes). Questions arise here concerning the extent to which both analog "productivity" and "genuine" productivity may be artifacts of the metrics used to describe the thought or behavior in question -- of temperature scales in the case of the analog "productivity" of thermostatic behavior and sentential specifications of the content of "propositional attitudes" in the case of thought.

8. Chomsky 1965, Chapter 1, is the locus classicus for the competence/performance distinction. For discussion of plusing vs. quusing see Kripke 1982: the "quus" and "quaddition" terminology is Kripke's own.

9. Jerry Fodor asks why I call the language of thought hypothesis "beleaguered" (personal communication at the Buffalo conference). Besides the number of mentalese inscriptions thus far discovered being zero, there is the "not so well-kept secret that AI is internally in a paradigmatic mess" (Chandrasekaran 1990, p.14). The "methodological mess" (Partridge 1990, p.112) or "malaise" (Bundy 1990, p.215) in AI bespeaks Kuhnian "crisis" (Partridge & Wilks 1990, p.363) for the "symbolic paradigm" (Chandrasekaran 1990, p.18), i.e., for the AI research program that has been the flagship of the "physical symbol system" (Newell & Simon 1976) or "language of thought" (Fodor 1975) line of hypotheses. Dismissing AI (Fodor: public discussion at the Buffalo conference) seems an attempt to maintain solvency by liquidating assets: this does not suggest a prospering enterprise. Hilary Putnam's widely accepted "externalist" account of the meaning of natural kind words (e.g., `water' and `tiger'), in which the natural environment (the actual referents) and social environment (specifically, speakers' roles in the division of linguistic [and extralinguistic] labor in their speech communities) are determinants of meaning, also squares ill with PLT. Putnam's (1975) account seems the best explanatory scheme for natural kind terms going in lexical semantics, and it seems to have the consequence that meanings (the semantic contents of propositional attitudes, and consequently propositional attitudes themselves) "aren't in the head" (Putnam 1988, p.72) in anything like the mechanically efficacious, causally salient way PLT supposes (cf. Fodor 1980).