Minds and Machines, Vol. 3, No. 1 (February, 1993), pp. 21-29.

The Sense of `Thinking':

Reply to Rapaport

by
Larry Hauser
 
It will be found that the great majority, given the premiss that thought is not distinct from corporeal motion, take a much more rational line and maintain that thought is the same in the brutes as in us, since they observe all sorts of corporeal motions in them, just as in us. And they will add that `the difference, which is merely one of degree, does not imply any essential difference'; from this they will be quite justified in concluding that, although there may be a smaller degree of reason in the beasts than there is in us, the beasts possess minds which are of exactly the same type as ours. (Descartes 1642b, pp.288-289)

1. Clarifications and Issues

I begin with several clarifications. By `thinking' I mean having (some) mental properties: perceptual properties (e.g., seeing, hearing, detecting), cognitive properties (e.g., knowing, believing, calculating), conative properties (e.g., wanting, needing, seeking), etc. `Calculate', in the context of this argument (Hauser 1991), means doing arithmetic: adding, subtracting, multiplying, dividing, determining percentages, and extracting square roots -- Cal's apparent abilities. The major premise, `Calculating is thinking', is meant to assert that whatever calculates thinks, not vice versa. My cat, Mary Jane, for instance, certainly knows where her food bowl is and often wants to go out; but M.J. can't add, subtract, multiply, or divide.
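
To fix the extension of `calculate' so understood, Cal's apparent abilities might be modeled as follows -- a minimal sketch of mine, not part of the argument, with hypothetical names (and an assumed convention for the percent operation):

    import math

    # A hypothetical model of `calculating' in the sense at issue:
    # just Cal's apparent arithmetic abilities, nothing more.
    CAL_ABILITIES = {
        "add":      lambda a, b: a + b,
        "subtract": lambda a, b: a - b,
        "multiply": lambda a, b: a * b,
        "divide":   lambda a, b: a / b,
        "percent":  lambda a, b: a * b / 100,  # b percent of a (one common keypad convention)
        "sqrt":     math.sqrt,                 # extracting square roots
    }

    # Cal adds; M.J., whatever else she does, does not.
    assert CAL_ABILITIES["add"](2, 2) == 4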

This last remark broaches related questions which seem to be the central points of contention between Rapaport (1991) and me: (1) whether the argument I present shows that Cal thinks in a philosophically interesting sense; and (2) whether there is a distinction of meaning or of sense between what Rapaport calls the "minimal" and "maximal" senses of `thinking'. In this second connection I also speak to Rapaport's worries about "Cal Jr. [who] can only add (but not subtract, multiply, or divide)," "MiniCal [who] can only add 2 and 3," and "MicroCal ... a piece of cardboard with `2 + 3 =' on one side and `5' on the other" (Rapaport 1991, Sect. 8), etc. -- that is, to whether we need to distinguish (or would be helped by distinguishing) different (maximal and minimal) senses of `thinking' to avoid having to say, presumably absurdly, that not only Cal, but MicroCal, and even "NanoCal, a piece of paper with `2 + 3 = 5' inscribed on it" (Rapaport 1991, Sect. 8), think in the same sense as you or I.

2. The Philosophical Interest of Cal's Claim to Be Thinking

With respect to the first question I note that Searle and Dretske both explicitly deny that pocket calculators really calculate -- that they really add, subtract, multiply, or divide, or can really (i.e. literally and truly) be said to have any mental abilities whatever. Both deny that Cal thinks even in the "minimal" sense of having some (even if just one) mental ability. Descartes would also deny that Cal calculates or has any mental abilities whatever (that Cal thinks, even in a "minimal sense"). Descartes, of course, would also deny my cat has any mental abilities whatever (that she thinks, even in a "minimal sense").

Rapaport wonders, "On what basis could one possibly deny that Cal was calculating?" (Sect. 6). Dretske (1985) and Searle (1980b) would say, "Cal lacks intrinsic intentionality" (the intentionality objection). Descartes, if he were to put it in Rapaport's terms, might say "only maximal thinking is real thinking, so piecemeal thought-like abilities, such as Cal's so-called `calculating' and M.J.'s so-called `desire to go out' are not really thinking (not real modes of thought) at all." I share Rapaport's amazement that anyone would deny, with Searle and Dretske, that calculators calculate -- I find it as amazing as Descartes's denial that cats, e.g., really see and hear and want things. (Descartes would say my cat doesn't really, literally, see and want things; it's just mindless, mechanical as-if seeing and wanting, just as Searle says what Cal does is just unconscious, mechanical/syntactical "as-if" calculating.){1} I believe it is the difficulty of making out any such distinction of meaning between "maximal thinking" and "minimal thinking" as Rapaport tries to make out that drives opponents of AI to such lengths to save the thesis that machines don't think.

3. Unambiguous `Thinking'

I turn, then, to Rapaport's attempt to make out such a "maximal sense" of `thinking' -- a sense he roughly identifies with thinking to a degree that suffices, or with having a sufficient number and range of mental abilities, to enable you to pass Turing's Test. And the first thing{2} to note is that if passing Turing's Test is the criterion of thinking in the same (maximal) sense as you and I, then neither cats, nor dogs, nor chimps, nor porpoises, nor any other infrahuman animal thinks in the same sense humans do. So if the question at issue is whether Cal belongs to the "club" of things with minds, along with us; and if you think this "club" includes monkeys and dogs (Searle 1980a, p.421) and, perhaps, grasshoppers and fleas (Searle 1990a, p.587); then the Turing Test criterion (and likewise, perhaps, any other holistic criterion, or unity criterion) is going to rule out Cal at the cost of ruling out other things (e.g., cats) we may want to include. Perhaps this is why Dretske and Searle appeal to intentionality and not unity to rule out Cal; and Descartes, who does appeal to the unity of minds or universality of mental capability, takes this holistic turn, among other things, to exclude cats (whales, monkeys, clams, sponges, etc.) from the "club."

I admit that the notion of "various sorts" of mental abilities is not entirely clear: Neither is it clear which sorts and how many (of what sorts) are required for "maximal thinking." This is why I only say some holistic requirement "may suffice to rule out Cal." (So, I haven't seriously undercut my own argument.)

Well, some things have more mental properties than others. I trust I have more mental properties than my cat, but I suppose both she and I have mental properties (can be said to "think") in the same sense. Similarly, Lake Michigan contains more water than this glass contains -- you might say Lake Michigan contains water maximally and the glass contains water minimally -- yet surely both contain water in the same sense. More generally, the notion that there are "degrees of thinking with humans at one end of the spectrum and pocket calculators nearer the other end" (Rapaport) squares ill with the claim that `think' means something different when I say, "Cal thinks" (because he calculates, and calculating is thinking) than when I say, "You think" (because you hear me speak, agree with some of my points, disagree with others, etc.); it squares ill with the claim that the difference between my thinking and Cal's thinking answers to a difference in the sense of the word `thinking'. Rapaport doesn't establish that Cal doesn't think in the same sense as you and I (though not, of course, to the same extent -- who would say that?). "An ordinary application of Occam's razor places the onus of proof on those who wish to claim that these sentences are ambiguous. One does not multiply meanings beyond necessity." (Searle 1975, p.40)

Perhaps cognitive science will find it necessary to elaborate such a spectrum of senses as Rapaport suggests. It is common, in science, to invent new terms or make proprietary use of existing terminology to mark crucial theoretical distinctions. Nothing I say here rules this out.{3} On the other hand, I do not see that anything Rapaport says argues much for the scientific necessity or utility of recognizing such a spectrum of senses as he proposes; and even if some such spectrum of senses were fruitfully made out, the philosophical issue I address remains. The issue is whether computers have any sort of mentality: what Searle (1980a, 1990c) and Dretske (1985), e.g., deny. If the inability of cats to pass Turing's Test does not show that the apparent mental capacities of cats aren't real mental capacities{4} (just less than maximal), then neither should the inability of pocket calculators to pass Turing's Test show that their apparent mental capacities aren't real (however minimal). Even if such a spectrum of senses can be scientifically made out -- say some spectrum of senses ranging from thinks1 (Cal?) to thinks9 (Einstein?){5} -- presumably, whether a thing thinks1 or thinks9, it thinks simpliciter.

I take it, then, my argument shows Cal thinks in the received philosophical (not uninteresting) sense of being a res cogitans or subject of mental (specifically, intentional) states. I also believe this to be a perfectly ordinary (not "some watered down") sense of the term `think'; but whether this second belief is correct does not affect the soundness (nor, I have just urged, the philosophical interest) of my argument. Since I take `think' to mean `be a res cogitans' or `be a subject having mental properties' in the major premise (`Calculating is thinking') as well as the conclusion (`Cal thinks') -- whether this usage is ordinary or not -- there is no equivocation on `think'. On this understanding of `think' (as Rapaport notes) the major premise seems obviously true. If, as it seems, the fact that `calculate' is a predicate taking a sentential complement (like `believe' and `wish') confirms the intuition that calculating (like believing and wishing) is a propositional attitude or intentional mental state, then a thing that calculates is ipso facto the subject of a mental state.

4. Addendum: Cal and Other Minds (MiniCal, MicroCal, Etc.){6}

4.1 Occam's Eraser{7}

I propose we accept the principle Paul Ziff calls "Occam's Eraser."{7} Paul Grice calls it "Modified Occam's Razor (M.O.R.): Senses are not to be multiplied beyond necessity." As Grice points out, "Like many regulative principles, it would be a near platitude, and all would depend on what counted as `necessity'" (Grice 1978, pp.118-119). What Searle claims is that it is necessary to posit a systematic (intrinsic/as-if) ambiguity of mental terms because "the price for giving this distinction up would be that everything then becomes mental" (Searle 1989, p.198); "If you deny the distinction then it turns out that everything in the universe has intentionality" (Searle 1989, p.198; 1990a, p.586). Perhaps Rapaport likewise supposes that recognizing a spectrum of senses of `thought' provides some traction on that notoriously slippery slope from scholar to sponge, or from Sparky to Cal to NanoCal. Since I further believe, with Saul Kripke,
It is very much the lazy man's approach to philosophy to posit ambiguities when in trouble. If we face a putative counterexample to our favorite philosophical thesis, it is always open for us to protest that some key term is being used in a special sense, different from its use in the thesis. We may be right, but the ease of the move should counsel a policy of caution. Do not posit an ambiguity unless you are really forced to, unless there are really compelling theoretical or intuitive grounds to suppose that an ambiguity really is present. (Kripke 1977, p.268)
I will only consider whether some such posits of ambiguity as Rapaport and Searle propose are scientifically necessary (or useful) for avoiding sliding all the way down the slope and off into panpsychic oblivion. This accords with the scientific tenor of Rapaport's comments, and with the scientific pretensions (about whose tenor I am more doubtful) of Searle's proposals.{8} Since the question that concerns me is whether calculators have any mental properties at all -- which Rapaport admits, and Searle denies -- I shall concentrate on Searle's proposed multiplication of senses rather than Rapaport's.

4.2 Searle's Proposal

What Searle proposes (see, e.g., Searle 1980b) amounts to this: Each mental term in our folk psychological vocabulary gets its sense multiplied by two -- an "as-if" sense posited for each -- when it comes to be predicated of machines. Much as the fundamental intuition underlying Rapaport's call for a spectrum of senses of `think' is that there are "degrees of thinking, with humans at one end of the spectrum and pocket calculators nearer the other end" (Rapaport, p.29), Searle observes,
there can be an indefinite range of degrees of consciousness, ranging from the drowsiness just before one falls asleep to the full blown complete alertness of the obsessive. There are lots of different degrees of consciousness, but door knobs, bits of cloth and shingles are not conscious at all. (Searle 1990b, p.635)
Searle differs from Descartes, perhaps, in thinking that in addition to being "an on/off switch" thought "is a rheostat" (Searle 1990b, p.635); but he agrees with Descartes (and herein disagrees with Rapaport and me) that consciousness is thought's essence. Searle's "Connection Principle," requiring mental states to be "in principle accessible to consciousness" (1990a, p.586; Searle's italics), echoes Descartes's insistence that "there can be nothing in the mind of which it is not aware" -- nothing which "is not a [conscious] thought or dependent on a [conscious] thought" (1642b, p.171).{9} Like Searle, Descartes also, famously, compares consciousness to light (e.g., 1642a, p.41; 1642b, p.135; and in many other passages).

Now, the trouble with all this as a scientific proposal is well known, and has long been understood: we have nothing comparable to our public ways of measuring degrees of illumination -- or even (if behavioral evidence is unable to decide between the hypotheses of genuine and mere "as-if" awareness) of telling whether the "light" is on -- in the case of consciousness. A staunch defender of Searle, Stevan Harnad, even shows (and accepts the consequences) that, given Searle's consciousness-based distinction between real and mere as-if mental properties, "no scientific answer can be expected to the question of how or why we differ from mindless bodies that simply behave exactly as if they have minds"; consciousness "makes no objective difference"; and "subjective experience does not and cannot figure directly in mind modeling -- not even the way a quark does -- but must always be taken on faith" (Harnad 1991, p.52). That claims of artificial intelligence -- e.g., that Cal thinks -- are injurious to the neo-Cartesian faith{10} in consciousness that Harnad, Nagel (1974, 1986), and Searle (1980a, 1990b) profess does not, I take it, scientifically force us to posit a systematic (intrinsic/as-if) ambiguity for all mental terms.

The question Searle calls "a deep mistake" (1990b, p.635) -- how do we tell which things (besides ourselves) are conscious (and to what degree), and which things aren't? -- is, of course, precisely the scientifically imperative question. If Searle's neo-Cartesian proposals don't shed light on this longstanding anomaly (which, perhaps more than anything, sank the first wave of Cartesian, consciousness-based research programs){11} -- if Searle's proposed scientific study of consciousness (Searle 1990a, p.585) cannot provide tolerably precise measures of different systems' and species' degrees of (conscious) "illumination" -- his counsel is scientifically futile.

Searle's proposed "answer" to the other-minds problems that bedevil such consciousness-centered views as his is just, "Use your ingenuity. Use any weapon at hand, and stick with any weapon that works" (1990b, p.640). Given Searle's proposed theoretical linkage (if not identification) of mind and consciousness, this, of course, is no answer to Searle's problem of providing some tolerably precise way of measuring (or at least detecting) conscious illumination independently of its behavioral effects. It is, however, the answer to my supposed slippery slope troubles. Just so! This is the basis of my argument.

4.3 Who's Afraid of the Slippery Slope?

What ingenuity suggests to explain why Cal displays `4' (or to predict he will display `4') after having `2', `+', `2', `=' entered on his keypad is the hypothesis "Cal adds 2 + 2 and gets 4"; and this works. If we allow (with Searle 1980a, p.422) that whether machines have mental properties is "an empirical question"; if we credit (as we should) working attributions of mental properties to machines -- "DOS recognizes the dir command," "Deep Thought considers more continuations of play at greater length than any human chess player," etc. -- above speculative "holiday" talk (Wittgenstein 1958, Sect. 38) about the nature of mind or the essence of thinking (such as Searle's "Connection Principle"); then, I submit, the empirical evidence suggests that machines running DOS, Deep Thought, and Cal have the mental properties of recognizing the dir command, considering alternative continuations of play, and calculating that two plus two equals four, respectively. My argument rests on homely empirical judgments such as these, not on any theoretical claims about the nature of mind such as Searle's "Connection Principle" or even Rapaport's "fundamental assumption (the working hypothesis) of computational cognitive science," which maintains, "to think is to compute" (Rapaport 1991, Sect. 1).{12}
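
To make the homely point concrete: here is a minimal sketch (mine, not the paper's; `predict_display' is a hypothetical name) of how the attribution "Cal adds" earns its keep as a working hypothesis, predicting displayed output from keypad input:

    # Under the hypothesis "Cal adds", predict what Cal will display
    # when the key sequence a, `+', b, `=' is entered on his keypad.
    def predict_display(keys):
        a, op, b, eq = keys
        assert op == "+" and eq == "="
        return str(int(a) + int(b))

    # The hypothesis predicts Cal will display `4' -- and he does.
    assert predict_display(("2", "+", "2", "=")) == "4"

Nothing in such a sketch, of course, decides between genuine and "as-if" calculating; it merely exhibits the predictive work the attribution does.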

So, what of Cal Jr., MiniCal, MicroCal, etc.? Like Rapaport (p.30), I suspect a "boundary is crossed when we move from dynamic systems ... to static ones" that rules out MicroCal. And if the wind blows MicroCal down the street alternately displaying `2 + 3 =' and `5'{13} ... am I now committed to the claim that MicroCal (or the wind, or the MicroCal-wind system) calculates (and therefore thinks)? Not by my lights. If pressed to justify this I should say something about attributions of calculative abilities serving to predict responses to mathematical queries, not gusts of wind. If pressed even further I should say something about the social division of arithmetic labor and Cal's role in it (see, e.g., Putnam 1975). Such further justification, however, is really supererogatory here. Since my argument in no way presupposes or invokes an "externalist" theory (or any other theory) of the nature of mind (or the origins of meaning), and since I haven't any impression (does anyone?) that MicroCal blowing down the street is calculating (unlike the case of Cal!), I am under no obligation to provide any theoretical justification for denying MicroCal calculates.

On the other hand, whoever denies Cal calculates (in the face of universal admission that "Cal is a calculator") does owe some theoretical justification. I have argued that no adequate justification for such denial has been provided by Searle, Dretske, or anyone else. Given the track record of attempts to make respectable science either out of or in place of "folk psychology," no such justification, I believe, is likely to be forthcoming. The state of psychological theory generally, and of personality theory (what's at issue, if holism's at issue) in particular, strongly militates against the possibility of such justification. Neither does it seem we can simply "overcome the impulse" (Searle 1980a, p.423) to say such things as "calculators calculate," "Deep Thought plays chess,"{14} "DOS recognizes the dir command," etc. It would be fruitless, I think, to try.

References

Bayle, Pierre (1697), "Rorarius," in Bayle's Historical and Critical Dictionary, trans. R. H. Popkin (Indianapolis: Hackett Publishing Co., 1991): 213-254.

Dennett, Daniel C. (1987), The Intentional Stance (Cambridge, MA: MIT Press).

Descartes, Rene (1628), Rules for the Direction of the Mind, trans. in J. Cottingham, R. Stoothoff, D. Murdoch, The Philosophical Writings of Descartes, Vol.1 (Cambridge, Eng.: Cambridge University Press, 1985): 7-78.

Descartes, Rene (1637), Discourse on Method, trans. in J. Cottingham, R. Stoothoff, D. Murdoch, The Philosophical Writings of Descartes, Vol.1 (Cambridge, Eng.: Cambridge University Press, 1985): 109-151.

Descartes, Rene (1642a), Meditations on First Philosophy, trans. in J. Cottingham, R. Stoothoff, D. Murdoch, The Philosophical Writings of Descartes, Vol.2 (Cambridge, Eng.: Cambridge University Press, 1984): 1-62.

Descartes, Rene (1642b), Objections and Replies, trans. in J. Cottingham, R. Stoothoff, D. Murdoch, The Philosophical Writings of Descartes, Vol.2 (Cambridge, Eng.: Cambridge University Press, 1984): 93-383.

Dretske, Fred (1985), "Machines and the Mental," Proceedings and Addresses of the American Philosophical Association 59: 23-33.

Grice, Paul (1978), "Further Notes on Logic and Conversation," Syntax and Semantics: Pragmatics, Vol.3, ed., Peter Cole (New York: Academic Press): 113-127.

Harnad, Stevan (1991), "Other Bodies, Other Minds: A Machine Incarnation of an Old Philosophical Problem," Minds and Machines I (1): 43-54.

Hauser, Larry (1991), "Why Isn't My Pocket Calculator a Thinking Thing?"; in L. Hauser and W. J. Rapaport (1991), "Why Isn't My Pocket Calculator a Thinking Thing: Essay, Comments, and Reply," Technical Report 91-20 (Buffalo: SUNY Buffalo Department of Computer Science).

Hauser, Larry (1992a: forthcoming in Philosophical Investigations), "Act, Aim, and Unscientific Explanation."

Hauser, Larry (1992b: forthcoming in Minds and Machines), "Reaping the Whirlwind: Reply to Stevan Harnad's `Other Bodies, Other Minds'."

Hauser, Larry (1992c), "Acting, Intending, and Artificial Intelligence." Presented at Colloquium on Action Theory, American Philosophical Association Central Division, Louisville, KY 25 April 1992.

Kripke, Saul (1977), "Speaker's Reference and Semantic Reference," in P. A. French, T. E. Uehling Jr., H. K. Wettstein (eds.), Midwest Studies in Philosophy, II (Morris, MN: University of Minnesota, Morris).

Nagel, Thomas (1974), "What Is It Like to Be a Bat?", Philosophical Review, 83: 435-450.

Nagel, Thomas (1986), The View from Nowhere (New York: Oxford University Press).

Putnam, Hilary (1975), "The Meaning of `Meaning'," in H. Putnam, Mind, Language and Reality: Philosophical Papers, Vol. 2 (Cambridge, Eng.: Cambridge University Press, 1975): 215-271; first published in Language, Mind, and Knowledge: Minnesota Studies in the Philosophy of Science, VII (Minneapolis, MN: University of Minnesota Press, 1975).

Rapaport, W. J. (1991), "Because Mere Calculation Isn't Thinking," in L. Hauser and W. J. Rapaport (1991), "Why Isn't My Pocket Calculator a Thinking Thing: Essay, Comments, and Reply," Technical Report 91-20 (Buffalo: SUNY Buffalo Department of Computer Science); page references are to the technical report.

Ryle, Gilbert (1949), The Concept of Mind (New York: Barnes & Noble).

Searle, J.R. (1975), "Indirect Speech Acts," in J. Searle, Expression and Meaning (Cambridge, Eng.: Cambridge University Press, 1979): 30-57.

Searle, John R. (1980a), "Minds, Brains, and Programs," Behavioral and Brain Sciences, 3: 417-424.

Searle, John R. (1980b), "Intrinsic Intentionality," Behavioral and Brain Sciences, 3: 450-457.

Searle, John R. (1989), "Consciousness, Unconsciousness, and Intentionality," Philosophical Topics, 17 (1): 193-209.

Searle, John R. (1990a), "Consciousness, Explanatory Inversion, and Cognitive Science," Behavioral and Brain Sciences, 13: 585-596.

Searle, John R. (1990b), "Who is Computing with the Brain?", Behavioral and Brain Sciences, 13: 632-640.

Searle, John R. (1990c), "Is the Brain's Mind a Computer Program?", Scientific American, Vol. 262 (1): 26-31.

Turing, Alan M. (1950), "Computing Machinery and Intelligence," Mind, 59: 433-460; reprinted in A. R. Anderson (ed.), Minds and Machines (Englewood Cliffs, NJ: Prentice-Hall, 1964): 4-30; also reprinted in M. A. Boden (ed.), The Philosophy of Artificial Intelligence (Oxford: Oxford University Press, 1990); page references are to Boden.

Watson, John B. (1913), "Psychology as the Behaviorist Views it," Psychological Review, 20: 158-177.

Wittgenstein, Ludwig (1958), Philosophical Investigations, trans. G. E. M. Anscombe (Oxford: Basil Blackwell).

Ziff, Paul (1960), Semantic Analysis (Ithaca, NY: Cornell University Press).

Notes

 1. "This [functioning of the corporeal imagination] enables us to understand how the movements of all the other animals come about, even though we refuse to allow that they have any awareness of things, but merely grant them a purely corporeal imagination." (Descartes 1628, p.42. Cf., Descartes 1637, p.139; 1642b, p.288-289).^

2. Passing over Turing's worry that "the odds are weighted too heavily against the machine. If a man were to try and pretend to be the machine he would clearly make a very poor showing" (Turing 1950, p.42).

3. Though what I have argued elsewhere (Hauser 1992a), perhaps, does.

4. Descartes (1637, pp.139-140) argues that the inability of beasts, like machines, to "put together words in order to declare ... thoughts to others," or the inability of a beast (or machine) "to give an appropriate meaningful answer to whatever is said in its presence, as even the dullest of men can do" -- failure, in effect, to pass the Turing Test -- does show that their apparent mental capacities aren't really mental.

5. Perhaps, as an anonymous reviewer suggests, an "empiricist account" based on "variations of the Turing Test" could even be made "tolerably precise": there would be "a version of the Turing Test which a chimp, for example, could pass" (evidencing thought7, say), "a version which a dog could pass" (evidencing thought5, say), etc.

6. This section owes much to the comments of an anonymous reviewer for Minds and Machines.

 7. "There is no point in multiplying dictionary entries beyond necessity. (That is the point of Occam's eraser)" (Ziff 1960, p.44). Thanks to Barbara Abbott for making me aware of Ziff's discussion as well as Grice's and Kripke's discussions (below).^

8. Searle (1990a, p.585) writes, "since Descartes, we have, for the most part, thought consciousness was not an appropriate subject for a serious science or scientific philosophy of mind." It would be closer to the truth, I think, to say, "From Descartes into the twentieth century (until the advent of Watsonian behaviorism and Freudian psychoanalysis) nothing but consciousness was thought an appropriate subject for a serious science or scientific philosophy of mind."

9. If Searle's insistence on granting overriding epistemic privileges to "the first-person point of view" in his Chinese Room Experiment (1980b, p.451) implicitly "invites us to regress to the Cartesian vantage point" (Dennett 1987, p.336), this "Connection Principle" is an engraved invitation.

10. Descartes professes such faith as follows: "That there can be nothing in the mind, in so far as it is a thinking thing, of which it is not aware seems to me self evident" (Descartes 1642b, p.171).

11. The difficulty "that the arguments of the Cartesians lead us to judge that other men are machines" has long been understood to be "perhaps the weakest side of Cartesianism" (Bayle 1697, p.231): the consciousness-based program's most serious and longstanding anomaly. This difficulty was pressed against introspectionist, consciousness-based research programs by Watson (1913) and driven home (decisively, I believe) by Ryle (1949) and Wittgenstein (1958). Whatever the defects of metaphysical behaviorism, methodological behaviorism's turn away from the "data" of consciousness, away from the vagaries of "the point of view of the agent" and how it seems to me (the only agent whose "first-person point of view" I can be privy to) "from my point of view" (Searle 1980a, pp.419-420), the turn towards publicly observable (and possibly measurable) data, such as behavior, still seems (scientifically, clinically, and just practically) sound policy.

12. This is, roughly, the hypothesis Searle (1980a and elsewhere) terms "strong AI."

13. An anonymous reviewer of an earlier version of this paper imagines Abbey the abacus being blown about by the wind.

14. Since Searle also holds that many attributions of actions, e.g., playing chess, presuppose mental states (intentions), he must even deny that chess-playing computers play chess, perhaps that word processors really process words, etc. (Hauser 1992c discusses this.)

 