Behavior and Philosophy, 22(1), 29-33.

Movements, Actions, the Internal, and Hauser Robots

Keith Gunderson

University of Minnesota

Abstract

As a bit of stage setting for displaying my reactions to Larry Hauser's very interesting paper, let me distinguish between two classes of robots. The first I shall label Utopian strong AI robots, or USAIs -- here using "strong" in Searle's sense, meaning endowed with real understanding, thoughts, etc. -- though I depart from Searle's terminology in that I won't assume that their mental life is supposed to derive (implausibly, according to Searle) solely from their programming. They are, as it were, made of the "right stuff," whatever it is, and can do most everything we can do. The second is a class of mindless "guys" that can mimic either us or USAIs. These I shall dub MUSAIs.{1} USAIs mimic us and have minds; MUSAIs mimic USAIs and us and don't have minds.

Although we don't yet cohabit the planet with MUSAIs, and although it might not really make sense to suppose we ever could, we obviously do associate with their less accomplished prototypes. What, in effect, philosophers such as Searle and Hauser, AI researchers, and others argue about is whether we also already associate with any first approximations of USAIs. What interests me most in Hauser's provocative critique of Searle is the extent to which he thinks that we do, and why.

Hauser begins by arguing that something quite like the distinction between action and movement seems called for in the case of robots raising their arms as opposed to their robot arms being raised for them, and that this suggests that the distinction has more scope than that of intention or aim. I agree, and regard the point as important. He also claims that our ascriptions of mentality to non-living robots are not to be dismissed as all equally metaphorical, figurative, or as-iffy. They are, rather, more or less so, and the more literal ones need a subtler philosophical assessment than Searle's accounts provide. Attention to degrees, Hauser believes, is sufficient to aim our rudders away from any drift toward panpsychism, too much fear of which may lead us to deny, quite implausibly, I concur, that Schank and Abelson's SAM even processes stories, that Deep Thought really plays chess, etc. So too, on the positive side, a subtler assessment, for Hauser, yields up, in the case of programmed robots, analogies to human actions emanating from internal or so-called "intrinsic" states and operations, and the justification of counterfactuals concerning behavior under aspects.

So do these internally propelled, agent-like, acting-under-aspects programmed robots (or Hauser Robots) really have minds? Sort of have minds? Or what? Do various extant and easily imagined future Hauser Robots -- those with fairly rich behavioral repertoires -- belong in a class of "psychologically real" first approximations of USAIs, or should they instead be consigned to the class of first (or second or third) approximations to psychologically vacant ("no one's at home") MUSAIs? Hauser seems to group them with first approximations of USAIs, and his paper, though charmingly modest and non-evangelical, is, after all, it seems, an apology on behalf of some allegedly current strong AI models of the mind. Although he supposes that in some sense of action -- his "full-blooded" actions -- "humans act and computers don't," and that the sense in which computers don't has to do with underwriting "full legal and moral responsibility," this, supposedly, doesn't differentiate their performances from various types of clearly mentalistic human ones perpetrated by kids or disadvantaged adults, or non-disadvantaged adults "in the heat of passion," etc. And, he claims, shadowing Searle's reasoning for actions being movements caused by intentions, "an argument for the view that movements caused by programs are full-blooded acts would be that programs 'enable us to justify counterfactuals' concerning behavior under aspects" (p. 27). Now I agree that it could be an argument. But if "full-blooded acts" entail "full-blooded minds," it would not, by itself, prove decisive. It would depend, as well, not just on the nature of the program, but on the medium in which the program was instantiated (or "roboted"), on whether that medium was of the "right stuff," whatever that may be,{2} and on reasons for classifying whatever robot is under consideration as a USAI approximation instead of a MUSAI one. More on this in a moment, after remarking on some of the earlier points in the paper.

Hauser regards as a quite implausible conjunction Searle's mentalization of all behavior and his denial of any mental states to computers. For then "no computer can really, literally be said to have perceptual states (e.g., to see, or hear, or detect anything), nor even (literally) to do anything. Not only doesn't Schank and Abelson's story understanding program, SAM, understand the stories it processes ... since to process is to act ... it doesn't even process them!" and "Neither do adding machines add, nor calculators calculate." What I'm not clear about here, however, is whether Hauser rejects the conjunction but accepts Searle's mentalization of all behavior. It looks to me as if he might. But if what is called behavior -- a rather tricky issue -- includes such activities as adding, calculating, etc., I think there are problems. What computers are being said not to be able to do on Searle's account, and which Hauser thinks they can do, seems to me a very mixed bag. I agree with him that if Searle's position leads to the ascriptively stingy conclusion that computers don't even compute, or detect things, and that adding machines don't add -- except figuratively or metaphorically -- it is in trouble.{3} But for me that would be only because I believe that there can be literal, non-figurative examples of adding and computation and digging holes and washing clothes which are nevertheless non-mental. I think that Hauser correctly sees (contra Searle) that machines literally do (or "ape" -- my term) all sorts of things which, when done by us, involve mentality. But I think he is wrong in believing that those literal, non-figurative actions of machines also (thereby?) partake of the mental. The real magic of mimicry, however, is that typically mentalized human actions can be replicated using non-mental means. Furthermore, I don't see Searle's account as having any trouble in implying that computers can't comprehend language, or see, or hear ... at least the last two of which seem clearly to involve consciousness,{4} and may more appropriately be viewed as states of conscious awareness than as types of behavior.

Absent from Hauser's discussion is any mention of the dreaded C-word, so I'm not sure how he thinks various behaviorally versatile Hauser Robots might fare with respect to that messy topic. Perhaps he thinks they may be plausibly graced with a somewhat more cautiously crafted mentality without getting into the matter -- a strategy easily empathized with. But the crucial difference between so-called full-blooded actions and their more anemic analogs just is the messy difference between actions which support attributions of full legal and moral responsibility and those which don't. For those which provide such support presuppose and depend on the presence of conscious intent.

As puzzling and opaque as the topic of consciousness is, it can at least be shown how aspects of it are relevant to assessing the philosophical upshot of Hauser Robots. Although the performances of such robots provide us with nice analogs of the distinction between some kinds of action and movement, and programs internal (or intrinsic) to machines may support counterfactuals concerning behavior under aspects reminiscent of the manner in which intentions internal to a human agent do, this should not nudge us into classifying Hauser Robots with those which have a real mental life (approximations of USAIs). If my class of MUSAIs is well-imagined, then there is nothing a Hauser Robot could do which could decide for us whether it should be classified one way or the other. Something further is needed to break the tie, and, alas, that involves consciousness.

Here is one way to see why the topic of consciousness cannot be avoided in assessing Hauser's challenge to Searle. Recall all those arms going up at the beginning of the paper. I could, supposedly, grab Hauser's arm right now and force it (make it move) up. Or he could raise it by himself to wave off some silly objection of mine to his paper. So too, I might do much the same thing to a Hauser Robot by interrupting its programmed performance, which it could then resume, executing thereby some arm raising of its own. But just how much of the same thing is going on here? Although I, as an arm-bully of Hauser and a Hauser Robot, may "distinguish much the same difference or differences" in the two cases, and you, watching our antics, may too, what about what Hauser and the Hauser Robot distinguish? Here the differences turn out to be quite different. Hauser, no doubt, can feel his arm being forced up, and if he tries to resist, he can feel that too. For his body is a medium involving proprioceptive awareness. In this way he is consciously intimate with his actions -- his behavior -- non-linguistic and linguistic{5} alike. It is that which makes our behavior mental, and which, in turn, feeds back into what we do and how, though neither that it does so, nor how, is visibly manifest in the observed behavior itself, nor would it appear in any objectifying display of our underlying "programs" or analogs thereof, should any be forthcoming.

There is a beguiling feature of Hauser Robots that makes them seem like they might have the right stuff. And that is that what makes them tick is their programs, which can properly be viewed as comprising properties intrinsic or internal to the robot agent. And because minds, including their intentions, are typically viewed as, in some sense at least, internal to human agents -- ignoring here subtle questions about individualism -- robots with programs may seem very much like humans with minds. There are other reasons, of course, such as the fact that both minds and programs process symbols, etc. But here I mean mainly to call attention to the way in which internality itself may be suggestive of the mental.

Suggestive or not, there is really no reason to suppose that the internal or intrinsic properties of programs in Hauser Robots provide us with anything like the conscious stuff requisite for intentional behavior. The basic difference between ourselves and mere Hauser Robots is that in the case of the latter we can tell a complete story about how they work from the outside, as it were. This goes for their programs as well. Although internal or intrinsic to the machine, they are fully accessible to us. Their hardware and software designers and installers can, in principle, give us an objective and complete account of what makes them tick.{6} What makes them self-adaptive, or teleological, as it were, and philosophically charismatic, turns out to be as amenable to exhaustive objective public display as the springs, spokes, and wheels inside a non-self-adaptive wind-up clock. (One reason why vitalism is kaput.) Their ontology, in other words, is a completely third-person one, whereas ours is, at most, only half (or maybe only two-fifths) of that.

In the case of human beings, however, or USAIs and their approximations, there is a first-person point of view to be taken into consideration. The mind-body problem and/or the problem of other minds unsimply is (or are) the problem(s) of showing how those considerations can be made to meld with what can be said of ourselves from a third-person point of view: our physical constitution and behaviors and whatever plausible models of such are on hand (they too being of third-person vintage). If Hauser Robots really do have minds, then a third-person account of minds in terms of actions and programs is adequate, and there is no longer a mind-body problem.{7} This would be a much more dramatic result than I think Hauser has, or thinks he has, produced.



References

Gunderson, K. (1985). Mentality and machines (2nd ed.). Minneapolis, MN: University of Minnesota Press.

Hauser, L. (1994). Acting, intending, and artificial intelligence. Behavior and Philosophy, 22(1), 22-28. Presented at the Colloquium on Action Theory, American Philosophical Association Central Division, Louisville, KY, 25 April 1992.

Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3, 417-424.

Searle, J. R. (1992). The rediscovery of the mind. Cambridge, MA: MIT Press.

Notes

1. It's highly debatable, of course, whether the idea of MUSAIs is really well-imagined. Any not-yet-extinct radical behaviorist would deny it, which is neither here nor there. But less radical and less endangered species of philosophers of mind primed to profess various systematic conceptual connections between thought and behavior might also more than wince. I shall only assume that the idea of MUSAIs is not obviously incoherent, and, much more to the point, that all sorts of distant approximations to them are already extant. Parenthetically, I think that Searle would find it unproblematic and important. For in a recent book, The Rediscovery of the Mind (Searle, 1992), he is committed to the coherency of a thought experiment wherein we sense our conscious awareness and control of what we are doing shrinking towards the point of disappearance, while our bodies go on moving and interacting in the world in the same way they did before the shrinkage set in. So it's almost as if we gradually turn into MUSAIs! If he sticks to his guns about all behavior being mentalistic (as cited by Hauser), I guess he would have to say that as we turn, in effect, into MUSAIs we stop really behaving!

2. It's not that we know what the range of "right stuff" is. It's rather that we know what the wrong stuff is! As, for example, in the case of transparently non-mental electric eyes which open doors for us at supermarkets but to which we have no reason to ascribe either visual awareness or feelings of politeness.

3. Whether Searle could wiggle out of this or not, I'm not sure. For example, he might qualify "behavior" so that it is read "conscious intentional behavior," though then his complaint concerning "standard behaviorist analyses" would seem to lose much of its punch. But the point seems to me significant apart from Searle. For various adamant anti-mechanists have been that explicitly stingy, and I recall once defending the thesis that airplanes can fly against objections to it by Virgil Aldrich. (I'm a little too soft on Aldrich about this in Mentality and Machines (Gunderson 1985, p. 214).) Part of the old litany is "Robots and computers don't deserve the credit for what they do; only their designers and programmers do." I don't like that tune, and I don't think Hauser does either.

4. And it seems to me it is mainly this -- the presence of consciousness -- that Searle is appealing to in his famous Chinese room argument (Searle 1980). For that argument depends on his sense of not having any understanding -- or conscious awareness of meanings -- as he manipulates Chinese inscriptions, which is contrasted with the case where he does -- i.e., when recognizing inputs in English and concocting his responses to them.

5. For example, when I say "I am going to the bank," I do not need to disambiguate my remark for myself in the way that others may need to ("Going fishing? Or making a deposit?"), because, typically, what I say is a function of what I mean to say, and what I mean to say is, also typically, something I am as conscious of as limb position. Others may need to ask; I virtually always already know.

6. Just as one can explain the cybernetic self-adaptive (or teleological) performances of thermostats and various strategic missile systems without positing any conscious mental awareness in them, we can explain the doings of Hauser Robots without reference to inner mental (or conscious) states. But this is part of their philosophical fascination for us. They are properly viewed, I think, as belonging to a long string of technological accomplishments which spawn metaphysical surprise: surprise in that they provide examples of artifactual performance which at some earlier time would have seemed literally impossible for anything other than a living conscious being to execute. (Consider trying to imagine, in 1500, anything other than a living conscious being being able to play chess, calculate the bills, etc.) Typically what we have been confronted with have been dementalized examples of activities which we hitherto assumed only organisms with minds were capable of. The performance of Hauser Robots does not show us that they have minds. It instead shows us that there needn't be minds present for there to be something like an agent capable of performing in such a way that its performance exemplifies some of the salient differences between action and movement, as well as acting under aspects on the basis of self-contained internal causes. Conversely: suppose I am able to ensconce myself in an enormous gumball machine in such a way that I mimic its mindless gumball-dispensing function. That doesn't mean I've become mindless (though it would seem that I'd "lost my mind"); it only means I've mimicked something that is. (Cf. Gunderson 1985, p. 175.)

7. Unless aspects of sentience or "qualia" or "raw feels" were somehow separated out and seen as still intractable for special reasons.