Minds and Machines, Vol. 7, No. 3 (August 1997), pp. 433-438.

Selmer Bringsjord, What Robots Can and Can't Be, Studies in Cognitive Systems, Vol. 12, Dordrecht: Kluwer Academic Publishers, 1992; xiv + 380 pp., $111.00 (cloth), ISBN 0-7923-1662-2.

Selmer Bringsjord's What Robots Can and Can't Be (henceforth Robots) is a "self-contained specification and defense of" the position "that AI will down the road give us robots (or androids) whose behavior is dazzling, but will not give us robotic persons. Robots, if you like, will do a lot, but they won't be a lot" (p. 1). In style as in content the book blends seeming opposites: it formalizes literally dozens of arguments, rendering many in the austere language of first-order modal logic; yet the would-be heaviness of so much formalization is allayed by the breezy prose (Selmer is a published novelist) and many helpful (and some lighthearted) illustrations. With several reviewers in the electronic journal Psycoloquy, I agree, "There is much to admire (and dispute!) in Bringsjord's stimulating book" (Brown & O'Rourke, pgph. 1). Admirably, Robots "brings in lots of distinctions and subtleties" (Scholl, pgph. 8) and "raises the level of discussion" with its "focus on particular arguments, their premises and conclusions" (Barresi, pgph. 1). By "formalizing arguments that are often bandied about loosely, and by offering new thought experiments to remedy defects in previous ones" (Hobbs, abs.), Bringsjord "has tightened up many of the arguments against computer intelligence" (Rickert, abs.). Such as they are (see below). Bringsjord's "provocative review of the foundations of artificial intelligence" gives an "accessible and useful" account "of many of the disputes swirling about the foundations of AI" (Korb, pgph. 1). Even the most negative Psycoloquy review praises Robots' "clarity of argumentation" and finds it "refreshing to see philosophy presented in a clear enough fashion that it is possible quickly and easily to establish where there is an argument and how to evaluate it" (Mulhauser, pgph. 31).

Nevertheless . . . for all the book's admirable qualities, like Gregory Mulhauser, "I find the arguments themselves unsatisfying" (Mulhauser, pgph. 31). Robots fascinates despite -- indeed, partly because of -- "[p]roblem[s] . . . at the foundation" (Scholl, pgph. 8). Robots' reworkings of familiar anti-AI arguments unwittingly spotlight problems endemic to these arguments. The unwittingness, furthermore, seems rooted in presuppositions widely shared by parties on both sides of these discussions. I return to these matters, in brief, below.

Overview

Chapter 1, Introduction, outlines the argumentative strategy of the book. Bringsjord identifies his target as the Person Building Project (PBP) or the thesis that "Cognitive Engineers will succeed in building persons" (p. 7). This project (or thesis), Bringsjord claims, implies that persons are automata (PERaut). His plan is to refute (PBP) by disproving (PERaut) and, more generally, AI-Functionalism, "the very general, vague, all encompassing functionalist theory of mind that underlies the Person Building Project" (p. 11).

Chapter 2, Our Machinery, "contains the rough and ready ontology" plus an account of "the logico-mathematical language" employed throughout the book, and provides as readable (and well-illustrated) a basic exposition of automata theory as you'll find. This chapter also vets some "commonsensical propositions about personhood" (p. 83).

Chapter 3, Arguments Pro, Destroyed, might better be titled Arguments Pro Threatened: little in this chapter seems to warrant the "supremely confident tone" (Mulhauser, pgph. 2). Bringsjord reviews several arguments for (PERaut) and kindred computationalist or functionalist claims -- arguments that reason inductively from the intelligent-seeming doings and capacities of computers to the likelihood that our intelligent doings are essentially computational as well. He generally concedes that these arguments, though "inconclusive" (p. 120), "provide prima facie evidence" (p. 125) for the identification of thought with computation. But, Bringsjord claims, he has "formidable deductive arguments" that trump these inductions. The inductions, he says, "will succeed only if I fail in the rest of this book" (p. 105). To actually bring forth such deductions -- to make good this threat -- is, then, the main argumentative task of the rest of the work.

But first, Chapter 4, What Robots Can Be, argues that "[c]ognitive engineers will eventually build a robot able to excel in the Turing test sequence" (p. 130), "taking us toward a time when robots will, say, compose well-crafted novels" (p. 5), excelling, perhaps, at "'genre' or 'formulaic' fiction"; they'll "solve mysteries" and crimes (p. 130); and more. Bringsjord first defends this claim "indirectly . . . by refutations of attacks on it," arguing (against Rita Manning (1987)) that expert systems very probably can be developed to solve "complicated mysteries of the type solved by Sherlock Holmes" (p. 130). A second, direct line of defense cites the progress and prospects of Autopoesis, "an interdisciplinary effort [which Bringsjord directs] at Rensselaer Polytechnic Institute, aimed at creating a computer which eventually produces, on its own, adult-level fiction" (p. 130).

Chapter 5, Searle, begins the attack on (PERaut) and kindred claims of computationalism or "AI-functionalism". Bringsjord proposes a variant of John Searle's Chinese room experiment involving an imagined idiot-savant "Jonah" who "automatically, swiftly, without conscious deliberation" can "reduce high-level computer programs (in, say, PROLOG and LISP) to the super-austere language that drives a Register machine (or Turing machine)" and subsequently "can use his incredible powers of mental imagery to visualize a Register machine, and to visualize this machine running the program that results from his reduction" (p. 185). The variant is designed to be systems-reply-proof and robot-reply-proof, building in Searle's wonted changes -- internalization of the program (against the systems reply) and added sensorimotor capacities (to counter the robot reply) -- from the outset. Bringsjord then considers three further objections -- the Churchlands' (1990) connectionist reply, David Cole's (1991) multiple-personality reply, and Rapaport's (1990) process reply -- and offers rebuttals.

Chapter 6, Arbitrary Realization, expounds and develops a variety of argument invoked against functionalism by Ned Block (1978) and since taken up by Searle (1983) and others; a variety of argument Bringsjord deems "to come about as close as we can come to an outright refutation of AI-functionalism" (p. 208). His own arbitrary realization scenario envisages a "Turing machine M, representing exactly the same flow chart as that which [on AI-functionalist hypotheses] governs [the brain of someone in a mental state]" -- say, the state of fearing purple unicorns -- but "built out of 4 billion Norwegians all working on railroad tracks in boxcars with chalk and erasers (etc.) across the state of Texas" (p. 209). Bringsjord thinks it intuitively obvious that "[t]here is no agent constituted by M that fears purple unicorns" (p. 210). Hence AI-functionalism is counterinstanced. Here, as usual, Bringsjord's discussion is wide-ranging and often penetrating. The chapter's concluding section argues (I think convincingly) that Tim Maudlin's (1989) Olympia argument -- despite Maudlin's claims to the contrary -- is, essentially, an arbitrary realization argument.

Chapter 7, Gödel, seeks to rehabilitate the "mathematical objection" anticipated by Turing (1950) and pressed by J. R. Lucas (1964) and, more recently, by Roger Penrose (1991). The argument is that human intelligence is not subject to the Gödelian limits that automata provably are: e.g., humans, according to Bringsjord, are able to solve the halting problem. Robotic "intelligence" is necessarily halting; real human intelligence is not.
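The "Gödelian limits" the chapter invokes rest on Turing's undecidability result, which the standard diagonal construction makes vivid. The following sketch is mine, not the book's (the names halts, make_diag, and refute are illustrative): any purported halting oracle can be turned into a program that defeats it.

```python
# Sketch of Turing's diagonal argument against a halting oracle.
# halts(f, x) is any candidate function claiming to decide whether
# f(x) halts; make_diag builds the program that refutes that claim.

def make_diag(halts):
    """From a purported oracle halts(f, x) -> bool, build the
    diagonal program diag, which does the opposite of whatever
    the oracle predicts about diag applied to itself."""
    def diag(f):
        if halts(f, f):       # oracle predicts f(f) halts...
            while True:       # ...so loop forever
                pass
        return "halted"       # oracle predicts looping, so halt at once
    return diag

def refute(halts):
    """Exhibit the oracle's error on the diagonal case."""
    diag = make_diag(halts)
    claim = halts(diag, diag)
    if not claim:
        # Oracle says diag(diag) loops; but it in fact halts:
        return claim, diag(diag)
    # Oracle says diag(diag) halts; by construction it would loop.
    return claim, "oracle says diag(diag) halts, but diag(diag) would loop"
```

Whatever the oracle answers about diag(diag), it is wrong; so no total, correct halts can exist. Note that this limits automata and any other effective procedure alike; whether humans escape it is exactly what is at issue.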

Chapter 8, Free Will, argues "that people have," as one reviewer describes it, "free will in a contra-causal, incompatibilist, ‘agent causation’ sense, and automata do not" (Hobbs, pgph. 15). We are radically autonomous; they cannot be.

Chapter 9, Introspection, defends what Bringsjord calls "hyper weak incorrigibilism." "Humans," Robots contends, "have with respect to a restricted class of properties, the ability to ascertain infallibly via introspection, whether they have these properties"; again, an ability, Bringsjord argues, no automaton could share.

Chapter 10, the Conclusion, briefly summarizes the book.

Comment: What Robots Can and Robots Can't Be

Arguments Bringsjord thinks sink (PERaut) and AI-Functionalism can be classed under two headings: arbitrary realization arguments and what Turing (1950) called arguments from various disabilities. (Bringsjord himself considers the possibility that the Chinese room is an arbitrary realization -- noting how the "Chinese gym" variant Searle (1990) offers against the Churchlands' (1990) "connectionist reply" resembles Block's (1978) Chinese nation scenario.) Arbitrary realizations aim to refute the AI-functionalist hypothesis that computation suffices for thought. Suppose (ex hypothesi) that there is some program which, when implemented in a human brain, endows that brain (or human) with some mental property P. Then imagine this program implemented "arbitrarily"; say, by a group of live intelligent agents, e.g., the Chinese populace (Block 1978); or by a contraption made from inorganic junk, e.g., beer cans (Searle 1983, p. 3). Now isn't it clear that no such group or contraption has P or any other mental property? The AI-functionalist hypothesis is counterinstanced. But it's not clear. Not everyone shares these intuitions. What Bringsjord is "ready to just take as a starting place" -- "that [the arbitrary realization argument] has once and for all demolished AI-functionalism" (p. 223) -- is just an impasse: his spade is turned; intuitions just clash.
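On one natural reconstruction (mine, not the book's), the arbitrary realization argument is a simple modus tollens against the sufficiency thesis, writing R_pi(x) for "x runs program pi" and a for the arbitrary realizer:

```latex
\begin{align*}
\text{(AI-F)}\quad & \forall x\,(R_{\pi}x \rightarrow Px)
  && \text{running } \pi \text{ suffices for mental property } P\\
\text{(1)}\quad & R_{\pi}a
  && \text{the Chinese populace / beer-can device runs } \pi\\
\text{(2)}\quad & \neg Pa
  && \text{intuition: the realizer lacks } P\\
\therefore\quad & \neg\text{(AI-F)}
  && \text{modus tollens, from (1) and (2)}
\end{align*}
```

The logic is impeccable; everything turns on premise (2), which is precisely the point at which the intuitions clash.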

Chapters 7, 8, and 9 argue from inherent disabilities of automata to the conclusion that persons can't be automata, "repeatedly . . . through instantiations of a simple schema" (Précis, pgph. 6). The schema:

Persons have [property] F.
Automata can't have F.
Therefore:
Persons can't be automata.
Given the properties Bringsjord invokes -- unhalting paracalculative abilities, libertarian-style absolute freedom, and introspective (albeit limited) infallibility -- of course his first premises are suspect. But the deeper problem is that the schema the Précis states is invalid, as shown by the following instance:
Persons have heights less than X feet.
X-footers can't have heights less than X feet.
Therefore:
Persons can't be X-footers.
Let X = the height of the tallest person yet + 1/8 inch: the first premise is true by stipulation; the second is logically true; but the conclusion is false. To validate the schema, the first premise needs to be "Necessarily, persons have F." Remarkably, Bringsjord nowhere argues this modal claim for his favored "person-distinguishing properties" (p. 105). Unless the Fs robots can't have are shown to be essential to personhood, "his argument would not be that computers can't be persons, but that they constitute different kinds of persons" (Hobbs, pgph. 14); at least they might.
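The point can be put formally. Reading "can't" as necessity, one regimentation (mine, not Bringsjord's) of the Précis schema and of its repair runs:

```latex
% Invalid: the height counterexample satisfies both premises.
\frac{\forall x\,(Px \rightarrow Fx) \qquad \Box\,\forall x\,(Ax \rightarrow \neg Fx)}
     {\Box\,\forall x\,(Px \rightarrow \neg Ax)}
\qquad\text{(invalid)}

% Valid in any normal modal logic; but now the first premise
% carries the unargued modal burden.
\frac{\Box\,\forall x\,(Px \rightarrow Fx) \qquad \Box\,\forall x\,(Ax \rightarrow \neg Fx)}
     {\Box\,\forall x\,(Px \rightarrow \neg Ax)}
\qquad\text{(valid)}
```

In the height example the first premise holds only contingently, so the necessitated conclusion does not follow; supplying the necessitation is exactly the step the book never takes.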

But the chief problem with Robots' would-be case against person building or AI as a project is Bringsjord's "uncontroversial conditional: namely, If the Person Building Project will succeed, then AI-Functionalism is true" (Précis, pgph. 20: my emphasis). If this is not controversial among cognitive scientists, it should be. It tendentiously forecloses the possibility that, while thought is not essentially identifiable with computation, computers nevertheless do think. Persons might not necessarily be automata; yet some automata or robots might actually be persons. Let "workers in the Person Building Project" be as "committed to certain well-defined algorithmic techniques" (p. 10) as you will. Contrary to AI-functionalism, suppose implementing these algorithms does not in and of itself suffice for personhood; something else is required: right environmental ensconsure, implementation in the right stuff, whatever; call it X. Suppose cognitive engineers, as it happens, implement their algorithms in the presence of X: environed much as we are, plausibly robots are rightly ensconced; for all we know silicon is a right stuff. Having actually built a person, then, the Person Building Project succeeds, despite AI-functionalism being false. Engineering success as a practical matter isn't hostage to theory in the internal or conceptual way Robots wants.

References

507 N. Francis Av. LARRY HAUSER
Lansing, MI 48912
lshauser@aol.com
http://members.aol.com/lshauser