Minds and Machines, Vol. 10 (2000), No. 1, pp. 115-117.
Ordinary Devices: Reply to Bringsjord's "Clarifying the Logic of Anti-Computationalism: Reply to Hauser"1

What Robots Can and Can't Be (hereinafter Robots) is, as Selmer Bringsjord says, "intended to be a collection of formal-arguments-that-border-on-proofs for the proposition that in all worlds, at all times, machines can't be minds" (Bringsjord, forthcoming). In his (1994) "Précis of What Robots Can and Can't Be" Bringsjord styles certain of these arguments as proceeding "repeatedly . . . through instantiations of" the "simple schema."

In my review I complained of the invalidity of such inherent disabilities arguments when construed nonmodally, in accord with this schema. I surmised, as Bringsjord's clarification confirms, that the intended schema was probably more like:

Persons must have F.
Automata can't have F.
Therefore:
Persons can't be automata.
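
To make the contrast explicit -- a rough sketch in my own notation, not Bringsjord's formalization, and one that glosses over de re/de dicto subtleties -- read Px as "x is a person," Ax as "x is an automaton," and Fx as "x has F." The nonmodal premises fail to deliver the modal conclusion, while the modal premises do:

$$\forall x (Px \to Fx),\ \forall x (Ax \to \neg Fx) \;\nvdash\; \Box\, \neg \exists x (Px \land Ax)$$
$$\Box \forall x (Px \to Fx),\ \Box \forall x (Ax \to \neg Fx) \;\vdash\; \Box\, \neg \exists x (Px \land Ax)$$

The second reading's validity is bought by its first premise, which just is the claim that F is essential to personhood.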
My real complaint is that what's missing in Bringsjord's enthymematic schema is also missing in Robots' argumentation. "Unless what automata can't do is essential to personhood," as I put it, "the upshot of [these] disabilities arguments 'would not be that computers can't be persons, but that they constitute different kinds of persons' (Hobbs ¶6)." At least they might, for all Robots has shown. Bringsjord "never argues for the necessity of his favored 'person-distinguishing properties' (Bringsjord 1992, p. 105)" (Hauser 1997). That is the substance of my complaint.

Robots, Bringsjord says, instantiates F in the preceding schema to "a technically precise correlate" of each of several "familiar properties often offered as candidates for capturing, together, the essence of personhood." His clarification cites autonomy, precised as iterative agent causation; and self-consciousness, precised as hyper-weak incorrigibilism. Iterative agent causation is an alleged ability of persons to "bring about events (or states of affairs or propositions) . . . directly, in a manner unmediated by conventional causation" (Bringsjord 1992, p. 280). Hyper-weak incorrigibility is an alleged ability of persons, "with respect to a certain restricted class of properties [e.g., "seeming [to themselves] to be in pain" (Bringsjord 1992, p. 335: original emphases)] to ascertain infallibly, via introspection, whether they have these properties" (Bringsjord 1992, p. 329). The trouble is that these technical correlates don't inherit the prima facie claims autonomy and self-consciousness have to capturing (something of) the essence of personhood. They don't capture even the facts of personhood (so I allege), much less the essence. If hyper-weak incorrigibility were essential, the most infinitesimal possibility of my being mistaken about seeming to myself to be in pain would absolutely disqualify me as a person! With iterative agent causation, lack of an undetectable supernatural power would be disqualifying! For all I know -- on these assumptions -- there are no persons! I know better. This is the motive of my complaint.
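
For definiteness, here is one way the incorrigibility claim might be rendered -- my gloss, capturing only the infallibility direction, not Bringsjord's official definition in Robots. Where Φ is the restricted class of properties and $B_s\varphi$ says that person s, on introspection, judges that φ holds of s:

$$\forall \varphi \in \Phi:\;\; \Box\,(B_s \varphi \to \varphi) \ \land\ \Box\,(B_s \neg\varphi \to \neg\varphi)$$

So read, even one possible case of $B_s \varphi$ without φ -- the most infinitesimal possibility of introspective error -- falsifies the claim; treating the claim as person-essential is what makes it so demanding.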

In a second line of response to me Bringsjord opposes my claim that the person building project "is not hostage to theory in the internal or conceptual way Robots wants"; my claim that "the irreducibility of personhood to computation -- if such be so -- is no insuperable bar to the person building project." Suppose something else besides computation is necessary -- being rightly environmentally ensconced, made of the right stuff, whatever. Call it X. I reasoned:

Suppose cognitive engineers, as it happens, implement their algorithms in the presence of X.
. . . Having actually built a person, then, the Person Building Project succeeds; the falsity of AI-functionalism [Computationalism] notwithstanding.
"Here," Bringsjord says I face an "acute dilemma." Either I am "promoting AI as alchemy": AI-engineers build the computational mansions . . . then a miracle happens. Or else "X is reasonably instantiated to something over which aspiring person builders have conscious command" -- say environmental ensconcement -- in which case Robot's argument "is easily modified" to show that there are person-essential properties "(e.g., those associated with iterative agent causation) that no rightly environmentally ensconced [or other tractable-X endowed] automaton can possess."

There is a way between the horns of this dilemma: X is neither alchemical nor under "conscious command" but reliably present. Hanging reliably kills humans, though hanging alone doesn't suffice: some amount of gravitation, X, is also required. X being naturally present, hangmen successfully ply their trade with neither knowledge nor control of X, and without alchemy. Likewise, setting a fire requires no knowledge -- much less full knowledge -- of the chemical nature of combustion.

In the AI person-building case, Bringsjord seems to think the missing supernatural-power- and infallible-cognition-conferring ingredient is consciousness in the form of qualia or phenomenological experience. I don't believe in supernatural powers or infallible cognitive abilities myself, and I'm vague on how qualia are supposed to confer them. It's not me, I think, who dabbles in alchemy.

One good dilemma deserves another. Bringsjord maintains hyper-weak incorrigibility and iterative agent causation are necessary for personhood. The necessity must be either a priori (conceptual) or else a posteriori (scientific-essential). If a priori, it must be warranted by conceptual analysis, and it's not: the thought that, by definition, lack of hyper-weak incorrigibility or iterative agent causation disqualifies you as a person is wildly counterintuitive. If a posteriori, it must at least be scientifically warrantable: in this regard, claims about hyper-weak incorrigibility, iterative agent causation, and also phenomenological experience are, in my opinion, highly suspect.

Notes

1. Hauser (1997); Bringsjord (forthcoming). Quotations and other cited opinions from Bringsjord and Hauser herein are from these works unless otherwise indicated.

References

Bringsjord, Selmer (1992).  What Robots Can and Can't Be.  Dordrecht: Kluwer Academic Publishers.

Bringsjord, Selmer (1994). 'Précis of What Robots Can and Can't Be', Psycoloquy 5.59 [http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?5.59].

Bringsjord, Selmer (forthcoming). 'Clarifying the Logic of Anti-Computationalism: Reply to Hauser', Minds and Machines.

Hauser, Larry (1997). 'Review of Selmer Bringsjord's What Robots Can and Cannot Be', Minds and Machines, Vol. 7: 433-438.

Hobbs, Jesse (1995).  'Creating Computer Persons: More Likely Irrational than Impossible: Book Review of Bringsjord on Robot-Consciousness', Psycoloquy 6.14: Article 9 [http://cogsci.ecs.soton.ac.uk/cgi-bin/newpsy?robot-consciousness.9].
 

LARRY HAUSER


Philosophy Department
Alma College
614 W. Superior
Alma, MI 48801, U.S.A.
(hauser@alma.edu)