Searle's Chinese Box:
The Chinese Room Argument and Artificial Intelligence by Larry Hauser
ABSTRACT
The apparently intelligent doings of computers occasion philosophical debate about artificial intelligence (AI). The evidence for AI (such doings) is not bad; the arguments against AI are: such is the case made here. One argument against AI, currently perhaps the most influential, is considered in detail: John Searle's Chinese room argument (CRA). This argument and its attendant thought experiment (CRE) are shown to be unavailing against claims (of AI proper) that computers can and even do think. CRA is formally invalid and informally fallacious. CRE's putative experimental result is not robust (similar "experiments" yield conflicting results), and it fails to generalize from understanding to other mental attributes as claimed. Further, CRE depends for its credibility, in the first place, on a dubious tender of epistemic privilege to first-person disavowals of mental properties such as understanding: the privilege of overriding all "external" behavioral evidence.

Though advertised as effective against AI, Searle's argument is an ignoratio elenchi: it feigns to refute AI by disputing a similar but logically independent claim of "strong AI" or Turing machine functionalism (FUN), which metaphysically identifies minds with programs. AI, however, is warranted independently of FUN: even if CRA disproved FUN, this would still fail to refute, or even seriously disconfirm, claims of AI.

Searle's contention that everyday predications of mental terms of computers are discountable as equivocal (figurative) "as-if" predications, a contention which, if tenable, would impugn the independent seeming-evidence of AI, is unwarranted. Lacking intuitive basis, such accusations of ambiguity require theoretical support. The would-be theoretical differentiation of intrinsic intentionality (ours) from as-if intentionality (theirs) that Searle propounds to buttress these allegations depends, however, either on dubious doctrines of objective intrinsicality, according to which meanings are physically in the head, or on still more dubious (as-if dualistic) notions of subjective intrinsicality, according to which meanings are phenomenologically "in" consciousness. Even if granted, moreover, neither of these would-be differentiae would unproblematically rule out seeming instances of AI. The dubiousness of the as-if dualistic identification of thought with consciousness also undermines the epistemic privileging of the "first-person point of view" crucial to Searle's thought experiment.