A1: Programs are formal (syntactic).
A1S: Strong modal version of A1: Programs are necessarily formal.
A1*: Envisaged strengthening of A1: Programs are purely formal,
having only syntactic properties.
A2: Minds have semantics.
A2S: Strong modal version of A2: Minds necessarily have semantics.
A3: Form (syntax) is not (sufficient for) semantics.
A3S: Strong modal version of A3: Necessarily syntax is not (sufficient for) semantics.
A4: Brains cause minds.
AI: Artificial intelligence.
AIP: AI proper: Computers can (roughly, someday will) think.
BSA: The "brutally simple" argument: A1 & A2 & A3, therefore, C1.
BSAS: Strong modal version of BSA: A1S, &
A2S & A3S, therefore, C1S.
BS1A: Supplemented BSA 1: A1 & A2 &
A3 & S1, therefore, C1.
BS2A: Supplemented BSA 2: A1 & A2 &
S2 & P1, therefore, C1.
BS12A: Doubly supplemented BSA: A1 &
A2 & S1 & S2, therefore, EL.
C1: Programs are not (sufficient for) minds.
C1S: Strong modal version of C1: Necessarily
Programs are not (sufficient for) minds.
C1W: Weak modal version of C1: Possibly Programs
are not (sufficient for) minds.
C2: Sufficiency for mind requires causal
powers (at least) equivalent to those of brains.
C3: Any artifact that produced mental phenomena
would have to duplicate the specific causal powers of brains, and it could
not do this just by running a program.
C4: The way human brains actually produce
mental phenomena cannot be solely by virtue of running a computer program.
COG: Cognitivism: To explain mental phenomena
is to specify the programs from whose execution they derive.
CP: Searle's "Connection Principle": unconscious
mental phenomena are in principle accessible to consciousness.
CPU: Central processing unit.
CRA: The Chinese room argument: A1 &
A2 & A3, therefore, C1; C1 & A4, therefore, C2 &
C3 & C4.
CRE: The Chinese room thought experiment.
CREA: Would-be Chinese room experimental
argument: A2 & IR, therefore, C1.
CREAS: Strong modal CREA: A2S & IRS, therefore, C1S.
CREAW: Weak modal CREA: A2 & IRW, therefore, C1W.
CSAM: Imaginary souped up Chinese version
of SAM implemented by Searle in the Chinese room.
DOS: Short for MS-DOS, an acronym for "Microsoft
Disk Operating System", once the standard operating system for IBM and
IBM-compatible personal computers.
EE: Explanatory exclusion principle: "No
event can be given more than one complete and independent explanation"
(Kim 1989, p. 79).
EL: Eliminativism: Nothing has mental or
semantic properties: no minds or meanings exist.
FUN: Turing machine Functionalism or Computationalism:
Programs are (sufficient for) minds.
FUN': Specific programs are (sufficient for)
minds for specific types of system.
In: "In" (with a capital "I"), or "intrinsicality,"
connotes a relation between things (including states and events as well
as objects) stronger than physical containment. X's being In Y requires
everything necessary for X being what it essentially is (or for individuating
X) being In Y. Properties are also held to be In their subjects if everything
necessary for having the property is intrinsic to (or supervenient on the
local physical properties of) the subject: in this connection intrinsicality
can be viewed as a mechanistic constraint on essence, a necessary condition
for essentiality of attributes.
IR: Intermediate would-be experimental result:
Programming does not suffice for semantics or, equivalently, some things
implement Programs without having semantics.
IRS: Strong modal version of IR: Necessarily
Programming doesn't suffice for semantics.
IRW: Weak modal version of IR: Possibly Programming
doesn't suffice for semantics.
MR: Multiple realizability: Identical types
of mental attributes can be realized or caused by different programs (procedural
MR) or in different physical types of systems (hardware MR).
NLU: Natural language understanding.
P1: Presupposition 1: Some things instantiate Programs.
PC: Personal computer.
Program: "Program" (with a capital "P") refers
to just the Turing test passing programs (insofar as behaviorism is in
question) or just the Turing test passing programs which do it in the right
way or by implementing the right program (insofar as functionalism is in
question).
R: Would-be experimental result of CRE (directly
construed): Searle in the room implements an NLU Program for Chinese (CSAM)
without understanding any Chinese.
R': Would-be experimental result of CRE indirectly
construed: Searle in the room implements an NLU Program for Chinese (CSAM)
without having semantics for the Chinese symbols he processes.
RIS: The room-in-Searle: SIR internalized
(i.e., CSAM memorized) by Searle.
S: Supplementary hypothesis (for SCRA): No
(presently existing) computer has causal powers (at least) equivalent to
those of brains.
S1: Supplementary hypothesis 1 (for BSA):
All things instantiate Programs.
S2: Supplementary hypothesis 2 (for BSA):
Form (syntax) precludes semantics.
SAIP: Strong AI proper: Computers think already.
SAM: Script Applier Mechanism: Schank and
Abelson's story understanding program.
SCRA: Supplemental Chinese room argument:
SIR: Searle-in-the-room: the system consisting
of Searle, program, slips of paper, etc.
SR: Scientific Realism: Whatever the best
scientific theory ultimately postulates for explanatory purposes really
exists and constitutes or causes the phenomena it explains.
Strong AI: Searle's name for AIP or FUN.
Weak AI: Searle's name for his view that
computers merely simulate the mental abilities they seem to manifest, with
the additional proviso that such simulation may nevertheless be a useful
tool for studying the mind.
XR: Extreme would-be experimental result
(equivalent to S2): Programming precludes semantics.