Chapter 15: How to Build a Conscious Machine
- "Nobody knows how to build a machine that is conscious" (McGinn
- Dennett: "there can be conscious machines -- us" (1991: 432)
- Huge problems
- consciousness is undefined
- we don't know how to recognize it (at least in others)
- Two aspects of the problem
- "how to build a machine that seems
- "how to build a machine that really is conscious (whatever that means)"
I'm Sure it Likes Me
- "We humans seem to adopt the intentional stance
toward other people, animals, toys and machines on the flimsiest of grounds"
- e.g., Tamagotchis, Furbys, et al.
- Brooks: "we, all of us, over anthropomorphize
humans, who are after all mere machines" (2002: 175)
- "our natural tendencies to treat others as intentional,
sociable and feeling creatures all confuse the question of artificial
consciousness"
- Robots specifically designed to elicit social behavior: Cog & Kismet
- Cog: a robot torso & head with humanoid characteristics
(e.g., human-like visual processes) "growing up" and being developed at
MIT (Concept: 202)
- Kismet: a human-like head with facial expressions reflecting
its "emotional" state (Concept: 213; Fig. 15.1: 214)
- three-dimensional "emotional" range
- valence (or "happiness")
- arousal (level of stimulation)
- stance (openness to new stimulation)
- features of visual interest (weighted for attention)
- saturated colors
- skin color
- verbal & auditory abilities
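The three-dimensional "emotional" range above can be sketched as a toy model: a point in (valence, arousal, stance) space mapped to a named expression. The thresholds and labels are illustrative assumptions, not Breazeal's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class AffectState:
    valence: float  # "happiness": -1 (distressed) .. +1 (content)
    arousal: float  # stimulation: -1 (bored) .. +1 (overstimulated)
    stance: float   # openness: -1 (closed) .. +1 (open to new stimuli)

    def label(self) -> str:
        # Map regions of the affect space to named expressions,
        # loosely in the spirit of Fig. 15.1; cutoffs are made up.
        if self.arousal > 0.5 and self.valence < -0.5:
            return "fear"
        if self.valence > 0.5:
            return "happiness"
        if self.arousal < -0.5:
            return "boredom"
        return "calm"
```

A stimulus stream would nudge the state around this space; here, for example, `AffectState(0.8, 0.0, 0.2).label()` falls in the "happiness" region.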
Whizmet (Blackmore's thought experiment)
- Whizmet detects, and its mood is affected by, prosodic patterns in
human speech
- Kismet's vocalization: babbling
- no speech generation (or recognition) capabilities
- Kismet makes prosodic vocalizations reflecting its mood
Two alternative responses
- imagine Kismet on steroids
- as sensitive to the full range of human emotion as a real
person, or more so
- aptly accommodating people's moods -- comforting them when
they're down, laughing along when they're jolly, etc.
- [really speaking -- telling jokes, saying "there, there",
etc. -- not just babbling (Whizmet Professional Edition)]
- Is Whizmet conscious?
- it's just as-if conscious
- we know how prone we are to anthropomorphize and take the
intentional stance toward things like Furbys and Tamagotchis
- Whizmet is just a fancier Furby
- Conclusion: Whizmet merely simulates
human emotional responsiveness.
- it's really conscious
- "there is no dividing line between as if and real consciousness" (214)
- sensitivity toward others' emotions is part of what we mean
by consciousness
- Kismet is somewhat conscious, and Whizmet somewhat more so
They're Already Conscious
- John McCarthy's thermostat: "My thermostat has three beliefs"
(reported by Searle, 1987: 211)
- "it's too hot in here"
- "it's too cold in here"
- "it's just right in here"
- What of consciousness?
- many believe having intentional states requires consciousness
- beliefs are intentional states (they're about things; they have content)
- if that's right, and if thermostats really had beliefs, they'd
really be conscious (compare the flea's argument)
- two ways to go
- if they had beliefs they'd be conscious, but surely they're
not conscious; so we should conclude they have no beliefs (Searle)
- if they had beliefs they'd be conscious, and they have
beliefs; so, we should conclude they're conscious (Siphonaptera)
- "Others draw a firm distinction between intentionally and
- Dichotomy redux
- If there is "a sharp divide between real consciousness and as if consciousness, then robot
makers need to find out what real consciousness is and whether it can
be put into a machine" (215)
- "Alternately, if there is no difference between real consciousness and as if consciousness, we humans are
already sharing our world with the beginnings of AC." (215)
Find X and Put it in a Machine
- X = what makes it real
- What is X?
- Mysterianism: X = ?:
we'll never know, because the human brain is incapable of solving the
mystery of consciousness (says McGinn 1999)
- Cognitivism: X = the
right computation: "implementing the right computations suffices for
consciousness ... [even] for rich conscious experience like our own."
(Chalmers, 1996: 315)
- Global Workspace Theory:
X = global availability
- IDA (Intelligent Distribution Agent) works for the U.S. Navy
"assigning sailors to different jobs).
- IDA uses a GW architecture "with coalitions of unconscious
processes finding their way into a Global Workspace from where messages
are broadcast to recruit other processors to help solve the current
problem"
- Franklin [IDA's designer] "describes IDA as being
functionally conscious in the sense that she implements GWT, but she is
not self-conscious, and of most relevance here, he sees no convincing
evidence that she is phenomenally conscious" (216) [Could anything BE
convincing evidence?]
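One workspace cycle of the GW architecture described above can be sketched loosely in the spirit of IDA: coalitions compete by activation, the winner's content is broadcast, and processors that find it relevant are recruited. The class names, messages, and keyword-matching scheme below are illustrative assumptions, not Franklin's design.

```python
class Processor:
    """An 'unconscious' specialist that listens to the broadcast."""
    def __init__(self, name, triggers):
        self.name = name
        self.triggers = set(triggers)
        self.recruited = False

    def receive(self, broadcast: str) -> None:
        # Recruited if any word of the broadcast matches a trigger.
        if self.triggers & set(broadcast.split()):
            self.recruited = True

def global_workspace_cycle(coalitions, processors):
    # coalitions: list of (activation, message) pairs formed by
    # unconscious processes; only the strongest reaches the workspace.
    activation, message = max(coalitions)
    for p in processors:      # the broadcast goes to every processor,
        p.receive(message)    # but only relevant ones are recruited
    return message, sorted(p.name for p in processors if p.recruited)
```

For example, with coalitions `(0.4, "new email arrived")` and `(0.9, "find billet for sailor")`, the stronger coalition wins the workspace and a hypothetical "job-matcher" processor triggered on "billet" is recruited.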
- Selfhood questions: "IDA is a software agent and so is not
permanently tied to any particular physical machine, raising the
question of just what it is that we refer to as conscious."
- AI version: who is the subject of artificial thoughts?
- the computer (hardware)?
- or the program (software)?
- human version
- am I = my brain?
- am I = my brain's program & data?
- Edelman and Tononi: X = reentrant connectivity architecture
(somewhat like GWT)
- Penrose: X = quantum coherence in the microtubules.
- Problem: given a computer behaving as if conscious because it has
been supplied with one of the above X's, "we would still not know for
sure that it was really conscious." (217)
- Dennett (1991): X is imaginary: as-if consciousness is all there is
- there is no X (no "actual phenomenology" or "Cartesian theater")
- there is only "X" -- the illusion of having "actual
phenomenology" or a "Cartesian theater"
- "The task, then, is to understand how the illusion comes about
and then design a similarly deluded machine." (SB: 218)
- Language as the source of the illusion
- linguistic theories of self-hood describe the self as
- a "center of narrative gravity" ... "that emerges in
creatures who use language or a 'self-plex' constructed by and for the
replication of memes" (SB: 218)
- Implication: "if any machine were capable of using language,
and capable of talking about 'I,' 'me,' and 'mine,' it would also fall
for the illusion that it was an experiencing self, and would then be
conscious like us." (SB: 218)
- No cigar: "none of these can be said to understand what they say"
- phonographs & tape-recorders
- "canned" speech systems: "can I help you"
- "machines that can read aloud from printed text or turn spoken
language into print"
- Natural Language Processing (NLP) progress and prospects
- limited successes
- problem of ambiguity, etc.
- "time flies like an arrow" (5 ways ambiguous)
Minsky's example: who does "they" refer to?
- temporal progression is rapid and linear
- time flies are attracted to arrows (like fruit flies to a banana)
- The city fathers denied the group a permit to march because
they advocate violence.
- The city fathers denied the group a permit to march because
they feared the outbreak of violence.
- moral: disambiguation relies on
- context: both verbal and real-world: a frame of reference
- extensive background knowledge (as in Minsky's example)
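The pronoun example above can be made concrete with a toy resolver: nothing in the syntax distinguishes the two sentences, so the referent of "they" must come from stored world knowledge. The hand-coded lookup below is a stand-in for that knowledge, not a real NLP system; the cue phrases are the only "knowledge" it has.

```python
# Who plausibly advocates violence, and who plausibly fears it:
# exactly the background knowledge syntax alone cannot supply.
KNOWLEDGE = {
    "advocate violence": "the group",                       # marchers
    "feared the outbreak of violence": "the city fathers",  # officials
}

def resolve_they(sentence: str) -> str:
    """Pick a referent for 'they' by consulting world knowledge."""
    for cue, referent in KNOWLEDGE.items():
        if cue in sentence:
            return referent
    return "unresolved"
```

Both input sentences are syntactically identical up to the final clause, yet the resolver returns different referents, which is the moral of the example.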
Hopes for language acquisition through experience
- how to represent and store such "common-sense knowledge"
(Lenat's Cyc project)
- how to access the appropriate information in real time
- especially intractable
- the "frame problem" is the problem of choosing and updating appropriate frames of
- connectionism & neural nets
- nets able to acquire concepts or word meanings
- nets able to learn pronunciations
- nets able to learn grammatical constructions ... e.g., how
to form the English past tenses of regular & irregular verbs
- memetics: "language can be treated as an evolving system in
which both syntax and semantics emerge spontaneously" (220) [Chomsky to
the contrary, notwithstanding?]
- "As our understanding of the brain improves and our ability to
accurately and noninvasively scan these feature improves,
reinstantiating (reinstalling) a persons brain should alter a person's
mind no more than it changes from day to day." (Kurzweil 1999: 125)
- humans can become immortal by downloading the neural
programs and data structures that make them who they are into computers
- Brooks: "We will not download ourselves into machines; rather,
those of us alive today, over the course of our lifetimes, will morph
ourselves into machines." (2002: 212)
- artificial limbs & joints
- cochlear implants (already) and retinal implants (forthcoming)
- What's in store?
- Neural internet connections?
- "implanted mobile phones" enabling telepathic contact (SB:
- memory expansion chips
- Speculative segue
- suppose we were all neurally connected to the internet
- the internet would be like our shared global workspace
- "The notion of 'consciousness as global availability' seems to
provide a curious conclusion here" (222-3)
Consciousness in Cyberspace
- Software agents: examples
- chatbots in chat rooms, MUDs, and elsewhere in cyberspace
- virtual warriors in computer games
- virtual characters in films
- "All of these entities depend on physical substrates for their
existence, but none has a permanent physical home. Could they be
conscious?"