ABOUT THE SPEAKER
Sam Harris - Neuroscientist, philosopher
Sam Harris's work focuses on how our growing understanding of ourselves and the world is changing our sense of how we should live.

Why you should listen

Sam Harris is the author of five New York Times bestsellers. His books include The End of Faith, Letter to a Christian Nation, The Moral Landscape, Free Will, Lying, Waking Up, and Islam and the Future of Tolerance (with Maajid Nawaz). The End of Faith won the 2005 PEN Award for Nonfiction. Harris's writing and public lectures cover a wide range of topics -- neuroscience, moral philosophy, religion, spirituality, violence, human reasoning -- but generally focus on how a growing understanding of ourselves and the world is changing our sense of how we should live.

Harris's work has been published in more than 20 languages and has been discussed in the New York Times, Time, Scientific American, Nature, Newsweek, Rolling Stone and many other publications. He has written for the New York Times, the Los Angeles Times, The Economist, The Times (London), the Boston Globe, The Atlantic, The Annals of Neurology and elsewhere. Harris also regularly hosts a popular podcast.

Harris received a degree in philosophy from Stanford University and a Ph.D. in neuroscience from UCLA.

TEDSummit

Sam Harris: Can we build AI without losing control over it?

Filmed:
5,024,015 views

Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris -- and not just in some theoretical way. We're going to build superhuman machines, says Harris, but we haven't yet grappled with the problems associated with creating something that may treat us the way we treat ants.


00:13
I'm going to talk about a failure of intuition that many of us suffer from. It's really a failure to detect a certain kind of danger. I'm going to describe a scenario that I think is both terrifying and likely to occur, and that's not a good combination, as it turns out. And yet rather than be scared, most of you will feel that what I'm talking about is kind of cool. I'm going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact, I think it's very difficult to see how they won't destroy us or inspire us to destroy ourselves.

00:49
And yet if you're anything like me, you'll find that it's fun to think about these things. And that response is part of the problem. OK? That response should worry you. And if I were to convince you in this talk that we were likely to suffer a global famine, either because of climate change or some other catastrophe, and that your grandchildren, or their grandchildren, are very likely to live like this, you wouldn't think, "Interesting. I like this TED Talk."

01:21
Famine isn't fun. Death by science fiction, on the other hand, is fun, and one of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead. I am unable to marshal this response, and I'm giving this talk.

01:42
It's as though we stand before two doors. Behind door number one, we stop making progress in building intelligent machines. Our computer hardware and software just stops getting better for some reason. Now take a moment to consider why this might happen. I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to. What could stop us from doing this? A full-scale nuclear war? A global pandemic? An asteroid impact? Justin Bieber becoming president of the United States?

(Laughter)

02:24
The point is, something would have to destroy civilization as we know it. You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation. Almost by definition, this is the worst thing that's ever happened in human history.

02:44
So the only alternative, and this is what lies behind door number two, is that we continue to improve our intelligent machines year after year after year. At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician I.J. Good called an "intelligence explosion," that the process could get away from us.

03:10
Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn't the most likely scenario. It's not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.

03:35
Just think about how we relate to ants. We don't hate them. We don't go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let's say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they're conscious or not, could treat us with similar disregard.

04:05
Now, I suspect this seems far-fetched to many of you. I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable. But then you must find something wrong with one of the following assumptions. And there are only three of them.

04:23
Intelligence is a matter of information processing in physical systems. Actually, this is a little bit more than an assumption. We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already. And we know that mere matter can give rise to what is called "general intelligence," an ability to think flexibly across multiple domains, because our brains have managed it. Right? I mean, there's just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior, we will eventually, unless we are interrupted, build general intelligence into our machines.

05:11
It's crucial to realize that the rate of progress doesn't matter, because any progress is enough to get us into the end zone. We don't need Moore's law to continue. We don't need exponential progress. We just need to keep going.

05:25
The second assumption is that we will keep going. We will continue to improve our intelligent machines. And given the value of intelligence -- I mean, intelligence is either the source of everything we value or we need it to safeguard everything we value. It is our most valuable resource. So we want to do this. We have problems that we desperately need to solve. We want to cure diseases like Alzheimer's and cancer. We want to understand economic systems. We want to improve our climate science. So we will do this, if we can. The train is already out of the station, and there's no brake to pull.

06:05
Finally, we don't stand on a peak of intelligence, or anywhere near it, likely. And this really is the crucial insight. This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable.

06:23
Now, just consider the smartest person who has ever lived. On almost everyone's shortlist here is John von Neumann. I mean, the impression that von Neumann made on the people around him, and this included the greatest mathematicians and physicists of his time, is fairly well documented. If only half the stories about him are half true, there's no question he's one of the smartest people who has ever lived. So consider the spectrum of intelligence. Here we have John von Neumann. And then we have you and me. And then we have a chicken.

(Laughter)

Sorry, a chicken.

(Laughter)

There's no reason for me to make this talk more depressing than it needs to be.

(Laughter)

07:08
It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive, and if we build machines that are more intelligent than we are, they will very likely explore this spectrum in ways that we can't imagine, and exceed us in ways that we can't imagine.

07:27
And it's important to recognize that this is true by virtue of speed alone. Right? So imagine if we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT. Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it. So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week. How could we even understand, much less constrain, a mind making this sort of progress?
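That "20,000 years" figure is just the stated speedup applied to a week of wall-clock time. A minimal sketch of the arithmetic in Python, assuming the talk's million-fold figure; the function name and the 52-week year are purely illustrative:

    # Back-of-the-envelope arithmetic for the speed argument.
    # Assumes the talk's figure: electronic circuits run roughly
    # a million times faster than biochemical ones.
    SPEEDUP = 1_000_000
    WEEKS_PER_YEAR = 52

    def subjective_years(wall_clock_weeks: float) -> float:
        """Human-equivalent years of intellectual work done in the given wall-clock time."""
        return wall_clock_weeks * SPEEDUP / WEEKS_PER_YEAR

    print(round(subjective_years(1)))  # 19231 -- roughly the "20,000 years" per week

Rounded to one significant figure, one week at a million-fold speedup is about 20,000 years of human-level work.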
08:08
The other thing that's worrying, frankly, is that, imagine the best case scenario. So imagine we hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around. It's as though we've been handed an oracle that behaves exactly as intended. Well, this machine would be the perfect labor-saving device. It can design the machine that can build the machine that can do any physical work, powered by sunlight, more or less for the cost of raw materials. So we're talking about the end of human drudgery. We're also talking about the end of most intellectual work.

08:49
So what would apes like ourselves do in this circumstance? Well, we'd be free to play Frisbee and give each other massages. Add some LSD and some questionable wardrobe choices, and the whole world could be like Burning Man.

(Laughter)

09:06
Now, that might sound pretty good, but ask yourself what would happen under our current economic and political order? It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before. Absent a willingness to immediately put this new wealth to the service of all humanity, a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve.

09:34
And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI? This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum. So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.
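The "500,000 years" figure follows from the same assumed million-fold speedup, applied to a six-month lead:

    # Same assumption as above: a million-fold speedup.
    # A six-month (half-year) head start, in subjective years of work:
    print(0.5 * 1_000_000)  # 500000.0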
10:06
Now, one of the most frightening things, in my view, at this moment, is the kind of thing that AI researchers say when they want to be reassuring.

10:19
And the most common reason we're told not to worry is time. This is all a long way off, don't you know. This is probably 50 or 100 years away. One researcher has said, "Worrying about AI safety is like worrying about overpopulation on Mars." This is the Silicon Valley version of "don't worry your pretty little head about it."

(Laughter)

10:39
No one seems to notice that referencing the time horizon is a total non sequitur. If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long it will take us to create the conditions to do that safely. Let me say that again. We have no idea how long it will take us to create the conditions to do that safely.

11:12
And if you haven't noticed, 50 years is not what it used to be. This is 50 years in months. This is how long we've had the iPhone. This is how long "The Simpsons" has been on television. Fifty years is not that much time to meet one of the greatest challenges our species will ever face.
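For completeness, the unit conversion behind the slide:

    # 50 years expressed in months, the unit shown on the slide:
    print(50 * 12)  # 600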
11:31
Once again, we seem to be failing to have an appropriate emotional response to what we have every reason to believe is coming. The computer scientist Stuart Russell has a nice analogy here. He said, imagine that we received a message from an alien civilization, which read: "People of Earth, we will arrive on your planet in 50 years. Get ready." And now we're just counting down the months until the mothership lands? We would feel a little more urgency than we do.

12:04
Another reason we're told not to worry is that these machines can't help but share our values because they will be literally extensions of ourselves. They'll be grafted onto our brains, and we'll essentially become their limbic systems. Now take a moment to consider that the safest and only prudent path forward, recommended, is to implant this technology directly into our brains. Now, this may in fact be the safest and only prudent path forward, but usually one's safety concerns about a technology have to be pretty much worked out before you stick it inside your head.

(Laughter)

12:38
The deeper problem is that building superintelligent AI on its own seems likely to be easier than building superintelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it. And given that the companies and governments doing this work are likely to perceive themselves as being in a race against all others, given that to win this race is to win the world, provided you don't destroy it in the next moment, then it seems likely that whatever is easier to do will get done first.

13:10
Now, unfortunately, I don't have a solution to this problem, apart from recommending that more of us think about it. I think we need something like a Manhattan Project on the topic of artificial intelligence. Not to build it, because I think we'll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests. When you're talking about superintelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right, and even then we will need to absorb the economic and political consequences of getting them right.

13:45
But the moment we admit that information processing is the source of intelligence, that some appropriate computational system is the basis of intelligence, and we admit that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we are in the process of building some sort of god. Now would be a good time to make sure it's a god we can live with.

14:20
Thank you very much.

(Applause)
