TEDSummit

Sam Harris: Can we build AI without losing control over it?

Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris -- and not just in some theoretical, existential crisis kind of way. We're going to build superhuman machines, says Harris, but we haven't yet grappled with the problems associated with creating something that may treat us the way we treat ants.


I'm going to talk about a failure of intuition that many of us suffer from. It's really a failure to detect a certain kind of danger. I'm going to describe a scenario that I think is both terrifying and likely to occur, and that's not a good combination, as it turns out. And yet rather than be scared, most of you will feel that what I'm talking about is kind of cool.

I'm going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact, I think it's very difficult to see how they won't destroy us or inspire us to destroy ourselves. And yet if you're anything like me, you'll find that it's fun to think about these things. And that response is part of the problem. OK? That response should worry you. And if I were to convince you in this talk that we were likely to suffer a global famine, either because of climate change or some other catastrophe, and that your grandchildren, or their grandchildren, are very likely to live like this, you wouldn't think, "Interesting. I like this TED Talk."

Famine isn't fun. Death by science fiction, on the other hand, is fun, and one of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead. I am unable to marshal this response, and I'm giving this talk.

It's as though we stand before two doors. Behind door number one, we stop making progress in building intelligent machines. Our computer hardware and software just stops getting better for some reason. Now take a moment to consider why this might happen. I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to. What could stop us from doing this? A full-scale nuclear war? A global pandemic? An asteroid impact? Justin Bieber becoming president of the United States?

(Laughter)

The point is, something would have to destroy civilization as we know it. You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation. Almost by definition, this is the worst thing that's ever happened in human history.

So the only alternative, and this is what lies behind door number two, is that we continue to improve our intelligent machines year after year after year. At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician I. J. Good called an "intelligence explosion," that the process could get away from us.

Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn't the most likely scenario. It's not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.

Just think about how we relate to ants. We don't hate them. We don't go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let's say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they're conscious or not, could treat us with similar disregard.

Now, I suspect this seems far-fetched to many of you. I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable. But then you must find something wrong with one of the following assumptions. And there are only three of them.

The first assumption is that intelligence is a matter of information processing in physical systems. Actually, this is a little bit more than an assumption. We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already. And we know that mere matter can give rise to what is called "general intelligence," an ability to think flexibly across multiple domains, because our brains have managed it. Right? I mean, there's just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior, we will eventually, unless we are interrupted, build general intelligence into our machines.

It's crucial to realize that the rate of progress doesn't matter, because any progress is enough to get us into the end zone. We don't need Moore's law to continue. We don't need exponential progress. We just need to keep going.

The second assumption is that we will keep going. We will continue to improve our intelligent machines. And given the value of intelligence -- I mean, intelligence is either the source of everything we value or we need it to safeguard everything we value. It is our most valuable resource. So we want to do this. We have problems that we desperately need to solve. We want to cure diseases like Alzheimer's and cancer. We want to understand economic systems. We want to improve our climate science. So we will do this, if we can. The train is already out of the station, and there's no brake to pull.

Finally, we don't stand on a peak of intelligence, or anywhere near it, likely. And this really is the crucial insight. This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable.

Now, just consider the smartest person who has ever lived. On almost everyone's shortlist here is John von Neumann. I mean, the impression that von Neumann made on the people around him, and this included the greatest mathematicians and physicists of his time, is fairly well-documented. If only half the stories about him are half true, there's no question he's one of the smartest people who has ever lived.

So consider the spectrum of intelligence. Here we have John von Neumann. And then we have you and me. And then we have a chicken.

(Laughter)

Sorry, a chicken.

(Laughter)

There's no reason for me to make this talk more depressing than it needs to be.

(Laughter)

It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive, and if we build machines that are more intelligent than we are, they will very likely explore this spectrum in ways that we can't imagine, and exceed us in ways that we can't imagine.

And it's important to recognize that this is true by virtue of speed alone. Right? So imagine if we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT. Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it. So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week. How could we even understand, much less constrain, a mind making this sort of progress?

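As a rough check on that figure, taking the talk's own assumption of a roughly million-fold speed advantage at face value, one calendar week of machine time corresponds to

\[
1~\text{week} \times 10^{6} = 10^{6}~\text{weeks} \approx \frac{10^{6}}{52}~\text{years} \approx 19{,}000~\text{years},
\]

which is the roughly 20,000 years of human-level intellectual work cited above.
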
The other thing that's worrying, frankly, is this: imagine the best-case scenario. So imagine we hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around. It's as though we've been handed an oracle that behaves exactly as intended. Well, this machine would be the perfect labor-saving device. It can design the machine that can build the machine that can do any physical work, powered by sunlight, more or less for the cost of raw materials. So we're talking about the end of human drudgery. We're also talking about the end of most intellectual work.

So what would apes like ourselves do in this circumstance? Well, we'd be free to play Frisbee and give each other massages. Add some LSD and some questionable wardrobe choices, and the whole world could be like Burning Man.

(Laughter)

Now, that might sound pretty good, but ask yourself what would happen under our current economic and political order. It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before. Absent a willingness to immediately put this new wealth to the service of all humanity, a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve.

And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI? This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum. So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.

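The 500,000-year figure follows from the same assumed million-fold speed-up: a six-month lead in calendar time corresponds to

\[
0.5~\text{years} \times 10^{6} = 500{,}000~\text{years}
\]

of human-level intellectual progress, which is why even a modest head start would be decisive.
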
Now, one of the most frightening things, in my view, at this moment, is the kind of thing that AI researchers say when they want to be reassuring. And the most common reason we're told not to worry is time. This is all a long way off, don't you know. This is probably 50 or 100 years away. One researcher has said, "Worrying about AI safety is like worrying about overpopulation on Mars." This is the Silicon Valley version of "don't worry your pretty little head about it."

(Laughter)

No one seems to notice that referencing the time horizon is a total non sequitur. If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long it will take us to create the conditions to do that safely. Let me say that again. We have no idea how long it will take us to create the conditions to do that safely.

And if you haven't noticed, 50 years is not what it used to be. This is 50 years in months. This is how long we've had the iPhone. This is how long "The Simpsons" has been on television. Fifty years is not that much time to meet one of the greatest challenges our species will ever face. Once again, we seem to be failing to have an appropriate emotional response to what we have every reason to believe is coming.

The computer scientist Stuart Russell has a nice analogy here. He said, imagine that we received a message from an alien civilization, which read: "People of Earth, we will arrive on your planet in 50 years. Get ready." And now we're just counting down the months until the mothership lands? We would feel a little more urgency than we do.

Another reason we're told not to worry is that these machines can't help but share our values because they will be literally extensions of ourselves. They'll be grafted onto our brains, and we'll essentially become their limbic systems. Now take a moment to consider that the safest and only prudent path forward, recommended, is to implant this technology directly into our brains. Now, this may in fact be the safest and only prudent path forward, but usually one's safety concerns about a technology have to be pretty much worked out before you stick it inside your head.

(Laughter)

The deeper problem is that building superintelligent AI on its own seems likely to be easier than building superintelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it. And given that the companies and governments doing this work are likely to perceive themselves as being in a race against all others, given that to win this race is to win the world, provided you don't destroy it in the next moment, then it seems likely that whatever is easier to do will get done first.

Now, unfortunately, I don't have a solution to this problem, apart from recommending that more of us think about it. I think we need something like a Manhattan Project on the topic of artificial intelligence. Not to build it, because I think we'll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests. When you're talking about superintelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right, and even then we will need to absorb the economic and political consequences of getting them right.

But the moment we admit that information processing is the source of intelligence, that some appropriate computational system is the basis of intelligence, and we admit that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we are in the process of building some sort of god. Now would be a good time to make sure it's a god we can live with.

Thank you very much.

(Applause)


About the Speaker:

Sam Harris - Neuroscientist and philosopher
Sam Harris’ writings and scholarship cover a wide range of topics, from neuroscience and moral philosophy to religion, violence and human reasoning, with a focus on how our growing understanding of ourselves and the world is changing our sense of how we should live.

Why you should listen

Harris is an outspoken proponent of skepticism and science, and several of his books have become bestsellers. In The End of Faith, Harris gave a harrowing glimpse of humanity’s willingness to suspend reason in favor of religious beliefs, even when these beliefs inspire atrocities. After receiving thousands of angry letters in response, he wrote Letter to a Christian Nation, which centered on religious controversies in the US such as stem cell research and intelligent design. Harris received a degree in philosophy from Stanford and a PhD in neuroscience from UCLA. He is working on a book about the ethics of artificial intelligence.
