ABOUT THE SPEAKER
Stuart Russell - AI expert
Stuart Russell wrote the standard text on AI; now he thinks deeply on AI's future -- and the future of us humans, too.

Why you should listen

Stuart Russell is a professor (and formerly chair) of Electrical Engineering and Computer Sciences at the University of California, Berkeley. His book Artificial Intelligence: A Modern Approach (with Peter Norvig) is the standard text in AI; it has been translated into 13 languages and is used in more than 1,300 universities in 118 countries. His research covers a wide range of topics in artificial intelligence, including machine learning, probabilistic reasoning, knowledge representation, planning, real-time decision making, multitarget tracking, computer vision, computational physiology, global seismic monitoring and philosophical foundations.

He also works for the United Nations, developing a new global seismic monitoring system for the nuclear-test-ban treaty. His current concerns include the threat of autonomous weapons and the long-term future of artificial intelligence and its relation to humanity.

TED2017

Stuart Russell: 3 principles for creating safer AI

1,465,832 views

How can we harness the power of superintelligent AI while also preventing the catastrophe of robotic takeover? As we move closer toward creating all-knowing machines, AI pioneer Stuart Russell is working on something a bit different: robots with uncertainty. Hear his vision for human-compatible AI that can solve problems using common sense, altruism and other human values.


00:12
This is Lee Sedol. Lee Sedol is one of the world's greatest Go players, and he's having what my friends in Silicon Valley call a "Holy Cow" moment --

(Laughter)

00:23
a moment where we realize that AI is actually progressing a lot faster than we expected. So humans have lost on the Go board. What about the real world? Well, the real world is much bigger, much more complicated than the Go board. It's a lot less visible, but it's still a decision problem. And if we think about some of the technologies that are coming down the pike ... Noriko [Arai] mentioned that reading is not yet happening in machines, at least with understanding. But that will happen, and when that happens, very soon afterwards, machines will have read everything that the human race has ever written.

01:03
And that will enable machines, along with the ability to look further ahead than humans can, as we've already seen in Go, if they also have access to more information, they'll be able to make better decisions in the real world than we can.
01:18
So is that a good thing? Well, I hope so. Our entire civilization, everything that we value, is based on our intelligence. And if we had access to a lot more intelligence, then there's really no limit to what the human race can do. And I think this could be, as some people have described it, the biggest event in human history.
01:48
So why are people saying things like this, that AI might spell the end of the human race? Is this a new thing? Is it just Elon Musk and Bill Gates and Stephen Hawking? Actually, no. This idea has been around for a while. Here's a quotation: "Even if we could keep the machines in a subservient position, for instance, by turning off the power at strategic moments" -- and I'll come back to that "turning off the power" idea later on -- "we should, as a species, feel greatly humbled."

02:22
So who said this? This is Alan Turing in 1951. Alan Turing, as you know, is the father of computer science and in many ways, the father of AI as well.
02:33
So if we think about this problem, the problem of creating something more intelligent than your own species, we might call this "the gorilla problem," because gorillas' ancestors did this a few million years ago, and now we can ask the gorillas: Was this a good idea? So here they are having a meeting to discuss whether it was a good idea, and after a little while, they conclude, no, this was a terrible idea. Our species is in dire straits. In fact, you can see the existential sadness in their eyes.

(Laughter)
03:06
So this queasy feeling that making something smarter than your own species is maybe not a good idea -- what can we do about that? Well, really nothing, except stop doing AI, and because of all the benefits that I mentioned and because I'm an AI researcher, I'm not having that. I actually want to be able to keep doing AI. So we actually need to nail down the problem a bit more. What exactly is the problem? Why is better AI possibly a catastrophe?
03:39
So here's another quotation: "We had better be quite sure that the purpose put into the machine is the purpose which we really desire." This was said by Norbert Wiener in 1960, shortly after he watched one of the very early learning systems learn to play checkers better than its creator.

04:00
But this could equally have been said by King Midas. King Midas said, "I want everything I touch to turn to gold," and he got exactly what he asked for. That was the purpose that he put into the machine, so to speak, and then his food and his drink and his relatives turned to gold and he died in misery and starvation. So we'll call this "the King Midas problem" of stating an objective which is not, in fact, truly aligned with what we want. In modern terms, we call this "the value alignment problem."
04:37
Putting in the wrong objective is not the only part of the problem. There's another part. If you put an objective into a machine, even something as simple as, "Fetch the coffee," the machine says to itself, "Well, how might I fail to fetch the coffee? Someone might switch me off. OK, I have to take steps to prevent that. I will disable my 'off' switch. I will do anything to defend myself against interference with this objective that I have been given."

05:06
So this single-minded pursuit in a very defensive mode of an objective that is, in fact, not aligned with the true objectives of the human race -- that's the problem that we face.

05:19
And in fact, that's the high-value takeaway from this talk. If you want to remember one thing, it's that you can't fetch the coffee if you're dead.

(Laughter)

05:29
It's very simple. Just remember that. Repeat it to yourself three times a day.

(Laughter)
05:35
And in fact, this is exactly the plot of "2001: [A Space Odyssey]." HAL has an objective, a mission, which is not aligned with the objectives of the humans, and that leads to this conflict. Now fortunately, HAL is not superintelligent. He's pretty smart, but eventually Dave outwits him and manages to switch him off.

06:01
But we might not be so lucky. So what are we going to do?
06:12
I'm trying to redefine AI to get away from this classical notion of machines that intelligently pursue objectives. There are three principles involved. The first one is a principle of altruism, if you like: that the robot's only objective is to maximize the realization of human objectives, of human values. And by values here I don't mean touchy-feely, goody-goody values. I just mean whatever it is that the human would prefer their life to be like. And so this actually violates Asimov's law that the robot has to protect its own existence. It has no interest in preserving its existence whatsoever.

06:57
The second law is a law of humility, if you like. And this turns out to be really important to make robots safe. It says that the robot does not know what those human values are, so it has to maximize them, but it doesn't know what they are. And that avoids this problem of single-minded pursuit of an objective. This uncertainty turns out to be crucial.

07:21
Now, in order to be useful to us, it has to have some idea of what we want. It obtains that information primarily by observation of human choices, so our own choices reveal information about what it is that we prefer our lives to be like. So those are the three principles.
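(As a rough illustration of how the three principles fit together -- this is an editorial sketch, not code from the talk, and every objective, action and number in it is invented -- such a robot might look like the following in Python.)

    # Illustrative sketch only: the three principles as a tiny decision rule.
    # The objectives, actions and numbers below are all made up.

    # Principle 2 (humility): the robot is uncertain which objective the human has.
    belief = {"wants_coffee": 0.5, "wants_quiet": 0.5}   # P(objective)

    # Hypothetical value of each robot action *to the human*, under each candidate
    # objective. The robot has no payoff of its own (principle 1: pure altruism).
    human_value = {
        ("fetch_coffee", "wants_coffee"): 1.0,
        ("fetch_coffee", "wants_quiet"): -0.5,   # the grinder is noisy
        ("do_nothing", "wants_coffee"): 0.0,
        ("do_nothing", "wants_quiet"): 0.2,
    }

    def best_action(belief):
        """Choose the action with the highest expected human value under the belief."""
        actions = {a for a, _ in human_value}
        return max(actions, key=lambda a: sum(p * human_value[(a, obj)]
                                              for obj, p in belief.items()))

    def observe_human_choice(belief, chosen, rejected):
        """Principle 3: a human choice is evidence about the human's objective.
        Assume the human more often picks the option they value more."""
        posterior = {}
        for obj, prior in belief.items():
            better = human_value[(chosen, obj)] > human_value[(rejected, obj)]
            posterior[obj] = prior * (0.8 if better else 0.2)
        total = sum(posterior.values())
        return {obj: p / total for obj, p in posterior.items()}

    print(best_action(belief))                 # acts despite its uncertainty
    belief = observe_human_choice(belief, "do_nothing", "fetch_coffee")
    print(belief)                              # belief shifts toward "wants_quiet"

The only point of the sketch is that the robot's uncertainty, and its learning, are both about our objectives, never about any objective of its own.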
07:42
Let's see how that applies to this question of: "Can you switch the machine off?" as Turing suggested.

07:49
So here's a PR2 robot. This is one that we have in our lab, and it has a big red "off" switch right on the back. The question is: Is it going to let you switch it off? If we do it the classical way, we give it the objective of, "Fetch the coffee, I must fetch the coffee, I can't fetch the coffee if I'm dead," so obviously the PR2 has been listening to my talk, and so it says, therefore, "I must disable my 'off' switch, and probably taser all the other people in Starbucks who might interfere with me."

(Laughter)

08:21
So this seems to be inevitable, right? This kind of failure mode seems to be inevitable, and it follows from having a concrete, definite objective.

08:30
So what happens if the machine is uncertain about the objective? Well, it reasons in a different way. It says, "OK, the human might switch me off, but only if I'm doing something wrong. Well, I don't really know what wrong is, but I know that I don't want to do it." So that's the first and second principles right there. "So I should let the human switch me off."
08:53
And in fact you can calculate the incentive that the robot has to allow the human to switch it off, and it's directly tied to the degree of uncertainty about the underlying objective. And then when the machine is switched off, that third principle comes into play. It learns something about the objectives it should be pursuing, because it learns that what it did wasn't right. In fact, we can, with suitable use of Greek symbols, as mathematicians usually do, we can actually prove a theorem that says that such a robot is provably beneficial to the human. You are provably better off with a machine that's designed in this way than without it. So this is a very simple example, but this is the first step in what we're trying to do with human-compatible AI.
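(To make that incentive concrete, here is a rough, illustrative calculation in Python -- an editorial sketch of the idea, not the theorem from the talk, with invented values and probabilities.)

    # Why an objective-uncertain robot gains by leaving its off switch alone.
    # The candidate values and probabilities below are invented for illustration.

    def expected(values, probs):
        return sum(v * p for v, p in zip(values, probs))

    # The robot is unsure how much the human actually values its planned action.
    values = [-0.5, 1.0]    # candidate human values of "go fetch the coffee"
    probs = [0.4, 0.6]      # the robot's uncertainty over those candidates

    act_anyway = expected(values, probs)            # just do it, ignoring oversight
    switch_self_off = 0.0                           # do nothing at all
    # Deferring: propose the action and let the human decide. The human allows it
    # when it is actually good (value > 0) and hits the off switch when it is bad,
    # assuming the human knows which case they are in.
    defer_to_human = expected([max(v, 0.0) for v in values], probs)

    incentive = defer_to_human - max(act_anyway, switch_self_off)
    print(act_anyway, defer_to_human, incentive)    # roughly 0.4, 0.6 and 0.2

    # With no uncertainty the incentive disappears: a robot that is certain its
    # action is good gains nothing by letting itself be switched off.
    print(expected([max(1.0, 0.0)], [1.0]) - 1.0)   # 0.0

In this toy setup, the more probability the robot puts on the possibility that its action is bad for us, the bigger that gap, which is the sense in which the incentive to allow shutdown is tied to the robot's uncertainty.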
09:42
Now, this third principle, I think is the one that you're probably scratching your head over. You're probably thinking, "Well, you know, I behave badly. I don't want my robot to behave like me. I sneak down in the middle of the night and take stuff from the fridge. I do this and that." There's all kinds of things you don't want the robot doing. But in fact, it doesn't quite work that way. Just because you behave badly doesn't mean the robot is going to copy your behavior. It's going to understand your motivations and maybe help you resist them, if appropriate.

10:16
But it's still difficult. What we're trying to do, in fact, is to allow machines to predict for any person and for any possible life that they could live, and the lives of everybody else: Which would they prefer? And there are many, many difficulties involved in doing this; I don't expect that this is going to get solved very quickly. The real difficulties, in fact, are us.
10:44
As I have already mentioned, we behave badly. In fact, some of us are downright nasty. Now the robot, as I said, doesn't have to copy the behavior. The robot does not have any objective of its own. It's purely altruistic. And it's not designed just to satisfy the desires of one person, the user, but in fact it has to respect the preferences of everybody. So it can deal with a certain amount of nastiness, and it can even understand that your nastiness, for example -- you may take bribes as a passport official because you need to feed your family and send your kids to school. It can understand that; it doesn't mean it's going to steal. In fact, it'll just help you send your kids to school.
11:28
We are also computationally limited. Lee Sedol is a brilliant Go player, but he still lost. So if we look at his actions, he took an action that lost the game. That doesn't mean he wanted to lose. So to understand his behavior, we actually have to invert through a model of human cognition that includes our computational limitations -- a very complicated model. But it's still something that we can work on understanding.
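(One common way to set up that kind of inversion -- again an editorial illustration, not code from the talk -- is to treat the person as noisily rational, so that better moves are more probable but not guaranteed; a single losing move is then only weak evidence about what they wanted. The hypotheses, moves and numbers below are all made up.)

    # Illustrative sketch: inferring what a computationally limited player
    # wanted from one observed (losing) move.
    import math

    def choice_prob(chosen, options, value, beta):
        """Softmax choice model: higher-value options are more likely to be chosen;
        beta sets how close to perfectly rational we assume the person is."""
        weights = {o: math.exp(beta * value[o]) for o in options}
        return weights[chosen] / sum(weights.values())

    # Two hypotheses about the player's objective, with made-up move values.
    hypotheses = {
        "wants_to_win": {"move_a": 1.0, "move_b": 0.2},
        "wants_to_lose": {"move_a": 0.1, "move_b": 0.9},
    }
    prior = {"wants_to_win": 0.95, "wants_to_lose": 0.05}

    observed = "move_b"                  # the move that, in hindsight, lost the game
    options = ["move_a", "move_b"]

    posterior = {h: prior[h] * choice_prob(observed, options, vals, beta=2.0)
                 for h, vals in hypotheses.items()}
    total = sum(posterior.values())
    posterior = {h: p / total for h, p in posterior.items()}
    print(posterior)    # still favors "wants_to_win": one mistake is weak evidence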
11:57
Probably the most difficult part, from my point of view as an AI researcher, is the fact that there are lots of us, and so the machine has to somehow trade off, weigh up the preferences of many different people, and there are different ways to do that. Economists, sociologists, moral philosophers have understood that, and we are actively looking for collaboration.
12:20
Let's have a look and see what happens when you get that wrong. So you can have a conversation, for example, with your intelligent personal assistant that might be available in a few years' time. Think of a Siri on steroids.

12:33
So Siri says, "Your wife called to remind you about dinner tonight." And of course, you've forgotten. "What? What dinner? What are you talking about?" "Uh, your 20th anniversary at 7pm." "I can't do that. I'm meeting with the secretary-general at 7:30. How could this have happened?" "Well, I did warn you, but you overrode my recommendation." "Well, what am I going to do? I can't just tell him I'm too busy." "Don't worry. I arranged for his plane to be delayed."

(Laughter)

13:10
"Some kind of computer malfunction."

(Laughter)

13:13
"Really? You can do that?" "He sends his profound apologies and looks forward to meeting you for lunch tomorrow."

(Laughter)

13:22
So the values here -- there's a slight mistake going on. This is clearly following my wife's values, which is "Happy wife, happy life."

(Laughter)
13:33
It could go the other way. You could come home after a hard day's work, and the computer says, "Long day?" "Yes, I didn't even have time for lunch." "You must be very hungry." "Starving, yeah. Could you make some dinner?"

13:48
"There's something I need to tell you."

(Laughter)

13:52
"There are humans in South Sudan who are in more urgent need than you."

(Laughter)

13:58
"So I'm leaving. Make your own dinner."

(Laughter)
14:02
So we have to solve these problems, and I'm looking forward to working on them. There are reasons for optimism. One reason is, there is a massive amount of data. Because remember -- I said they're going to read everything the human race has ever written. Most of what we write about is human beings doing things and other people getting upset about it. So there's a massive amount of data to learn from.

14:23
There's also a very strong economic incentive to get this right.
14:28
So imagine your domestic robot's at home. You're late from work again and the robot has to feed the kids, and the kids are hungry and there's nothing in the fridge. And the robot sees the cat.

(Laughter)

14:40
And the robot hasn't quite learned the human value function properly, so it doesn't understand the sentimental value of the cat outweighs the nutritional value of the cat.

(Laughter)

14:52
So then what happens? Well, it happens like this: "Deranged robot cooks kitty for family dinner." That one incident would be the end of the domestic robot industry. So there's a huge incentive to get this right long before we reach superintelligent machines.

15:12
So to summarize: I'm actually trying to change the definition of AI so that we have provably beneficial machines. And the principles are: machines that are altruistic, that want to achieve only our objectives, but that are uncertain about what those objectives are, and will watch all of us to learn more about what it is that we really want. And hopefully in the process, we will learn to be better people.

15:37
Thank you very much.

(Applause)

15:42
Chris Anderson: So interesting, Stuart. We're going to stand here a bit because I think they're setting up for our next speaker. A couple of questions. So the idea of programming in ignorance seems intuitively really powerful. As you get to superintelligence, what's going to stop a robot reading literature and discovering this idea that knowledge is actually better than ignorance, and still just shifting its own goals and rewriting that programming?

16:09
Stuart Russell: Yes, so we want it to learn more, as I said, about our objectives. It'll only become more certain as it becomes more correct, so the evidence is there and it's going to be designed to interpret it correctly. It will understand, for example, that books are very biased in the evidence they contain. They only talk about kings and princes and elite white male people doing stuff. So it's a complicated problem, but as it learns more about our objectives, it will become more and more useful to us.

16:46
CA: And you couldn't just boil it down to one law, you know, hardwired in: "if any human ever tries to switch me off, I comply. I comply."

16:55
SR: Absolutely not. That would be a terrible idea. So imagine that you have a self-driving car and you want to send your five-year-old off to preschool. Do you want your five-year-old to be able to switch off the car while it's driving along? Probably not. So it needs to understand how rational and sensible the person is. The more rational the person, the more willing you are to be switched off. If the person is completely random or even malicious, then you're less willing to be switched off.

17:24
CA: All right. Stuart, can I just say, I really, really hope you figure this out for us. Thank you so much for that talk. That was amazing.

17:30
SR: Thank you.

(Applause)
