ABOUT THE SPEAKER
Margaret Mitchell - AI research scientist
Margaret Mitchell is a senior research scientist in Google's Research & Machine Intelligence group, working on artificial intelligence.

Why you should listen

Margaret Mitchell's research involves vision-language and grounded language generation, focusing on how to evolve artificial intelligence towards positive goals. Her work combines computer vision, natural language processing, and social media, as well as many statistical methods and insights from cognitive science. Before Google, Mitchell was a founding member of Microsoft Research's "Cognition" group, focused on advancing artificial intelligence, and a researcher in Microsoft Research's Natural Language Processing group.

TED@BCG Milan

Margaret Mitchell: How we can build AI to help humans, not hurt us


1,154,210 views

As a research scientist at Google, Margaret Mitchell helps develop computers that can communicate about what they see and understand. She warns that today we are unknowingly encoding gaps, blind spots and biases into artificial intelligence, and urges us to consider what the technology we create today will mean for tomorrow. "All that we see now is a snapshot in the evolution of artificial intelligence," Mitchell says. "If we want AI to evolve in a way that helps humans, then we need to define the goals and strategies that enable that path now."


00:13
I work on helping computers communicate about the world around us.

00:17
There are a lot of ways to do this,

00:19
and I like to focus on helping computers

00:22
to talk about what they see and understand.

00:25
Given a scene like this,

00:27
a modern computer-vision algorithm

00:29
can tell you that there's a woman and there's a dog.

00:32
It can tell you that the woman is smiling.

00:34
It might even be able to tell you that the dog is incredibly cute.
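A minimal sketch of what such a scene-description step can look like with today's open-source tools, using the Hugging Face transformers library; the model choice and the image path are illustrative assumptions, not the system described in this talk:

```python
# Illustrative sketch, not the speaker's system: caption an image with an
# off-the-shelf open-source model. The "transformers" library and the model
# name below are real; the local image file is a hypothetical example.
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

result = captioner("beach_scene.jpg")  # hypothetical photo of a woman and a dog
print(result[0]["generated_text"])
# e.g. "a woman sitting on the beach with her dog"
```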
00:38
I work on this problem

00:40
thinking about how humans understand and process the world.

00:45
The thoughts, memories and stories

00:48
that a scene like this might evoke for humans.

00:51
All the interconnections of related situations.

00:55
Maybe you've seen a dog like this one before,

00:58
or you've spent time running on a beach like this one,

01:01
and that further evokes thoughts and memories of a past vacation,

01:06
past times to the beach,

01:08
times spent running around with other dogs.

01:11
One of my guiding principles is that by helping computers to understand

01:16
what it's like to have these experiences,

01:19
to understand what we share and believe and feel,

01:26
then we're in a great position to start evolving computer technology

01:30
in a way that's complementary with our own experiences.

01:35
So, digging more deeply into this,

01:38
a few years ago I began working on helping computers to generate human-like stories

01:44
from sequences of images.

01:47
So, one day,

01:49
I was working with my computer to ask it what it thought about a trip to Australia.

01:54
It took a look at the pictures, and it saw a koala.

01:58
It didn't know what the koala was,

01:59
but it said it thought it was an interesting-looking creature.

02:04
Then I shared with it a sequence of images about a house burning down.

02:09
It took a look at the images and it said,

02:13
"This is an amazing view! This is spectacular!"

02:17
It sent chills down my spine.

02:20
It saw a horrible, life-changing and life-destroying event

02:25
and thought it was something positive.

02:27
I realized that it recognized the contrast,

02:31
the reds, the yellows,

02:34
and thought it was something worth remarking on positively.

02:37
And part of why it was doing this

02:39
was because most of the images I had given it

02:42
were positive images.

02:44
That's because people tend to share positive images

02:48
when they talk about their experiences.

02:51
When was the last time you saw a selfie at a funeral?
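To make the mechanism concrete: a toy sketch, with entirely made-up data and a simple classifier standing in for the story-generation model, of how a training set skewed toward positive examples teaches a system to describe almost anything in positive terms:

```python
# Toy illustration with hypothetical data: a classifier trained on mostly
# positive captions inherits that skew. This stands in for, and is not,
# the story-generation system described in the talk.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

captions = [
    "what an amazing view", "this is spectacular", "beautiful beach day",
    "such a cute dog", "wonderful trip", "great time with friends",
    "lovely sunset colors", "fantastic vacation",    # 8 positive captions...
    "the house burned down", "a terrible accident",  # ...and only 2 negative
]
labels = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]  # 1 = positive, 0 = negative

vectorizer = CountVectorizer().fit(captions)
classifier = MultinomialNB().fit(vectorizer.transform(captions), labels)

# A fire scene described in words the model has mostly seen in positive
# contexts: the skewed prior pushes the judgment toward "positive".
test = ["amazing view of the house burning"]
print(classifier.predict(vectorizer.transform(test)))        # likely [1]
print(classifier.predict_proba(vectorizer.transform(test)))
```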
02:55
I realized that, as I worked on improving AI

02:58
task by task, dataset by dataset,

03:02
that I was creating massive gaps,

03:05
holes and blind spots in what it could understand.

03:10
And while doing so,

03:11
I was encoding all kinds of biases.

03:15
Biases that reflect a limited viewpoint,

03:18
limited to a single dataset --

03:21
biases that can reflect human biases found in the data,

03:25
such as prejudice and stereotyping.
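One practical habit that follows from this point is auditing a dataset's composition before training on it. A minimal sketch, with hypothetical records and attribute names:

```python
# Minimal dataset-audit sketch; the records and attribute names are
# hypothetical. Heavily skewed counts on any attribute mark a limited
# viewpoint that a model trained on this data will inherit.
from collections import Counter

dataset = [
    {"caption": "a woman smiling on the beach", "setting": "outdoor", "sentiment": "positive"},
    {"caption": "a cute dog running in the park", "setting": "outdoor", "sentiment": "positive"},
    {"caption": "a house burning down", "setting": "urban", "sentiment": "negative"},
    # ...in practice, thousands of records
]

for attribute in ("setting", "sentiment"):
    counts = Counter(record[attribute] for record in dataset)
    total = sum(counts.values())
    print(attribute, {value: f"{n / total:.0%}" for value, n in counts.items()})
```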
03:29
I thought back to the evolution of the technology

03:32
that brought me to where I was that day --

03:35
how the first color images

03:38
were calibrated against a white woman's skin,

03:41
meaning that color photography was biased against black faces.

03:46
And that same bias, that same blind spot

03:49
continued well into the '90s.

03:51
And the same blind spot continues even today

03:54
in how well we can recognize different people's faces

03:58
in facial recognition technology.
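The blind spot shows up the moment accuracy is reported per group rather than in aggregate. A sketch with made-up evaluation counts:

```python
# Made-up numbers, for illustration only: an overall accuracy figure can
# hide a large per-group disparity in a face-recognition evaluation.
results = {
    "lighter-skinned faces": (980, 1000),  # (correct, total) -- hypothetical
    "darker-skinned faces": (650, 1000),
}

correct = sum(c for c, _ in results.values())
total = sum(t for _, t in results.values())
print(f"overall accuracy: {correct / total:.1%}")  # 81.5% looks respectable

for group, (c, t) in results.items():
    print(f"{group}: {c / t:.1%}")  # 98.0% vs. 65.0%: the blind spot
```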
04:01
I thought about the state of the art in research today,

04:04
where we tend to limit our thinking to one dataset and one problem.

04:09
And that in doing so, we were creating more blind spots and biases

04:14
that the AI could further amplify.

04:17
I realized then that we had to think deeply

04:19
about how the technology we work on today looks in five years, in 10 years.

04:25
Humans evolve slowly, with time to correct for issues

04:29
in the interaction of humans and their environment.

04:33
In contrast, artificial intelligence is evolving at an incredibly fast rate.

04:39
And that means that it really matters

04:40
that we think about this carefully right now --

04:44
that we reflect on our own blind spots,

04:47
our own biases,

04:49
and think about how that's informing the technology we're creating

04:53
and discuss what the technology of today will mean for tomorrow.

04:58
CEOs and scientists have weighed in on what they think

05:01
the artificial intelligence technology of the future will be.

05:05
Stephen Hawking warns that

05:06
"Artificial intelligence could end mankind."

05:10
Elon Musk warns that it's an existential risk

05:13
and one of the greatest risks that we face as a civilization.

05:17
Bill Gates has made the point,

05:19
"I don't understand why people aren't more concerned."

05:23
But these views --

05:25
they're part of the story.

05:28
The math, the models,

05:30
the basic building blocks of artificial intelligence

05:33
are something that we can all access and work with.

05:36
We have open-source tools for machine learning and intelligence

05:40
that we can contribute to.

05:42
And beyond that, we can share our experience.

05:46
We can share our experiences with technology and how it concerns us

05:50
and how it excites us.

05:52
We can discuss what we love.

05:55
We can communicate with foresight

05:57
about the aspects of technology that could be more beneficial

06:02
or could be more problematic over time.

06:05
If we all focus on opening up the discussion on AI

06:09
with foresight towards the future,

06:13
this will help create a general conversation and awareness

06:17
about what AI is now,

06:21
what it can become

06:23
and all the things that we need to do

06:25
in order to enable that outcome that best suits us.

06:29
We already see and know this in the technology that we use today.

06:33
We use smartphones and digital assistants and Roombas.

06:38
Are they evil?

06:40
Maybe sometimes.

06:42
Are they beneficial?

06:45
Yes, they're that, too.

06:48
And they're not all the same.

06:50
And there you already see a light shining on what the future holds.

06:54
The future continues on from what we build and create right now.

06:59
We set into motion that domino effect

07:01
that carves out AI's evolutionary path.

07:05
In our time right now, we shape the AI of tomorrow.

07:08
Technology that immerses us in augmented realities

07:12
bringing to life past worlds.

07:15
Technology that helps people to share their experiences

07:20
when they have difficulty communicating.

07:23
Technology built on understanding the streaming visual worlds

07:27
used as technology for self-driving cars.

07:32
Technology built on understanding images and generating language,

07:35
evolving into technology that helps people who are visually impaired

07:40
be better able to access the visual world.

07:42
And we also see how technology can lead to problems.

07:46
We have technology today

07:48
that analyzes physical characteristics we're born with --

07:52
such as the color of our skin or the look of our face --

07:55
in order to determine whether or not we might be criminals or terrorists.

07:59
We have technology that crunches through our data,

08:02
even data relating to our gender or our race,

08:05
in order to determine whether or not we might get a loan.

08:09
All that we see now

08:11
is a snapshot in the evolution of artificial intelligence.

08:15
Because where we are right now,

08:17
is within a moment of that evolution.

08:20
That means that what we do now will affect what happens down the line

08:24
and in the future.

08:26
If we want AI to evolve in a way that helps humans,

08:30
then we need to define the goals and strategies

08:32
that enable that path now.

08:35
What I'd like to see is something that fits well with humans,

08:39
with our culture and with the environment.

08:43
Technology that aids and assists those of us with neurological conditions

08:47
or other disabilities

08:49
in order to make life equally challenging for everyone.

08:54
Technology that works

08:55
regardless of your demographics or the color of your skin.

09:00
And so today, what I focus on is the technology for tomorrow

09:05
and for 10 years from now.

09:08
AI can turn out in many different ways.

09:11
But in this case,

09:12
it isn't a self-driving car without any destination.

09:16
This is the car that we are driving.

09:19
We choose when to speed up and when to slow down.

09:23
We choose if we need to make a turn.

09:26
We choose what the AI of the future will be.

09:31
There's a vast playing field

09:32
of all the things that artificial intelligence can become.

09:36
It will become many things.

09:39
And it's up to us now,

09:41
in order to figure out what we need to put in place

09:44
to make sure the outcomes of artificial intelligence

09:48
are the ones that will be better for all of us.

09:51
Thank you.

09:52
(Applause)



Data provided by TED.
