ABOUT THE SPEAKER
Kriti Sharma - AI technologist
Kriti Sharma creates AI technology to help address some of the toughest social challenges of our time -- from domestic violence to sexual health and inequality.

Why you should listen

Kriti Sharma is the Founder of AI for Good, an organization focused on building scalable technology solutions for social good. In 2018, she also launched rAInbow, a digital companion for women facing domestic violence in South Africa. This service reached nearly 200,000 conversations within the first 100 days, breaking down the stigma of gender-based violence. In 2019, she collaborated with the Population Foundation of India to launch Dr. Sneha, an AI-powered digital character to engage with young people about sexual health, an issue that is still considered taboo in India.

Sharma was recently named in the Forbes "30 Under 30" list for advancements in AI. She was appointed a United Nations Young Leader in 2018 and is an advisor to both the United Nations Technology Innovation Labs and the UK Government's Centre for Data Ethics and Innovation.

TEDxWarwick

Kriti Sharma: How to keep human bias out of AI


2,050,106 views

AI algorithms are constantly making important decisions about you -- like how much you should pay for car insurance, or whether you should get a job interview. But what happens when human bias is built into these machines? Technologist Kriti Sharma explores how the lack of diversity in tech is creeping into our AI, and offers three ways we can start designing more ethical algorithms.


00:12
How many decisions have been made about you today,
00:16
or this week or this year,
00:19
by artificial intelligence?
00:22
I build AI for a living
00:24
so, full disclosure, I'm kind of a nerd.
00:27
And because I'm kind of a nerd,
00:30
whenever some new news story comes out
00:32
about artificial intelligence stealing all our jobs,
00:35
or robots getting citizenship of an actual country,
00:40
I'm the person my friends and followers message
00:43
freaking out about the future.
00:45
We see this everywhere.
00:47
This media panic that our robot overlords are taking over.
00:52
We could blame Hollywood for that.
00:56
But in reality, that's not the problem we should be focusing on.
01:01
There is a more pressing danger, a bigger risk with AI,
01:04
that we need to fix first.
01:07
So we are back to this question:
01:09
How many decisions have been made about you today by AI?
01:15
And how many of these
01:17
were based on your gender, your race or your background?
01:24
Algorithms are being used all the time
01:27
to make decisions about who we are and what we want.
01:32
Some of the women in this room will know what I'm talking about
01:35
if you've been made to sit through those pregnancy test adverts on YouTube
01:39
like 1,000 times.
01:41
Or you've scrolled past adverts of fertility clinics
01:44
on your Facebook feed.
01:47
Or in my case, Indian marriage bureaus.
01:50
(Laughter)
01:51
But AI isn't just being used to make decisions
01:54
about what products we want to buy
01:56
or which show we want to binge watch next.
02:01
I wonder how you'd feel about someone who thought things like this:
02:06
"A black or Latino person
02:08
is less likely than a white person to pay off their loan on time."
02:13
"A person called John makes a better programmer
02:16
than a person called Mary."
02:19
"A black man is more likely to be a repeat offender than a white man."
02:26
You're probably thinking,
02:28
"Wow, that sounds like a pretty sexist, racist person," right?
02:33
These are some real decisions that AI has made very recently,
02:37
based on the biases it has learned from us,
02:40
from the humans.
02:43
AI is being used to help decide whether or not you get that job interview;
02:48
how much you pay for your car insurance;
02:51
how good your credit score is;
02:52
and even what rating you get in your annual performance review.
02:57
But these decisions are all being filtered through
03:00
its assumptions about our identity, our race, our gender, our age.
03:08
How is that happening?
03:10
Now, imagine an AI is helping a hiring manager
03:14
find the next tech leader in the company.
03:16
So far, the manager has been hiring mostly men.
03:20
So the AI learns men are more likely to be programmers than women.
03:25
And it's a very short leap from there to:
03:28
men make better programmers than women.
03:31
We have reinforced our own bias into the AI.
03:35
And now, it's screening out female candidates.
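To make the mechanism in that hiring example concrete, here is a minimal, hypothetical sketch (my own illustration, not part of the talk) of how a model trained on biased historical hiring decisions absorbs gender as a signal. It assumes Python with NumPy and scikit-learn; the synthetic data, the feature choices and the numbers are invented purely for demonstration.

# Sketch: a hiring model learns bias from biased historical labels.
# Assumes numpy and scikit-learn are installed; data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Feature 0: coding-test score, distributed the same for everyone.
score = rng.normal(0.0, 1.0, n)
# Feature 1: gender flag (1 = male, 0 = female), for illustration only.
gender = rng.integers(0, 2, n)

# Historical labels: the past manager hired mostly men, so gender leaks
# into the outcome even though the test score is what should matter.
hired = ((score + 1.5 * gender + rng.normal(0.0, 0.5, n)) > 1.0).astype(int)

X = np.column_stack([score, gender])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical test scores, differing only in gender.
candidates = np.array([[0.5, 1.0],   # male
                       [0.5, 0.0]])  # female
print(model.predict_proba(candidates)[:, 1])
# With this synthetic data, the male candidate gets a higher "hire"
# probability despite the identical scores -- the model has learned the
# manager's history, not ability.

Nothing in this sketch says "prefer men"; the preference comes entirely from the labels the model was trained on, which is how a bias like the one described above gets reinforced and then quietly automated.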
03:40
Hang on, if a human hiring manager did that,
03:43
we'd be outraged, we wouldn't allow it.
03:46
This kind of gender discrimination is not OK.
03:49
And yet somehow, AI has become above the law,
03:54
because a machine made the decision.
03:57
That's not it.
03:59
We are also reinforcing our bias in how we interact with AI.
04:04
How often do you use a voice assistant like Siri, Alexa or even Cortana?
04:10
They all have two things in common:
04:13
one, they can never get my name right,
04:16
and second, they are all female.
04:20
They are designed to be our obedient servants,
04:23
turning your lights on and off, ordering your shopping.
04:27
You get male AIs too, but they tend to be more high-powered,
04:30
like IBM Watson, making business decisions,
04:33
Salesforce Einstein or ROSS, the robot lawyer.
04:38
So poor robots, even they suffer from sexism in the workplace.
04:42
(Laughter)
04:44
Think about how these two things combine
04:47
and affect a kid growing up in today's world around AI.
04:52
So they're doing some research for a school project
04:55
and they Google images of CEO.
04:58
The algorithm shows them results of mostly men.
05:01
And now, they Google personal assistant.
05:04
As you can guess, it shows them mostly females.
05:07
And then they want to put on some music, and maybe order some food,
05:11
and now, they are barking orders at an obedient female voice assistant.
05:19
Some of our brightest minds are creating this technology today.
05:24
Technology that they could have created in any way they wanted.
05:29
And yet, they have chosen to create it in the style of a 1950s "Mad Men" secretary.
05:34
Yay!
05:36
But OK, don't worry,
05:38
this is not going to end with me telling you
05:40
that we are all heading towards sexist, racist machines running the world.
05:44
The good news about AI is that it is entirely within our control.
05:51
We get to teach the right values, the right ethics to AI.
05:56
So there are three things we can do.
05:58
One, we can be aware of our own biases
06:01
and the bias in machines around us.
06:04
Two, we can make sure that diverse teams are building this technology.
06:09
And three, we have to give it diverse experiences to learn from.
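The first of those three points, noticing the bias in the machines around us, can start with something very simple. Below is a small, hypothetical audit sketch (again my own illustration, not from the talk) that compares how often a system selects candidates from each group. The numbers are made up, and the 0.8 "rule of thumb" is only a commonly cited screening threshold, not a definitive test of fairness.

# Sketch: audit a model's decisions by comparing selection rates per group.
# Plain Python, no external dependencies; the decisions below are hypothetical.

def selection_rate(decisions):
    """Fraction of candidates the system selected (1 = selected)."""
    return sum(decisions) / len(decisions)

# Hypothetical model outputs, split by the candidates' gender.
decisions_by_group = {
    "women": [1, 0, 0, 1, 0, 0, 0, 1, 0, 0],
    "men":   [1, 1, 0, 1, 1, 0, 1, 1, 0, 1],
}

rates = {group: selection_rate(d) for group, d in decisions_by_group.items()}
ratio = rates["women"] / rates["men"]  # sometimes called the disparate-impact ratio

for group, rate in rates.items():
    print(f"{group}: selected {rate:.0%} of the time")
print(f"ratio (women/men): {ratio:.2f}")
# A ratio far below 1.0 (0.8 is a common rule of thumb) is a signal that the
# system deserves a closer look before anyone trusts its decisions.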
06:14
I can talk about the first two from personal experience.
06:18
When you work in technology
06:19
and you don't look like a Mark Zuckerberg or Elon Musk,
06:23
your life is a little bit difficult, your ability gets questioned.
06:27
Here's just one example.
06:29
Like most developers, I often join online tech forums
06:33
and share my knowledge to help others.
06:36
And I've found,
06:37
when I log on as myself, with my own photo, my own name,
06:41
I tend to get questions or comments like this:
06:46
"What makes you think you're qualified to talk about AI?"
06:50
"What makes you think you know about machine learning?"
06:53
So, as you do, I made a new profile,
06:57
and this time, instead of my own picture, I chose a cat with a jet pack on it.
07:02
And I chose a name that did not reveal my gender.
07:05
You can probably guess where this is going, right?
07:08
So, this time, I didn't get any of those patronizing comments about my ability
07:15
and I was able to actually get some work done.
07:19
And it sucks, guys.
07:21
I've been building robots since I was 15,
07:23
I have a few degrees in computer science,
07:26
and yet, I had to hide my gender
07:28
in order for my work to be taken seriously.
07:31
So, what's going on here?
07:33
Are men just better at technology than women?
07:37
Another study found
07:39
that when women coders on one platform hid their gender, like myself,
07:44
their code was accepted four percent more than men's.
07:48
So this is not about the talent.
07:51
This is about an elitism in AI
07:54
that says a programmer needs to look like a certain person.
07:59
What we really need to do to make AI better
08:02
is bring people from all kinds of backgrounds.
08:06
We need people who can write and tell stories
08:09
to help us create personalities of AI.
08:12
We need people who can solve problems.
08:15
We need people who face different challenges
08:18
and we need people who can tell us what are the real issues that need fixing
08:24
and help us find ways that technology can actually fix it.
08:29
Because, when people from diverse backgrounds come together,
08:33
when we build things in the right way,
08:35
the possibilities are limitless.
08:38
And that's what I want to end by talking to you about.
08:42
Less racist robots, less machines that are going to take our jobs --
08:46
and more about what technology can actually achieve.
08:50
So, yes, some of the energy in the world of AI,
08:53
in the world of technology
08:55
is going to be about what ads you see on your stream.
08:59
But a lot of it is going towards making the world so much better.
09:05
Think about a pregnant woman in the Democratic Republic of Congo,
09:09
who has to walk 17 hours to her nearest rural prenatal clinic
09:13
to get a checkup.
09:15
What if she could get a diagnosis on her phone, instead?
09:19
Or think about what AI could do
09:21
for those one in three women in South Africa
09:24
who face domestic violence.
09:27
If it wasn't safe to talk out loud,
09:29
they could get an AI service to raise alarm,
09:32
get financial and legal advice.
09:35
These are all real examples of projects that people, including myself,
09:41
are working on right now, using AI.
09:45
So, I'm sure in the next couple of days there will be yet another news story
09:49
about the existential risk,
09:51
robots taking over and coming for your jobs.
09:54
(Laughter)
09:55
And when something like that happens,
09:57
I know I'll get the same messages worrying about the future.
10:01
But I feel incredibly positive about this technology.
10:07
This is our chance to remake the world into a much more equal place.
10:14
But to do that, we need to build it the right way from the get-go.
10:19
We need people of different genders, races, sexualities and backgrounds.
10:26
We need women to be the makers
10:28
and not just the machines who do the makers' bidding.
10:33
We need to think very carefully what we teach machines,
10:37
what data we give them,
10:39
so they don't just repeat our own past mistakes.
10:44
So I hope I leave you thinking about two things.
10:48
First, I hope you leave thinking about bias today.
10:53
And that the next time you scroll past an advert
10:56
that assumes you are interested in fertility clinics
10:59
or online betting websites,
11:02
that you think and remember
11:04
that the same technology is assuming that a black man will reoffend.
11:09
Or that a woman is more likely to be a personal assistant than a CEO.
11:14
And I hope that reminds you that we need to do something about it.
11:20
And second,
11:22
I hope you think about the fact
11:24
that you don't need to look a certain way
11:26
or have a certain background in engineering or technology
11:30
to create AI,
11:31
which is going to be a phenomenal force for our future.
11:36
You don't need to look like a Mark Zuckerberg,
11:38
you can look like me.
11:41
And it is up to all of us in this room
11:44
to convince the governments and the corporations
11:46
to build AI technology for everyone,
11:49
including the edge cases.
11:52
And for us all to get education
11:54
about this phenomenal technology in the future.
11:58
Because if we do that,
12:00
then we've only just scratched the surface of what we can achieve with AI.
12:05
Thank you.
12:06
(Applause)



Data provided by TED.
