ABOUT THE SPEAKER
Damon Horowitz - Philosopher, entrepreneur
Damon Horowitz explores what is possible at the boundaries of technology and the humanities.

Why you should listen

Damon Horowitz is a philosophy professor and serial entrepreneur. He recently joined Google as In-House Philosopher / Director of Engineering, heading development of several initiatives involving social and search. He came to Google from Aardvark, the social search engine, where he was co-founder and CTO, overseeing product development and research strategy. Prior to Aardvark, Horowitz built several companies around applications of intelligent language processing. He co-founded Perspecta (acquired by Excite), was lead architect for Novation Biosciences (acquired by Agilent), and co-founded NewsDB (now Daylife).

Horowitz teaches courses in philosophy, cognitive science, and computer science at several institutions, including Stanford, NYU, University of Pennsylvania and San Quentin State Prison.

Get more information on the Prison University Project.

TEDxSiliconValley

Damon Horowitz: We need a "moral operating system"


795,617 views

On the TEDx stage in Silicon Valley, Damon Horowitz sums up the unprecedented power that technology has given us: the ability to know more, especially about one another. Drawing his audience into a philosophical discussion, he reminds us to look again at the basic philosophy, the moral principles, behind the never-ending stream of new inventions. Where is a workable moral operating system?


00:15
Power. That is the word that comes to mind. We're the new technologists. We have a lot of data, so we have a lot of power. How much power do we have?

00:26
Scene from a movie: "Apocalypse Now" -- great movie. We've got to get our hero, Captain Willard, to the mouth of the Nung River so he can go pursue Colonel Kurtz. The way we're going to do this is fly him in and drop him off. So the scene: the sky is filled with this fleet of helicopters carrying him in. And there's this loud, thrilling music in the background, this wild music.

00:45
♫ Dum da ta da dum ♫
♫ Dum da ta da dum ♫
♫ Da ta da da ♫

00:52
That's a lot of power. That's the kind of power I feel in this room. That's the kind of power we have because of all of the data that we have.

01:00
Let's take an example. What can we do with just one person's data? What can we do with that guy's data? I can look at your financial records. I can tell if you pay your bills on time. I know if you're good to give a loan to. I can look at your medical records; I can see if your pump is still pumping -- see if you're good to offer insurance to. I can look at your clicking patterns. When you come to my website, I actually know what you're going to do already because I've seen you visit millions of websites before. And I'm sorry to tell you, you're like a poker player, you have a tell. I can tell with data analysis what you're going to do before you even do it. I know what you like. I know who you are, and that's even before I look at your mail or your phone.

01:45
Those are the kinds of things we can do with the data that we have. But I'm not actually here to talk about what we can do. I'm here to talk about what we should do. What's the right thing to do?

02:04
Now I see some puzzled looks like, "Why are you asking us what's the right thing to do? We're just building this stuff. Somebody else is using it." Fair enough. But it brings me back. I think about World War II -- some of our great technologists then, some of our great physicists, studying nuclear fission and fusion -- just nuclear stuff. We gather together these physicists in Los Alamos to see what they'll build. We want the people building the technology thinking about what we should be doing with the technology.

02:41
So what should we be doing with that guy's data? Should we be collecting it, gathering it, so we can make his online experience better? So we can make money? So we can protect ourselves if he was up to no good? Or should we respect his privacy, protect his dignity and leave him alone? Which one is it? How should we figure it out? I know: crowdsource. Let's crowdsource this.

03:11
So to get people warmed up, let's start with an easy question -- something I'm sure everybody here has an opinion about: iPhone versus Android. Let's do a show of hands -- iPhone. Uh huh. Android. You'd think with a bunch of smart people we wouldn't be such suckers just for the pretty phones. (Laughter)

03:35
Next question, a little bit harder. Should we be collecting all of that guy's data to make his experiences better and to protect ourselves in case he's up to no good? Or should we leave him alone? Collect his data. Leave him alone. You're safe. It's fine. (Laughter)

04:00
Okay, last question -- harder question -- when trying to evaluate what we should do in this case, should we use a Kantian deontological moral framework, or should we use a Millian consequentialist one? Kant. Mill. Not as many votes. (Laughter)

04:30
Yeah, that's a terrifying result. Terrifying, because we have stronger opinions about our hand-held devices than about the moral framework we should use to guide our decisions. How do we know what to do with all the power we have if we don't have a moral framework? We know more about mobile operating systems, but what we really need is a moral operating system.

04:58
What's a moral operating system? We all know right and wrong, right? You feel good when you do something right, you feel bad when you do something wrong. Our parents teach us that: praise with the good, scold with the bad. But how do we figure out what's right and wrong? And from day to day, we have the techniques that we use. Maybe we just follow our gut. Maybe we take a vote -- we crowdsource. Or maybe we punt -- ask the legal department, see what they say. In other words, it's kind of random, kind of ad hoc, how we figure out what we should do.

05:33
And maybe, if we want to be on surer footing, what we really want is a moral framework that will help guide us there, that will tell us what kinds of things are right and wrong in the first place, and how would we know in a given situation what to do. So let's get a moral framework. We're numbers people, living by numbers. How can we use numbers as the basis for a moral framework?

05:56
I know a guy who did exactly that. A brilliant guy -- he's been dead 2,500 years. Plato, that's right. Remember him -- old philosopher? You were sleeping during that class. And Plato, he had a lot of the same concerns that we did. He was worried about right and wrong. He wanted to know what is just. But he was worried that all we seem to be doing is trading opinions about this. He says something's just. She says something else is just. It's kind of convincing when he talks and when she talks too. I'm just going back and forth; I'm not getting anywhere. I don't want opinions; I want knowledge. I want to know the truth about justice -- like we have truths in math.

06:38
In math, we know the objective facts. Take a number, any number -- two. Favorite number. I love that number. There are truths about two. If you've got two of something, you add two more, you get four. That's true no matter what thing you're talking about. It's an objective truth about the form of two, the abstract form. When you have two of anything -- two eyes, two ears, two noses, just two protrusions -- those all partake of the form of two. They all participate in the truths that two has. They all have two-ness in them. And therefore, it's not a matter of opinion.

07:13
What if, Plato thought, ethics was like math? What if there were a pure form of justice? What if there are truths about justice, and you could just look around in this world and see which things participated, partook of that form of justice? Then you would know what was really just and what wasn't. It wouldn't be a matter of just opinion or just appearances.

07:37
That's a stunning vision. I mean, think about that. How grand. How ambitious. That's as ambitious as we are. He wants to solve ethics. He wants objective truths. If you think that way, you have a Platonist moral framework.

07:54
If you don't think that way, well, you have a lot of company in the history of Western philosophy, because the tidy idea, you know, people criticized it. Aristotle, in particular, he was not amused. He thought it was impractical. Aristotle said, "We should seek only so much precision in each subject as that subject allows." Aristotle thought ethics wasn't a lot like math. He thought ethics was a matter of making decisions in the here-and-now using our best judgment to find the right path. If you think that, Plato's not your guy. But don't give up.

08:27
Maybe there's another way that we can use numbers as the basis of our moral framework. How about this: What if in any situation you could just calculate, look at the choices, measure out which one's better and know what to do? That sound familiar? That's a utilitarian moral framework. John Stuart Mill was a great advocate of this -- nice guy besides -- and only been dead 200 years. So basis of utilitarianism -- I'm sure you're familiar at least. The three people who voted for Mill before are familiar with this. But here's the way it works. What if morals, what if what makes something moral is just a matter of if it maximizes pleasure and minimizes pain? It does something intrinsic to the act. It's not like its relation to some abstract form. It's just a matter of the consequences. You just look at the consequences and see if, overall, it's for the good or for the worse. That would be simple. Then we know what to do.

09:22
Let's take an example. Suppose I go up and I say, "I'm going to take your phone." Not just because it rang earlier, but I'm going to take it because I made a little calculation. I thought, that guy looks suspicious. And what if he's been sending little messages to Bin Laden's hideout -- or whoever took over after Bin Laden -- and he's actually like a terrorist, a sleeper cell. I'm going to find that out, and when I find that out, I'm going to prevent a huge amount of damage that he could cause. That has a very high utility to prevent that damage. And compared to the little pain that it's going to cause -- because it's going to be embarrassing when I'm looking on his phone and seeing that he has a Farmville problem and that whole bit -- that's overwhelmed by the value of looking at the phone. If you feel that way, that's a utilitarian choice.
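[Editor's aside: for readers who want to see how literal that "little calculation" could be, here is a minimal sketch of the utilitarian bookkeeping described above. The two options, the numeric pleasure and pain scores, and the Option helper class are hypothetical, invented only to show the shape of the reasoning; nothing in the talk specifies them.]

```python
# Toy illustration of the utilitarian calculus described in the talk:
# score each option by the pleasure it produces minus the pain it causes,
# then pick the option with the highest net utility.
# All option names and numbers are made up for illustration only.

from dataclasses import dataclass


@dataclass
class Option:
    name: str
    pleasure: float  # expected benefit (e.g., harm prevented)
    pain: float      # expected cost (e.g., embarrassment, lost privacy)

    @property
    def net_utility(self) -> float:
        # Utilitarian score: benefit minus cost.
        return self.pleasure - self.pain


options = [
    Option("look at his phone", pleasure=9.0, pain=2.0),
    Option("leave him alone", pleasure=1.0, pain=0.0),
]

best = max(options, key=lambda o: o.net_utility)
for o in options:
    print(f"{o.name}: net utility {o.net_utility:+.1f}")
print(f"Utilitarian choice: {best.name}")
```

Whatever numbers you plug in, the procedure is the same: score the consequences and pick the maximum, which is exactly the move the Kantian objection in the next passage refuses to make.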
10:10
But maybe you don't feel that way either. Maybe you think, it's his phone. It's wrong to take his phone because he's a person and he has rights and he has dignity, and we can't just interfere with that. He has autonomy. It doesn't matter what the calculations are. There are things that are intrinsically wrong -- like lying is wrong, like torturing innocent children is wrong. Kant was very good on this point, and he said it a little better than I'll say it. He said we should use our reason to figure out the rules by which we should guide our conduct, and then it is our duty to follow those rules. It's not a matter of calculation.

10:51
So let's stop. We're right in the thick of it, this philosophical thicket. And this goes on for thousands of years, because these are hard questions, and I've only got 15 minutes. So let's cut to the chase. How should we be making our decisions? Is it Plato, is it Aristotle, is it Kant, is it Mill? What should we be doing? What's the answer? What's the formula that we can use in any situation to determine what we should do, whether we should use that guy's data or not? What's the formula?

11:25
There's not a formula. There's not a simple answer. Ethics is hard. Ethics requires thinking. And that's uncomfortable.

11:40
I know; I spent a lot of my career in artificial intelligence, trying to build machines that could do some of this thinking for us, that could give us answers. But they can't. You can't just take human thinking and put it into a machine. We're the ones who have to do it. Happily, we're not machines, and we can do it. Not only can we think, we must.

12:05
Hannah Arendt said, "The sad truth is that most evil done in this world is not done by people who choose to be evil. It arises from not thinking." That's what she called the "banality of evil." And the response to that is that we demand the exercise of thinking from every sane person.

12:29
So let's do that. Let's think. In fact, let's start right now. Every person in this room do this: think of the last time you had a decision to make where you were worried to do the right thing, where you wondered, "What should I be doing?" Bring that to mind, and now reflect on that and say, "How did I come up that decision? What did I do? Did I follow my gut? Did I have somebody vote on it? Or did I punt to legal?" Or now we have a few more choices. "Did I evaluate what would be the highest pleasure like Mill would? Or like Kant, did I use reason to figure out what was intrinsically right?" Think about it. Really bring it to mind. This is important. It is so important we are going to spend 30 seconds of valuable TEDTalk time doing nothing but thinking about this. Are you ready? Go.

13:33
Stop. Good work. What you just did, that's the first step towards taking responsibility for what we should do with all of our power.

13:45
Now the next step -- try this. Go find a friend and explain to them how you made that decision. Not right now. Wait till I finish talking. Do it over lunch. And don't just find another technologist friend; find somebody different than you. Find an artist or a writer -- or, heaven forbid, find a philosopher and talk to them. In fact, find somebody from the humanities. Why? Because they think about problems differently than we do as technologists.

14:13
Just a few days ago, right across the street from here, there was hundreds of people gathered together. It was technologists and humanists at that big BiblioTech Conference. And they gathered together because the technologists wanted to learn what it would be like to think from a humanities perspective. You have someone from Google talking to someone who does comparative literature. You're thinking about the relevance of 17th century French theater -- how does that bear upon venture capital? Well that's interesting. That's a different way of thinking. And when you think in that way, you become more sensitive to the human considerations, which are crucial to making ethical decisions.

14:49
So imagine that right now you went and you found your musician friend. And you're telling him what we're talking about, about our whole data revolution and all this -- maybe even hum a few bars of our theme music.

15:00
♫ Dum ta da da dum dum ta da da dum ♫

15:03
Well, your musician friend will stop you and say, "You know, the theme music for your data revolution, that's an opera, that's Wagner. It's based on Norse legend. It's Gods and mythical creatures fighting over magical jewelry."

15:19
That's interesting. Now it's also a beautiful opera, and we're moved by that opera. We're moved because it's about the battle between good and evil, about right and wrong. And we care about right and wrong. We care what happens in that opera. We care what happens in "Apocalypse Now." And we certainly care what happens with our technologies. We have so much power today, it is up to us to figure out what to do, and that's the good news. We're the ones writing this opera. This is our movie. We figure out what will happen with this technology. We determine how this will all end.

16:04
Thank you.

16:06
(Applause)
