ABOUT THE SPEAKER
Bruce Schneier - Security expert
Bruce Schneier thinks hard about security -- as a computer security guru, and as a philosopher of the larger notion of making a safer world.

Why you should listen

Bruce Schneier is an internationally renowned security technologist and author. Described by the Economist as a "security guru," he is best known as a refreshingly candid and lucid security critic and commentator. When people want to know how security really works, they turn to Schneier.

His first bestseller, Applied Cryptography, explained how the arcane science of secret codes actually works, and was described by Wired as "the book the National Security Agency wanted never to be published." His book on computer and network security, Secrets and Lies, was called by Fortune "[a] jewel box of little surprises you can actually use." Beyond Fear tackles the problems of security from the small to the large: personal safety, crime, corporate security, national security. His current book, Schneier on Security, offers insight into everything from the risk of identity theft (vastly overrated) to the long-range security threat of unchecked presidential power and the surprisingly simple way to tamper-proof elections.

Schneier publishes a free monthly newsletter, Crypto-Gram, with over 150,000 readers. In its ten years of regular publication, Crypto-Gram has become one of the most widely read forums for free-wheeling discussions, pointed critiques and serious debate about security. As head curmudgeon at the table, Schneier explains, debunks and draws lessons from security stories that make the news.

TEDxPSU

Bruce Schneier: The security mirage


958,315 views

Computer security expert Bruce Schneier argues that the feeling of security and the reality of security don't always match. At TEDxPSU, he explains why we spend billions addressing news-story risks, like the "security theater" now playing at your local airport, while neglecting more probable risks -- and how we can break out of this pattern.


00:15
So security is two different things: it's a feeling, and it's a reality. And they're different. You could feel secure even if you're not. And you can be secure even if you don't feel it. Really, we have two separate concepts mapped onto the same word. And what I want to do in this talk is to split them apart -- figuring out when they diverge and how they converge. And language is actually a problem here. There aren't a lot of good words for the concepts we're going to talk about.

00:48
So if you look at security from economic terms, it's a trade-off. Every time you get some security, you're always trading off something. Whether this is a personal decision -- whether you're going to install a burglar alarm in your home -- or a national decision -- where you're going to invade some foreign country -- you're going to trade off something, either money or time, convenience, capabilities, maybe fundamental liberties. And the question to ask when you look at a security anything is not whether this makes us safer, but whether it's worth the trade-off.

01:22
You've heard in the past several years, the world is safer because Saddam Hussein is not in power. That might be true, but it's not terribly relevant. The question is, was it worth it? And you can make your own decision, and then you'll decide whether the invasion was worth it. That's how you think about security -- in terms of the trade-off.

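(The "worth it" question above is, at bottom, an expected-cost comparison. A minimal sketch -- with numbers invented purely for illustration, not taken from the talk -- might look like this:)

```python
# A hedged sketch of "is it worth the trade-off?" as expected cost.
# Every number here is invented for illustration.

def expected_annual_cost(p_incident, loss, measure_cost):
    """Cost of the countermeasure plus the residual expected loss."""
    return measure_cost + p_incident * loss

# Burglar-alarm decision with made-up figures:
without_alarm = expected_annual_cost(p_incident=0.02, loss=10_000, measure_cost=0)     # $200/yr
with_alarm    = expected_annual_cost(p_incident=0.005, loss=10_000, measure_cost=300)  # $350/yr

print(f"without alarm: ${without_alarm:.0f}/yr, with alarm: ${with_alarm:.0f}/yr")
# With these invented numbers the alarm does make you safer
# (2% -> 0.5% burglary risk) yet still fails the trade-off test --
# exactly the safer-vs-worth-it distinction Schneier is drawing.
```
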
01:41
Now there's often no right or wrong here. Some of us have a burglar alarm system at home, and some of us don't. And it'll depend on where we live, whether we live alone or have a family, how much cool stuff we have, how much we're willing to accept the risk of theft. In politics also, there are different opinions. A lot of times, these trade-offs are about more than just security, and I think that's really important.

02:08
Now people have a natural intuition about these trade-offs. We make them every day -- last night in my hotel room, when I decided to double-lock the door, or you in your car when you drove here, when we go eat lunch and decide the food's not poison and we'll eat it. We make these trade-offs again and again, multiple times a day. We often won't even notice them. They're just part of being alive; we all do it. Every species does it.

02:36
Imagine a rabbit in a field, eating grass, and the rabbit's going to see a fox. That rabbit will make a security trade-off: "Should I stay, or should I flee?" And if you think about it, the rabbits that are good at making that trade-off will tend to live and reproduce, and the rabbits that are bad at it will get eaten or starve.

02:56
So you'd think that us, as a successful species on the planet -- you, me, everybody -- would be really good at making these trade-offs. Yet it seems, again and again, that we're hopelessly bad at it. And I think that's a fundamentally interesting question. I'll give you the short answer. The answer is, we respond to the feeling of security and not the reality.

03:21
Now most of the time, that works. Most of the time, feeling and reality are the same. Certainly that's true for most of human prehistory. We've developed this ability because it makes evolutionary sense. One way to think of it is that we're highly optimized for risk decisions that are endemic to living in small family groups in the East African highlands in 100,000 B.C. 2010 New York, not so much.

03:56
Now there are several biases in risk perception. A lot of good experiments in this. And you can see certain biases that come up again and again. So I'll give you four. We tend to exaggerate spectacular and rare risks and downplay common risks -- so flying versus driving. The unknown is perceived to be riskier than the familiar. One example would be, people fear kidnapping by strangers when the data supports kidnapping by relatives is much more common. This is for children. Third, personified risks are perceived to be greater than anonymous risks -- so Bin Laden is scarier because he has a name. And the fourth is people underestimate risks in situations they do control and overestimate them in situations they don't control. So once you take up skydiving or smoking, you downplay the risks. If a risk is thrust upon you -- terrorism was a good example -- you'll overplay it because you don't feel like it's in your control.

05:02
There are a bunch of other of these biases, these cognitive biases, that affect our risk decisions. There's the availability heuristic, which basically means we estimate the probability of something by how easy it is to bring instances of it to mind. So you can imagine how that works. If you hear a lot about tiger attacks, there must be a lot of tigers around. You don't hear about lion attacks, there aren't a lot of lions around. This works until you invent newspapers. Because what newspapers do is they repeat again and again rare risks. I tell people, if it's in the news, don't worry about it. Because by definition, news is something that almost never happens. (Laughter) When something is so common, it's no longer news -- car crashes, domestic violence -- those are the risks you worry about.

05:53
We're also a species of storytellers. We respond to stories more than data. And there's some basic innumeracy going on. I mean, the joke "One, Two, Three, Many" is kind of right. We're really good at small numbers. One mango, two mangoes, three mangoes, 10,000 mangoes, 100,000 mangoes -- it's still more mangoes you can eat before they rot. So one half, one quarter, one fifth -- we're good at that. One in a million, one in a billion -- they're both almost never. So we have trouble with the risks that aren't very common.

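(His "one in a million is almost never" point can be made concrete with one line of probability arithmetic; the exposure counts below are invented for illustration:)

```python
# How a one-in-a-million per-event risk accumulates over repeated exposure:
# P(at least once in n tries) = 1 - (1 - p)**n
p = 1e-6  # the "one in a million" from the talk

for n in (1, 1_000, 1_000_000):  # invented exposure counts
    print(f"{n:>9,} exposures -> {1 - (1 - p) ** n:.4%} chance of at least one event")

# 1 exposure:          0.0001% -- effectively "almost never", as he says.
# 1,000,000 exposures: ~63%    -- rare events stop being rare at scale,
# the kind of accumulation our small-number intuition misjudges.
```
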
06:25
And what these cognitive biases do is they act as filters between us and reality. And the result is that feeling and reality get out of whack, they get different. Now you either have a feeling -- you feel more secure than you are. There's a false sense of security. Or the other way, and that's a false sense of insecurity. I write a lot about "security theater," which are products that make people feel secure, but don't actually do anything. There's no real word for stuff that makes us secure, but doesn't make us feel secure. Maybe it's what the CIA's supposed to do for us.

07:03
So back to economics. If economics, if the market, drives security, and if people make trade-offs based on the feeling of security, then the smart thing for companies to do for the economic incentives is to make people feel secure. And there are two ways to do this. One, you can make people actually secure and hope they notice. Or two, you can make people just feel secure and hope they don't notice.

07:35
So what makes people notice? Well a couple of things: understanding of the security, of the risks, the threats, the countermeasures, how they work. But if you know stuff, you're more likely to have your feelings match reality. Enough real world examples helps. Now we all know the crime rate in our neighborhood, because we live there, and we get a feeling about it that basically matches reality. Security theater's exposed when it's obvious that it's not working properly.

08:10
Okay, so what makes people not notice? Well, a poor understanding. If you don't understand the risks, you don't understand the costs, you're likely to get the trade-off wrong, and your feeling doesn't match reality. Not enough examples. There's an inherent problem with low probability events. If, for example, terrorism almost never happens, it's really hard to judge the efficacy of counter-terrorist measures. This is why you keep sacrificing virgins, and why your unicorn defenses are working just great. There aren't enough examples of failures. Also, feelings that are clouding the issues -- the cognitive biases I talked about earlier, fears, folk beliefs, basically an inadequate model of reality.

09:02
So let me complicate things. I have feeling and reality. I want to add a third element. I want to add model. Feeling and model in our head, reality is the outside world. It doesn't change; it's real. So feeling is based on our intuition. Model is based on reason. That's basically the difference. In a primitive and simple world, there's really no reason for a model because feeling is close to reality. You don't need a model. But in a modern and complex world, you need models to understand a lot of the risks we face. There's no feeling about germs. You need a model to understand them. So this model is an intelligent representation of reality. It's, of course, limited by science, by technology. We couldn't have a germ theory of disease before we invented the microscope to see them. It's limited by our cognitive biases. But it has the ability to override our feelings. Where do we get these models? We get them from others. We get them from religion, from culture, teachers, elders.

10:19
A couple years ago, I was in South Africa on safari. The tracker I was with grew up in Kruger National Park. He had some very complex models of how to survive. And it depended on if you were attacked by a lion or a leopard or a rhino or an elephant -- and when you had to run away, and when you couldn't run away, and when you had to climb a tree -- when you could never climb a tree. I would have died in a day, but he was born there, and he understood how to survive. I was born in New York City. I could have taken him to New York, and he would have died in a day. (Laughter) Because we had different models based on our different experiences.

10:58
Models can come from the media, from our elected officials. Think of models of terrorism, child kidnapping, airline safety, car safety. Models can come from industry. The two I'm following are surveillance cameras, ID cards, quite a lot of our computer security models come from there. A lot of models come from science. Health models are a great example. Think of cancer, of bird flu, swine flu, SARS. All of our feelings of security about those diseases come from models given to us, really, by science filtered through the media.

11:40
So models can change. Models are not static. As we become more comfortable in our environments, our model can move closer to our feelings. So an example might be, if you go back 100 years ago when electricity was first becoming common, there were a lot of fears about it. I mean, there were people who were afraid to push doorbells, because there was electricity in there, and that was dangerous. For us, we're very facile around electricity. We change light bulbs without even thinking about it. Our model of security around electricity is something we were born into. It hasn't changed as we were growing up. And we're good at it.

12:27
Or think of the risks on the Internet across generations -- how your parents approach Internet security, versus how you do, versus how our kids will. Models eventually fade into the background. Intuitive is just another word for familiar. So as your model is close to reality, and it converges with feelings, you often don't know it's there.

12:52
So a nice example of this came from last year and swine flu. When swine flu first appeared, the initial news caused a lot of overreaction. Now it had a name, which made it scarier than the regular flu, even though it was more deadly. And people thought doctors should be able to deal with it. So there was that feeling of lack of control. And those two things made the risk more than it was. As the novelty wore off, the months went by, there was some amount of tolerance, people got used to it. There was no new data, but there was less fear. By autumn, people thought the doctors should have solved this already. And there's kind of a bifurcation -- people had to choose between fear and acceptance -- actually fear and indifference -- they kind of chose suspicion. And when the vaccine appeared last winter, there were a lot of people -- a surprising number -- who refused to get it -- as a nice example of how people's feelings of security change, how their model changes, sort of wildly with no new information, with no new input. This kind of thing happens a lot.

14:12
I'm going to give one more complication. We have feeling, model, reality. I have a very relativistic view of security. I think it depends on the observer. And most security decisions have a variety of people involved. And stakeholders with specific trade-offs will try to influence the decision. And I call that their agenda. And you see agenda -- this is marketing, this is politics -- trying to convince you to have one model versus another, trying to convince you to ignore a model and trust your feelings, marginalizing people with models you don't like. This is not uncommon.

14:57
An example, a great example, is the risk of smoking. In the history of the past 50 years, the smoking risk shows how a model changes, and it also shows how an industry fights against a model it doesn't like. Compare that to the secondhand smoke debate -- probably about 20 years behind. Think about seat belts. When I was a kid, no one wore a seat belt. Nowadays, no kid will let you drive if you're not wearing a seat belt. Compare that to the airbag debate -- probably about 30 years behind. All examples of models changing.

15:36
What we learn is that changing models is hard. Models are hard to dislodge. If they equal your feelings, you don't even know you have a model. And there's another cognitive bias I'll call confirmation bias, where we tend to accept data that confirms our beliefs and reject data that contradicts our beliefs. So evidence against our model, we're likely to ignore, even if it's compelling. It has to get very compelling before we'll pay attention.

16:08
New models that extend long periods of time are hard. Global warming is a great example. We're terrible at models that span 80 years. We can do to the next harvest. We can often do until our kids grow up. But 80 years, we're just not good at. So it's a very hard model to accept. We can have both models in our head simultaneously, right, that kind of problem where we're holding both beliefs together, right, the cognitive dissonance. Eventually, the new model will replace the old model.

16:44
Strong feelings can create a model. September 11th created a security model in a lot of people's heads. Also, personal experiences with crime can do it, personal health scare, a health scare in the news. You'll see these called flashbulb events by psychiatrists. They can create a model instantaneously, because they're very emotive.

17:09
So in the technological world, we don't have experience to judge models. And we rely on others. We rely on proxies. I mean, this works as long as it's to correct others. We rely on government agencies to tell us what pharmaceuticals are safe. I flew here yesterday. I didn't check the airplane. I relied on some other group to determine whether my plane was safe to fly. We're here, none of us fear the roof is going to collapse on us, not because we checked, but because we're pretty sure the building codes here are good. It's a model we just accept pretty much by faith. And that's okay.

17:57
Now, what we want is people to get familiar enough with better models -- have it reflected in their feelings -- to allow them to make security trade-offs. Now when these go out of whack, you have two options. One, you can fix people's feelings, directly appeal to feelings. It's manipulation, but it can work. The second, more honest way is to actually fix the model.

18:26
Change happens slowly. The smoking debate took 40 years, and that was an easy one. Some of this stuff is hard. I mean really though, information seems like our best hope. And I lied. Remember I said feeling, model, reality; I said reality doesn't change. It actually does. We live in a technological world; reality changes all the time. So we might have -- for the first time in our species -- feeling chases model, model chases reality, reality's moving -- they might never catch up. We don't know.

19:04
But in the long-term, both feeling and reality are important. And I want to close with two quick stories to illustrate this. 1982 -- I don't know if people will remember this -- there was a short epidemic of Tylenol poisonings in the United States. It's a horrific story. Someone took a bottle of Tylenol, put poison in it, closed it up, put it back on the shelf. Someone else bought it and died. This terrified people. There were a couple of copycat attacks. There wasn't any real risk, but people were scared. And this is how the tamper-proof drug industry was invented. Those tamper-proof caps, that came from this. It's complete security theater. As a homework assignment, think of 10 ways to get around it. I'll give you one, a syringe. But it made people feel better. It made their feeling of security more match the reality.

19:54
Last story, a few years ago, a friend of mine gave birth. I visit her in the hospital. It turns out when a baby's born now, they put an RFID bracelet on the baby, put a corresponding one on the mother, so if anyone other than the mother takes the baby out of the maternity ward, an alarm goes off. I said, "Well, that's kind of neat. I wonder how rampant baby snatching is out of hospitals." I go home, I look it up. It basically never happens. But if you think about it, if you are a hospital, and you need to take a baby away from its mother, out of the room to run some tests, you better have some good security theater, or she's going to rip your arm off. (Laughter)

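(The bracelet system he describes reduces to a simple tag-pairing check at the ward exit. A toy sketch of that logic -- the tag IDs and pairing scheme are invented, not how any real hospital system works:)

```python
# Toy model of the maternity-ward alarm: a baby's RFID tag leaving
# without its paired mother's tag should trip the alarm.
# All identifiers are invented for illustration.

PAIRS = {"baby-042": "mother-042"}  # bracelets issued together at birth

def alarm_at_exit(tags_seen_at_gate):
    """True if any baby tag passes the gate without its paired mother tag."""
    return any(baby in tags_seen_at_gate and mother not in tags_seen_at_gate
               for baby, mother in PAIRS.items())

print(alarm_at_exit({"baby-042", "mother-042"}))  # False: mother carrying baby
print(alarm_at_exit({"baby-042"}))                # True: alarm sounds
```
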
20:33
So it's important for us, those of us who design security, who look at security policy, or even look at public policy in ways that affect security. It's not just reality; it's feeling and reality. What's important is that they be about the same. It's important that, if our feelings match reality, we make better security trade-offs. Thank you. (Applause)


