ABOUT THE SPEAKER
Bruce Schneier - Security expert
Bruce Schneier thinks hard about security -- as a computer security guru, and as a philosopher of the larger notion of making a safer world.

Why you should listen

Bruce Schneier is an internationally renowned security technologist and author. Described by the Economist as a "security guru," he is best known as a refreshingly candid and lucid security critic and commentator. When people want to know how security really works, they turn to Schneier.

His first bestseller, Applied Cryptography, explained how the arcane science of secret codes actually works, and was described by Wired as "the book the National Security Agency wanted never to be published." His book on computer and network security, Secrets and Lies, was called by Fortune "[a] jewel box of little surprises you can actually use." Beyond Fear tackles the problems of security from the small to the large: personal safety, crime, corporate security, national security. His current book, Schneier on Security, offers insight into everything from the risk of identity theft (vastly overrated) to the long-range security threat of unchecked presidential power and the surprisingly simple way to tamper-proof elections.

Schneier publishes a free monthly newsletter, Crypto-Gram, with over 150,000 readers. In its ten years of regular publication, Crypto-Gram has become one of the most widely read forums for free-wheeling discussions, pointed critiques and serious debate about security. As head curmudgeon at the table, Schneier explains, debunks and draws lessons from security stories that make the news.

TEDxPSU

Bruce Schneier: The security mirage

958,315 views

Security expert Bruce Schneier points out that how safe people feel often differs from how safe they actually are. At TEDxPSU, he explains why we spend billions addressing the risks we see in news stories, while the measures used in domestic airports amount to "security theater" offering only the illusion of safety, even as we ignore the everyday risks around us. He also explains how we can break out of this pattern.

00:15
So security is two different things: it's a feeling, and it's a reality. And they're different. You could feel secure even if you're not. And you can be secure even if you don't feel it. Really, we have two separate concepts mapped onto the same word. And what I want to do in this talk is to split them apart -- figuring out when they diverge and how they converge. And language is actually a problem here. There aren't a lot of good words for the concepts we're going to talk about.

00:48
So if you look at security from economic terms, it's a trade-off. Every time you get some security, you're always trading off something. Whether this is a personal decision -- whether you're going to install a burglar alarm in your home -- or a national decision -- where you're going to invade some foreign country -- you're going to trade off something, either money or time, convenience, capabilities, maybe fundamental liberties. And the question to ask when you look at a security anything is not whether this makes us safer, but whether it's worth the trade-off.
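
To make that trade-off concrete, here is a minimal sketch of the expected-value arithmetic behind "is it worth it" -- all figures are invented for illustration and are not from the talk:

```python
# Hedged sketch: a security measure is "worth it" only if it removes
# more expected loss than it costs. All numbers below are hypothetical.

def expected_loss(p_incident: float, loss: float) -> float:
    """Expected loss per year: incident probability times incident cost."""
    return p_incident * loss

loss_if_burgled = 20_000.0     # assumed cost of one burglary ($)
p_without_alarm = 0.02         # assumed annual burglary probability
p_with_alarm = 0.005           # assumed probability with an alarm installed
alarm_cost_per_year = 400.0    # assumed yearly cost of the alarm ($)

risk_removed = (expected_loss(p_without_alarm, loss_if_burgled)
                - expected_loss(p_with_alarm, loss_if_burgled))

print(f"expected loss removed: ${risk_removed:.0f}/year")
print(f"alarm cost:            ${alarm_cost_per_year:.0f}/year")
print("worth it" if risk_removed > alarm_cost_per_year else "not worth it")
```

On these invented numbers the alarm genuinely makes you safer, yet still fails the trade-off test -- which is exactly the distinction between "does it make us safer" and "is it worth it".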

01:22
You've heard in the past several years, the world is safer because Saddam Hussein is not in power. That might be true, but it's not terribly relevant. The question is, was it worth it? And you can make your own decision, and then you'll decide whether the invasion was worth it. That's how you think about security -- in terms of the trade-off.

01:41
Now there's often no right or wrong here. Some of us have a burglar alarm system at home, and some of us don't. And it'll depend on where we live, whether we live alone or have a family, how much cool stuff we have, how much we're willing to accept the risk of theft. In politics also, there are different opinions. A lot of times, these trade-offs are about more than just security, and I think that's really important.

02:08
Now people have a natural intuition about these trade-offs. We make them every day -- last night in my hotel room, when I decided to double-lock the door, or you in your car when you drove here, when we go eat lunch and decide the food's not poison and we'll eat it. We make these trade-offs again and again, multiple times a day. We often won't even notice them. They're just part of being alive; we all do it. Every species does it.

02:36
Imagine a rabbit in a field, eating grass, and the rabbit's going to see a fox. That rabbit will make a security trade-off: "Should I stay, or should I flee?" And if you think about it, the rabbits that are good at making that trade-off will tend to live and reproduce, and the rabbits that are bad at it will get eaten or starve. So you'd think that us, as a successful species on the planet -- you, me, everybody -- would be really good at making these trade-offs. Yet it seems, again and again, that we're hopelessly bad at it.

03:11
And I think that's a fundamentally interesting question. I'll give you the short answer. The answer is, we respond to the feeling of security and not the reality. Now most of the time, that works. Most of the time, feeling and reality are the same. Certainly that's true for most of human prehistory. We've developed this ability because it makes evolutionary sense. One way to think of it is that we're highly optimized for risk decisions that are endemic to living in small family groups in the East African highlands in 100,000 B.C. 2010 New York, not so much.

03:56
Now there are several biases in risk perception. A lot of good experiments in this. And you can see certain biases that come up again and again. So I'll give you four. We tend to exaggerate spectacular and rare risks and downplay common risks -- so flying versus driving. The unknown is perceived to be riskier than the familiar. One example would be, people fear kidnapping by strangers when the data supports kidnapping by relatives is much more common. This is for children. Third, personified risks are perceived to be greater than anonymous risks -- so Bin Laden is scarier because he has a name. And the fourth is people underestimate risks in situations they do control and overestimate them in situations they don't control. So once you take up skydiving or smoking, you downplay the risks. If a risk is thrust upon you -- terrorism was a good example -- you'll overplay it because you don't feel like it's in your control.

05:02
There are a bunch of other of these biases, these cognitive biases, that affect our risk decisions. There's the availability heuristic, which basically means we estimate the probability of something by how easy it is to bring instances of it to mind. So you can imagine how that works. If you hear a lot about tiger attacks, there must be a lot of tigers around. You don't hear about lion attacks, there aren't a lot of lions around. This works until you invent newspapers. Because what newspapers do is they repeat again and again rare risks. I tell people, if it's in the news, don't worry about it. Because by definition, news is something that almost never happens. (Laughter) When something is so common, it's no longer news -- car crashes, domestic violence -- those are the risks you worry about.
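
As a toy illustration of the availability heuristic (my own sketch, with invented figures -- the talk gives no numbers): if you rank risks by how often you hear about them rather than by the data, heavy coverage of rare risks inverts the ordering:

```python
# Hedged sketch with hypothetical figures: true frequency of each risk
# versus how often it is mentioned in the news.
risks = {
    # name: (assumed actual annual deaths, assumed news mentions per year)
    "terrorism":         (10,     5_000),
    "car crashes":       (30_000,   200),
    "domestic violence": (1_500,     50),
}

# Reality: rank by how often the event actually happens.
by_reality = sorted(risks, key=lambda r: risks[r][0], reverse=True)

# Availability: rank by how easily examples come to mind,
# i.e., by media mentions instead of the data.
by_availability = sorted(risks, key=lambda r: risks[r][1], reverse=True)

print("ranked by reality:     ", by_reality)
print("ranked by availability:", by_availability)
# The orderings disagree: the rare, heavily covered risk feels biggest,
# while the common, uncovered risks fall to the bottom of the list.
```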

05:53
We're also a species of storytellers. We respond to stories more than data. And there's some basic innumeracy going on. I mean, the joke "One, Two, Three, Many" is kind of right. We're really good at small numbers. One mango, two mangoes, three mangoes, 10,000 mangoes, 100,000 mangoes -- it's still more mangoes than you can eat before they rot. So one half, one quarter, one fifth -- we're good at that. One in a million, one in a billion -- they're both almost never. So we have trouble with the risks that aren't very common.
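
A quick worked example of why "one in a million" and "one in a billion" only feel the same (illustrative arithmetic, not from the talk):

```python
# At the scale of a large population, a thousandfold difference in
# per-person risk is anything but "almost never" in both cases.
population = 300_000_000   # roughly the US population, for scale

for name, p in [("one in a million", 1e-6), ("one in a billion", 1e-9)]:
    print(f"{name}: about {population * p:,.1f} expected cases per year")

# one in a million: about 300.0 expected cases per year
# one in a billion: about 0.3 expected cases per year
# Intuition rounds both to "never"; the arithmetic says they differ 1,000x.
```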

06:25
And what these cognitive biases do is they act as filters between us and reality. And the result is that feeling and reality get out of whack, they get different. Now you either have a feeling -- you feel more secure than you are. There's a false sense of security. Or the other way, and that's a false sense of insecurity. I write a lot about "security theater," which are products that make people feel secure, but don't actually do anything. There's no real word for stuff that makes us secure, but doesn't make us feel secure. Maybe it's what the CIA's supposed to do for us.

07:03
So back to economics. If economics, if the market, drives security, and if people make trade-offs based on the feeling of security, then the smart thing for companies to do for the economic incentives is to make people feel secure. And there are two ways to do this. One, you can make people actually secure and hope they notice. Or two, you can make people just feel secure and hope they don't notice.

07:35
So what makes people notice? Well, a couple of things: understanding of the security, of the risks, the threats, the countermeasures, how they work. But if you know stuff, you're more likely to have your feelings match reality. Enough real world examples helps. Now we all know the crime rate in our neighborhood, because we live there, and we get a feeling about it that basically matches reality. Security theater's exposed when it's obvious that it's not working properly.

08:10
Okay, so what makes people not notice? Well, a poor understanding. If you don't understand the risks, you don't understand the costs, you're likely to get the trade-off wrong, and your feeling doesn't match reality. Not enough examples. There's an inherent problem with low probability events. If, for example, terrorism almost never happens, it's really hard to judge the efficacy of counter-terrorist measures. This is why you keep sacrificing virgins, and why your unicorn defenses are working just great. There aren't enough examples of failures.
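
A small simulation of why low-probability events make efficacy so hard to judge (my own hedged sketch; both rates are invented): with a near-zero base rate, a useless countermeasure and a genuinely effective one almost always produce the same observation -- no attacks either way:

```python
# Hedged sketch: under a tiny base rate, "nothing happened" is the
# overwhelmingly likely outcome whether or not the defense works.
import random

random.seed(0)
BASE_RATE = 1e-4    # assumed annual attack probability with no defense
EFFECTIVE = 0.5     # assumed: a real defense halves that probability
YEARS, TRIALS = 10, 100_000

def attacks_seen(p: float) -> int:
    """Number of attack-years observed over a decade at probability p."""
    return sum(random.random() < p for _ in range(YEARS))

def frac_zero(p: float) -> float:
    """Fraction of simulated decades in which nothing at all happens."""
    return sum(attacks_seen(p) == 0 for _ in range(TRIALS)) / TRIALS

print("P(see nothing | useless defense):", frac_zero(BASE_RATE))
print("P(see nothing | working defense):", frac_zero(BASE_RATE * EFFECTIVE))
# Both come out around 0.999: the observations almost never distinguish
# the two, which is why "unicorn defenses" appear to work just great.
```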

08:50
Also, feelings that are clouding the issues -- the cognitive biases I talked about earlier, fears, folk beliefs, basically an inadequate model of reality.

09:02
So let me complicate things. I have feeling and reality. I want to add a third element. I want to add model. Feeling and model are in our head; reality is the outside world. It doesn't change; it's real. So feeling is based on our intuition. Model is based on reason. That's basically the difference. In a primitive and simple world, there's really no reason for a model, because feeling is close to reality. You don't need a model. But in a modern and complex world, you need models to understand a lot of the risks we face. There's no feeling about germs. You need a model to understand them. So this model is an intelligent representation of reality. It's, of course, limited by science, by technology. We couldn't have a germ theory of disease before we invented the microscope to see them. It's limited by our cognitive biases. But it has the ability to override our feelings. Where do we get these models? We get them from others. We get them from religion, from culture, teachers, elders.

10:19
A couple years ago, I was in South Africa on safari. The tracker I was with grew up in Kruger National Park. He had some very complex models of how to survive. And it depended on if you were attacked by a lion or a leopard or a rhino or an elephant -- and when you had to run away, and when you couldn't run away, and when you had to climb a tree -- when you could never climb a tree. I would have died in a day, but he was born there, and he understood how to survive. I was born in New York City. I could have taken him to New York, and he would have died in a day. (Laughter) Because we had different models based on our different experiences.

10:58
Models can come from the media, from our elected officials. Think of models of terrorism, child kidnapping, airline safety, car safety. Models can come from industry. The two I'm following are surveillance cameras, ID cards, quite a lot of our computer security models come from there. A lot of models come from science. Health models are a great example. Think of cancer, of bird flu, swine flu, SARS. All of our feelings of security about those diseases come from models given to us, really, by science filtered through the media.

11:40
So models can change. Models are not static. As we become more comfortable in our environments, our model can move closer to our feelings. So an example might be, if you go back 100 years ago when electricity was first becoming common, there were a lot of fears about it. I mean, there were people who were afraid to push doorbells, because there was electricity in there, and that was dangerous. For us, we're very facile around electricity. We change light bulbs without even thinking about it. Our model of security around electricity is something we were born into. It hasn't changed as we were growing up. And we're good at it.

12:27
Or think of the risks on the Internet across generations -- how your parents approach Internet security, versus how you do, versus how our kids will. Models eventually fade into the background. Intuitive is just another word for familiar. So as your model is close to reality, and it converges with feelings, you often don't know it's there.

12:52
So a nice example of this came from last year and swine flu. When swine flu first appeared, the initial news caused a lot of overreaction. Now it had a name, which made it scarier than the regular flu, even though it was more deadly. And people thought doctors should be able to deal with it. So there was that feeling of lack of control. And those two things made the risk more than it was. As the novelty wore off, the months went by, there was some amount of tolerance, people got used to it. There was no new data, but there was less fear. By autumn, people thought the doctors should have solved this already. And there's kind of a bifurcation -- people had to choose between fear and acceptance -- actually fear and indifference -- they kind of chose suspicion. And when the vaccine appeared last winter, there were a lot of people -- a surprising number -- who refused to get it -- as a nice example of how people's feelings of security change, how their model changes, sort of wildly with no new information, with no new input. This kind of thing happens a lot.

14:12
I'm going to give one more complication. We have feeling, model, reality. I have a very relativistic view of security. I think it depends on the observer. And most security decisions have a variety of people involved. And stakeholders with specific trade-offs will try to influence the decision. And I call that their agenda. And you see agenda -- this is marketing, this is politics -- trying to convince you to have one model versus another, trying to convince you to ignore a model and trust your feelings, marginalizing people with models you don't like. This is not uncommon.

14:57
An example, a great example, is the risk of smoking. In the history of the past 50 years, the smoking risk shows how a model changes, and it also shows how an industry fights against a model it doesn't like. Compare that to the secondhand smoke debate -- probably about 20 years behind. Think about seat belts. When I was a kid, no one wore a seat belt. Nowadays, no kid will let you drive if you're not wearing a seat belt. Compare that to the airbag debate -- probably about 30 years behind. All examples of models changing.

15:36
What we learn is that changing models is hard. Models are hard to dislodge. If they equal your feelings, you don't even know you have a model. And there's another cognitive bias I'll call confirmation bias, where we tend to accept data that confirms our beliefs and reject data that contradicts our beliefs. So evidence against our model, we're likely to ignore, even if it's compelling. It has to get very compelling before we'll pay attention. New models that extend long periods of time are hard. Global warming is a great example. We're terrible at models that span 80 years. We can do to the next harvest. We can often do until our kids grow up. But 80 years, we're just not good at. So it's a very hard model to accept. We can have both models in our head simultaneously, right, that kind of problem where we're holding both beliefs together, right, the cognitive dissonance. Eventually, the new model will replace the old model.

16:44
Strong feelings can create a model. September 11th created a security model in a lot of people's heads. Also, personal experiences with crime can do it, personal health scare, a health scare in the news. You'll see these called flashbulb events by psychiatrists. They can create a model instantaneously, because they're very emotive.

17:09
So in the technological world, we don't have experience to judge models. And we rely on others. We rely on proxies. I mean, this works as long as it's to correct others. We rely on government agencies to tell us what pharmaceuticals are safe. I flew here yesterday. I didn't check the airplane. I relied on some other group to determine whether my plane was safe to fly. We're here, none of us fear the roof is going to collapse on us, not because we checked, but because we're pretty sure the building codes here are good. It's a model we just accept pretty much by faith. And that's okay.

17:57
Now, what we want is people to get familiar enough with better models -- have it reflected in their feelings -- to allow them to make security trade-offs. Now when these go out of whack, you have two options. One, you can fix people's feelings, directly appeal to feelings. It's manipulation, but it can work. The second, more honest way is to actually fix the model. Change happens slowly. The smoking debate took 40 years, and that was an easy one. Some of this stuff is hard. I mean really though, information seems like our best hope.

18:40
And I lied. Remember I said feeling, model, reality; I said reality doesn't change. It actually does. We live in a technological world; reality changes all the time. So we might have -- for the first time in our species -- feeling chases model, model chases reality, reality's moving -- they might never catch up. We don't know. But in the long-term, both feeling and reality are important. And I want to close with two quick stories to illustrate this.

19:12
1982 -- I don't know if people will remember this -- there was a short epidemic of Tylenol poisonings in the United States. It's a horrific story. Someone took a bottle of Tylenol, put poison in it, closed it up, put it back on the shelf. Someone else bought it and died. This terrified people. There were a couple of copycat attacks. There wasn't any real risk, but people were scared. And this is how the tamper-proof drug industry was invented. Those tamper-proof caps, that came from this. It's complete security theater. As a homework assignment, think of 10 ways to get around it. I'll give you one, a syringe. But it made people feel better. It made their feeling of security more match the reality.

19:54
Last story, a few years ago, a friend of mine gave birth. I visit her in the hospital. It turns out when a baby's born now, they put an RFID bracelet on the baby, put a corresponding one on the mother, so if anyone other than the mother takes the baby out of the maternity ward, an alarm goes off. I said, "Well, that's kind of neat. I wonder how rampant baby snatching is out of hospitals." I go home, I look it up. It basically never happens. But if you think about it, if you are a hospital, and you need to take a baby away from its mother, out of the room to run some tests, you better have some good security theater, or she's going to rip your arm off. (Laughter)
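
The pairing scheme he describes is simple to model. Here is a minimal sketch of that alarm logic under my own assumptions -- the talk gives no implementation details, so the registry and function names are hypothetical:

```python
# Hedged sketch of the maternity-ward bracelet check described above.
# Assumption: each baby tag is paired with one authorized (mother's) tag,
# and an exit reader sees the set of tags passing the door together.

PAIRED_TAG = {   # hypothetical registry: baby tag -> mother's tag
    "baby-017": "mom-017",
    "baby-042": "mom-042",
}

def exit_alarm(tags_at_door: set) -> bool:
    """Ring the alarm if a baby tag leaves without its paired mother tag."""
    return any(
        baby in tags_at_door and PAIRED_TAG[baby] not in tags_at_door
        for baby in PAIRED_TAG
    )

print(exit_alarm({"baby-017", "mom-017"}))  # False: mother carrying her baby
print(exit_alarm({"baby-042"}))             # True: baby leaving alone
print(exit_alarm({"baby-017", "mom-042"}))  # True: wrong adult, alarm sounds
```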

20:33
So it's important for us, those of us who design security, who look at security policy, or even look at public policy in ways that affect security. It's not just reality; it's feeling and reality. What's important is that they be about the same. It's important that, if our feelings match reality, we make better security trade-offs.

20:55
Thank you.

(Applause)
