ABOUT THE SPEAKER
Sam Harris - Neuroscientist, philosopher
Sam Harris's work focuses on how our growing understanding of ourselves and the world is changing our sense of how we should live.

Why you should listen

Sam Harris is the author of five New York Times bestsellers. His books include The End of Faith, Letter to a Christian Nation, The Moral Landscape, Free Will, Lying, Waking Up and Islam and the Future of Tolerance (with Maajid Nawaz). The End of Faith won the 2005 PEN Award for Nonfiction. Harris's writing and public lectures cover a wide range of topics -- neuroscience, moral philosophy, religion, spirituality, violence, human reasoning -- but generally focus on how a growing understanding of ourselves and the world is changing our sense of how we should live.

Harris's work has been published in more than 20 languages and has been discussed in the New York Times, Time, Scientific American, Nature, Newsweek, Rolling Stone and many other journals. He has written for the New York Times, the Los Angeles Times, The Economist, The Times (London), the Boston Globe, The Atlantic, The Annals of Neurology and elsewhere. Harris also regularly hosts a popular podcast.

Harris received a degree in philosophy from Stanford University and a Ph.D. in neuroscience from UCLA.

TEDSummit

Sam Harris: Can we build AI without losing control over it?


Filmed:
5,024,015 views

Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris -- and not just in some theoretical way. We're going to build superhuman machines, says Harris, but we haven't yet grappled with the problems associated with creating something that may treat us the way we treat ants.

00:13
I'm going to talk about a failure of intuition that many of us suffer from. It's really a failure to detect a certain kind of danger. I'm going to describe a scenario that I think is both terrifying and likely to occur, and that's not a good combination, as it turns out. And yet rather than be scared, most of you will feel that what I'm talking about is kind of cool.

00:37
I'm going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact, I think it's very difficult to see how they won't destroy us or inspire us to destroy ourselves. And yet if you're anything like me, you'll find that it's fun to think about these things. And that response is part of the problem. OK? That response should worry you. And if I were to convince you in this talk that we were likely to suffer a global famine, either because of climate change or some other catastrophe, and that your grandchildren, or their grandchildren, are very likely to live like this, you wouldn't think, "Interesting. I like this TED Talk."

01:21
Famine isn't fun. Death by science fiction, on the other hand, is fun, and one of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead. I am unable to marshal this response, and I'm giving this talk.

01:42
It's as though we stand before two doors. Behind door number one, we stop making progress in building intelligent machines. Our computer hardware and software just stops getting better for some reason. Now take a moment to consider why this might happen. I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to. What could stop us from doing this? A full-scale nuclear war? A global pandemic? An asteroid impact? Justin Bieber becoming president of the United States?

02:20
(Laughter)

02:24
The point is, something would have to destroy civilization as we know it. You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation. Almost by definition, this is the worst thing that's ever happened in human history.

02:44
So the only alternative, and this is what lies behind door number two, is that we continue to improve our intelligent machines year after year after year. At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician IJ Good called an "intelligence explosion," that the process could get away from us.

03:10
Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn't the most likely scenario. It's not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.

03:35
Just think about how we relate to ants. We don't hate them. We don't go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let's say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they're conscious or not, could treat us with similar disregard.

04:05
Now, I suspect this seems far-fetched to many of you. I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable. But then you must find something wrong with one of the following assumptions. And there are only three of them.

04:23
Intelligence is a matter of information processing in physical systems. Actually, this is a little bit more than an assumption. We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already. And we know that mere matter can give rise to what is called "general intelligence," an ability to think flexibly across multiple domains, because our brains have managed it. Right? I mean, there's just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior, we will eventually, unless we are interrupted, we will eventually build general intelligence into our machines.

05:11
It's crucial to realize that the rate of progress doesn't matter, because any progress is enough to get us into the end zone. We don't need Moore's law to continue. We don't need exponential progress. We just need to keep going.

05:25
The second assumption is that we will keep going. We will continue to improve our intelligent machines. And given the value of intelligence -- I mean, intelligence is either the source of everything we value or we need it to safeguard everything we value. It is our most valuable resource. So we want to do this. We have problems that we desperately need to solve. We want to cure diseases like Alzheimer's and cancer. We want to understand economic systems. We want to improve our climate science. So we will do this, if we can. The train is already out of the station, and there's no brake to pull.

06:05
Finally, we don't stand on a peak of intelligence, or anywhere near it, likely. And this really is the crucial insight. This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable.

06:23
Now, just consider the smartest person who has ever lived. On almost everyone's shortlist here is John von Neumann. I mean, the impression that von Neumann made on the people around him, and this included the greatest mathematicians and physicists of his time, is fairly well-documented. If only half the stories about him are half true, there's no question he's one of the smartest people who has ever lived. So consider the spectrum of intelligence. Here we have John von Neumann. And then we have you and me. And then we have a chicken.

06:57
(Laughter)

06:59
Sorry, a chicken.

07:00
(Laughter)

07:01
There's no reason for me to make this talk more depressing than it needs to be.

07:05
(Laughter)

07:08
It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive, and if we build machines that are more intelligent than we are, they will very likely explore this spectrum in ways that we can't imagine, and exceed us in ways that we can't imagine.

07:27
And it's important to recognize that this is true by virtue of speed alone. Right? So imagine if we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT. Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it. So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week. How could we even understand, much less constrain, a mind making this sort of progress?
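The arithmetic behind that figure is simple to check. A minimal sketch in Python, assuming nothing beyond the talk's own numbers (a flat million-fold speed advantage and a one-week run); the 52-weeks-per-year rounding is an assumption of this sketch:

    # Rough check of the "20,000 years of work per week" figure, assuming a
    # flat 1,000,000x speed advantage of electronic circuits over biochemistry.
    SPEEDUP = 1_000_000        # million-fold factor quoted in the talk
    WEEKS_PER_YEAR = 52        # rounding assumption for this sketch

    def human_equivalent_years(wall_clock_weeks: float) -> float:
        """Human-level years of thinking done in the given wall-clock time."""
        return wall_clock_weeks * SPEEDUP / WEEKS_PER_YEAR

    print(round(human_equivalent_years(1)))   # ~19231, i.e. roughly the quoted 20,000 years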
08:08
The other thing that's worrying, frankly, is that, imagine the best case scenario. So imagine we hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around. It's as though we've been handed an oracle that behaves exactly as intended. Well, this machine would be the perfect labor-saving device. It can design the machine that can build the machine that can do any physical work, powered by sunlight, more or less for the cost of raw materials. So we're talking about the end of human drudgery. We're also talking about the end of most intellectual work.

08:49
So what would apes like ourselves do in this circumstance? Well, we'd be free to play Frisbee and give each other massages. Add some LSD and some questionable wardrobe choices, and the whole world could be like Burning Man.

09:02
(Laughter)

09:06
Now, that might sound pretty good, but ask yourself what would happen under our current economic and political order? It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before. Absent a willingness to immediately put this new wealth to the service of all humanity, a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve.

09:34
And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI? This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum.
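The "500,000 years ahead" figure follows from the same assumed million-fold speedup applied to a six-month wall-clock lead; a one-line check under that assumption:

    # Six months of wall-clock lead, compounded by a ~1,000,000x thinking speed,
    # amounts to roughly half a million subjective years of head start.
    SPEEDUP = 1_000_000
    print(0.5 * SPEEDUP)   # 500000.0 -- the talk's "500,000 years ahead, at a minimum"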
09:59
So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.

10:06
Now, one of the most frightening things, in my view, at this moment, are the kinds of things that AI researchers say when they want to be reassuring. And the most common reason we're told not to worry is time. This is all a long way off, don't you know. This is probably 50 or 100 years away. One researcher has said, "Worrying about AI safety is like worrying about overpopulation on Mars." This is the Silicon Valley version of "don't worry your pretty little head about it."

10:38
(Laughter)

10:39
No one seems to notice that referencing the time horizon is a total non sequitur. If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long it will take us to create the conditions to do that safely. Let me say that again. We have no idea how long it will take us to create the conditions to do that safely.

11:12
And if you haven't noticed, 50 years is not what it used to be. This is 50 years in months. This is how long we've had the iPhone. This is how long "The Simpsons" has been on television. Fifty years is not that much time to meet one of the greatest challenges our species will ever face.

11:31
Once again, we seem to be failing to have an appropriate emotional response to what we have every reason to believe is coming. The computer scientist Stuart Russell has a nice analogy here. He said, imagine that we received a message from an alien civilization, which read: "People of Earth, we will arrive on your planet in 50 years. Get ready." And now we're just counting down the months until the mothership lands? We would feel a little more urgency than we do.

12:04
Another reason we're told not to worry is that these machines can't help but share our values because they will be literally extensions of ourselves. They'll be grafted onto our brains, and we'll essentially become their limbic systems. Now take a moment to consider that the safest and only prudent path forward, recommended, is to implant this technology directly into our brains. Now, this may in fact be the safest and only prudent path forward, but usually one's safety concerns about a technology have to be pretty much worked out before you stick it inside your head.

12:36
(Laughter)

12:38
The deeper problem is that building superintelligent AI on its own seems likely to be easier than building superintelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it. And given that the companies and governments doing this work are likely to perceive themselves as being in a race against all others, given that to win this race is to win the world, provided you don't destroy it in the next moment, then it seems likely that whatever is easier to do will get done first.

13:10
Now, unfortunately, I don't have a solution to this problem, apart from recommending that more of us think about it. I think we need something like a Manhattan Project on the topic of artificial intelligence. Not to build it, because I think we'll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests. When you're talking about superintelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right, and even then we will need to absorb the economic and political consequences of getting them right.

13:45
But the moment we admit that information processing is the source of intelligence, that some appropriate computational system is what the basis of intelligence is, and we admit that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we are in the process of building some sort of god.

14:15
Now would be a good time to make sure it's a god we can live with.

14:20
Thank you very much.

14:21
(Applause)
Chinese subtitles translated by Hans Chiang
Reviewed by Qiyun Xing


