ABOUT THE SPEAKER
Sam Harris - Neuroscientist, philosopher
Sam Harris's work focuses on how our growing understanding of ourselves and the world is changing our sense of how we should live.

Why you should listen

Sam Harris is the author of five New York Times bestsellers. His books include The End of Faith, Letter to a Christian Nation, The Moral Landscape, Free Will, Lying, Waking Up, and Islam and the Future of Tolerance (with Maajid Nawaz). The End of Faith won the 2005 PEN Award for Nonfiction. Harris's writing and public lectures cover a wide range of topics -- neuroscience, moral philosophy, religion, spirituality, violence, human reasoning -- but generally focus on how a growing understanding of ourselves and the world is changing our sense of how we should live.

Harris's work has been published in more than 20 languages and has been discussed in the New York Times, Time, Scientific American, Nature, Newsweek, Rolling Stone and many other publications. He has written for the New York Times, the Los Angeles Times, The Economist, The Times (London), the Boston Globe, The Atlantic, The Annals of Neurology and elsewhere. Harris also regularly hosts a popular podcast.

Harris received a degree in philosophy from Stanford University and a Ph.D. in neuroscience from UCLA.

TEDSummit

Sam Harris: Can we build AI without losing control over it?

Filmed:
5,024,015 views

Afraid of superintelligent AI? Neuroscientist and philosopher Sam Harris says you should be -- and not just in words, but in earnest. Harris argues that humanity is about to develop machines more capable than we are, yet we have not worked through the problems this will bring, including the possibility that such machines may treat us the way we treat ants.

00:13
I'm going to talk about a failure of intuition that many of us suffer from. It's really a failure to detect a certain kind of danger. I'm going to describe a scenario that I think is both terrifying and likely to occur, and that's not a good combination, as it turns out. And yet rather than be scared, most of you will feel that what I'm talking about is kind of cool. I'm going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact, I think it's very difficult to see how they won't destroy us or inspire us to destroy ourselves. And yet if you're anything like me, you'll find that it's fun to think about these things. And that response is part of the problem. OK? That response should worry you. And if I were to convince you in this talk that we were likely to suffer a global famine, either because of climate change or some other catastrophe, and that your grandchildren, or their grandchildren, are very likely to live like this, you wouldn't think, "Interesting. I like this TED Talk."

01:21
Famine isn't fun. Death by science fiction, on the other hand, is fun, and one of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead. I am unable to marshal this response, and I'm giving this talk.

01:42
It's as though we stand before two doors. Behind door number one, we stop making progress in building intelligent machines. Our computer hardware and software just stops getting better for some reason. Now take a moment to consider why this might happen. I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to. What could stop us from doing this? A full-scale nuclear war? A global pandemic? An asteroid impact? Justin Bieber becoming president of the United States? (Laughter)

02:24
The point is, something would have to destroy civilization as we know it. You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation. Almost by definition, this is the worst thing that's ever happened in human history.

02:44
So the only alternative, and this is what lies behind door number two, is that we continue to improve our intelligent machines year after year after year. At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician IJ Good called an "intelligence explosion," that the process could get away from us.

03:10
Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn't the most likely scenario. It's not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.

03:35
Just think about how we relate to ants. We don't hate them. We don't go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let's say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they're conscious or not, could treat us with similar disregard.

04:05
Now, I suspect this seems far-fetched to many of you. I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable. But then you must find something wrong with one of the following assumptions. And there are only three of them.

04:23
Intelligence is a matter of information processing in physical systems. Actually, this is a little bit more than an assumption. We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already. And we know that mere matter can give rise to what is called "general intelligence," an ability to think flexibly across multiple domains, because our brains have managed it. Right? I mean, there's just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior, we will eventually, unless we are interrupted, we will eventually build general intelligence into our machines.

05:11
It's crucial to realize that the rate of progress doesn't matter, because any progress is enough to get us into the end zone. We don't need Moore's law to continue. We don't need exponential progress. We just need to keep going.

05:25
The second assumption is that we will keep going. We will continue to improve our intelligent machines.

05:33
And given the value of intelligence -- I mean, intelligence is either the source of everything we value or we need it to safeguard everything we value. It is our most valuable resource. So we want to do this. We have problems that we desperately need to solve. We want to cure diseases like Alzheimer's and cancer. We want to understand economic systems. We want to improve our climate science. So we will do this, if we can. The train is already out of the station, and there's no brake to pull.

06:05
Finally, we don't stand on a peak of intelligence, or anywhere near it, likely. And this really is the crucial insight. This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable.

06:23
Now, just consider the smartest person who has ever lived. On almost everyone's shortlist here is John von Neumann. I mean, the impression that von Neumann made on the people around him, and this included the greatest mathematicians and physicists of his time, is fairly well-documented. If only half the stories about him are half true, there's no question he's one of the smartest people who has ever lived. So consider the spectrum of intelligence. Here we have John von Neumann. And then we have you and me. And then we have a chicken. (Laughter) Sorry, a chicken. (Laughter) There's no reason for me to make this talk more depressing than it needs to be. (Laughter)

07:08
It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive, and if we build machines that are more intelligent than we are, they will very likely explore this spectrum in ways that we can't imagine, and exceed us in ways that we can't imagine.

07:27
And it's important to recognize that this is true by virtue of speed alone. Right? So imagine if we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT. Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it. So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week. How could we even understand, much less constrain, a mind making this sort of progress?

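A rough check of the arithmetic behind that last figure, assuming the flat million-to-one speed ratio the talk cites:

$$
1 \text{ week} \times 10^{6} = 10^{6} \text{ weeks} \approx \frac{10^{6}}{52} \text{ years} \approx 2 \times 10^{4} \text{ years},
$$

which is where the "20,000 years of human-level intellectual work" per elapsed week comes from.
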
08:08
The other thing that's worrying, frankly, is that, imagine the best case scenario. So imagine we hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around. It's as though we've been handed an oracle that behaves exactly as intended. Well, this machine would be the perfect labor-saving device. It can design the machine that can build the machine that can do any physical work, powered by sunlight, more or less for the cost of raw materials. So we're talking about the end of human drudgery. We're also talking about the end of most intellectual work.

08:49
So what would apes like ourselves do in this circumstance? Well, we'd be free to play Frisbee and give each other massages. Add some LSD and some questionable wardrobe choices, and the whole world could be like Burning Man. (Laughter)

09:06
Now, that might sound pretty good, but ask yourself what would happen under our current economic and political order? It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before. Absent a willingness to immediately put this new wealth to the service of all humanity, a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve.

09:34
And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI? This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum. So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.

Now, one of the most frightening可怕 things,
202
594640
2896
依家最驚人嘅一件事,我覺得
10:09
in my view视图, at this moment時刻,
203
597560
2776
就係人工智能研究人員
安定人心時講嘅說話
10:12
are the kinds of things
that AI researchers研究者 say
204
600360
4296
10:16
when they want to be reassuring放心.
205
604680
1560
佢哋成日話,因為我哋有時間
所以我哋唔需要擔心
10:19
And the most common常見 reason原因
we're told not to worry is time.
206
607000
3456
「乜你唔知有排咩?
10:22
This is all a long way off,
don't you know.
207
610480
2056
10:24
This is probably可能 50 or 100 years away.
208
612560
2440
仲有五十年或者一百年先到。」
一位研究人員曾經咁講︰
10:27
One researcher研究員 has said,
209
615720
1256
10:29
"Worrying about AI safety安全
210
617000
1576
「擔心人工智能嘅安全就好似
擔心火星人口爆棚一樣。」
10:30
is like worrying
about overpopulation人口過剩 on Mars火星."
211
618600
2280
呢句嘢等如矽谷同你講︰
10:34
This is the Silicon Valley山谷 version版本
212
622116
1620
10:35
of "don't worry your
pretty little head about it."
213
623760
2376
「你十八廿二就杞人憂天!」
10:38
(Laughter笑聲)
214
626160
1336
(笑聲)
冇人意識到
攞時間嚟到講完全係無稽之談
10:39
No one seems to notice that referencing the time horizon is a total non sequitur. If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long it will take us to create the conditions to do that safely. Let me say that again. We have no idea how long it will take us to create the conditions to do that safely.

11:12
And if you haven't noticed, 50 years is not what it used to be. This is 50 years in months. This is how long we've had the iPhone. This is how long "The Simpsons" has been on television. Fifty years is not that much time to meet one of the greatest challenges our species will ever face. Once again, we seem to be failing to have an appropriate emotional response to what we have every reason to believe is coming. The computer scientist Stuart Russell has a nice analogy here. He said, imagine that we received a message from an alien civilization, which read: "People of Earth, we will arrive on your planet in 50 years. Get ready." And now we're just counting down the months until the mothership lands? We would feel a little more urgency than we do.

12:04
Another reason we're told not to worry is that these machines can't help but share our values because they will be literally extensions of ourselves. They'll be grafted onto our brains, and we'll essentially become their limbic systems. Now take a moment to consider that the safest and only prudent path forward, recommended, is to implant this technology directly into our brains. Now, this may in fact be the safest and only prudent path forward, but usually one's safety concerns about a technology have to be pretty much worked out before you stick it inside your head. (Laughter)

12:38
The deeper problem is that building superintelligent AI on its own seems likely to be easier than building superintelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it. And given that the companies and governments doing this work are likely to perceive themselves as being in a race against all others, given that to win this race is to win the world, provided you don't destroy it in the next moment, then it seems likely that whatever is easier to do will get done first.

13:10
Now, unfortunately, I don't have a solution to this problem, apart from recommending that more of us think about it. I think we need something like a Manhattan Project on the topic of artificial intelligence. Not to build it, because I think we'll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests. When you're talking about superintelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right, and even then we will need to absorb the economic and political consequences of getting them right.

13:45
But the moment we admit that information processing is the source of intelligence, that some appropriate computational system is what the basis of intelligence is, and we admit that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we are in the process of building some sort of god. Now would be a good time to make sure it's a god we can live with.

14:20
Thank you very much.

14:21
(Applause)

Translated by 潘 可儿
Reviewed by Chak Lam Wan
