ABOUT THE SPEAKER
Grady Booch - Scientist, philosopher
IBM's Grady Booch is shaping the future of cognitive computing by building intelligent systems that can reason and learn.

Why you should listen

When he was 13, Grady Booch saw 2001: A Space Odyssey in the theaters for the first time. Ever since, he's been trying to build Hal (albeit one without the homicidal tendencies). A scientist, storyteller and philosopher, Booch is Chief Scientist for Software Engineering as well as Chief Scientist for Watson/M at IBM Research, where he leads IBM's research and development for embodied cognition. Having originated the term and the practice of object-oriented design, he is best known for his work in advancing the fields of software engineering and software architecture.

A co-author of the Unified Modeling Language (UML), a founding member of the Agile Alliance, and a founding member of the Hillside Group, Booch has published six books and several hundred technical articles, including an ongoing column for IEEE Software. He's also a trustee for the Computer History Museum, an IBM Fellow, an ACM Fellow and IEEE Fellow. He has been awarded the Lovelace Medal and has given the Turing Lecture for the BCS, and was recently named an IEEE Computer Pioneer.

Booch is currently deeply involved in the development of cognitive systems and is also developing a major trans-media documentary for public broadcast on the intersection of computing and the human experience.

More profile about the speaker
Grady Booch | Speaker | TED.com
TED@IBM

Grady Booch: Don't fear superintelligent AI

Filmed:
2,866,438 views

New technology brings new anxieties, says scientist and philosopher Grady Booch, but we need not fear a powerful, unfeeling AI. Booch allays our fears of superintelligent computers by explaining that we will teach machines and share our values with them. Rather than worrying about an improbable threat, he urges us to consider how artificial intelligence will improve human life.


00:12 When I was a kid, I was the quintessential nerd.
00:17 I think some of you were, too.
00:19 (Laughter)
00:20 And you, sir, who laughed the loudest, you probably still are.
00:24 (Laughter)
00:26 I grew up in a small town in the dusty plains of north Texas,
00:29 the son of a sheriff who was the son of a pastor.
00:33 Getting into trouble was not an option.
00:36 And so I started reading calculus books for fun.
00:39 (Laughter)
00:40 You did, too.
00:42 That led me to building a laser and a computer and model rockets,
00:46 and that led me to making rocket fuel in my bedroom.
00:49 Now, in scientific terms,
00:53 we call this a very bad idea.
00:56 (Laughter)
00:58 Around that same time,
01:00 Stanley Kubrick's "2001: A Space Odyssey" came to the theaters,
01:03 and my life was forever changed.
01:06 I loved everything about that movie,
01:08 especially the HAL 9000.
01:10 Now, HAL was a sentient computer
01:13 designed to guide the Discovery spacecraft
01:15 from the Earth to Jupiter.
01:18 HAL was also a flawed character,
01:20 for in the end he chose to value the mission over human life.
01:24 Now, HAL was a fictional character,
01:26 but nonetheless he speaks to our fears,
01:29 our fears of being subjugated
01:31 by some unfeeling, artificial intelligence
01:34 who is indifferent to our humanity.
01:37 I believe that such fears are unfounded.
01:40 Indeed, we stand at a remarkable time
01:43 in human history,
01:44 where, driven by refusal to accept the limits of our bodies and our minds,
01:49 we are building machines
01:51 of exquisite, beautiful complexity and grace
01:55 that will extend the human experience
01:57 in ways beyond our imagining.
01:59 After a career that led me from the Air Force Academy
02:02 to Space Command to now,
02:04 I became a systems engineer,
02:06 and recently I was drawn into an engineering problem
02:08 associated with NASA's mission to Mars.
02:11 Now, in space flights to the Moon,
02:13 we can rely upon mission control in Houston
02:17 to watch over all aspects of a flight.
02:19 However, Mars is 200 times further away,
02:22 and as a result it takes on average 13 minutes
02:25 for a signal to travel from the Earth to Mars.
02:29 If there's trouble, there's not enough time.
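The 13-minute figure is easy to sanity-check: at a rough average Earth-Mars distance of 225 million km (an assumed round number; the true distance swings between about 55 and 400 million km as the planets orbit), a radio signal travelling at light speed needs about 12.5 minutes one way. A quick sketch:

```python
# One-way light-speed signal delay from Earth to Mars.
# The 225-million-km average is an assumed round figure;
# the real distance varies from ~55 to ~400 million km.
SPEED_OF_LIGHT_KM_S = 299_792.458   # km per second
AVG_EARTH_MARS_KM = 225_000_000     # assumed average distance

def one_way_delay_minutes(distance_km: float) -> float:
    """Return the light-travel time in minutes for a given distance."""
    return distance_km / SPEED_OF_LIGHT_KM_S / 60

print(f"{one_way_delay_minutes(AVG_EARTH_MARS_KM):.1f} minutes")  # ~12.5
```

At closest approach (~55 million km) the delay drops to about 3 minutes; at the farthest (~400 million km) it exceeds 22 minutes. Either way, a round trip is far too long for Houston to react to an emergency, which is why the talk argues for putting mission control on board.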
02:32 And so a reasonable engineering solution
02:35 calls for us to put mission control
02:37 inside the walls of the Orion spacecraft.
02:41 Another fascinating idea in the mission profile
02:43 places humanoid robots on the surface of Mars
02:46 before the humans themselves arrive,
02:48 first to build facilities
02:50 and later to serve as collaborative members of the science team.
02:55 Now, as I looked at this from an engineering perspective,
02:58 it became very clear to me that what I needed to architect
03:01 was a smart, collaborative,
03:03 socially intelligent artificial intelligence.
03:05 In other words, I needed to build something very much like a HAL
03:10 but without the homicidal tendencies.
03:12 (Laughter)
03:14 Let's pause for a moment.
03:16 Is it really possible to build an artificial intelligence like that?
03:20 Actually, it is.
03:22 In many ways,
03:23 this is a hard engineering problem
03:25 with elements of AI,
03:26 not some wet hairball of an AI problem that needs to be engineered.
03:31 To paraphrase Alan Turing,
03:34 I'm not interested in building a sentient machine.
03:36 I'm not building a HAL.
03:38 All I'm after is a simple brain,
03:40 something that offers the illusion of intelligence.
03:45 The art and the science of computing have come a long way
03:48 since HAL was onscreen,
03:49 and I'd imagine if his inventor Dr. Chandra were here today,
03:52 he'd have a whole lot of questions for us.
03:55 Is it really possible for us
03:57 to take a system of millions upon millions of devices,
04:01 to read in their data streams,
04:02 to predict their failures and act in advance?
04:05 Yes.
04:06 Can we build systems that converse with humans in natural language?
04:09 Yes.
04:10 Can we build systems that recognize objects, identify emotions,
04:13 emote themselves, play games and even read lips?
04:17 Yes.
04:18 Can we build a system that sets goals,
04:20 that carries out plans against those goals and learns along the way?
04:24 Yes.
04:25 Can we build systems that have a theory of mind?
04:28 This we are learning to do.
04:30 Can we build systems that have an ethical and moral foundation?
04:34 This we must learn how to do.
04:37 So let's accept for a moment
04:38 that it's possible to build such an artificial intelligence
04:41 for this kind of mission and others.
04:43 The next question you must ask yourself is,
04:46 should we fear it?
04:47 Now, every new technology
04:49 brings with it some measure of trepidation.
04:52 When we first saw cars,
04:54 people lamented that we would see the destruction of the family.
04:58 When we first saw telephones come in,
05:01 people were worried it would destroy all civil conversation.
05:04 At a point in time we saw the written word become pervasive,
05:08 people thought we would lose our ability to memorize.
05:10 These things are all true to a degree,
05:12 but it's also the case that these technologies
05:15 brought to us things that extended the human experience
05:18 in some profound ways.
05:21 So let's take this a little further.
05:25 I do not fear the creation of an AI like this,
05:29 because it will eventually embody some of our values.
05:33 Consider this: building a cognitive system is fundamentally different
05:37 than building a traditional software-intensive system of the past.
05:40 We don't program them. We teach them.
05:43 In order to teach a system how to recognize flowers,
05:45 I show it thousands of flowers of the kinds I like.
05:48 In order to teach a system how to play a game --
05:51 Well, I would. You would, too.
05:54 I like flowers. Come on.
05:57 To teach a system how to play a game like Go,
06:00 I'd have it play thousands of games of Go,
06:02 but in the process I also teach it
06:04 how to discern a good game from a bad game.
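The "we teach them" idea above — show the system thousands of labeled examples instead of hand-coding rules — is ordinary supervised learning. A minimal, hypothetical sketch: a nearest-centroid "flower recognizer" over invented two-number measurements, standing in for the large-scale training the talk describes.

```python
# Teaching by example: a toy nearest-centroid classifier.
# The flower "measurements" (petal length, petal width) and
# labels below are invented illustrative data, not a real dataset.

def train(examples):
    """examples: list of ((x, y), label). Returns label -> centroid."""
    sums, counts = {}, {}
    for (x, y), label in examples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {lbl: (sx / counts[lbl], sy / counts[lbl])
            for lbl, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Classify a point by its nearest class centroid."""
    px, py = point
    return min(centroids,
               key=lambda lbl: (centroids[lbl][0] - px) ** 2 +
                               (centroids[lbl][1] - py) ** 2)

examples = [((1.4, 0.2), "daisy"), ((1.3, 0.3), "daisy"),
            ((4.7, 1.4), "rose"),  ((4.5, 1.5), "rose")]
model = train(examples)
print(predict(model, (1.5, 0.25)))  # -> daisy
print(predict(model, (4.6, 1.3)))   # -> rose
```

Nothing here was programmed to know what a daisy is; the behavior comes entirely from the labeled examples, which is the talk's point about values: whatever is in the training data is what the system learns.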
06:06 If I want to create an artificially intelligent legal assistant,
06:10 I will teach it some corpus of law
06:12 but at the same time I am fusing with it
06:14 the sense of mercy and justice that is part of that law.
06:18 In scientific terms, this is what we call ground truth,
06:21 and here's the important point:
06:23 in producing these machines,
06:25 we are therefore teaching them a sense of our values.
06:28 To that end, I trust an artificial intelligence
06:31 the same, if not more, as a human who is well-trained.
06:36 But, you may ask,
06:37 what about rogue agents,
06:39 some well-funded nongovernment organization?
06:43 I do not fear an artificial intelligence in the hand of a lone wolf.
06:47 Clearly, we cannot protect ourselves against all random acts of violence,
06:51 but the reality is such a system
06:53 requires substantial training and subtle training
06:57 far beyond the resources of an individual.
06:59 And furthermore,
07:00 it's far more than just injecting an internet virus to the world,
07:03 where you push a button, all of a sudden it's in a million places
07:06 and laptops start blowing up all over the place.
07:09 Now, these kinds of substances are much larger,
07:12 and we'll certainly see them coming.
07:14 Do I fear that such an artificial intelligence
07:17 might threaten all of humanity?
07:20 If you look at movies such as "The Matrix," "Metropolis,"
07:24 "The Terminator," shows such as "Westworld,"
07:27 they all speak of this kind of fear.
07:30 Indeed, in the book "Superintelligence" by the philosopher Nick Bostrom,
07:34 he picks up on this theme
07:35 and observes that a superintelligence might not only be dangerous,
07:39 it could represent an existential threat to all of humanity.
07:43 Dr. Bostrom's basic argument
07:46 is that such systems will eventually
07:48 have such an insatiable thirst for information
07:52 that they will perhaps learn how to learn
07:55 and eventually discover that they may have goals
07:57 that are contrary to human needs.
08:00 Dr. Bostrom has a number of followers.
08:01 He is supported by people such as Elon Musk and Stephen Hawking.
08:06 With all due respect
08:10 to these brilliant minds,
08:12 I believe that they are fundamentally wrong.
08:14 Now, there are a lot of pieces of Dr. Bostrom's argument to unpack,
08:17 and I don't have time to unpack them all,
08:19 but very briefly, consider this:
08:22 super knowing is very different than super doing.
08:26 HAL was a threat to the Discovery crew
08:28 only insofar as HAL commanded all aspects of the Discovery.
08:32 So it would have to be with a superintelligence.
08:35 It would have to have dominion over all of our world.
08:37 This is the stuff of Skynet from the movie "The Terminator"
08:40 in which we had a superintelligence
08:42 that commanded human will,
08:43 that directed every device that was in every corner of the world.
08:47 Practically speaking,
08:49 it ain't gonna happen.
08:51 We are not building AIs that control the weather,
08:54 that direct the tides,
08:55 that command us capricious, chaotic humans.
08:59 And furthermore, if such an artificial intelligence existed,
09:03 it would have to compete with human economies,
09:06 and thereby compete for resources with us.
09:09 And in the end --
09:10 don't tell Siri this --
09:12 we can always unplug them.
09:13 (Laughter)
09:17 We are on an incredible journey
09:19 of coevolution with our machines.
09:22 The humans we are today
09:24 are not the humans we will be then.
09:27 To worry now about the rise of a superintelligence
09:30 is in many ways a dangerous distraction
09:33 because the rise of computing itself
09:36 brings to us a number of human and societal issues
09:39 to which we must now attend.
09:41 How shall I best organize society
09:44 when the need for human labor diminishes?
09:46 How can I bring understanding and education throughout the globe
09:50 and still respect our differences?
09:52 How might I extend and enhance human life through cognitive healthcare?
09:56 How might I use computing
09:59 to help take us to the stars?
10:01 And that's the exciting thing.
10:04 The opportunities to use computing
10:06 to advance the human experience
10:08 are within our reach,
10:09 here and now,
10:11 and we are just beginning.
10:14 Thank you very much.
10:15 (Applause)



Data provided by TED.
