ABOUT THE SPEAKER
Grady Booch - Scientist, philosopher
IBM's Grady Booch is shaping the future of cognitive computing by building intelligent systems that can reason and learn.

Why you should listen

When he was 13, Grady Booch saw 2001: A Space Odyssey in the theaters for the first time. Ever since, he's been trying to build Hal (albeit one without the homicidal tendencies). A scientist, storyteller and philosopher, Booch is Chief Scientist for Software Engineering as well as Chief Scientist for Watson/M at IBM Research, where he leads IBM's research and development for embodied cognition. Having originated the term and the practice of object-oriented design, he is best known for his work in advancing the fields of software engineering and software architecture.

A co-author of the Unified Modeling Language (UML), a founding member of the Agile Alliance, and a founding member of the Hillside Group, Booch has published six books and several hundred technical articles, including an ongoing column for IEEE Software. He's also a trustee for the Computer History Museum, an IBM Fellow, an ACM Fellow and IEEE Fellow. He has been awarded the Lovelace Medal and has given the Turing Lecture for the BCS, and was recently named an IEEE Computer Pioneer.

Booch is currently deeply involved in the development of cognitive systems and is also developing a major trans-media documentary for public broadcast on the intersection of computing and the human experience.

More profile about the speaker
Grady Booch | Speaker | TED.com
TED@IBM

Grady Booch: Don't fear superintelligent AI

Filmed:
2,866,438 views

New technology brings new anxieties. Scientist and philosopher Grady Booch tells us there is no need to fear a superintelligent, unfeeling AI. He urges us not to be scared off by science-fiction preconceptions, and explains how humans will teach, rather than program, AI to hold human values. Instead of worrying about an unlikely threat, he asks us to consider how artificial intelligence will improve human life.

00:12
When I was a kid,
I was the quintessential nerd.
00:17
I think some of you were, too.
00:19
(Laughter)
00:20
And you, sir, who laughed the loudest,
you probably still are.
00:24
(Laughter)
00:26
I grew up in a small town
in the dusty plains of north Texas,
00:29
the son of a sheriff
who was the son of a pastor.
00:33
Getting into trouble was not an option.
00:36
And so I started reading
calculus books for fun.
00:39
(Laughter)
00:40
You did, too.
00:42
That led me to building a laser
and a computer and model rockets,
00:46
and that led me to making
rocket fuel in my bedroom.
00:49
Now, in scientific terms,
00:53
we call this a very bad idea.
00:56
(Laughter)
00:58
Around that same time,
01:00
Stanley Kubrick's "2001: A Space Odyssey"
came to the theaters,
01:03
and my life was forever changed.
01:06
I loved everything about that movie,
01:08
especially the HAL 9000.
01:10
Now, HAL was a sentient computer
01:13
designed to guide the Discovery spacecraft
01:15
from the Earth to Jupiter.
01:18
HAL was also a flawed character,
01:20
for in the end he chose
to value the mission over human life.
01:24
Now, HAL was a fictional character,
01:26
but nonetheless he speaks to our fears,
01:29
our fears of being subjugated
01:31
by some unfeeling, artificial intelligence
01:34
who is indifferent to our humanity.
01:37
I believe that such fears are unfounded.
01:40
Indeed, we stand at a remarkable time
01:43
in human history,
01:44
where, driven by refusal to accept
the limits of our bodies and our minds,
01:49
we are building machines
01:51
of exquisite, beautiful
complexity and grace
01:55
that will extend the human experience
01:57
in ways beyond our imagining.
01:59
After a career that led me
from the Air Force Academy
02:02
to Space Command to now,
02:04
I became a systems engineer,
02:06
and recently I was drawn
into an engineering problem
02:08
associated with NASA's mission to Mars.
02:11
Now, in space flights to the Moon,
02:13
we can rely upon
mission control in Houston
02:17
to watch over all aspects of a flight.
02:19
However, Mars is 200 times further away,
02:22
and as a result it takes
on average 13 minutes
02:25
for a signal to travel
from the Earth to Mars.
02:29
If there's trouble,
there's not enough time.
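As a rough sanity check on those figures, the one-way light delay follows directly from distance divided by the speed of light. A small sketch (the distances below are approximate astronomical averages supplied here, not taken from the talk; the Earth-Mars distance in particular swings from roughly 55 to 400 million km over the orbital cycle):

```python
# One-way light-speed signal delay, Earth to Moon vs. Earth to Mars.
C = 299_792_458          # speed of light, m/s
MOON_M = 3.844e8         # average Earth-Moon distance, metres (assumed value)
MARS_M = 2.25e11         # average Earth-Mars distance, metres (~225 million km, assumed)

moon_delay = MOON_M / C  # a bit over one second
mars_delay = MARS_M / C  # hundreds of seconds

print(f"Moon: {moon_delay:.1f} s one way")
print(f"Mars: {mars_delay / 60:.1f} min one way")  # ~12.5 min, i.e. "on average 13 minutes"
```

The average-distance figure reproduces the talk's "13 minutes"; the "200 times further" comparison corresponds more closely to Mars near a close approach than to the average.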
02:32
And so a reasonable engineering solution
02:35
calls for us to put mission control
02:37
inside the walls of the Orion spacecraft.
02:41
Another fascinating idea
in the mission profile
02:43
places humanoid robots
on the surface of Mars
02:46
before the humans themselves arrive,
02:48
first to build facilities
02:50
and later to serve as collaborative
members of the science team.
02:55
Now, as I looked at this
from an engineering perspective,
02:58
it became very clear to me
that what I needed to architect
03:01
was a smart, collaborative,
03:03
socially intelligent
artificial intelligence.
03:05
In other words, I needed to build
something very much like a HAL
03:10
but without the homicidal tendencies.
03:12
(Laughter)
03:14
Let's pause for a moment.
03:16
Is it really possible to build
an artificial intelligence like that?
03:20
Actually, it is.
03:22
In many ways,
03:23
this is a hard engineering problem
03:25
with elements of AI,
03:26
not some wet hair ball of an AI problem
that needs to be engineered.
03:31
To paraphrase Alan Turing,
03:34
I'm not interested
in building a sentient machine.
03:36
I'm not building a HAL.
03:38
All I'm after is a simple brain,
03:40
something that offers
the illusion of intelligence.
03:45
The art and the science of computing
have come a long way
03:48
since HAL was onscreen,
03:49
and I'd imagine if his inventor
Dr. Chandra were here today,
03:52
he'd have a whole lot of questions for us.
03:55
Is it really possible for us
03:57
to take a system of millions
upon millions of devices,
04:01
to read in their data streams,
04:02
to predict their failures
and act in advance?
04:05
Yes.
04:06
Can we build systems that converse
with humans in natural language?
04:09
Yes.
04:10
Can we build systems
that recognize objects, identify emotions,
04:13
emote themselves,
play games and even read lips?
04:17
Yes.
04:18
Can we build a system that sets goals,
04:20
that carries out plans against those goals
and learns along the way?
04:24
Yes.
04:25
Can we build systems
that have a theory of mind?
04:28
This we are learning to do.
04:30
Can we build systems that have
an ethical and moral foundation?
04:34
This we must learn how to do.
04:37
So let's accept for a moment
04:38
that it's possible to build
such an artificial intelligence
04:41
for this kind of mission and others.
04:43
The next question
you must ask yourself is,
04:46
should we fear it?
04:47
Now, every new technology
04:49
brings with it
some measure of trepidation.
04:52
When we first saw cars,
04:54
people lamented that we would see
the destruction of the family.
04:58
When we first saw telephones come in,
05:01
people were worried it would destroy
all civil conversation.
05:04
At a point in time we saw
the written word become pervasive,
05:08
people thought we would lose
our ability to memorize.
05:10
These things are all true to a degree,
05:12
but it's also the case
that these technologies
05:15
brought to us things
that extended the human experience
05:18
in some profound ways.
05:21
So let's take this a little further.
05:25
I do not fear the creation
of an AI like this,
05:29
because it will eventually
embody some of our values.
05:33
Consider this: building a cognitive system
is fundamentally different
05:37
than building a traditional
software-intensive system of the past.
05:40
We don't program them. We teach them.
05:43
In order to teach a system
how to recognize flowers,
05:45
I show it thousands of flowers
of the kinds I like.
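As a toy illustration of that teach-by-example idea (this is not the talk's actual system; the flower labels and the petal measurements below are invented for illustration), a nearest-centroid classifier learns its categories purely from labeled samples rather than from hand-written rules:

```python
# "We don't program them. We teach them." - a minimal sketch:
# no rule for "rose" is ever written; the model is just the averages
# of the examples it was shown.

def centroid(samples):
    """Mean feature vector of a list of (petal_len, petal_width) tuples."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(2))

def train(examples):
    """examples: dict of label -> list of feature tuples. Returns label -> centroid."""
    return {label: centroid(samples) for label, samples in examples.items()}

def classify(model, x):
    """Pick the label whose learned centroid is closest to x."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], x))

# "I show it thousands of flowers" - here, just a handful of fake measurements.
examples = {
    "rose":  [(4.0, 2.0), (4.2, 2.1), (3.9, 1.9)],
    "daisy": [(1.0, 0.5), (1.1, 0.6), (0.9, 0.4)],
}
model = train(examples)
print(classify(model, (4.1, 2.0)))  # near the rose examples -> "rose"
```

The point the sketch makes is the one in the talk: the behavior comes from the examples chosen by the teacher, which is exactly how values sneak into the training data.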
05:48
In order to teach a system
how to play a game --
05:51
Well, I would. You would, too.
05:54
I like flowers. Come on.
05:57
To teach a system
how to play a game like Go,
06:00
I'd have it play thousands of games of Go,
06:02
but in the process I also teach it
06:04
how to discern
a good game from a bad game.
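That good-game/bad-game signal is, in miniature, outcome-based learning. A deliberately tiny sketch, using a made-up three-move game rather than Go: the learner is never told the rule, only whether each playthrough went well, and a preference emerges from the tallied outcomes:

```python
import random

# A made-up game with three candidate moves. The learner only ever sees
# a "good game" / "bad game" verdict, never the hidden rule itself.
random.seed(0)
scores = {move: 0 for move in (0, 1, 2)}

for _ in range(2000):
    move = random.randrange(3)
    # Hidden rule (unknown to the learner): move 2 wins 80% of the time.
    won = random.random() < (0.8 if move == 2 else 0.2)
    scores[move] += 1 if won else -1   # good games reinforce, bad ones penalize

best_move = max(scores, key=scores.get)
print(best_move)  # the preference for move 2 emerges from outcomes alone
```

Real Go systems are vastly more sophisticated, but the teaching relationship is the same: whoever defines "a good game" defines what the system learns to want.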
06:06
If I want to create an artificially
intelligent legal assistant,
06:10
I will teach it some corpus of law
06:12
but at the same time I am fusing with it
06:14
the sense of mercy and justice
that is part of that law.
06:18
In scientific terms,
this is what we call ground truth,
06:21
and here's the important point:
06:23
in producing these machines,
06:25
we are therefore teaching them
a sense of our values.
06:28
To that end, I trust
an artificial intelligence
06:31
the same, if not more,
as a human who is well-trained.
06:36
But, you may ask,
06:37
what about rogue agents,
06:39
some well-funded
nongovernment organization?
06:43
I do not fear an artificial intelligence
in the hand of a lone wolf.
06:47
Clearly, we cannot protect ourselves
against all random acts of violence,
06:51
but the reality is such a system
06:53
requires substantial training
and subtle training
06:57
far beyond the resources of an individual.
06:59
And furthermore,
07:00
it's far more than just injecting
an internet virus to the world,
07:03
where you push a button,
all of a sudden it's in a million places
07:06
and laptops start blowing up
all over the place.
07:09
Now, these kinds of substances
are much larger,
07:12
and we'll certainly see them coming.
07:14
Do I fear that such
an artificial intelligence
07:17
might threaten all of humanity?
07:20
If you look at movies
such as "The Matrix," "Metropolis,"
07:24
"The Terminator,"
shows such as "Westworld,"
07:27
they all speak of this kind of fear.
07:30
Indeed, in the book "Superintelligence"
by the philosopher Nick Bostrom,
07:34
he picks up on this theme
07:35
and observes that a superintelligence
might not only be dangerous,
07:39
it could represent an existential threat
to all of humanity.
07:43
Dr. Bostrom's basic argument
07:46
is that such systems will eventually
07:48
have such an insatiable
thirst for information
07:52
that they will perhaps learn how to learn
07:55
and eventually discover
that they may have goals
07:57
that are contrary to human needs.
08:00
Dr. Bostrom has a number of followers.
08:01
He is supported by people
such as Elon Musk and Stephen Hawking.
08:06
With all due respect
08:10
to these brilliant minds,
08:12
I believe that they
are fundamentally wrong.
08:14
Now, there are a lot of pieces
of Dr. Bostrom's argument to unpack,
08:17
and I don't have time to unpack them all,
08:19
but very briefly, consider this:
08:22
super knowing is very different
than super doing.
08:26
HAL was a threat to the Discovery crew
08:28
only insofar as HAL commanded
all aspects of the Discovery.
08:32
So it would have to be
with a superintelligence.
08:35
It would have to have dominion
over all of our world.
08:37
This is the stuff of Skynet
from the movie "The Terminator"
08:40
in which we had a superintelligence
08:42
that commanded human will,
08:43
that directed every device
that was in every corner of the world.
08:47
Practically speaking,
08:49
it ain't gonna happen.
08:51
We are not building AIs
that control the weather,
08:54
that direct the tides,
08:55
that command us
capricious, chaotic humans.
08:59
And furthermore, if such
an artificial intelligence existed,
09:03
it would have to compete
with human economies,
09:06
and thereby compete for resources with us.
09:09
And in the end --
09:10
don't tell Siri this --
09:12
we can always unplug them.
09:13
(Laughter)
09:17
We are on an incredible journey
09:19
of coevolution with our machines.
09:22
The humans we are today
09:24
are not the humans we will be then.
09:27
To worry now about the rise
of a superintelligence
09:30
is in many ways a dangerous distraction
09:33
because the rise of computing itself
09:36
brings to us a number
of human and societal issues
09:39
to which we must now attend.
09:41
How shall I best organize society
09:44
when the need for human labor diminishes?
09:46
How can I bring understanding
and education throughout the globe
09:50
and still respect our differences?
09:52
How might I extend and enhance human life
through cognitive healthcare?
09:56
How might I use computing
09:59
to help take us to the stars?
10:01
And that's the exciting thing.
10:04
The opportunities to use computing
10:06
to advance the human experience
10:08
are within our reach,
10:09
here and now,
10:11
and we are just beginning.
10:14
Thank you very much.
10:15
(Applause)
Translated by Danly Deng
Reviewed by Chak Lam Wan
