ABOUT THE SPEAKER
Grady Booch - Scientist, philosopher
IBM's Grady Booch is shaping the future of cognitive computing by building intelligent systems that can reason and learn.

Why you should listen

When he was 13, Grady Booch saw 2001: A Space Odyssey in the theaters for the first time. Ever since, he's been trying to build Hal (albeit one without the homicidal tendencies). A scientist, storyteller and philosopher, Booch is Chief Scientist for Software Engineering as well as Chief Scientist for Watson/M at IBM Research, where he leads IBM's research and development for embodied cognition. Having originated the term and the practice of object-oriented design, he is best known for his work in advancing the fields of software engineering and software architecture.

A co-author of the Unified Modeling Language (UML), a founding member of the Agile Alliance, and a founding member of the Hillside Group, Booch has published six books and several hundred technical articles, including an ongoing column for IEEE Software. He's also a trustee for the Computer History Museum, an IBM Fellow, an ACM Fellow and IEEE Fellow. He has been awarded the Lovelace Medal and has given the Turing Lecture for the BCS, and was recently named an IEEE Computer Pioneer.

Booch is currently deeply involved in the development of cognitive systems and is also developing a major trans-media documentary for public broadcast on the intersection of computing and the human experience.

More profile about the speaker
Grady Booch | Speaker | TED.com
TED@IBM

Grady Booch: Don't fear superintelligent AI

Filmed:
2,866,438 views

"New technology gives rise to new anxieties," says scientist and philosopher Grady Booch, "but we need not fear an all-powerful, unfeeling AI." Booch allays our sci-fi-induced fears of superintelligent machines by explaining that we will teach our values to AIs, not program them in. Rather than worrying about an unlikely threat, he urges us to consider how artificial intelligence will make human life better.

00:12
When I was a kid, I was the quintessential nerd. I think some of you were, too.

00:19
(Laughter)

00:20
And you, sir, who laughed the loudest, you probably still are.

00:24
(Laughter)

00:26
I grew up in a small town in the dusty plains of north Texas, the son of a sheriff who was the son of a pastor. Getting into trouble was not an option. And so I started reading calculus books for fun.

00:39
(Laughter)

00:40
You did, too. That led me to building a laser and a computer and model rockets, and that led me to making rocket fuel in my bedroom. Now, in scientific terms, we call this a very bad idea.

00:56
(Laughter)

00:58
Around that same time, Stanley Kubrick's "2001: A Space Odyssey" came to the theaters, and my life was forever changed. I loved everything about that movie, especially the HAL 9000. Now, HAL was a sentient computer designed to guide the Discovery spacecraft from the Earth to Jupiter. HAL was also a flawed character, for in the end he chose to value the mission over human life. Now, HAL was a fictional character, but nonetheless he speaks to our fears, our fears of being subjugated by some unfeeling, artificial intelligence who is indifferent to our humanity.

01:37
I believe that such fears are unfounded. Indeed, we stand at a remarkable time in human history, where, driven by refusal to accept the limits of our bodies and our minds, we are building machines of exquisite, beautiful complexity and grace that will extend the human experience in ways beyond our imagining.

01:59
After a career that led me from the Air Force Academy to Space Command to now, I became a systems engineer, and recently I was drawn into an engineering problem associated with NASA's mission to Mars. Now, in space flights to the Moon, we can rely upon mission control in Houston to watch over all aspects of a flight. However, Mars is 200 times further away, and as a result it takes on average 13 minutes for a signal to travel from the Earth to Mars. If there's trouble, there's not enough time.
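The 13-minute figure is easy to sanity-check: it is just distance divided by the speed of light. The sketch below uses approximate average distances (my assumptions, not numbers from the talk) to recover the delays Booch quotes.

```python
# Rough sanity check of the one-way signal delays quoted in the talk.
# Distances are approximate averages (assumptions, not from the talk).
C_KM_PER_S = 299_792.458      # speed of light in vacuum, km/s

MOON_KM = 384_400             # mean Earth-Moon distance
MARS_KM = 225_000_000         # commonly cited average Earth-Mars distance

def one_way_delay_s(distance_km: float) -> float:
    """Seconds for a radio signal to cover distance_km at light speed."""
    return distance_km / C_KM_PER_S

moon_s = one_way_delay_s(MOON_KM)         # ~1.3 seconds: Houston can react
mars_min = one_way_delay_s(MARS_KM) / 60  # ~12.5 minutes: Houston cannot

print(f"Moon: {moon_s:.1f} s, Mars: {mars_min:.1f} min")
```

At the Moon's distance, a round trip to mission control costs a couple of seconds; at Mars, a question and its answer are nearly half an hour apart, which is the whole case for onboard autonomy.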
02:32
And so a reasonable engineering solution calls for us to put mission control inside the walls of the Orion spacecraft.

02:41
Another fascinating idea in the mission profile places humanoid robots on the surface of Mars before the humans themselves arrive, first to build facilities and later to serve as collaborative members of the science team. Now, as I looked at this from an engineering perspective, it became very clear to me that what I needed to architect was a smart, collaborative, socially intelligent artificial intelligence. In other words, I needed to build something very much like a HAL but without the homicidal tendencies.

03:12
(Laughter)

03:14
Let's pause for a moment. Is it really possible to build an artificial intelligence like that? Actually, it is. In many ways, this is a hard engineering problem with elements of AI, not some wet hair ball of an AI problem that needs to be engineered. To paraphrase Alan Turing, I'm not interested in building a sentient machine. I'm not building a HAL. All I'm after is a simple brain, something that offers the illusion of intelligence.

03:45
The art and the science of computing have come a long way since HAL was onscreen, and I'd imagine if his inventor Dr. Chandra were here today, he'd have a whole lot of questions for us. Is it really possible for us to take a system of millions upon millions of devices, to read in their data streams, to predict their failures and act in advance? Yes. Can we build systems that converse with humans in natural language? Yes. Can we build systems that recognize objects, identify emotions, emote themselves, play games and even read lips? Yes. Can we build a system that sets goals, that carries out plans against those goals and learns along the way? Yes. Can we build systems that have a theory of mind? This we are learning to do. Can we build systems that have an ethical and moral foundation? This we must learn how to do.

04:37
So let's accept for a moment that it's possible to build such an artificial intelligence for this kind of mission and others. The next question you must ask yourself is, should we fear it? Now, every new technology brings with it some measure of trepidation. When we first saw cars, people lamented that we would see the destruction of the family. When we first saw telephones come in, people were worried it would destroy all civil conversation. At a point in time we saw the written word become pervasive, people thought we would lose our ability to memorize. These things are all true to a degree, but it's also the case that these technologies brought to us things that extended the human experience in some profound ways.

05:21
So let's take this a little further.

05:25
I do not fear the creation of an AI like this, because it will eventually embody some of our values. Consider this: building a cognitive system is fundamentally different than building a traditional software-intensive system of the past. We don't program them. We teach them. In order to teach a system how to recognize flowers, I show it thousands of flowers of the kinds I like. In order to teach a system how to play a game -- Well, I would. You would, too.

05:54
I like flowers. Come on.

05:57
To teach a system how to play a game like Go, I'd have it play thousands of games of Go, but in the process I also teach it how to discern a good game from a bad game.
2416
如果我想要一个人工智能法律助手,
06:06
If I want to create创建 an artificially人为
intelligent智能 legal法律 assistant助理,
131
354520
3696
我会给它一些法律文集,
06:10
I will teach it some corpus文集 of law
132
358240
1776
但同时我会将 怜悯和正义
06:12
but at the same相同 time I am fusing定影 with it
133
360040
2856
也是法律的一部分 这种观点融入其中。
06:14
the sense of mercy怜悯 and justice正义
that is part部分 of that law.
134
362920
2880
用一个术语来解释,
就是我们所说的真相,
06:18
In scientific科学 terms条款,
this is what we call ground地面 truth真相,
135
366560
2976
06:21
and here's这里的 the important重要 point:
136
369560
2016
而关键在于:
为了制造这些机器,
06:23
in producing生产 these machines,
137
371600
1456
06:25
we are therefore因此 teaching教学 them
a sense of our values.
138
373080
3416
我们正教给它们我们的价值观。
06:28
To that end结束, I trust相信
an artificial人造 intelligence情报
139
376520
3136
正因如此,我相信一个人工智能
绝不逊于一个经过良好训练的人类。
06:31
the same相同, if not more,
as a human人的 who is well-trained训练有素.
140
379680
3640
06:36
But, you may可能 ask,
141
384080
1216
但是,你或许会问,
06:37
what about rogue流氓 agents代理,
142
385320
2616
要是流氓组织,
06:39
some well-funded资金雄厚
nongovernment非政府 organization组织?
143
387960
3336
和资金充沛的无政府组织
也在利用它们呢?
06:43
I do not fear恐惧 an artificial人造 intelligence情报
in the hand of a lone孤单 wolf.
144
391320
3816
我并不害怕独狼掌控的人工智能。
很明显,我们无法从
随机的暴力行为中保护自己,
06:47
Clearly明确地, we cannot不能 protect保护 ourselves我们自己
against反对 all random随机 acts行为 of violence暴力,
145
395160
4536
但是现实是,制造这样一个系统
06:51
but the reality现实 is such这样 a system系统
146
399720
2136
06:53
requires要求 substantial大量的 training训练
and subtle微妙 training训练
147
401880
3096
超越了个人所拥有资源的极限,
06:57
far beyond the resources资源 of an individual个人.
148
405000
2296
因为它需要踏实细致的训练和培养。
06:59
And furthermore此外,
149
407320
1216
还有,
这远比向世界散播一个网络病毒,
07:00
it's far more than just injecting注射
an internet互联网 virus病毒 to the world世界,
150
408560
3256
比如你按下一个按钮,
瞬间全世界都被感染,
07:03
where you push a button按键,
all of a sudden突然 it's in a million百万 places地方
151
411840
3096
并且在各处的笔记本电脑中
开始爆发来的复杂。
07:06
and laptops笔记本电脑 start开始 blowing up
all over the place地点.
152
414960
2456
这类东西正在越来越强大,
07:09
Now, these kinds of substances物质
are much larger,
153
417440
2816
07:12
and we'll certainly当然 see them coming未来.
154
420280
1715
我们必然会看到它们的来临。
07:14
Do I fear恐惧 that such这样
an artificial人造 intelligence情报
155
422520
3056
我会害怕一个有可能威胁所有人类的
07:17
might威力 threaten威胁 all of humanity人性?
156
425600
1960
人工智能吗?
07:20
If you look at movies电影
such这样 as "The Matrix矩阵," "Metropolis都会,"
157
428280
4376
如果你看过《黑客帝国》,《大都会》,
《终结者》,或者
《西部世界》这类电视剧,
07:24
"The Terminator终结者,"
shows节目 such这样 as "Westworld西部世界,"
158
432680
3176
它们都在表达这种恐惧。
07:27
they all speak说话 of this kind of fear恐惧.
159
435880
2136
的确,在哲学家Nick Bostrom
写的《超级智能》中,
07:30
Indeed确实, in the book "Superintelligence超级智能"
by the philosopher哲学家 Nick缺口 Bostrom博斯特伦,
160
438040
4296
他选择了这个主题,
07:34
he picks精选 up on this theme主题
161
442360
1536
并且观察到超级智能不仅仅危险,
07:35
and observes观察 that a superintelligence超级智能
might威力 not only be dangerous危险,
162
443920
4016
它还对所有人类的存在造成了威胁。
07:39
it could represent代表 an existential存在 threat威胁
to all of humanity人性.
163
447960
3856
Bostrom博士的基础观点认为,
07:43
Dr博士. Bostrom's博斯特伦的 basic基本 argument论据
164
451840
2216
这样的系统迟早会
07:46
is that such这样 systems系统 will eventually终于
165
454080
2736
产生对信息的无止境渴求,
07:48
have such这样 an insatiable贪心
thirst口渴 for information信息
166
456840
3256
也许它们会开始学习,
07:52
that they will perhaps也许 learn学习 how to learn学习
167
460120
2896
并且最终发现它们的目的
07:55
and eventually终于 discover发现
that they may可能 have goals目标
168
463040
2616
和人类的需求背道而驰。
07:57
that are contrary相反 to human人的 needs需求.
169
465680
2296
Bostrom博士有一群粉丝。
08:00
Dr博士. Bostrom博斯特伦 has a number of followers追随者.
170
468000
1856
Elon Musk和Stephen Hawking也支持他。
08:01
He is supported支持的 by people
such这样 as Elon伊隆 Musk and Stephen斯蒂芬 Hawking霍金.
171
469880
4320
08:06
With all due应有 respect尊重
172
474880
2400
虽然要向这些伟大的头脑
08:10
to these brilliant辉煌 minds头脑,
173
478160
2016
致以崇高的敬意,
但我还是相信他们从一开始就错了。
08:12
I believe that they
are fundamentally从根本上 wrong错误.
174
480200
2256
Bostrom博士的观点
有很多地方可以细细体会,
08:14
Now, there are a lot of pieces
of Dr博士. Bostrom's博斯特伦的 argument论据 to unpack解压,
175
482480
3176
08:17
and I don't have time to unpack解压 them all,
176
485680
2136
但现在我没有时间一一解读,
08:19
but very briefly简要地, consider考虑 this:
177
487840
2696
简要而言,请考虑这句话:
08:22
super knowing会心 is very different不同
than super doing.
178
490560
3736
全知并非全能。
HAL成为了对发现一号成员的威胁,
08:26
HALHAL was a threat威胁 to the Discovery发现 crew船员
179
494320
1896
只是因为它控制了
发现一号的各个方面。
08:28
only insofar只要 as HALHAL commanded指挥
all aspects方面 of the Discovery发现.
180
496240
4416
正因如此它才需要是一个人工智能。
08:32
So it would have to be
with a superintelligence超级智能.
181
500680
2496
它需要对于我们世界的完全掌控。
08:35
It would have to have dominion主权
over all of our world世界.
182
503200
2496
这就是《终结者》中的天网,
08:37
This is the stuff东东 of Skynet天网
from the movie电影 "The Terminator终结者"
183
505720
2816
一个控制了人们意志,
08:40
in which哪一个 we had a superintelligence超级智能
184
508560
1856
控制了世界各处
08:42
that commanded指挥 human人的 will,
185
510440
1376
08:43
that directed针对 every一切 device设备
that was in every一切 corner of the world世界.
186
511840
3856
各个机器的超级智能。
说实在的,
08:47
Practically几乎 speaking请讲,
187
515720
1456
这完全是杞人忧天。
08:49
it ain't gonna happen发生.
188
517200
2096
我们不是在制造可以控制天气,
08:51
We are not building建造 AIs认可
that control控制 the weather天气,
189
519320
3056
引导潮水,
08:54
that direct直接 the tides潮汐,
190
522400
1336
指挥我们这些
多变,混乱的人类的人工智能。
08:55
that command命令 us
capricious任性, chaotic混乱的 humans人类.
191
523760
3376
08:59
And furthermore此外, if such这样
an artificial人造 intelligence情报 existed存在,
192
527160
3896
另外,即使这类人工智能存在,
它需要和人类的经济竞争,
09:03
it would have to compete竞争
with human人的 economies经济,
193
531080
2936
进而和我们拥有的资源竞争。
09:06
and thereby从而 compete竞争 for resources资源 with us.
194
534040
2520
09:09
And in the end结束 --
195
537200
1216
最后——
不要告诉Siri——
09:10
don't tell SiriSiri的 this --
196
538440
1240
我们可以随时拔掉电源。
09:12
we can always unplug them.
197
540440
1376
09:13
(Laughter笑声)
198
541840
2120
(笑声)
09:17
We are on an incredible难以置信 journey旅程
199
545360
2456
我们正处于和机器共同演化的
09:19
of coevolution协同进化 with our machines.
200
547840
2496
奇妙旅途之中。
09:22
The humans人类 we are today今天
201
550360
2496
未来的人类
将与今天的人类大相径庭。
09:24
are not the humans人类 we will be then.
202
552880
2536
当前对人工智能崛起的担忧,
09:27
To worry担心 now about the rise上升
of a superintelligence超级智能
203
555440
3136
09:30
is in many许多 ways方法 a dangerous危险 distraction娱乐
204
558600
3056
从各方面来说都是
一个危险的错误指引,
因为电脑的崛起
09:33
because the rise上升 of computing计算 itself本身
205
561680
2336
带给了我们许多必须参与的
09:36
brings带来 to us a number
of human人的 and societal社会的 issues问题
206
564040
3016
09:39
to which哪一个 we must必须 now attend出席.
207
567080
1640
关乎人类和社会的问题。
09:41
How shall I best最好 organize组织 society社会
208
569360
2816
我应该如何管理社会
来应对人类劳工需求量的降低?
09:44
when the need for human人的 labor劳动 diminishes减少?
209
572200
2336
09:46
How can I bring带来 understanding理解
and education教育 throughout始终 the globe地球
210
574560
3816
我应该怎样在进行全球化
交流和教育的同时,
依旧尊重彼此的不同?
09:50
and still respect尊重 our differences分歧?
211
578400
1776
我应该如何通过可认知医疗
来延长并强化人类的生命?
09:52
How might威力 I extend延伸 and enhance提高 human人的 life
through通过 cognitive认知 healthcare卫生保健?
212
580200
4256
09:56
How might威力 I use computing计算
213
584480
2856
我应该如何通过计算机
来帮助我们前往其他星球?
09:59
to help take us to the stars明星?
214
587360
1760
10:01
And that's the exciting扣人心弦 thing.
215
589760
2040
这些都很令人兴奋。
10:04
The opportunities机会 to use computing计算
216
592400
2336
通过计算机来升级
人类体验的机会
10:06
to advance提前 the human人的 experience经验
217
594760
1536
10:08
are within our reach达到,
218
596320
1416
就在我们手中,
就在此时此刻,
10:09
here and now,
219
597760
1856
我们的旅途才刚刚开始。
10:11
and we are just beginning开始.
220
599640
1680
10:14
Thank you very much.
221
602280
1216
谢谢大家。
(掌声)
10:15
(Applause掌声)
222
603520
4286
Translated by Weidi Liu
Reviewed by Jiawei Ni

Data provided by TED.
