TED@IBM

Grady Booch: Don't fear superintelligent AI

New tech spawns new anxieties, says scientist and philosopher Grady Booch, but we don't need to be afraid of an all-powerful, unfeeling AI. Booch allays our worst (sci-fi-induced) fears about superintelligent computers by explaining how we'll teach, not program, them to share our human values. Rather than worry about an unlikely existential threat, he urges us to consider how artificial intelligence will enhance human life.

When I was a kid,
I was the quintessential nerd.
00:12
I think some of you were, too.
00:17
(Laughter)
00:19
And you, sir, who laughed the loudest,
you probably still are.
00:20
(Laughter)
00:23
I grew up in a small town
in the dusty plains of north Texas,
00:26
the son of a sheriff
who was the son of a pastor.
00:29
Getting into trouble was not an option.
00:32
And so I started reading
calculus books for fun.
00:35
(Laughter)
00:39
You did, too.
00:40
That led me to building a laser
and a computer and model rockets,
00:42
and that led me to making
rocket fuel in my bedroom.
00:46
Now, in scientific terms,
00:49
we call this a very bad idea.
00:53
(Laughter)
00:56
Around that same time,
00:57
Stanley Kubrick's "2001: A Space Odyssey"
came to the theaters,
01:00
and my life was forever changed.
01:03
I loved everything about that movie,
01:06
especially the HAL 9000.
01:08
Now, HAL was a sentient computer
01:10
designed to guide the Discovery spacecraft
01:12
from the Earth to Jupiter.
01:15
HAL was also a flawed character,
01:17
for in the end he chose
to value the mission over human life.
01:19
Now, HAL was a fictional character,
01:24
but nonetheless he speaks to our fears,
01:26
our fears of being subjugated
01:29
by some unfeeling, artificial intelligence
01:31
who is indifferent to our humanity.
01:34
I believe that such fears are unfounded.
01:37
Indeed, we stand at a remarkable time
01:40
in human history,
01:43
where, driven by a refusal to accept
the limits of our bodies and our minds,
01:44
we are building machines
01:49
of exquisite, beautiful
complexity and grace
01:51
that will extend the human experience
01:54
in ways beyond our imagining.
01:57
After a career that led me
from the Air Force Academy
01:59
to Space Command to now,
02:02
I became a systems engineer,
02:04
and recently I was drawn
into an engineering problem
02:05
associated with NASA's mission to Mars.
02:08
Now, in space flights to the Moon,
02:11
we can rely upon
mission control in Houston
02:13
to watch over all aspects of a flight.
02:16
However, Mars is 200 times further away,
02:18
and as a result it takes
on average 13 minutes
02:22
for a signal to travel
from the Earth to Mars.
02:25
If there's trouble,
there's not enough time.
02:28
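A quick back-of-the-envelope check of that figure, as a sketch: assuming an average Earth-Mars distance of about 225 million kilometers (the true one-way delay swings between roughly 3 and 22 minutes as the two planets orbit), the light-travel time works out to about 12.5 minutes.

```python
# Hypothetical sanity check of the ~13-minute signal delay quoted above.
# Assumes an average Earth-Mars distance of ~225 million km; the real
# distance (and delay) varies as the planets orbit.
SPEED_OF_LIGHT_KM_S = 299_792.458      # speed of light in vacuum, km/s
AVG_EARTH_MARS_KM = 225_000_000        # assumed average distance, km

one_way_delay_min = AVG_EARTH_MARS_KM / SPEED_OF_LIGHT_KM_S / 60
print(f"One-way signal delay: {one_way_delay_min:.1f} minutes")
# -> about 12.5 minutes one way, so ~25 minutes for a round trip
```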
And so a reasonable engineering solution
02:32
calls for us to put mission control
02:35
inside the walls of the Orion spacecraft.
02:37
Another fascinating idea
in the mission profile
02:40
places humanoid robots
on the surface of Mars
02:43
before the humans themselves arrive,
02:46
first to build facilities
02:48
and later to serve as collaborative
members of the science team.
02:50
Now, as I looked at this
from an engineering perspective,
02:55
it became very clear to me
that what I needed to architect
02:57
was a smart, collaborative,
03:01
socially intelligent
artificial intelligence.
03:03
In other words, I needed to build
something very much like a HAL
03:05
but without the homicidal tendencies.
03:10
(Laughter)
03:12
Let's pause for a moment.
03:14
Is it really possible to build
an artificial intelligence like that?
03:16
Actually, it is.
03:20
In many ways,
03:21
this is a hard engineering problem
03:23
with elements of AI,
03:25
not some wet hair ball of an AI problem
that needs to be engineered.
03:26
To paraphrase Alan Turing,
03:31
I'm not interested
in building a sentient machine.
03:34
I'm not building a HAL.
03:36
All I'm after is a simple brain,
03:38
something that offers
the illusion of intelligence.
03:40
The art and the science of computing
have come a long way
03:44
since HAL was onscreen,
03:47
and I'd imagine if his inventor
Dr. Chandra were here today,
03:49
he'd have a whole lot of questions for us.
03:52
Is it really possible for us
03:55
to take a system of millions
upon millions of devices,
03:57
to read in their data streams,
04:01
to predict their failures
and act in advance?
04:02
Yes.
04:05
Can we build systems that converse
with humans in natural language?
04:06
Yes.
04:09
Can we build systems
that recognize objects, identify emotions,
04:10
emote themselves,
play games and even read lips?
04:13
Yes.
04:17
Can we build a system that sets goals,
04:18
that carries out plans against those goals
and learns along the way?
04:20
Yes.
04:24
Can we build systems
that have a theory of mind?
04:25
This we are learning to do.
04:28
Can we build systems that have
an ethical and moral foundation?
04:30
This we must learn how to do.
04:34
So let's accept for a moment
04:37
that it's possible to build
such an artificial intelligence
04:38
for this kind of mission and others.
04:41
The next question
you must ask yourself is,
04:43
should we fear it?
04:46
Now, every new technology
04:47
brings with it
some measure of trepidation.
04:49
When we first saw cars,
04:52
people lamented that we would see
the destruction of the family.
04:54
When we first saw telephones come in,
04:58
people were worried it would destroy
all civil conversation.
05:01
When the written word
became pervasive,
05:04
people thought we would lose
our ability to memorize.
05:07
These things are all true to a degree,
05:10
but it's also the case
that these technologies
05:12
brought to us things
that extended the human experience
05:15
in some profound ways.
05:18
So let's take this a little further.
05:21
I do not fear the creation
of an AI like this,
05:24
because it will eventually
embody some of our values.
05:29
Consider this: building a cognitive system
is fundamentally different
05:33
than building a traditional
software-intensive system of the past.
05:37
We don't program them. We teach them.
05:40
In order to teach a system
how to recognize flowers,
05:42
I show it thousands of flowers
of the kinds I like.
05:45
In order to teach a system
how to play a game --
05:48
Well, I would. You would, too.
05:50
I like flowers. Come on.
05:54
To teach a system
how to play a game like Go,
05:57
I'd have it play thousands of games of Go,
06:00
but in the process I also teach it
06:02
how to discern
a good game from a bad game.
06:03
If I want to create an artificially
intelligent legal assistant,
06:06
I will teach it some corpus of law
06:10
but at the same time I am fusing with it
06:11
the sense of mercy and justice
that is part of that law.
06:14
In scientific terms,
this is what we call ground truth,
06:18
and here's the important point:
06:21
in producing these machines,
06:23
we are therefore teaching them
a sense of our values.
06:24
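To make "we teach them" concrete, here is a minimal, hypothetical sketch of that idea: a classifier that learns to recognize flower species from labeled examples, its ground truth, rather than from hand-written rules. It assumes scikit-learn is installed and uses the bundled iris dataset as a stand-in for "thousands of flowers":

```python
# A minimal sketch of teaching rather than programming: no rules are
# written for telling flowers apart; the model infers them from
# labeled examples (the ground truth).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)          # flower measurements + species labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)   # the "teaching" step
print(f"Accuracy on unseen flowers: {model.score(X_test, y_test):.2f}")
```

The values the system absorbs are exactly the ones embedded in the examples we choose to show it, which is the point of the passage above.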
To that end, I trust
an artificial intelligence
06:28
the same as, if not more than,
a human who is well-trained.
06:31
But, you may ask,
06:35
what about rogue agents,
06:37
some well-funded
nongovernmental organization?
06:39
I do not fear an artificial intelligence
in the hand of a lone wolf.
06:43
Clearly, we cannot protect ourselves
against all random acts of violence,
06:46
but the reality is such a system
06:51
requires substantial training
and subtle training
06:53
far beyond the resources of an individual.
06:56
And furthermore,
06:59
it's far more than just injecting
an internet virus into the world,
07:00
where you push a button,
all of a sudden it's in a million places
07:03
and laptops start blowing up
all over the place.
07:06
Now, these kinds of systems
are much larger,
07:09
and we'll certainly see them coming.
07:12
Do I fear that such
an artificial intelligence
07:14
might threaten all of humanity?
07:17
If you look at movies
such as "The Matrix," "Metropolis,"
07:20
"The Terminator,"
shows such as "Westworld,"
07:24
they all speak of this kind of fear.
07:27
Indeed, in his book "Superintelligence,"
the philosopher Nick Bostrom
07:29
picks up on this theme
07:34
and observes that a superintelligence
might not only be dangerous,
07:35
it could represent an existential threat
to all of humanity.
07:39
Dr. Bostrom's basic argument
07:43
is that such systems will eventually
07:45
have such an insatiable
thirst for information
07:48
that they will perhaps learn how to learn
07:51
and eventually discover
that they may have goals
07:54
that are contrary to human needs.
07:57
Dr. Bostrom has a number of followers.
07:59
He is supported by people
such as Elon Musk and Stephen Hawking.
08:01
With all due respect
08:06
to these brilliant minds,
08:09
I believe that they
are fundamentally wrong.
08:12
Now, there are a lot of pieces
of Dr. Bostrom's argument to unpack,
08:14
and I don't have time to unpack them all,
08:17
but very briefly, consider this:
08:19
super knowing is very different
than super doing.
08:22
HAL was a threat to the Discovery crew
08:26
only insofar as HAL commanded
all aspects of the Discovery.
08:28
So it would have to be
with a superintelligence.
08:32
It would have to have dominion
over all of our world.
08:35
This is the stuff of Skynet
from the movie "The Terminator"
08:37
in which we had a superintelligence
08:40
that commanded human will,
08:42
that directed every device
that was in every corner of the world.
08:43
Practically speaking,
08:47
it ain't gonna happen.
08:49
We are not building AIs
that control the weather,
08:51
that direct the tides,
08:54
that command us
capricious, chaotic humans.
08:55
And furthermore, if such
an artificial intelligence existed,
08:58
it would have to compete
with human economies,
09:02
and thereby compete for resources with us.
09:05
And in the end --
09:09
don't tell Siri this --
09:10
we can always unplug them.
09:12
(Laughter)
09:13
We are on an incredible journey
09:17
of coevolution with our machines.
09:19
The humans we are today
09:22
are not the humans we will be then.
09:24
To worry now about the rise
of a superintelligence
09:27
is in many ways a dangerous distraction
09:30
because the rise of computing itself
09:33
brings to us a number
of human and societal issues
09:35
to which we must now attend.
09:38
How shall I best organize society
09:41
when the need for human labor diminishes?
09:44
How can I bring understanding
and education throughout the globe
09:46
and still respect our differences?
09:50
How might I extend and enhance human life
through cognitive healthcare?
09:52
How might I use computing
09:56
to help take us to the stars?
09:59
And that's the exciting thing.
10:01
The opportunities to use computing
10:04
to advance the human experience
10:06
are within our reach,
10:08
here and now,
10:09
and we are just beginning.
10:11
Thank you very much.
10:14
(Applause)
10:15

About the Speaker:

Grady Booch - Scientist, philosopher
IBM's Grady Booch is shaping the future of cognitive computing by building intelligent systems that can reason and learn.

Why you should listen

When he was 13, Grady Booch saw 2001: A Space Odyssey in the theaters for the first time. Ever since, he's been trying to build HAL (albeit one without the homicidal tendencies). A scientist, storyteller and philosopher, Booch is Chief Scientist for Software Engineering as well as Chief Scientist for Watson/M at IBM Research, where he leads IBM's research and development for embodied cognition. Having originated the term and the practice of object-oriented design, he is best known for his work in advancing the fields of software engineering and software architecture.

A co-author of the Unified Modeling Language (UML), a founding member of the Agile Alliance, and a founding member of the Hillside Group, Booch has published six books and several hundred technical articles, including an ongoing column for IEEE Software. He's also a trustee for the Computer History Museum, an IBM Fellow, an ACM Fellow and an IEEE Fellow. He has been awarded the Lovelace Medal and has given the Turing Lecture for the BCS, and was recently named an IEEE Computer Pioneer.

Booch is currently deeply involved in the development of cognitive systems and is also developing a major trans-media documentary for public broadcast on the intersection of computing and the human experience.
