TEDGlobal 2005

Nick Bostrom: A philosophical quest for our biggest problems

July 14, 2005

Oxford philosopher and transhumanist Nick Bostrom examines the future of humankind and asks whether we might alter the fundamental nature of humanity to solve our most intrinsic problems.

Nick Bostrom - Philosopher
Nick Bostrom asks big questions: What should we do, as individuals and as a species, to optimize our long-term prospects? Will humanity’s technological advancements ultimately destroy us? Full bio

I want to talk today about -- I've been asked to take the long view, and I'm going to tell you what I think are the three biggest problems for humanity from this long point of view. Some of these have already been touched upon by other speakers, which is encouraging. It seems that there's not just one person who thinks that these problems are important.
The first is -- death is a big problem. If you look at the statistics, the odds are not very favorable to us. So far, most people who have lived have also died; roughly 90 percent of everybody who has ever been alive has died by now. The daily death rate is 150,000 people per day, which is a huge number by any standard. The annual death rate, then, becomes 56 million. If we look at the single biggest cause of death -- aging -- it accounts for roughly two-thirds of all people who die. That adds up to an annual death toll greater than the population of Canada.
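As a quick sanity check, the arithmetic here holds up. The 150,000-per-day figure is the talk's own; Canada's population around 2005 (roughly 32 million) is an outside approximation added for the comparison:

```python
# Sanity-check the death-rate figures quoted in the talk.
daily_deaths = 150_000
annual_deaths = daily_deaths * 365              # 54,750,000 -- close to the 56 million quoted
deaths_from_aging = annual_deaths * 2 // 3      # aging: roughly two-thirds of all deaths

canada_population_2005 = 32_000_000             # approximate, assumed figure
print(f"{annual_deaths:,} deaths per year")     # 54,750,000 deaths per year
print(deaths_from_aging > canada_population_2005)  # True
```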
Sometimes we don't see a problem because it's either too familiar or too big. I think death might be both too familiar and too big for most people to see as a problem. Once you think about it, you see these are not just statistical points. Let's see -- how far have I talked? I've talked for three minutes. So, roughly, 324 people have died since I began speaking -- roughly the population of this room has just died.
Now, the human cost of that is obvious once you start to think about it -- the suffering, the loss -- but it's also, economically, enormously wasteful. Just look at the information, knowledge and experience that is lost due to natural causes of death in general, and aging in particular. Suppose we approximated one person with one book. Now, of course, this is an underestimation: a person's lifetime of learning and experience is a lot more than you could put into a single book. But let's suppose we did this. The 52 million people who die of natural causes each year would correspond, then, to 52 million volumes destroyed. The Library of Congress holds 18 million volumes. We are upset about the burning of the Library of Alexandria -- it's one of the great cultural tragedies that we remember even today. But this is the equivalent of three Libraries of Congress, burnt down, forever lost, each year.
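The book analogy can be checked the same way; both the 52 million deaths and the 18-million-volume figure are the talk's own numbers:

```python
# One book per person dying of natural causes each year.
volumes_lost_per_year = 52_000_000
library_of_congress_volumes = 18_000_000   # figure quoted in the talk

ratio = volumes_lost_per_year / library_of_congress_volumes
print(round(ratio, 1))   # 2.9 -- roughly "three Libraries of Congress" per year
```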
So that's the first big problem. And I wish Godspeed to Aubrey de Grey, and other people like him, to try to do something about this as soon as possible.
Existential risk -- the second big problem. An existential risk is a threat to human survival, or to the long-term potential of our species. Now, why do I say that this is a big problem? Well, let's first look at the probability -- and this is very, very difficult to estimate -- but there have been only four studies on this in recent years, which is surprising. You would think it would be of some interest to try to find out more about this, given that the stakes are so big, but it's a very neglected area.
But there have been four studies. John Leslie wrote a book on this; he estimated the probability that we will fail to survive the current century at 50 percent. Similarly, the Astronomer Royal, whom we heard speak yesterday, also has a 50 percent probability estimate. Another author doesn't give any numerical estimate, but says the probability that we will fail is significant. I wrote a long paper on this, in which I said that assigning less than a 20 percent probability would be a mistake in light of the current evidence. Now, we should take the exact figures here with a big grain of salt, but there seems to be a consensus that the risk is substantial -- everybody who has looked at this and studied it agrees.
Now, think about what reducing the probability of human extinction by just one percentage point -- not very much -- would mean. It's equivalent to 60 million lives saved, if we just count the currently living people, the current generation: one percent of six billion people is 60 million. So that's a large number. If we were to take into account future generations that will never come into existence if we blow ourselves up, then the figure becomes astronomical.
If we could eventually colonize a chunk of the universe -- the Virgo supercluster -- maybe it would take us 100 million years to get there, but if we go extinct, we never will. Then even a one percentage point reduction in the extinction risk could be equivalent to this astronomical number: 10 to the power of 32. So if you take into account future generations as much as our own, every other moral imperative of philanthropic cause just becomes irrelevant. The only thing you should focus on would be to reduce existential risk, because even the tiniest decrease in existential risk would overwhelm any other benefit you could hope to achieve. And even if you just look at the current people, and ignore the potential that would be lost if we went extinct, it should still have a high priority.
Now, let me spend the rest of my time on the third big problem, because it's more subtle and perhaps difficult to grasp. Think about some time in your life -- some people might never have experienced it -- but there are those moments you have experienced where life was fantastic. It might have been a moment of some great creative inspiration, when you entered the flow state. Or when you understood something you had never grasped before. Or perhaps the ecstasy of romantic love. Or an aesthetic experience -- a sunset or a great piece of art. Every once in a while we have these moments, and we realize just how good life can be when it's at its best. And you wonder, why can't it be like that all the time? You just want to cling onto this. And then, of course, it drifts back into ordinary life and the memory fades. And it's really difficult to recall, in a normal frame of mind, just how good life can be at its best -- or how bad it can be at its worst. The third big problem is that life isn't usually as wonderful as it could be. I think that's a big, big problem.
It's easy to say what we don't want. Here are a number of things that we don't want: illness, involuntary death, unnecessary suffering, cruelty, stunted growth, memory loss, ignorance, absence of creativity. Suppose we fixed these things -- we did something about all of them, and we were very successful; we got rid of all of these things. We might end up with something like this, which is -- I mean, it's a heck of a lot better than that. But is this really the best we can dream of? Is this the best we can do? Or is it possible to find something a little bit more inspiring to work towards? And if we think about this, I think it's very clear that there are ways in which we could change things -- not just by eliminating negatives, but by adding positives.
On my wish list, at least, would be: much longer, healthier lives; greater subjective well-being; enhanced cognitive capacities; more knowledge and understanding; unlimited opportunity for personal growth beyond our current biological limits; better relationships; and an unbounded potential for spiritual, moral and intellectual development.
If we want to achieve this, what, in the world, would have to change? And this is the answer: we would have to change. Not just the world around us, but we, ourselves. Not just the way we think about the world, but the way we are -- our very biology. Human nature would have to change.
Now, when we think about changing human nature, the first thing that comes to mind are these human modification technologies -- growth hormone therapy, cosmetic surgery, stimulants like Ritalin and Adderall, anti-depressants, anabolic steroids, artificial hearts. It's a pretty pathetic list. They do great things for a few people who suffer from some specific condition, but for most people, they don't really transform what it is to be human. And they also all seem a little bit -- most people have this instinct that, well, sure, there need to be anti-depressants for the really depressed people, but there's a kind of queasiness that these are unnatural in some way.
It's worth recalling that there are a lot of other modification and enhancement technologies that we use. We have skin enhancements: clothing. As far as I can see, all of you in this room are users of this enhancement technology, so that's a great thing. Mood modifiers have been used from time immemorial -- caffeine, alcohol, nicotine. Immune system enhancement, vision enhancement, anesthetics -- we take those very much for granted, but just think about what great progress that is -- having an operation before anesthetics was not fun. Contraceptives, cosmetics and brain reprogramming techniques -- that sounds ominous, but the distinction between what is a technology -- a gadget would be the archetype -- and other ways of changing and rewriting human nature is quite subtle.
If you think about what it means to learn arithmetic or to learn to read, you're actually, literally, rewriting your own brain; you're changing the microstructure of your brain as you go along. So in a broad sense, we don't need to think of technology as only little gadgets, like these things here -- institutions and techniques, psychological methods and so forth count as well. Forms of organization can have a profound impact on human nature.
Looking ahead, there is a range of technologies that are almost certain to be developed sooner or later. We are very ignorant about what the time scale for these things is, but they are all consistent with everything we know about physical laws, the laws of chemistry, and so on. It's reasonable to assume, setting aside the possibility of catastrophe, that sooner or later we will develop all of these. And even just a couple of them would be enough to transform the human condition.
So let's look at some of the dimensions of human nature that seem to leave room for improvement. Health span is a big and urgent one, because if you're not alive, then all the other things will be to little avail. Intellectual capacity -- let's take that box -- falls into a lot of different sub-categories: memory, concentration, mental energy, intelligence, empathy. These are really great things. Part of the reason why we value these traits is that they make us better at competing with other people -- they're positional goods. But part of the reason -- and that's the reason why we have ethical grounds for pursuing these -- is that they're also intrinsically valuable. It's just better to be able to understand more of the world around you and the people you are communicating with, and to remember what you have learned.
Modalities and special faculties. Now, the human mind is not a single, unitary information processor; it has a lot of different, special, evolved modules that do specific things for us. If you think about what we normally take as giving life a lot of its meaning -- music, humor, eroticism, spirituality, aesthetics, nurturing and caring, gossip, chatting with people -- all of these, very likely, are enabled by special circuitry that we humans have, but another intelligent life form could lack them. We're just lucky that we have the requisite neural machinery to process music, and to appreciate and enjoy it.
All of these would, in principle, be amenable to enhancement. Some people have a better musical ability, and ability to appreciate music, than others have. It's also interesting to think about what else there might be: if these modalities all enable great values, why should we think that evolution has happened to provide us with all the modalities we would need to engage with whatever other values there might be?
Imagine a species that just didn't have this neural machinery for processing music. They would just stare at us in bafflement when we spend time listening to a beautiful performance, like the one we just heard -- people making stupid movements -- and they would be really irritated and wouldn't see what we were up to. But maybe they have another faculty, something else that would seem equally irrational to us, that actually taps into some great possible value there. We are just literally deaf to that kind of value. So we could think of adding different, new sensory capacities and mental faculties.
Bodily functionality and morphology, and affective self-control. Greater subjective well-being. Being able to switch between relaxation and activity -- to go slow when you need to, and to speed up. Being able to switch back and forth more easily would be a neat thing -- it would make it easier to achieve the flow state, when you're totally immersed in something you are doing.
Conscientiousness and sympathy. Here is another interesting application, one that would perhaps have large social ramifications. If you could actually choose to preserve your romantic attachment to one person, undiminished through time, love would never have to fade if you didn't want it to. That's probably not all that difficult -- it might just be a simple hormone or something that could do this. It's been done in voles: you can engineer a prairie vole to become monogamous when it's naturally polygamous. It's just a single gene. It might be more complicated in humans, but perhaps not that much.
This is the last picture that I want to show -- now I've got to use the laser pointer. A possible mode of being, here, would be a way of life -- a way of being, experiencing, thinking, seeing, interacting with the world. Down here in this little corner we have the little sub-space of this larger space that is accessible to human beings -- beings with our biological capacities. It's a part of the space that's accessible to animals; since we are animals, we are a subset of that. And then you can imagine some enhancements of human capacities. There would be different modes of being you could experience if you were able to stay alive for, say, 200 years. Then you could live different sorts of lives and accumulate wisdom that is just not possible for humans as we currently are. So then you move off to this larger sphere of "human+", and you could continue that process and eventually explore a lot of this larger space of possible modes of being.
Now, why is that a good thing to do? Well, we know already that in this little human circle there are these enormously wonderful and worthwhile modes of being -- human life at its best is wonderful. We have no reason to believe that within this much, much larger space there would not also be extremely worthwhile modes of being, perhaps ones that would be way beyond our wildest ability even to imagine or dream about. And so, to fix this third problem, I think we need to -- slowly, carefully, with ethical wisdom and constraint -- develop the means that enable us to go out into this larger space and explore it, and find the great values that might hide there.

Thanks.


Nick Bostrom - Philosopher

Why you should listen

Philosopher Nick Bostrom envisioned a future full of human enhancement, nanotechnology and machine intelligence long before they became mainstream concerns. From his famous simulation argument -- which identified some striking implications of rejecting the Matrix-like idea that humans are living in a computer simulation -- to his work on existential risk, Bostrom approaches both the inevitable and the speculative using the tools of philosophy, probability theory, and scientific analysis.

Since 2005, Bostrom has led the Future of Humanity Institute, a research group of mathematicians, philosophers and scientists at Oxford University tasked with investigating the big picture for the human condition and its future. He has been referred to as one of the most important thinkers of our age.

Nick was honored as one of Foreign Policy's 2015 Global Thinkers.

His recent book Superintelligence advances the ominous idea that “the first ultraintelligent machine is the last invention that man need ever make.”



Data provided by TED.

This website is owned and operated by Tokyo English Network.