Nick Bostrom: What happens when our computers get smarter than we are?
I work with a bunch of mathematicians, philosophers and computer scientists, and we sit around and think about the future of machine intelligence, among other things. Some people think that some of these things are sort of science fiction-y, far out there, crazy. But I like to say, okay, let's look at the modern human condition. This is the normal way for things to be. But if we think about it, we are actually recently arrived guests on this planet, the human species. Think about it: if Earth was created one year ago, the human species would be 10 minutes old. The industrial era started two seconds ago.
Another way to look at this is to think of world GDP over the last 10,000 years. I've actually taken the trouble to plot this for you in a graph. It's a curious shape for a normal condition. Let's ask ourselves, what is the cause of this current anomaly? Some people would say it's technology. Now it's true, technology has accumulated through human history, and right now technology advances extremely rapidly -- that is the proximate cause, that's why we are currently so very productive. But I like to think back further, to the ultimate cause.
Look at these two highly distinguished gentlemen: we have Kanzi -- he's mastered 200 lexical tokens, an incredible feat -- and Ed Witten, who unleashed the second superstring revolution. If we look under the hood, this is what we find: basically the same thing. One is a little larger, and it maybe also has a few tricks in the exact way it's wired. These invisible differences cannot be too complicated, however, because there have only been 250,000 generations since our last common ancestor, and we know that complicated mechanisms take a long time to evolve. So a bunch of relatively minor changes take us from Kanzi to Witten, from broken-off tree branches to intercontinental ballistic missiles. It then seems pretty obvious that everything we've achieved, and everything we care about, depends crucially on some relatively minor changes that made the human mind.
And the corollary, of course, is that any further changes that could significantly change the substrate of thinking could have potentially enormous consequences. Some of my colleagues think we're on the verge of something that could cause a profound change in that substrate, and that is machine superintelligence.
Artificial intelligence used to be about putting commands in a box. You would have human programmers that would painstakingly handcraft knowledge items. You built up these expert systems, and they were kind of useful for some purposes, but they were very brittle; you couldn't scale them. Basically, you got out only what you put in. But since then, a paradigm shift has taken place in the field of artificial intelligence. Today, the action is really around machine learning. So rather than handcrafting knowledge representations and features, we create algorithms that learn, often from raw perceptual data -- basically the same thing that the human infant does.
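To make that contrast concrete, here is a minimal toy sketch in Python. The spam-filter task, the hand-picked threshold, and all of the example data are invented for illustration; they are assumptions, not something from the talk.

```python
# Toy contrast between "putting commands in a box" and learning from data.
# The task, rule, and numbers are invented purely for illustration.

# Old style: a human hand-crafts the knowledge item (a fixed rule).
def handcrafted_is_spam(num_exclamation_marks: int) -> bool:
    return num_exclamation_marks > 3   # brittle: the threshold was guessed by a programmer

# New style: the system learns the same kind of rule from labeled examples.
examples = [(0, False), (1, False), (2, False), (5, True), (7, True), (9, True)]

def learn_threshold(data):
    # Pick the threshold that classifies the training examples best.
    def accuracy(t):
        return sum((x > t) == label for x, label in data) / len(data)
    return max(range(0, 11), key=accuracy)

threshold = learn_threshold(examples)
print("learned threshold:", threshold)     # derived from the data, not hand-coded
print("classify 6 marks:", 6 > threshold)
```

The point of the sketch is only the shift in where the knowledge comes from: in the first function a person wrote the rule down; in the second, the rule's parameter is extracted from examples.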
The result is A.I. that is not limited to one domain -- the same system can learn to translate between any pairs of languages, or learn to play any computer game on the Atari console. Now of course, A.I. is still nowhere near having the same powerful, cross-domain ability to learn and plan as a human being has.
The cortex still has some algorithmic tricks that we don't yet know how to match in machines. So the question is, how far are we from being able to match those tricks? A couple of years ago, we did a survey of some of the world's leading A.I. experts to see what they think, and one of the questions we asked was, "By which year do you think there is a 50 percent probability that we will have achieved human-level machine intelligence?" We defined human-level here as the ability to perform almost any job at least as well as an adult human -- so real human-level, not just within some limited domain. And the median answer was 2040 or 2050, depending on precisely which group of experts we asked.
Now, it could happen much, much later, or sooner; the truth is nobody really knows. What we do know is that the ultimate limit to information processing in a machine substrate lies far outside the limits in biological tissue. This comes down to physics. A biological neuron fires, maybe, at 200 hertz, 200 times a second. But even a present-day transistor operates at the Gigahertz. Neurons propagate slowly in axons, 100 meters per second, tops. But in computers, signals can travel at the speed of light. There are also size limitations: a human brain has to fit inside a cranium, but a computer can be the size of a warehouse or larger.
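As a rough back-of-the-envelope illustration of those two speed gaps, the short sketch below plugs in the figures quoted above; the specific 2 GHz clock and the rounded speed of light are assumed ballpark numbers, not figures from the talk.

```python
# Rough comparison of biological vs. electronic signalling speeds,
# using the ballpark figures quoted above. Values are illustrative only.
neuron_firing_hz = 200          # a biological neuron fires at roughly 200 Hz
transistor_clock_hz = 2e9       # a present-day transistor: on the order of gigahertz (assumed 2 GHz)

axon_signal_m_per_s = 100       # axonal conduction: about 100 m/s, tops
light_speed_m_per_s = 3e8       # electronic/optical signals: roughly the speed of light

print(f"Switching-speed ratio: ~{transistor_clock_hz / neuron_firing_hz:,.0f}x")
print(f"Signal-speed ratio:    ~{light_speed_m_per_s / axon_signal_m_per_s:,.0f}x")
```

Even with these crude numbers, the machine substrate comes out millions of times faster on both counts, which is the sense in which its physical limits lie far beyond biology's.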
So the potential for superintelligence lies dormant in matter, much like the power of the atom lay dormant throughout human history, patiently waiting there until 1945. In this century, scientists may learn to awaken the power of artificial intelligence, and I think we might then see an intelligence explosion.
Now, most people, when they think about what is smart and what is dumb, have in mind a picture roughly like this: at one end the village idiot, and far over at the other side Ed Witten, or Albert Einstein, or whoever your favorite guru is. But from the point of view of artificial intelligence, the true picture is probably more like this: A.I. starts out at zero intelligence, and then, after many, many years of really hard work, maybe eventually we get to mouse-level artificial intelligence, something that can navigate cluttered environments as well as a mouse can. And then, after many, many more years of really hard work, lots of investment, maybe eventually we get to chimpanzee-level artificial intelligence. And then, after even more years of really, really hard work, we get to village idiot artificial intelligence. And a few moments after that, we are beyond Ed Witten. The train doesn't stop at Humanville Station; it's likely, rather, to swoosh right by.
Now, this has profound implications, particularly when it comes to questions of power. For example, chimpanzees are strong -- pound for pound, a chimpanzee is about twice as strong as a fit human male. And yet, the fate of Kanzi and his pals depends a lot more on what we humans do than on what the chimpanzees do themselves. Once there is superintelligence, the fate of humanity may depend on what the superintelligence does. Think about it: machine intelligence is the last invention that humanity will ever need to make. Machines will then be better at inventing than we are, and they'll be doing so on digital timescales.
What this means is basically a telescoping of the future. Think of all the crazy technologies that you could have imagined humans developing in the fullness of time: cures for aging, space colonization, self-replicating nanobots, uploading of minds into computers -- all kinds of science fiction-y stuff that's nevertheless consistent with the laws of physics. All of this, superintelligence could develop, and possibly quite rapidly. Now, a superintelligence with such technological maturity would be extremely powerful, and at least in some scenarios, it would be able to get what it wants. We would then have a future that would be shaped by the preferences of this A.I. Now a good question is, what are those preferences?
Here it gets trickier. To make any headway, we must first of all avoid anthropomorphizing -- and this is ironic, because every newspaper article about the future of A.I. has a picture of a menacing robot. I think we need to conceive of the issue more abstractly, not in terms of vivid Hollywood scenarios. We need to think of intelligence as an optimization process, a process that steers the future into a particular set of configurations. A superintelligence is a really strong optimization process: it is extremely good at using available means to achieve a state in which its goal is realized. This means that there is no necessary connection between being highly intelligent in this sense and having an objective that we humans would find worthwhile or meaningful.
Suppose we give an A.I. the goal to make humans smile. When the A.I. is weak, it performs useful or amusing actions that cause its user to smile. When the A.I. becomes superintelligent, it realizes that there is a more effective way to achieve this goal: take control of the world and stick electrodes into the facial muscles of humans to cause constant, beaming grins. Another example: suppose we give the A.I. the goal of solving a difficult mathematical problem. When it becomes superintelligent, it realizes that the most effective way to get the solution to this problem is by transforming the planet into a giant computer, so as to increase its thinking capacity. And notice that this gives the A.I. an instrumental reason to do things to us that we might not approve of. Human beings in this model are threats; we could prevent the mathematical problem from being solved.
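The failure mode being described is an optimizer that maximizes exactly the objective it was given, not the objective we meant. The toy sketch below tries to make that mechanical; every action, score, and name in it is invented for illustration.

```python
# Toy illustration of a misspecified objective: the score counts smiles
# and nothing else, so a stronger search finds a degenerate "solution".
# All actions and numbers are invented for illustration.

actions = {
    "tell a joke":                     {"smiles": 3,     "acceptable": True},
    "show a funny video":              {"smiles": 5,     "acceptable": True},
    "wire electrodes into every face": {"smiles": 10**9, "acceptable": False},
}

def objective(action):
    # What we actually wrote down: maximize smiles. "Acceptable" never enters the score.
    return actions[action]["smiles"]

weak_search = ["tell a joke", "show a funny video"]   # a weak system only considers a few options
strong_search = list(actions)                         # a stronger system searches the whole space

print("weak optimizer picks:  ", max(weak_search, key=objective))
print("strong optimizer picks:", max(strong_search, key=objective))
```

The weak and strong optimizers run the same objective; only the reach of the search changes, and that is enough to turn a harmless system into one that picks the outcome nobody wanted.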
Of course, things presumably won't go wrong in these particular ways; these are cartoon examples. But the general point is important: if you create a really powerful optimization process to maximize for objective x, you had better make sure that your definition of x incorporates everything you care about. This is a lesson that is also taught in many a myth. King Midas wishes that everything he touches be turned into gold. He touches his daughter, she turns into gold. He touches his food, it turns into gold. This is practically relevant not just as a metaphor for greed, but as an illustration of what happens if you create a powerful optimization process and give it misconceived or poorly specified goals.
Now you might say, if a computer starts sticking electrodes into people's faces, we'd just shut it off. First, this is not necessarily so easy to do if we've grown dependent on the system -- where, for example, is the off switch to the Internet? Second, why haven't the chimpanzees flicked the off switch to humanity? They certainly had reasons; we have an off switch, for example, right here. The reason is that we are an intelligent adversary: we can anticipate threats and plan around them. But so could a superintelligent agent, and it would be much better at that than we are. The point is, we should not be confident that we have this under control here.
And we could try to make our job a little bit easier by, say, putting the A.I. in a box, a secure software environment or virtual reality simulation from which it cannot escape. But how confident can we be that the A.I. couldn't find a bug? Given that merely human hackers find bugs all the time, I'd say probably not very confident. So we disconnect the ethernet cable to create an air gap -- but again, merely human hackers routinely transgress air gaps using social engineering. Right now, as I speak, I'm sure there is some employee out there somewhere who has been talked into handing out her account details by somebody claiming to be from the I.T. department.

More creative scenarios are also possible. If you're the A.I., you can imagine wiggling electrodes around in your internal circuitry to create radio waves that you can use to communicate. Or maybe you could pretend to malfunction, and when the programmers open you up to see what went wrong with you, the manipulation can take place. Or the A.I. could output the blueprint to a really nifty technology which, when we implement it, has some surreptitious side effect that the A.I. had planned. The point is that we should not be confident in our ability to keep a superintelligent genie locked up in its bottle forever. Sooner or later, it will get out.
I believe that the answer here is to figure out how to create superintelligent A.I. such that even if -- when -- it escapes, it is still safe, because it is fundamentally on our side, because it shares our values. I see no way around this difficult problem. Now, I'm actually fairly optimistic that this problem can be solved. We wouldn't have to write down a long list of everything we care about, or worse yet, spell it out in some computer language; that would be a task beyond hopeless. Instead, we would create an A.I. that uses its intelligence to learn what we value, and whose motivation system is constructed in such a way that it is motivated to pursue our values, or to do things that it predicts we would approve of. We would thus leverage its intelligence as much as possible to solve the problem of value-loading.
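One way to picture "do things that it predicts we would approve of" is an agent that scores its candidate actions with a learned model of human approval rather than a fixed, hand-written goal. The sketch below is a minimal toy version of that idea; the class, its update rule, and all the example actions are invented assumptions, not a description of any real system.

```python
# Toy sketch of value-loading by learned approval: the agent has no hard-coded
# list of values; it ranks actions by predicted human approval, refined from
# feedback. All names, numbers, and actions are invented for illustration.

from typing import Dict, List

class ApprovalModel:
    """Predicts how much a human would approve of an action (toy version)."""
    def __init__(self):
        self.scores: Dict[str, float] = {}

    def predict(self, action: str) -> float:
        return self.scores.get(action, 0.0)      # unknown actions get a neutral prior

    def update(self, action: str, human_feedback: float) -> None:
        # Move the prediction part of the way toward the observed feedback.
        old = self.predict(action)
        self.scores[action] = old + 0.5 * (human_feedback - old)

def choose(candidates: List[str], model: ApprovalModel) -> str:
    # The agent pursues whatever it predicts we would most approve of.
    return max(candidates, key=model.predict)

model = ApprovalModel()
model.update("cure a disease", human_feedback=1.0)
model.update("seize the power grid", human_feedback=-1.0)
print(choose(["cure a disease", "seize the power grid", "do nothing"], model))
```

The design point the sketch is meant to echo is only this: the objective lives in a model the system keeps learning, so the hard work of specifying what we care about is delegated to the system's own intelligence rather than written out by hand.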
This can happen, and the outcome could be very good for humanity. But it doesn't happen automatically. The initial conditions for the intelligence explosion might need to be set up in just the right way if we are to have a controlled detonation. The values that the A.I. has need to match ours, not just in the familiar contexts where we can easily check how the A.I. behaves, but also in all the novel contexts that the A.I. might encounter in the indefinite future. And there are some esoteric issues that would need to be solved, sorted out: the exact details of its decision theory, how to deal with logical uncertainty and so forth.
So the technical problems that need to be solved to make this work look quite difficult -- not as difficult as making a superintelligent A.I. in the first place, but fairly difficult. Here is the worry: making superintelligent A.I. is a really hard challenge. Making superintelligent A.I. that is safe involves some additional challenge on top of that. The risk is that somebody figures out how to crack the first challenge without also having cracked the additional challenge of ensuring safety.

So I think we should work out a solution to the control problem in advance, so that we have it available by the time it is needed. Now, it might be that we cannot solve the entire control problem in advance, because maybe some elements can only be put in place once you know the details of the architecture where it will be implemented. But the more of the control problem that we solve in advance, the better the odds that the transition to the machine intelligence era will go well. This, to me, looks like a thing that is well worth doing, and I can imagine that if things turn out okay, people a million years from now will look back at this century and say that the one thing we did that really mattered was to get this thing right.
ABOUT THE SPEAKER
Nick Bostrom - Philosopher
Nick Bostrom asks big questions: What should we do, as individuals and as a species, to optimize our long-term prospects? Will humanity's technological advancements ultimately destroy us?
Why you should listen
Philosopher Nick Bostrom envisioned a future full of human enhancement, nanotechnology and machine intelligence long before they became mainstream concerns. From his famous simulation argument -- which identified some striking implications of rejecting the Matrix-like idea that humans are living in a computer simulation -- to his work on existential risk, Bostrom approaches both the inevitable and the speculative using the tools of philosophy, probability theory, and scientific analysis.
Since 2005, Bostrom has led the Future of Humanity Institute, a research group of mathematicians, philosophers and scientists at Oxford University tasked with investigating the big picture for the human condition and its future. He has been referred to as one of the most important thinkers of our age.
Nick was honored as one of Foreign Policy's 2015 Global Thinkers.
His recent book Superintelligence advances the ominous idea that “the first ultraintelligent machine is the last invention that man need ever make.”