Sam Harris: Can we build AI without losing control over it?
Sam Harris's work focuses on how our growing understanding of ourselves and the world is changing our sense of how we should live. Full bio
Double-click the English transcript below to play the video.
about a failure of intuition
to detect a certain kind of danger.
most of you will feel
is kind of cool.
how the gains we make
to see how they won't destroy us
to think about these things.
to suffer a global famine,
or some other catastrophe,
or their grandchildren,
on the other hand, is fun,
about the development of AI at this point
an appropriate emotional response
and I'm giving this talk.
in building intelligent machines.
just stops getting better for some reason.
to consider why this might happen.
intelligence and automation are,
if we are at all able to.
president of the United States?
destroy civilization as we know it.
how bad it would have to be
improvements in our technology
this is the worst thing
behind door number two,
to improve our intelligent machines
machines that are smarter than we are,
that are smarter than we are,
the mathematician IJ Good called
as I have here,
will become spontaneously malevolent.
that we will build machines
more competent than we are
between their goals and our own
we take pains not to harm them.
a building like this one,
one day build machines
far-fetched to many of you.
that superintelligent AI is possible,
with one of the following assumptions.
processing in physical systems.
than an assumption.
narrow intelligence into our machines,
intelligence already.
"general intelligence,"
across multiple domains,
to build systems of atoms
intelligent behavior,
unless we are interrupted,
build general intelligence
that the rate of progress doesn't matter,
is enough to get us into the end zone.
We don't need exponential progress.
is that we will keep going.
our intelligent machines.
the source of everything we value
everything we value.
that we desperately need to solve.
like Alzheimer's and cancer.
We want to improve our climate science.
and there's no brake to pull.
on a peak of intelligence,
our situation so precarious,
about risk so unreliable.
who has ever lived.
is John von Neumann.
made on the people around him,
mathematicians and physicists of his time,
about him are half true,
who has ever lived.
more depressing than it needs to be.
that the spectrum of intelligence
than we currently conceive,
that are more intelligent than we are,
explore this spectrum
that we can't imagine.
this is true by virtue of speed alone.
a superintelligent AI
than your average team of researchers
function about a million times faster
about a million times faster
of human-level intellectual work,
much less constrain,
of superintelligent AI
the first time around.
the perfect labor-saving device.
that can build the machine
of raw materials.
the end of human drudgery.
of most intellectual work.
do in this circumstance?
and give each other massages.
questionable wardrobe choices,
could be like Burning Man.
and political order?
and unemployment
to immediately put this new wealth
the covers of our business magazines
would be free to starve.
or the Chinese do
in Silicon Valley
of waging war,
of the competition here
of this kind of breakthrough
that AI researchers say
we're told not to worry is time.
don't you know.
about overpopulation on Mars."
pretty little head about it."
of information processing,
some form of superintelligence.
how long it will take us
to do that safely.
to do that safely.
50 years is not what it used to be.
has been on television.
our species will ever face.
to have an appropriate emotional response
to believe is coming.
has a nice analogy here.
a message from an alien civilization,
the months until the mothership lands?
more urgency than we do.
can't help but share our values
extensions of ourselves.
become their limbic systems.
and only prudent path forward,
directly into our brains.
and only prudent path forward,
about a technology
before you stick it inside your head.
building superintelligent AI on its own
integrate our minds with it.
and governments doing this work
as being in a race against all others,
is to win the world,
in the next moment,
that whatever is easier to do
I don't have a solution to this problem,
that more of us think about it.
like a Manhattan Project
we'll inevitably do that,
how to avoid an arms race
that is aligned with our interests.
about superintelligent AI
to get the initial conditions right,
consequences of getting them right.
is the source of intelligence,
is what the basis of intelligence is,
these systems continuously,
of cognition very likely far exceeds
of building some sort of god.
ABOUT THE SPEAKER
Sam Harris - Neuroscientist, philosopherSam Harris's work focuses on how our growing understanding of ourselves and the world is changing our sense of how we should live.
Why you should listen
Sam Harris is the author of five New York Times bestsellers. His books include The End of Faith, Letter to a Christian Nation, The Moral Landscape, Free Will, Lying, Waking Up and Islam and the Future of Tolerance (with Maajid Nawaz). The End of Faith won the 2005 PEN Award for Nonfiction. Harris's writing and public lectures cover a wide range of topics -- neuroscience, moral philosophy, religion, spirituality, violence, human reasoning -- but generally focus on how a growing understanding of ourselves and the world is changing our sense of how we should live.
Harris's work has been published in more than 20 languages and has been discussed in the New York Times, Time, Scientific American, Nature, Newsweek, Rolling Stone and many other journals. He has written for the New York Times, the Los Angeles Times, The Economist, The Times (London), the Boston Globe, The Atlantic, The Annals of Neurology and elsewhere. Harris also regularly hosts a popular podcast.
Harris received a degree in philosophy from Stanford University and a Ph.D. in neuroscience from UCLA.
Sam Harris | Speaker | TED.com