Max Tegmark: How to get empowered, not overpowered, by AI
After 13.8 billion years of cosmic history, our universe has woken up and become aware of itself. From a small blue planet, tiny, conscious parts of our universe have begun gazing out into the cosmos with telescopes, discovering something humbling: that our universe is vastly grander than our ancestors imagined, and that life seems to be an almost imperceptibly small perturbation on an otherwise dead universe. But we've also discovered something inspiring, which is that the technology we're developing has the potential to help life flourish like never before, not just for the next election cycle but for billions of years, and not just on Earth but throughout much of this amazing cosmos.
I think of the earliest life as "Life 1.0" because it was really dumb, like bacteria, unable to learn anything during its lifetime. I think of us humans as "Life 2.0" because we can learn, which in nerdy geek speak we might think of as installing new software into our brains, like languages and job skills. "Life 3.0," which can design not only its software but also its hardware, of course doesn't exist yet. But perhaps our technology has already made us "Life 2.1," with our artificial knees, pacemakers and cochlear implants.
So let's take a closer look at our relationship with technology, OK? The Apollo 11 moon mission was both successful and inspiring, showing that when we humans use technology wisely, we can accomplish things that our ancestors could only dream of. But there's an even more inspiring journey, propelled by something more powerful than rocket engines, where the passengers aren't just three astronauts but all of humanity. Let's talk about our collective journey into the future with artificial intelligence. My friend Jaan Tallinn likes to point out that just as with rocketry, it's not enough to make our technology powerful. We also have to figure out, if we're going to be really ambitious, how to steer it and where we want to go with it. So let's talk about all three for artificial intelligence: the power, the steering and the destination.
Let's start with the power. I define intelligence very inclusively -- simply as our ability to accomplish complex goals, because I want to include both biological and artificial intelligence. And I want to avoid the silly carbon-chauvinism idea that you can only be smart if you're made of meat.
It's really amazing how the power of AI has grown recently. Just think about it. Not long ago, robots couldn't walk. Now they can do backflips. Not long ago, we didn't have self-driving vehicles. Now we have self-flying rockets. Not long ago, AI couldn't do face recognition. Now AI can generate fake faces and simulate your face saying stuff that you never said. Not long ago, AI couldn't beat us at the game of Go. Then Google DeepMind's AlphaZero AI took 3,000 years of human Go games and Go wisdom, ignored it all and became the world's best player by just playing against itself. And the most impressive feat here wasn't that it crushed human gamers, but that it crushed human AI researchers who had spent decades handcrafting game-playing software. And AlphaZero crushed human AI researchers not just in Go but even at chess, which we have been working on since 1950. So all this amazing recent progress in AI really begs the question: How far will it go?
I like to think about this question in terms of an abstract landscape of tasks, where the elevation represents how hard it is for AI to do each task at human level, and the sea level represents what AI can do today. The sea level is rising as AI improves, so there's a kind of global warming going on here in the task landscape. And the obvious takeaway is to avoid careers at the waterfront -- (Laughter) which will soon be automated and disrupted.
But there's a much bigger question as well. How high will the water end up rising? Will it eventually rise to flood everything, matching human intelligence at all tasks? This is the definition of artificial general intelligence -- AGI -- which has been the holy grail of AI research since its inception. By this definition, people who say, "Ah, there will always be jobs that humans can do better than machines," are simply saying that we'll never get AGI. Sure, we might still choose to have some human jobs, or to give humans income and purpose with our jobs, but AGI will in any case transform life as we know it, with humans no longer being the most intelligent.
Now, if AI progress continues all the way up to AGI, then further AI progress will be driven mainly not by humans but by AI, which means that there's a possibility that further AI progress could be way faster than the typical human research and development timescale of years, raising the controversial possibility of an intelligence explosion, where recursively self-improving AI rapidly leaves human intelligence far behind, creating what's known as superintelligence.
Alright, are we actually going to get AGI any time soon? Some famous AI researchers, like Rodney Brooks, think it won't happen for hundreds of years. But others, like Google DeepMind founder Demis Hassabis, are more optimistic and are working to try to make it happen much sooner. And recent surveys have shown that most AI researchers actually share Demis's optimism, expecting that we will get AGI within decades -- so within the lifetime of many of us. Which begs the question: and then what? What do we want the role of humans to be if machines can do everything better and cheaper than us?
The way I see it, we face a choice. One option is to be complacent. We can say, "Oh, let's just build machines that can do everything we can do and not worry about the consequences. Come on, if we build technology that makes all humans obsolete, what could possibly go wrong?" (Laughter) But I think that would be embarrassingly lame. I think we should be more ambitious -- in the spirit of TED. Let's envision a truly inspiring high-tech future and try to steer towards it. This brings us to the second part of our rocket metaphor: the steering. We're making AI more powerful, but how can we steer towards a future where AI helps humanity flourish rather than flounder?
To help with this, I cofounded the Future of Life Institute. It's a small nonprofit promoting beneficial technology use, and our goal is simply for the future of life to exist and to be as inspiring as possible. You know, I love technology. Technology is why today is better than the Stone Age. And I'm optimistic that we can create a really inspiring high-tech future ... if -- and this is a big if -- if we win the wisdom race: the race between the growing power of our technology and the growing wisdom with which we manage it.
But this is going to require a change of strategy, because our old strategy has been learning from mistakes. We invented fire, screwed up a bunch of times -- invented the fire extinguisher. (Laughter) We invented the car, screwed up a bunch of times -- invented the traffic light, the seat belt and the airbag. But with more powerful technology like nuclear weapons and AGI, learning from mistakes is a lousy strategy, don't you think? (Laughter) It's much better to be proactive rather than reactive; plan ahead and get things right the first time, because that might be the only time we'll get. But it's funny, because sometimes people tell me, "Max, shhh, don't talk like that. That's Luddite scaremongering." But it's not scaremongering. It's what we at MIT call safety engineering. Think about it: before NASA launched the Apollo 11 mission, they systematically thought through everything that could go wrong when you put people on top of explosive fuel tanks and launch them somewhere where no one could help them. And there was a lot that could go wrong. Was that scaremongering? No. That was precisely the safety engineering that ensured the success of the mission, and that is precisely the strategy I think we should take with AGI: think through what can go wrong to make sure it goes right.
So in this spirit, we've organized conferences, bringing together leading AI researchers and other thinkers to discuss how to grow this wisdom we need to keep AI beneficial. Our last conference was in Asilomar, California last year and produced a list of 23 principles which have since been signed by over 1,000 AI researchers and key industry leaders. I want to tell you about three of these principles. One is that we should avoid an arms race in lethal autonomous weapons. The idea here is that any science can be used for new ways of helping people or new ways of harming people. For example, biology and chemistry are much more likely to be used for new medicines and new cures than for new ways of killing people, because biologists and chemists pushed hard -- and successfully -- for bans on biological and chemical weapons. And in the same spirit, most AI researchers want to stigmatize and ban lethal autonomous weapons.
Another Asilomar AI principle is that we should mitigate AI-fueled income inequality. I think that if we can grow the economic pie dramatically with AI and we still can't figure out how to divide this pie so that everyone is better off, then shame on us. (Applause)
Alright, now raise your hand if your computer has ever crashed. (Laughter) Wow, that's a lot of hands. Well, then you'll appreciate this principle: that we should invest much more in AI safety research, because as we put AI in charge of even more decisions and infrastructure, we need to figure out how to transform today's buggy and hackable computers into robust AI systems that we can really trust, because otherwise, all this awesome new technology can malfunction and harm us, or get hacked and be turned against us.
And this AI safety work has to include work on AI value alignment, because the real threat from AGI isn't malice, like in silly Hollywood movies, but competence -- AGI accomplishing goals that just aren't aligned with ours. When we humans drove the West African black rhino extinct, we didn't do it because we were a bunch of evil rhinoceros haters, did we? We did it because we were smarter than them and our goals weren't aligned with theirs. But AGI is by definition smarter than us, so to make sure that we don't put ourselves in the position of those rhinos if we create AGI, we need to figure out how to make machines understand our goals, adopt our goals and retain our goals.
And whose goals should these be, anyway? This brings us to the third and final part of our rocket metaphor: the destination. We're making AI more powerful, trying to figure out how to steer it, but where do we want to go with it? This is the elephant in the room that almost nobody talks about -- not even here at TED -- because we're so fixated on short-term AI challenges. Look, our species is trying to build AGI, motivated by curiosity and economics, but what sort of future society are we hoping for if we succeed? We did an opinion poll on this recently, and I was struck to see that most people actually want us to build superintelligence: AI that's vastly smarter than us in all ways. What there was the greatest agreement on was that we should be ambitious and help life spread into the cosmos, but there was much less agreement about who or what should be in charge. And I was actually quite amused to see that there are some people who want it to be just machines. (Laughter) And there was total disagreement about what the role of humans should be, even at the most basic level. So let's take a closer look at possible futures that we might choose to steer toward, alright?
So don't get me wrong here. I'm not talking about space travel, merely about humanity's metaphorical journey into the future. One option that some of my AI colleagues like is to build superintelligence and keep it under human control, like an enslaved god, disconnected from the internet and used to create unimaginable technology and wealth for whoever controls it. But Lord Acton warned us that power corrupts and absolute power corrupts absolutely, so you might worry that maybe we humans just aren't smart enough, or wise enough rather, to handle this much power. Also, aside from any moral qualms you might have about enslaving superior minds, you might worry that the superintelligence could outsmart us, break out and take over. But I also have colleagues who are fine with AI taking over, and even causing human extinction, as long as we feel the AIs are our worthy descendants, like our children. But how would we know that the AIs have adopted our best values and aren't just unconscious zombies tricking us into anthropomorphizing them? Also, shouldn't those people who don't want human extinction have a say in the matter, too?
Now, if you didn't like either of those two high-tech options, it's important to remember that low-tech is suicide from a cosmic perspective, because if we don't go far beyond today's technology, the question isn't whether humanity is going to go extinct, merely whether we're going to get taken out by the next killer asteroid, supervolcano or some other problem that better technology could have solved.
So, how about having our cake and eating it ... with AGI that's not enslaved but treats us well because its values are aligned with ours? This is the gist of what Eliezer Yudkowsky has called "friendly AI," and if we can do this, it could be awesome. It could not only eliminate negative experiences like disease, poverty, crime and other suffering, but it could also give us the freedom to choose from a fantastic new diversity of positive experiences -- basically making us the masters of our own destiny.
So in summary, our situation with technology is complicated, but the big picture is rather simple. Most AI researchers expect AGI within decades, and if we just bumble into this unprepared, it will probably be the biggest mistake in human history -- let's face it. It could enable brutal global dictatorship with unprecedented inequality, surveillance and suffering, and maybe even human extinction. But if we steer carefully, we could end up in a fantastic future where everybody's better off: the poor are richer, the rich are richer, everybody is healthy and free to live out their dreams.
Now, hang on. Do you folks want the future that's politically right or left? Do you want a pious society with strict moral rules, or a hedonistic free-for-all, more like Burning Man 24/7? Do you want beautiful beaches, forests and lakes, or would you prefer to rearrange some of those atoms with the computers, enabling virtual experiences? With friendly AI, we could simply build all of these societies and give people the freedom to choose which one they want to live in, because we would no longer be limited by our intelligence, merely by the laws of physics. So the resources and space for this would be astronomical -- literally.
So here's our choice. We can either be complacent about our future, taking as an article of blind faith that any new technology is guaranteed to be beneficial, and just repeat that to ourselves as a mantra over and over and over again as we drift like a rudderless ship towards our own obsolescence. Or we can be ambitious -- thinking hard about how to steer our technology and where we want to go with it, to create the age of amazement. We're all here to celebrate the age of amazement, and I feel that its essence should lie in becoming not overpowered but empowered by our technology. Thank you.
ABOUT THE SPEAKER
Max Tegmark - Scientist, author
Max Tegmark is driven by curiosity, both about how our universe works and about how we can use the science and technology we discover to help humanity flourish rather than flounder.
Why you should listen
Max Tegmark is an MIT professor who loves thinking about life's big questions. He's written two popular books, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality and the recently published Life 3.0: Being Human in the Age of Artificial Intelligence, as well as more than 200 nerdy technical papers on topics from cosmology to AI.
He writes: "In my spare time, I'm president of the Future of Life Institute, which aims to ensure that we develop not only technology but also the wisdom required to use it beneficially."