Max Tegmark: How to get empowered, not overpowered, by AI
of cosmic history,
have begun gazing out into the cosmos
is vastly grander
imperceptibly small perturbation
something inspiring,
we're developing has the potential
to help life flourish like never before,
but for billions of years,
much of this amazing cosmos.
as "Life 1.0,"
unable to learn anything during its lifetime.
because we can learn,
new software into our brains,
its software but also its hardware
has already made us "Life 2.1,"
pacemakers and cochlear implants.
at our relationship with technology, OK?
was both successful and inspiring,
use technology wisely,
that our ancestors could only dream of.
propelled by something more powerful than rocket engines,
aren't just three astronauts
journey into the future
that just as with rocketry,
it's not enough to make our technology powerful.
if we're going to be really ambitious,
we also have to figure out
let's talk about all three for artificial intelligence:
and the destination.
to accomplish complex goals,
biological and artificial intelligence.
the silly carbon-chauvinism idea
that you can only be smart
if you're made of meat.
of AI has grown recently.
saying stuff that you never said.
the AlphaZero AI
took 3,000 years of human Go games
became the world's best player by just playing against itself.
wasn't that it crushed human gamers,
but that it crushed human AI researchers
handcrafting game-playing software.
crushed human AI researchers
not just in Go but even at chess,
really begs the question:
landscape of tasks,
how hard it is for AI to do each task
what AI can do today.
as AI improves,
there's a sort of global warming going on here in the task landscape.
is to avoid careers at the waterfront --
automated and disrupted.
bigger question as well.
to flood everything,
matching human intelligence at all tasks?
of artificial general intelligence --
the holy grail of AI research since its inception.
that humans can do better than machines,"
that we'll never get AGI.
to have some human jobs
and purpose with our jobs,
transform life as we know it
the most intelligent.
mainly not by humans but by AI,
could be way faster
and development timescale of years,
of an intelligence explosion
self-improving AI
intelligence far behind,
as superintelligence.
like Rodney Brooks,
for hundreds of years.
founder Demis Hassabis,
it happen much sooner.
that most AI researchers
get AGI within decades,
so within our lifetime,
better and cheaper than us?
that can do everything we can do
that makes all humans obsolete,
embarrassingly lame.
in the spirit of TED.
a truly inspiring high-tech future
of our rocket metaphor: the steering.
rather than flounder?
beneficial technology use,
for the future of life to exist
is better than the Stone Age because of technology.
a really inspiring high-tech future ...
power of our technology
and the growing wisdom
with which we manage it.
a change of strategy
has been learning from mistakes.
screwed up a bunch of times --
the seat belt and the airbag,
like nuclear weapons and AGI,
is a lousy strategy,
rather than reactive;
right the first time
the only time we'll get.
sometimes people tell me,
call safety engineering.
the Apollo 11 mission,
everything that could go wrong
on top of explosive fuel tanks
where no one could help them.
the safety engineering
I think we should take with AGI.
to make sure it goes right.
we've organized conferences,
AI researchers and other thinkers
we need to keep AI beneficial.
was in Asilomar, California last year
by over 1,000 AI researchers
about three of these principles.
an arms race and lethal autonomous weapons.
can be used for new ways of helping people
or for new ways of harming people.
are much more likely to be used
than for new ways of killing people,
and chemists pushed hard --
and chemical weapons.
and ban lethal autonomous weapons.
AI-fueled income inequality.
the economic pie dramatically with AI
how to divide this pie
if your computer has ever crashed.
this principle
in AI safety research,
of even more decisions and infrastructure,
today's buggy and hackable computers
into robust AI systems
that we can really trust,
can malfunction and harm us,
has to include work on AI value alignment,
from AGI isn't malice,
that just aren't aligned with ours.
the West African black rhino extinct,
of evil rhinoceros haters, did we?
we were smarter than them
that are smarter than us,
ourselves in the position of those rhinos
once we've created AGI,
to make machines understand our goals,
of our rocket metaphor: the destination.
that almost nobody talks about --
on short-term AI challenges.
are we hoping for if we succeed?
want us to build superintelligence:
than us in all ways.
was that we should be ambitious
there's much less agreement
about who or what should be in charge.
who want it to be just machines.
there's no agreement at all
about what the role of humans should be,
at possible futures
to steer toward, alright?
metaphorical journey into the future.
of my AI colleagues like
and keep it under human control,
technology and wealth
and absolute power corrupts absolutely,
we humans just aren't smart enough,
moral qualms you might have
the superintelligence could outsmart us,
who are fine with AI taking over
are our worthy descendants,
have adopted our best values
tricking us into anthropomorphizing them?
who don't want human extinction
of those two high-tech options,
that low-tech is suicide from a cosmic perspective,
beyond today's technology,
is going to go extinct,
we're going to get taken out
that better technology could have solved.
our cake and eating it ...
an AGI that treats us well,
are aligned with ours?
has called "friendly AI,"
it could be awesome.
experiences like disease, poverty,
the freedom to choose
of positive experiences --
the masters of our own destiny.
is complicated,
expect AGI within decades,
into this unprepared,
the biggest mistake in human history --
making global dictatorship possible,
surveillance and suffering,
where everybody's better off:
and free to live out their dreams.
that's politically right or left?
a pious society with strict moral rules,
forests and lakes,
some of those atoms with the computers,
build all of these societies
to choose which one they want to live in
be limited by our intelligence,
for this would be astronomical --
about our future,
is guaranteed to be beneficial,
as a mantra over and over and over again
towards our own obsolescence.
to steer our technology
the age of amazement,
in becoming not overpowered
ABOUT THE SPEAKER
Max Tegmark - Scientist, author
Max Tegmark is driven by curiosity, both about how our universe works and about how we can use the science and technology we discover to help humanity flourish rather than flounder.
Why you should listen
Max Tegmark is an MIT professor who loves thinking about life's big questions. He's written two popular books, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality and the recently published Life 3.0: Being Human in the Age of Artificial Intelligence, as well as more than 200 nerdy technical papers on topics from cosmology to AI.
He writes: "In my spare time, I'm president of the Future of Life Institute, which aims to ensure that we develop not only technology but also the wisdom required to use it beneficially."