Iyad Rahwan: What moral decisions should driverless cars make?
Iyad Rahwan's work lies at the intersection of the computer and social sciences, with a focus on collective intelligence, large-scale cooperation and the social aspects of artificial intelligence.
about technology and society.
estimated that last year
from traffic crashes in the US alone.
die every year in traffic accidents.
90 percent of those accidents,
promises to achieve
source of accidents --
in a driverless car in the year 2030,
this vintage TEDxCambridge video.
and is unable to stop.
of pedestrians crossing the street,
and who should decide?
could swerve into a wall,
and you would die,
by the trolley problem,
by philosophers a few decades ago
about this problem matters.
not think about it at all.
misses the point
the scenario too literally.
is going to look like this;
to calculate something
a certain group of people,
versus another direction,
to passengers or other drivers
a more complex calculation,
to involve trade-offs,
"Well, let's not worry about this.
is fully ready and 100 percent safe."
eliminate 90 percent of those accidents,
the last one percent of accidents
dead in car accidents
have been coming up with all sorts of ways
the car should just swerve somehow
that's what the car should do.
in which this is not possible.
was a suggestion by a blogger
an eject button that you press --
will have to make trade-offs on the road,
to find out what society wants,
are a reflection of societal values.
with these types of scenarios.
inspired by two philosophers:
should follow utilitarian ethics:
that will minimize total harm --
will kill the passenger.
should follow duty-bound principles,
that explicitly harms a human being,
want cars to be utilitarian,
whether they would purchase such cars,
that protect them at all costs,
to buy cars that minimize harm.
back in history.
published a pamphlet
for their sheep to graze.
brings a certain number of sheep --
and no one else will be harmed.
that individually rational decision,
and it will be depleted
to the detriment of the sheep.
to mitigate climate change.
of driverless cars,
is basically public safety --
to ride in those cars.
rational choice
diminishing the common good,
of driverless cars,
a little bit more insidious
an individual human being
may simply program cars
for their clients,
automatically on their own
increasing risk for pedestrians.
that have a mind of their own.
even if the farmer doesn't know it.
the tragedy of the algorithmic commons,
of social dilemmas using regulation,
or communities get together,
what kind of outcome they want
on individual behavior
that the public good is preserved.
wouldn't it be enough to just minimize harm?
what people say they want.
sacrifice me in a very rare case,
enjoys unconditional protection.
we asked people whether they would support regulation
said no to regulation;
and to minimize total harm,
opt into the safer technology
than human drivers.
answer to this riddle,
we are comfortable with
in which we can enforce those trade-offs.
my brilliant students,
of random dilemmas in a sequence
the car should do in a given scenario.
the species of the different victims.
over five million decisions
from over a million people worldwide,
form an early picture
people are comfortable with
is helping people recognize
are tasked with impossible choices.
understand the kinds of trade-offs
ultimately in regulation.
the Department of Transport --
for all carmakers to provide,
reflect on their own decisions
of what they chose.
that this is not your typical example,
saved character for this person.
prefer passengers over pedestrians
let's call it the ethical dilemma --
in a specific scenario:
that the problem was a different one.
society to agree on and enforce
wrote his famous laws of robotics --
itself to come to harm --
pushing these laws to the limit,
may not harm humanity as a whole.
in the context of driverless cars or any specific situation,
is not only a technological problem
to ask the right questions.
ABOUT THE SPEAKER
Iyad Rahwan - Computational social scientist
Why you should listen
Iyad Rahwan is the AT&T Career Development Professor and an associate professor of media arts & sciences at the MIT Media Lab, where he leads the Scalable Cooperation group. A native of Aleppo, Syria, Rahwan holds a PhD from the University of Melbourne, Australia, and is affiliate faculty at the MIT Institute for Data, Systems, and Society (IDSS). He led the winning team in the US State Department's Tag Challenge, using social media to locate individuals in remote cities within 12 hours using only their mug shots. Recently he crowdsourced 30 million decisions from people worldwide about the ethics of AI systems. Rahwan's work has appeared in major academic journals, including Science and PNAS, and features regularly in major media outlets, including the New York Times, The Economist and the Wall Street Journal.
(Photo: Victoriano Izquierdo)