Iyad Rahwan: What moral decisions should driverless cars make?
Iyad Rahwan's work lies at the intersection of the computer and social sciences, with a focus on collective intelligence, large-scale cooperation and the social aspects of artificial intelligence.
about technology and society.
estimated that last year
from traffic crashes in the US alone.
1.2 million people die every year in traffic accidents.
90 percent of those accidents,
promises to achieve
source of accidents --
in a driverless car in the year 2030,
this vintage TEDxCambridge video.
and is unable to stop.
of pedestrians crossing the street,
and who should decide?
could swerve into a wall,
in order to save the lives of the others,
by the trolley problem,
by philosophers a few decades ago
about this problem matters.
not think about it at all.
misses the point
the scenario too literally.
is going to look like this;
to calculate something
a certain group of people,
versus another direction,
to passengers or other drivers
a more complex calculation,
to involve trade-offs,
"Well, let's not worry about this.
is fully ready and 100 percent safe."
eliminate 90 percent of those accidents
in the next 10 years,
the last one percent of accidents
dead in car accidents
that we would still have to sacrifice.
have been coming up with all sorts of ways
the car should just swerve somehow
right between the pedestrians and the bystander,
that's what the car should do.
in which this is not possible.
was a suggestion by a blogger
that you press --
will have to make trade-offs on the road,
to find out what society wants,
are a reflection of societal values.
and Azim Shariff,
with these types of scenarios.
inspired by two philosophers:
Jeremy Bentham and Immanuel Kant.
should follow utilitarian ethics:
that will minimize total harm --
will kill the passenger.
should follow duty-bound principles,
that explicitly harms a human being,
want cars to be utilitarian,
whether they would purchase such cars,
that protect them at all costs,
to buy cars that minimize harm.
back in history.
published a pamphlet
for their sheep to graze.
brings a certain number of sheep --
and no one else will be harmed.
that individually rational decision,
with every farmer adding more sheep,
and it will be depleted
to the detriment of the sheep.
to mitigate climate change.
of driverless cars,
is basically public safety --
to ride in those cars.
rational choice,
diminishing the common good of minimizing total harm,
of driverless cars,
a little bit more insidious
an individual human being
may simply program cars
for their clients,
automatically on their own
increasing risk for pedestrians.
that have a mind of their own.
even if the farmer doesn't know it.
the tragedy of the algorithmic commons,
of social dilemmas using regulation,
or communities get together,
what kind of outcome they want
on individual behavior
that the public good is preserved.
what people say they want.
sacrifice me in a very rare case,
enjoys unconditional protection.
whether they would support regulation
said no to regulation;
and to minimize total harm,
opt into the safer technology
than human drivers.
answer to this riddle,
we are comfortable with
in which we can enforce those trade-offs.
my brilliant students,
of random dilemmas in a sequence
the car should do in a given scenario.
the age and even the species of the different victims.
over five million decisions
form an early picture
people are comfortable with
is helping people recognize
are tasked with impossible choices.
understand the kinds of trade-offs
ultimately in regulation.
the Department of Transport --
a 15-point checklist for all carmakers to provide,
reflect on their own decisions
by providing them with a summary
of what they chose.
that this is not your typical example,
saved character for this person (a pet).
prefer passengers over pedestrians
let's call it the ethical dilemma --
what the car should do in a specific scenario:
that the problem was a different one.
society to agree on and enforce
the trade-offs they are comfortable with.
Isaac Asimov
wrote his famous laws of robotics --
itself to come to harm --
pushing these laws to the limit,
may not harm humanity as a whole.
in the context of driverless cars
and in other specific contexts
is not only a technological problem
to ask the right questions.
ABOUT THE SPEAKER
Iyad Rahwan - Computational social scientist
Why you should listen
Iyad Rahwan is the AT&T Career Development Professor and an associate professor of media arts & sciences at the MIT Media Lab, where he leads the Scalable Cooperation group. A native of Aleppo, Syria, Rahwan holds a PhD from the University of Melbourne, Australia, and is an affiliate faculty member at the MIT Institute for Data, Systems, and Society (IDSS). He led the winning team in the US State Department's Tag Challenge, which used social media to locate individuals in remote cities within 12 hours using only their mug shots. Recently he crowdsourced 30 million decisions from people worldwide about the ethics of AI systems. Rahwan's work has appeared in major academic journals, including Science and PNAS, and features regularly in major media outlets, including the New York Times, The Economist and the Wall Street Journal.
(Photo: Victoriano Izquierdo)
Iyad Rahwan | Speaker | TED.com