Iyad Rahwan: What moral decisions should driverless cars make?
Iyad Rahwan's work lies at the intersection of the computer and social sciences, with a focus on collective intelligence, large-scale cooperation and the social aspects of artificial intelligence.
Today I'm going to talk about technology and society. The Department of Transport estimated that last year, 35,000 people died from traffic crashes in the US alone. Worldwide, more than a million people die every year in traffic accidents. Human error contributes to 90 percent of those accidents, and driverless car technology promises to achieve major safety gains by eliminating that main source of accidents -- human error.
So imagine yourself in a driverless car in the year 2030, sitting back and watching this vintage TEDxCambridge video. Suddenly, the car experiences a mechanical failure and is unable to stop. If it keeps going, it will crash into a group of pedestrians crossing the street. But the car could swerve into a wall instead, harming you, the passenger. What should the car do, and who should decide?
This scenario is inspired by the trolley problem, which was invented by philosophers a few decades ago as a way to think about ethics. Now, how we think about this problem matters. We may, for example, not think about it at all. We may object that the scenario is unrealistic or just silly. But I think that objection misses the point, because it takes the scenario too literally.
No car accident is going to look like this; no real accident offers two or three clean options in which somebody dies either way. Instead, the car is going to calculate something like the probability of hitting a certain group of people: if it swerves in one direction versus another direction, it might slightly increase the risk to passengers or other drivers versus pedestrians. It's going to be a more complex calculation, but it's still going to involve trade-offs, and trade-offs often require ethics.
You might say, "Well, let's not worry about this. We can wait until the technology is fully ready and 100 percent safe." Suppose that we can indeed eliminate 90 percent of those accidents within the next few years. What if eliminating the last one percent of accidents requires many more years of research? Should we refuse to adopt the technology in the meantime? That would mean many more people dead in car accidents while we waited. Inaction is also a choice.
People on social media have been coming up with all sorts of ways to avoid this dilemma. One person suggested the car should just swerve somehow, passing in between the pedestrians and the bystander. Of course, if the car can do that, that's what the car should do; but such fixes can always be countered by other scenarios in which this is not possible. And my personal favorite was a suggestion by a blogger to give the car an ejector-seat button that you press -- just before the crash.
So if we accept that cars will have to make trade-offs on the road, how do we think about those trade-offs, and how do we decide? Well, maybe we should survey people to find out what society wants, because ultimately, regulations and the law are a reflection of societal values. So this is what we did: with my collaborators, we presented people with these types of scenarios.
We gave them two options inspired by two philosophers: Jeremy Bentham and Immanuel Kant. Bentham says the car should follow utilitarian ethics: it should take the action that will minimize total harm -- even if that action will kill a bystander, and even if that action will kill the passenger. Kant says the car should follow duty-bound principles, such as "Thou shalt not kill": you should not take an action that explicitly harms a human being, and you should let the car take its course even if that's going to harm more people.
Most people we surveyed sided with Bentham: they want cars to be utilitarian and to minimize total harm. But when we asked whether they would purchase such cars themselves, they said absolutely not. They would like to buy cars that protect them at all costs, while everybody else buys cars that minimize harm. We have seen this problem before; it is a social dilemma.
To understand it, we have to go back in history. In the 1800s, the English economist William Forster Lloyd published a pamphlet describing the following scenario. A group of farmers share a common patch of land for their sheep to graze. Each farmer brings a certain number of sheep, and the land sustains them all. Then one farmer reasons that if he brings one extra sheep, he will benefit, and no one else will be harmed. But if every farmer makes that individually rational decision, the land will be overgrazed, and it will be depleted to the detriment of all the farmers -- and, of course, to the detriment of the sheep. We see this problem in many places: for example, in efforts to mitigate climate change.
In the case of driverless cars, the common good is basically public safety -- that is the common land -- and the farmers are the passengers, or the car owners who choose to ride in those cars. By making the individually rational choice of prioritizing their own safety, they may collectively be diminishing the common good, which is minimizing total harm. Traditionally, this is called the tragedy of the commons.
But in the case of driverless cars, the problem may be a little bit more insidious, because there is not necessarily an individual human being making the decision. Car manufacturers may simply program cars to maximize safety for their clients, and those cars may learn automatically on their own that doing so requires slightly increasing risk for pedestrians. To use the sheep metaphor, it is as if we now had electric sheep that have a mind of their own -- and they may go and graze even if the farmer doesn't know it. We might call this the tragedy of the algorithmic commons, and it poses new kinds of challenges.
Typically, we solve these types of social dilemmas using regulation: governments or communities get together and decide collectively what kind of outcome they want and what sort of constraints on individual behavior they need to implement. Then, through monitoring and enforcement, they can make sure that the public good is preserved. So why don't we, as regulators, simply require that all cars minimize harm?
After all, this is what people say they want. And more importantly, as an individual, I could then be sure that if I ride in a car that may sacrifice me in a very rare case, I'm not the only one doing so while everybody else enjoys unconditional protection. But when we asked people whether they would support such regulation, a strong majority said no to regulation; and they added that if cars were regulated in this way and required to minimize total harm, they would not buy those cars. So, ironically, regulating cars to minimize harm could lead to more harm, because people might not opt into the safer technology, even if it's much safer than human drivers.
I don't have the final answer to this riddle, but I think that as a starting point, we need society to come together to decide what trade-offs we are comfortable with, and to come up with ways in which we can enforce those trade-offs.
To get that conversation started, my brilliant students, Edmond Awad and Sohan Dsouza, built the Moral Machine website, which presents you with a bunch of random dilemmas in a sequence and asks you to choose what the car should do in a given scenario. We vary the ages and even the species of the different victims. So far, we have collected over five million decisions from people worldwide, and these are helping us form an early picture of the trade-offs people are comfortable with and of what matters to them.
More importantly, going through this exercise is helping people recognize the difficulty of making these choices, and that regulators are tasked with impossible choices. Maybe it will also help us, as a society, understand the kinds of trade-offs that will ultimately be implemented in regulation.
And indeed, the first set of regulations that came from the Department of Transport included a checklist for all carmakers to provide, and one of its items was ethical consideration -- how are you going to deal with it?
We also encourage people to reflect on their own decisions by giving them summaries of what they chose. I'll give you one example -- though I'll warn you that this is not your typical example, not your typical user. Here are the character this person saved the most and the character they killed the most. Some people are more likely to prefer passengers over pedestrians, and some really like punishing jaywalkers -- those who cross outside the crosswalk.
So let's recap. We started with a question -- let's call it the ethical dilemma -- of what the car should do in a specific scenario: swerve or stay. But then we realized that the problem was a different one: the problem of how to get society to agree on, and enforce, the trade-offs it is comfortable with. It's a social dilemma.
In the 1940s, Isaac Asimov wrote his famous laws of robotics: a robot may not harm a human being, a robot may not disobey a human being, and a robot may not allow itself to come to harm -- in this order of importance. But after decades of stories pushing these laws to the limit, Asimov introduced the zeroth law, which takes precedence above all the others: a robot may not harm humanity as a whole. I don't know what that means in the context of driverless cars, or in any specific situation, and I don't know how we could implement it.
But the regulation of driverless cars is not only a technological problem; it is also a problem of societal cooperation, and by recognizing that, I hope we can at least begin to ask the right questions.
ABOUT THE SPEAKER
Iyad Rahwan - Computational social scientist
Why you should listen
Iyad Rahwan is the AT&T Career Development Professor and an associate professor of media arts & sciences at the MIT Media Lab, where he leads the Scalable Cooperation group. A native of Aleppo, Syria, Rahwan holds a PhD from the University of Melbourne, Australia, and is affiliate faculty at the MIT Institute for Data, Systems, and Society (IDSS). He led the winning team in the US State Department's Tag Challenge, using social media to locate individuals in remote cities within 12 hours using only their mug shots. Recently he crowdsourced 30 million decisions from people worldwide about the ethics of AI systems. Rahwan's work has appeared in major academic journals, including Science and PNAS, and features regularly in major media outlets, including the New York Times, The Economist and the Wall Street Journal.
(Photo: Victoriano Izquierdo)
Iyad Rahwan | Speaker | TED.com