Zeynep Tufekci: Machine intelligence makes human morals more important
Techno-sociologist Zeynep Tufekci asks big questions about our societies and our lives, as both algorithms and digital connectivity spread.
as a computer programmer
came down to where I was,
And why are we whispering?"
at the computer in the room.
an affair with the receptionist.
if you're lying."
the laugh's on me.
emotional states and even lying
are very interested.
crazy about math and science.
I'd learned about nuclear weapons,
with the ethics of science.
as soon as possible.
let me pick a technical field
with any troublesome questions of ethics.
All the laughs are on me.
are building platforms
people see every day.
that could decide who to run over.
to make all sorts of decisions,
that have no single right answer,
should you be shown?"
likely to reoffend?"
should be recommended to people?"
computers for a while,
for such subjective decisions
for flying airplanes, building bridges,
Did the bridge sway and fall?
fairly clear benchmarks,
of messy human affairs.
our software is getting more powerful,
less transparent and more complex.
have made great strides.
with credit cards
in medical scans.
from a method called "machine learning."
than traditional programming,
detailed, exact, painstaking instructions.
and you feed it lots of data,
in our digital lives.
by churning through this data.
under a single-answer logic.
it's more probabilistic:
what you're looking for."
this method is really powerful.
for AI called it:
what the system learned.
instructions to a computer;
a puppy-machine-creature
or control.
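The distinction drawn here, between handwritten instructions and a system trained on data, can be made concrete. Below is a minimal sketch in Python (using scikit-learn; the fraud rule, feature names, and data are invented for illustration, not taken from the talk):

```python
# Minimal sketch: traditional programming vs. machine learning.
# All data, names and thresholds here are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Traditional programming: detailed, exact, painstaking instructions,
# written out by hand under a single-answer logic.
def flag_fraud_by_rule(amount, is_foreign):
    return amount > 1000 and is_foreign

print(flag_fraud_by_rule(2000, True))  # True, exactly as instructed

# Machine learning: take the system, feed it lots of (made-up) labeled
# data, and let it churn through it. Features: [amount, is_foreign].
X = [[50, 0], [2300, 1], [40, 1], [900, 0], [5100, 1], [3000, 1]]
y = [0, 1, 0, 0, 1, 1]  # past outcomes: 1 = fraud, 0 = legitimate

model = LogisticRegression().fit(X, y)

# The answer comes back probabilistic -- "this is probably what you're
# looking for" -- and the learned weights are not instructions we wrote.
print(model.predict_proba([[2000, 1]]))  # e.g. [[0.08, 0.92]]
```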
intelligence system gets things wrong.
when it gets things right,
when it's a subjective problem.
a hiring algorithm --
using machine-learning systems.
on previous employees' data
high performers in the company.
human resources managers and executives,
more objective, less biased,
and minorities a better shot
human hiring is biased.
in one of my first jobs as a programmer,
come down to where I was
or really late in the afternoon,
let's go to lunch!"
to a free lunch.
had not confessed to their higher-ups
for a serious job was a teen girl
I just looked wrong
in a gender- and race-blind way
it is more complicated, and here's why:
can infer all sorts of things about you
disclosed those things.
your sexual orientation,
with high levels of accuracy.
you haven't even disclosed.
such computational systems
of clinical or postpartum depression
the likelihood of depression
of any symptoms --
for early intervention. Great!
managers conference,
in a very large company,
what if, unbeknownst to you,
with high future likelihood of depression?
just maybe in the future, more likely.
more likely to be pregnant
but aren't pregnant now?
because that's your workplace culture?"
at gender breakdowns.
not traditional coding,
labeled "higher risk of depression,"
what your system is selecting on,
where to begin to look.
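As a sketch of what looking at gender breakdowns could mean in practice, here is a minimal, hypothetical Python check; the records are invented, and the 80% threshold is borrowed from the common "four-fifths" rule of thumb, not from the talk:

```python
# Minimal sketch of auditing a black-box hiring model's outputs by
# group. The decision records below are invented for illustration.
from collections import defaultdict

# Hypothetical (gender, model_said_hire) pairs from the black box.
decisions = [("f", 1), ("f", 0), ("f", 0), ("f", 0),
             ("m", 1), ("m", 1), ("m", 0), ("m", 1)]

totals, hires = defaultdict(int), defaultdict(int)
for gender, hire in decisions:
    totals[gender] += 1
    hires[gender] += hire

rates = {g: hires[g] / totals[g] for g in totals}
print(rates)  # {'f': 0.25, 'm': 0.75}

# "Four-fifths" rule of thumb: if one group's selection rate is below
# 80% of another's, that is a place to begin to look.
if min(rates.values()) < 0.8 * max(rates.values()):
    print("Skewed breakdown -- audit what the system is selecting on.")
```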
but you don't understand it.
I asked, "What do you have
isn't doing something shady?"
just stepped on 10 puppy tails.
another word about this."
isn't my problem, go away, death stare.
may even be less biased
shutting out of the job market
we want to build,
to machines we don't totally understand?
on data generated by our actions,
reflecting our biases,
could be picking up on our biases
neutral computation."
to be shown job ads for high-paying jobs.
suggesting criminal history,
and black-box algorithms
but sometimes we don't know,
was sentenced to six years in prison
in parole and sentencing decisions.
How is this score calculated?
be challenged in open court.
nonprofit, audited that very algorithm
its results were biased,
was dismal, barely better than chance,
black defendants as future criminals
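The audit logic described here can be sketched in a few lines. This is an illustration with invented records, not ProPublica's data or code: among defendants who did not reoffend, compare how often each group was still labeled high risk.

```python
# Minimal sketch of the audit logic: false positive rates by group.
# The records below are invented, not ProPublica's data.
records = [
    # (group, labeled_high_risk, reoffended_within_two_years)
    ("black", 1, 0), ("black", 1, 0), ("black", 0, 0), ("black", 1, 1),
    ("white", 1, 0), ("white", 0, 0), ("white", 0, 0), ("white", 1, 1),
]

for group in ("black", "white"):
    stayed_clean = [r for r in records if r[0] == group and r[2] == 0]
    wrongly_flagged = [r for r in stayed_clean if r[1] == 1]
    rate = len(wrongly_flagged) / len(stayed_clean)
    print(f"{group}: false positive rate {rate:.2f}")

# In this toy data one group's rate comes out at twice the other's,
# mirroring the kind of disparity the audit reported.
```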
picking up her godsister
with a friend of hers.
and a scooter on a porch
a woman came out and said,
but they were arrested.
but she was also just 18.
for shoplifting in Home Depot --
a similar petty crime.
armed robbery convictions.
as high risk, and not him.
that she had not reoffended.
it was hard for her to find a job with her record.
prison term for a later crime.
our black boxes
this kind of unchecked power.
but they don't solve all our problems.
news feed algorithm --
and decides what to show you
and pages you follow.
for engagement on the site:
teenager by a white police officer,
unfiltered Twitter feed,
keeps wanting to make you
were talking about it.
wasn't showing it to me.
this was a widespread problem.
wasn't algorithm-friendly.
to even fewer people,
donate to charity, fine.
algorithm-friendly.
but difficult conversation
can also be wrong
IBM's machine-intelligence system
with human contestants on Jeopardy?
Watson was asked this question:
for a World War II hero,
for a World War II battle."
answered "Toronto" --
a second-grader wouldn't make.
error patterns of humans,
and be prepared for.
one is qualified for,
if it was because of stack overflow
fueled by a feedback loop
a trillion dollars of value in 36 minutes.
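A feedback loop of this kind can be caricatured with a toy simulation (all numbers invented; real market dynamics are far more complex): each price drop triggers more algorithmic selling, which deepens the next drop.

```python
# Toy simulation of a feedback loop between selling algorithms:
# a falling price triggers selling, which pushes the price lower,
# which triggers more selling. All numbers are invented.
price = 100.0
for minute in range(1, 11):
    total_fall = 100.0 - price               # how far price has fallen
    sell_pressure = 1.0 + 0.3 * total_fall   # algorithms react to the fall
    price -= sell_pressure                   # ...and deepen it
    print(f"minute {minute:2d}: price {price:6.2f}")
```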
what "error" means
autonomous weapons.
were biased.
but that's exactly my point.
these difficult questions.
our responsibilities to machines.
a "Get out of ethics free" card.
calls this math-washing.
scrutiny and investigation.
algorithmic accountability,
that bringing math and computation
driven by values
invades the algorithms.
to our moral responsibility to judgment,
within that framework,
and outsource our responsibilities
we must hold on tightly
ABOUT THE SPEAKER
Zeynep Tufekci - Techno-sociologist
Techno-sociologist Zeynep Tufekci asks big questions about our societies and our lives, as both algorithms and digital connectivity spread.
Why you should listen
We've entered an era of digital connectivity and machine intelligence. Complex algorithms are increasingly used to make consequential decisions about us. Many of these decisions are subjective and have no right answer: who should be hired, fired or promoted; what news should be shown to whom; which of your friends do you see updates from; which convict should be paroled. With increasing use of machine learning in these systems, we often don't even understand how exactly they are making these decisions. Zeynep Tufekci studies what this historic transition means for culture, markets, politics and personal life.
Tufekci is a contributing opinion writer at the New York Times, an associate professor at the School of Information and Library Science at the University of North Carolina, Chapel Hill, and a faculty associate at Harvard's Berkman Klein Center for Internet and Society.
Her book, Twitter and Tear Gas: The Power and Fragility of Networked Protest, was published in 2017 by Yale University Press. Her next book, from Penguin Random House, will be about algorithms that watch, judge and nudge us.