Zeynep Tufekci: Machine intelligence makes human morals more important
Zeynep Tufekci - Techno-sociologist
Techno-sociologist Zeynep Tufekci asks big questions about our societies and our lives as they play out online.
I started my first job as a computer programmer in my very first year of college, basically as a teenager. Soon after I started writing software at a company, a manager came down to where I was, and he whispered to me, "Can he tell if I'm lying?" There was nobody else in the room. "Can who tell if you're lying? And why are we whispering?" The manager pointed at the computer in the room. "Can he tell if I'm lying?" Well, that manager was having an affair with the receptionist. And I was still a teenager, so I whisper-shouted back, "Yes, the computer can tell if you're lying." Well, I laughed, but actually, the laugh's on me. Nowadays, there are computational systems that can suss out emotional states and even lying from processing human faces. Advertisers and even governments are very interested.
I had become a computer programmer because I was one of those kids crazy about math and science. But somewhere along the line I'd learned about nuclear weapons, and I'd gotten really concerned with the ethics of science. I was troubled. However, because of family circumstances, I also needed to start working as soon as possible. So I thought to myself, hey, let me pick a technical field where I can get a job easily and where I don't have to deal with any troublesome questions of ethics. So I picked computers. Well, ha, ha, ha! All the laughs are on me. Nowadays, computer scientists are building platforms that control what a billion people see every day.
They're developing cars that could decide who to run over. Machine intelligence is here. We're now using computation to make all sorts of decisions, but also new kinds of decisions. We're asking computation questions that have no single right answers, that are subjective, open-ended and value-laden. We're asking questions like, "Who should the company hire?" "Which update from which friend should you be shown?" "Which convict is more likely to reoffend?" "Which news item or movie should be recommended to people?"
Look, yes, we've been using computers for a while, but this is different. We cannot anchor computation for such subjective decisions the way we can anchor computation for flying airplanes, building bridges, going to the moon. Are airplanes safer? Did the bridge sway and fall? There, we have agreed-upon, fairly clear benchmarks, and we have laws of nature to guide us. We have no such anchors and benchmarks for decisions in messy human affairs. To make things more complicated, our software is getting more powerful, but it's also getting less transparent and more complex.
In the past decade, complex algorithms have made great strides. Much of this progress comes from a method called "machine learning." Machine learning is different than traditional programming, where you give the computer detailed, exact, painstaking instructions. It's more like you take the system and you feed it lots of data, including unstructured data, like the kind we generate in our digital lives, and the system learns by churning through this data. Crucially, these systems don't operate under a single-answer logic. They don't produce a simple answer; it's more probabilistic: "This one is probably more like what you're looking for." Now, the upside is, this method is really powerful. The downside is, we don't really understand what the system learned.
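The contrast the talk draws, between writing the rule yourself and letting a system derive word weights from examples, can be sketched in a few lines of toy Python. Everything here (the data, the rule, the word-counting "learner") is invented for illustration; it is not any production system.

```python
from collections import Counter

# Traditional programming: we write the exact, painstaking rule ourselves.
def is_spam_by_rule(message: str) -> bool:
    return "free money" in message.lower()

# Machine learning: we only supply labeled examples; the "rule"
# (here, per-word counts) emerges from churning through the data.
def train(examples):  # examples: list of (message, is_spam)
    spam_words, ham_words = Counter(), Counter()
    for text, is_spam in examples:
        (spam_words if is_spam else ham_words).update(text.lower().split())
    return spam_words, ham_words

def spam_probability(model, message):
    spam_words, ham_words = model
    spam_score = ham_score = 1.0
    for word in message.lower().split():
        spam_score *= spam_words[word] + 1  # +1 smoothing for unseen words
        ham_score *= ham_words[word] + 1
    # Not a yes/no answer but a probability: "this one is probably spam."
    return spam_score / (spam_score + ham_score)

model = train([
    ("free money now", True),
    ("claim your free prize", True),
    ("lunch meeting at noon", False),
    ("project status update", False),
])
print(round(spam_probability(model, "free prize money"), 2))  # 0.92
```

Note the difference in what we can inspect: the hand-written rule is fully legible, while the learned model is just a pile of counts whose behavior we can only probe with inputs.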
This is less like giving instructions to a computer; it's more like training a creature we don't really understand or control. So this is our problem. It's a problem when this artificial intelligence system gets things wrong. It's also a problem when it gets things right, because we don't even know which is which when it's a subjective problem.
So, consider hiring people using machine-learning systems. Such a system would have been trained on previous employees' data and instructed to find and hire people like the existing high performers in the company. Sounds good. I once attended a conference that brought together human resources managers and executives, high-level people, using such systems in hiring. They were super excited. They thought such systems would make hiring more objective, less biased, and give women and minorities a better shot against biased human managers.
And look, human hiring is biased. I know. In one of my early jobs as a programmer, my immediate manager would sometimes come down to where I was really early in the morning or really late in the afternoon, and she'd say, "Zeynep, let's go to lunch!" I'd be puzzled by the weird timing, but I was broke, so: free lunch. I always went. I later realized what was happening: my immediate managers had not confessed to their higher-ups that the programmer they hired for a serious job was a teen girl. I was doing a good job, I just looked wrong.
So hiring in a blind way certainly sounds good to me. But with these systems, it is more complicated, and here's why: currently, computational systems can infer all sorts of things about you from your digital crumbs, even if you have not disclosed those things. They can do so with high levels of accuracy -- for things you haven't even disclosed. This is inference.
I have a friend who developed such computational systems to predict the likelihood of clinical or postpartum depression from social media data. Her system can predict the likelihood of depression before the onset of any symptoms, and she hopes it will be used for early intervention. Great! But now put this in the context of hiring.
At that conference, I approached a high-level manager in a very large company, and I said to her, "Look, what if, unbeknownst to you, your system is weeding out people with high future likelihood of depression? They're not depressed now, just maybe in the future, more likely. What if it's weeding out women more likely to be pregnant in the next year or two but aren't pregnant now? What if it's hiring aggressive people because that's your workplace culture?" You can't tell this by looking at gender breakdowns; those may be balanced.
And since this is machine learning, not traditional coding, there is no variable there labeled "higher risk of depression," "higher risk of pregnancy" or "aggressive guy scale." Not only do you not know what your system is selecting on, you don't even know where to begin to look. It's a black box: it has predictive power, but you don't understand it.
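The worry here, a model selecting on something no variable names, can be made concrete with a toy "hire like our past high performers" model. All feature names and numbers below are invented: the model is only ever shown a skill score and a second, seemingly innocuous feature, yet that second feature silently drives its choice.

```python
# Toy nearest-centroid "hire like our high performers" sketch.
# Feature vector: (skill_test_score, late_night_activity) -- both invented.
# No variable anywhere is labeled with the attribute the proxy correlates with.
past_hires = [  # (features, was_high_performer)
    ((90, 0.1), True), ((85, 0.2), True), ((88, 0.1), True),
    ((70, 0.8), False), ((65, 0.9), False), ((72, 0.7), False),
]

def centroid(rows):
    features = [f for f, _ in rows]
    return tuple(sum(col) / len(col) for col in zip(*features))

good = centroid([r for r in past_hires if r[1]])
bad = centroid([r for r in past_hires if not r[1]])

def score(features):
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return dist(features, bad) - dist(features, good)  # higher = more "hire-like"

# Two candidates with IDENTICAL skill scores; only the proxy differs.
print(score((80, 0.1)) > score((80, 0.8)))  # True: the proxy alone decides
```

Because the skill scores are equal, the entire ranking gap between the two candidates comes from the proxy feature, and nothing in the model's inputs or parameters announces that fact.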
"What safeguards," I asked, "do you have to make sure that your black box isn't doing something shady?" She looked at me as if I had just stepped on 10 puppy tails. She stared at me and said, "I don't want to hear another word about this," and she turned around and walked away. Mind you, she wasn't rude. It was clearly: what I don't know isn't my problem, go away, death stare.
Look, such a system may even be less biased than human managers in some ways. But it could also lead to a steady but stealthy shutting out of the job market of people with higher risk of depression. Is this the kind of society we want to build, without even knowing we've done it, because we turned decision-making over to machines we don't totally understand?
Another problem is this: these systems are often trained on data generated by our actions, human imprints. Well, they could just be reflecting our biases, and these systems could be picking up on our biases and amplifying them and showing them back to us, while we tell ourselves, "We're just doing objective, neutral computation." Researchers have found that on Google, women are less likely than men to be shown job ads for high-paying jobs, and that searching for African-American names is more likely to bring up ads suggesting criminal history, even when there is none. Such hidden biases and black-box algorithms, which researchers uncover sometimes but sometimes we don't know of, can have life-altering consequences.
In Wisconsin, a defendant was sentenced to six years in prison partly on the basis of an algorithmic risk score. You may not know this, but algorithms are increasingly used in parole and sentencing decisions. He wanted to know: How is this score calculated? It's a commercial black box, and the company refused to have its algorithm be challenged in open court. But ProPublica, an investigative nonprofit, audited that very algorithm with what public data they could find, and discovered that its predictive power was dismal, barely better than chance, and that it was wrongly labeling black defendants as future criminals at twice the rate of white defendants.
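The core of an audit like ProPublica's is a disaggregated error check: among people who did not in fact reoffend, how often did the score wrongly flag them as high risk, group by group? A toy sketch with invented records (not ProPublica's actual data):

```python
# Each record: (group, labeled_high_risk, actually_reoffended) -- invented data.
records = [
    ("a", True, False), ("a", True, False), ("a", True, True), ("a", False, False),
    ("b", True, False), ("b", False, False), ("b", False, True), ("b", False, False),
]

def false_positive_rate(group):
    """Among people in `group` who did NOT reoffend, the share wrongly flagged."""
    did_not_reoffend = [r for r in records if r[0] == group and not r[2]]
    wrongly_flagged = [r for r in did_not_reoffend if r[1]]
    return len(wrongly_flagged) / len(did_not_reoffend)

# A score can look "accurate" on average while erring very differently by group:
# here group "a" is wrongly flagged at twice the rate of group "b".
print(false_positive_rate("a"), false_positive_rate("b"))
```

This is why aggregate accuracy alone says little; the harm lives in how the errors are distributed.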
Consider one pair of cases from that audit: a young woman was late picking up her godsister from a school in Broward County, Florida, and was running down the street with a friend of hers. They spotted an unlocked kid's bike and a scooter on a porch and foolishly jumped on them. As they were speeding off, a woman came out and said, "Hey! That's my kid's bike!" They dropped it and walked away, but they were arrested. She was wrong, she was foolish, but she was also just 18. Meanwhile, a man had been arrested for shoplifting in Home Depot -- a similar petty crime -- but he had prior armed robbery convictions. The algorithm scored her as high risk, and not him. Two years later, ProPublica found that she had not reoffended; it was just hard to get a job for her with her record. He, on the other hand, did reoffend and is now serving a prison term for a later crime.
Clearly, we need to audit our black boxes and not let them have this kind of unchecked power. Audits are great and important, but they don't solve all our problems. Take Facebook's powerful news feed algorithm -- the one that ranks everything and decides what to show you from all the friends and pages you follow. There's no single right answer to what it should show, and Facebook optimizes for engagement on the site: likes, shares, comments.
In August of 2014, protests broke out in Ferguson, Missouri, after the killing of an African-American teenager by a white police officer, under murky circumstances. The news of the protests was all over my algorithmically unfiltered Twitter feed, but nowhere on my Facebook. Was it my Facebook friends? I disabled Facebook's algorithm, which is hard, because Facebook keeps wanting to make you come under the algorithm's control, and saw that my friends were talking about it. It's just that the algorithm wasn't showing it to me. I researched this and found this was a widespread problem. The story of Ferguson wasn't algorithm-friendly: it's not "likable," and it's not even easy to comment on. Without likes and comments, the algorithm was likely showing it to even fewer people. Instead, that week, Facebook's algorithm highlighted the ALS Ice Bucket Challenge. Worthy cause: dump ice water, donate to charity, fine. But it was super algorithm-friendly. The machine made this decision for us. A very important but difficult conversation might have been smothered, had Facebook been the only channel.
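The dynamic described above is a feedback loop: an item's reach in the next round depends on the engagement it earned in this one. A minimal simulation with invented rates and multipliers (not Facebook's actual ranking) shows how a hard-but-important story can spiral toward invisibility while a "likable" one snowballs:

```python
def simulate(engagement_rate, reach=1000, rounds=5):
    """Toy feedback loop: next round's reach is proportional to engagement."""
    for _ in range(rounds):
        engaged = reach * engagement_rate
        reach = engaged * 10  # engaged-with items get ranked higher next round
    return int(reach)

print(simulate(0.20))  # "likable" item: reach compounds upward
print(simulate(0.05))  # hard-but-important item: reach collapses
```

With these invented numbers, each round multiplies reach by engagement_rate x 10, so anything above a 10% engagement rate grows exponentially and anything below it shrinks exponentially; the ranking never asks whether the story matters.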
These systems can also be wrong in ways that don't resemble human error. Do you remember Watson, IBM's machine-intelligence system that wiped the floor with human contestants on Jeopardy? It was a great player. But then, for Final Jeopardy, Watson was asked this question: "Its largest airport is named for a World War II hero, its second-largest for a World War II battle." Chicago. The two humans got it right. Watson, on the other hand, answered "Toronto" -- for a US city category. The impressive system made an error that a human would never make, that a second-grader wouldn't make. Our machine intelligence can fail in ways that don't fit the error patterns of humans, in ways we won't expect and be prepared for.
It'd be lousy not to get a job one is qualified for, but it would be worse if it was because of stack overflow in some subroutine. In May of 2010, a flash crash on Wall Street, fueled by a feedback loop in a "sell" algorithm, wiped a trillion dollars of value in 36 minutes.
I don't even want to think what "error" means in the context of lethal autonomous weapons. So yes, humans have always been biased; decision makers and gatekeepers make mistakes. But that's exactly my point: we cannot escape these difficult questions, and we cannot outsource our responsibilities to machines. Artificial intelligence does not give us a "Get out of ethics free" card. Data scientist Fred Benenson calls this math-washing. We need the opposite: algorithm suspicion, scrutiny and investigation. We need to accept that bringing math and computation to messy, value-laden human affairs does not bring objectivity; rather, the complexity of human affairs invades the algorithms. Yes, we can and should use computation to help us make better decisions, but we have to own up to our moral responsibility to judgment, and use algorithms within that framework, not as a means to abdicate and outsource our responsibilities to one another.
About the speaker: Zeynep Tufekci - Techno-sociologist
Why you should listen
We've never had so many ways to express ourselves to the world, to break news, blast opinions, build communities. Zeynep Tufekci studies how online voices and online crowds -- using Facebook, Twitter and other social tools -- interact with traditional power. Her analysis of the Gezi Park demonstrations in her native Turkey broke new ground, and she's quickly become a must-follow on Medium for her sharp insights into news and events that are, more and more, influenced by spontaneous online social reaction.
An assistant professor at the School of Information and Library Science (SILS) at University of North Carolina, Chapel Hill, she's a faculty associate at Harvard's Berkman Center and the co-editor of Inequity in the Technopolis, a 10-year longitudinal study of tech access in Austin, Texas.
Zeynep Tufekci | Speaker | TED.com