TEDSummit

Zeynep Tufekci: Machine intelligence makes human morals more important


Machine intelligence is here, and we're already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don't fit human error patterns -- and in ways we won't expect or be prepared for. "We cannot outsource our responsibilities to machines," she says. "We must hold on ever tighter to human values and human ethics."


So, I started my first job
as a computer programmer
00:12
in my very first year of college --
00:16
basically, as a teenager.
00:18
Soon after I started working,
00:20
writing software in a company,
00:22
a manager who worked at the company
came down to where I was,
00:24
and he whispered to me,
00:28
"Can he tell if I'm lying?"
00:30
There was nobody else in the room.
00:33
"Can who tell if you're lying?
And why are we whispering?"
00:37
The manager pointed
at the computer in the room.
00:42
"Can he tell if I'm lying?"
00:45
Well, that manager was having
an affair with the receptionist.
00:49
(Laughter)
00:53
And I was still a teenager.
00:55
So I whisper-shouted back to him,
00:57
"Yes, the computer can tell
if you're lying."
00:59
(Laughter)
01:03
Well, I laughed, but actually,
the laugh's on me.
01:04
Nowadays, there are computational systems
01:07
that can suss out
emotional states and even lying
01:11
from processing human faces.
01:14
Advertisers and even governments
are very interested.
01:17
I had become a computer programmer
01:22
because I was one of those kids
crazy about math and science.
01:24
But somewhere along the line
I'd learned about nuclear weapons,
01:27
and I'd gotten really concerned
with the ethics of science.
01:31
I was troubled.
01:34
However, because of family circumstances,
01:35
I also needed to start working
as soon as possible.
01:37
So I thought to myself, hey,
let me pick a technical field
01:41
where I can get a job easily
01:44
and where I don't have to deal
with any troublesome questions of ethics.
01:46
So I picked computers.
01:51
(Laughter)
01:52
Well, ha, ha, ha!
All the laughs are on me.
01:53
Nowadays, computer scientists
are building platforms
01:57
that control what a billion
people see every day.
01:59
They're developing cars
that could decide who to run over.
02:05
They're even building machines, weapons,
02:09
that might kill human beings in war.
02:12
It's ethics all the way down.
02:15
Machine intelligence is here.
02:19
We're now using computation
to make all sorts of decisions,
02:21
but also new kinds of decisions.
02:25
We're asking questions to computation
that have no single right answers,
02:27
that are subjective
02:32
and open-ended and value-laden.
02:33
We're asking questions like,
02:36
"Who should the company hire?"
02:37
"Which update from which friend
should you be shown?"
02:40
"Which convict is more
likely to reoffend?"
02:42
"Which news item or movie
should be recommended to people?"
02:45
Look, yes, we've been using
computers for a while,
02:48
but this is different.
02:51
This is a historical twist,
02:53
because we cannot anchor computation
for such subjective decisions
02:55
the way we can anchor computation
for flying airplanes, building bridges,
03:00
going to the moon.
03:06
Are airplanes safer?
Did the bridge sway and fall?
03:08
There, we have agreed-upon,
fairly clear benchmarks,
03:11
and we have laws of nature to guide us.
03:16
We have no such anchors and benchmarks
03:18
for decisions in messy human affairs.
03:21
To make things more complicated,
our software is getting more powerful,
03:25
but it's also getting less
transparent and more complex.
03:30
Recently, in the past decade,
03:34
complex algorithms
have made great strides.
03:36
They can recognize human faces.
03:39
They can decipher handwriting.
03:41
They can detect credit card fraud
03:44
and block spam
03:46
and they can translate between languages.
03:47
They can detect tumors in medical imaging.
03:49
They can beat humans in chess and Go.
03:52
Much of this progress comes
from a method called "machine learning."
03:55
Machine learning is different
than traditional programming,
04:00
where you give the computer
detailed, exact, painstaking instructions.
04:03
It's more like you take the system
and you feed it lots of data,
04:07
including unstructured data,
04:11
like the kind we generate
in our digital lives.
04:13
And the system learns
by churning through this data.
04:15
And also, crucially,
04:18
these systems don't operate
under a single-answer logic.
04:20
They don't produce a simple answer;
it's more probabilistic:
04:24
"This one is probably more like
what you're looking for."
04:27
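
To make the contrast concrete, here is a minimal sketch (not from the talk; the data, names and scikit-learn setup are illustrative assumptions) of the difference between a hand-written, explicit rule and a model that learns from examples and answers in probabilities rather than with a single right answer.

```python
# A minimal, illustrative sketch (hypothetical data): traditional programming
# encodes an explicit rule, while machine learning fits a model to examples
# and returns probabilities instead of a single hard-coded answer.
from sklearn.linear_model import LogisticRegression

# Traditional programming: a detailed, exact instruction.
def flag_transaction_by_rule(amount: float) -> bool:
    return amount > 1000  # the programmer states the rule explicitly

# Machine learning: the "rule" is whatever the model infers from the data.
examples = [[120.0], [80.0], [1500.0], [2300.0], [95.0], [1750.0]]  # amounts
labels = [0, 0, 1, 1, 0, 1]                                         # 1 = fraud

model = LogisticRegression(max_iter=1000)
model.fit(examples, labels)

# The output is probabilistic: "this one is probably more like what
# you're looking for," not a guaranteed single right answer.
print(model.predict_proba([[900.0]]))  # e.g. roughly [[0.6, 0.4]] -- a likelihood, not a verdict
```
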
Now, the upside is:
this method is really powerful.
04:32
The head of Google's AI systems called it,
04:35
"the unreasonable effectiveness of data."
04:37
The downside is,
04:39
we don't really understand
what the system learned.
04:41
In fact, that's its power.
04:44
This is less like giving
instructions to a computer;
04:46
it's more like training
a puppy-machine-creature
04:51
we don't really understand or control.
04:55
So this is our problem.
04:58
It's a problem when this artificial
intelligence system gets things wrong.
05:00
It's also a problem
when it gets things right,
05:04
because we don't even know which is which
when it's a subjective problem.
05:08
We don't know what this thing is thinking.
05:11
So, consider a hiring algorithm --
05:15
a system used to hire people,
using machine-learning systems.
05:20
Such a system would have been trained
on previous employees' data
05:25
and instructed to find and hire
05:28
people like the existing
high performers in the company.
05:31
Sounds good.
05:34
I once attended a conference
05:35
that brought together
human resources managers and executives,
05:38
high-level people,
05:41
using such systems in hiring.
05:42
They were super excited.
05:43
They thought that this would make hiring
more objective, less biased,
05:45
and give women
and minorities a better shot
05:50
against biased human managers.
05:53
And look -- human hiring is biased.
05:55
I know.
05:59
I mean, in one of my early jobs
as a programmer,
06:00
my immediate manager would sometimes
come down to where I was
06:03
really early in the morning
or really late in the afternoon,
06:07
and she'd say, "Zeynep,
let's go to lunch!"
06:11
I'd be puzzled by the weird timing.
06:14
It's 4pm. Lunch?
06:16
I was broke, so free lunch. I always went.
06:19
I later realized what was happening.
06:22
My immediate managers
had not confessed to their higher-ups
06:24
that the programmer they hired
for a serious job was a teen girl
06:29
who wore jeans and sneakers to work.
06:32
I was doing a good job,
I just looked wrong
06:37
and was the wrong age and gender.
06:39
So hiring in a gender- and race-blind way
06:41
certainly sounds good to me.
06:44
But with these systems,
it is more complicated, and here's why:
06:47
Currently, computational systems
can infer all sorts of things about you
06:50
from your digital crumbs,
06:56
even if you have not
disclosed those things.
06:58
They can infer your sexual orientation,
07:01
your personality traits,
07:04
your political leanings.
07:06
They have predictive power
with high levels of accuracy.
07:08
Remember -- for things
you haven't even disclosed.
07:13
This is inference.
07:15
I have a friend who developed
such computational systems
07:17
to predict the likelihood
of clinical or postpartum depression
07:20
from social media data.
07:24
The results are impressive.
07:26
Her system can predict
the likelihood of depression
07:28
months before the onset of any symptoms --
07:31
months before.
07:35
No symptoms, there's prediction.
07:37
She hopes it will be used
for early intervention. Great!
07:39
But now put this in the context of hiring.
07:44
So at this human resources
managers conference,
07:48
I approached a high-level manager
in a very large company,
07:51
and I said to her, "Look,
what if, unbeknownst to you,
07:55
your system is weeding out people
with high future likelihood of depression?
08:00
They're not depressed now,
just maybe in the future, more likely.
08:07
What if it's weeding out women
more likely to be pregnant
08:11
in the next year or two
but aren't pregnant now?
08:15
What if it's hiring aggressive people
because that's your workplace culture?"
08:18
You can't tell this by looking
at gender breakdowns.
08:25
Those may be balanced.
08:27
And since this is machine learning,
not traditional coding,
08:29
there is no variable there
labeled "higher risk of depression,"
08:32
"higher risk of pregnancy,"
08:37
"aggressive guy scale."
08:39
Not only do you not know
what your system is selecting on,
08:41
you don't even know
where to begin to look.
08:45
It's a black box.
08:48
It has predictive power,
but you don't understand it.
08:49
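
A hedged sketch of why there is nothing to inspect: a model trained only to imitate past hiring outcomes can lean on whatever proxy features happen to correlate with them, and no column is ever named "higher risk of depression" or "aggressive guy scale." Every feature, record and library call below is a hypothetical illustration, not the systems described in the talk.

```python
# Hypothetical illustration: a hiring model trained to imitate past outcomes.
# No feature is labeled "higher risk of depression" or "likely pregnancy" --
# but the learned weights can still encode proxies for them.
from sklearn.ensemble import GradientBoostingClassifier

# Features gathered from applications and digital traces (invented names):
# [years_experience, test_score, posts_per_week, late_night_activity]
past_applicants = [
    [5, 88, 2, 0.1],
    [2, 71, 9, 0.7],
    [7, 93, 1, 0.2],
    [3, 65, 8, 0.8],
    [6, 90, 3, 0.1],
    [1, 60, 7, 0.9],
]
was_hired_and_rated_high = [1, 0, 1, 0, 1, 0]  # the only signal the model sees

model = GradientBoostingClassifier().fit(past_applicants, was_hired_and_rated_high)

# The model "works" -- it scores new applicants -- but nothing tells you
# which hidden correlations it is actually selecting on.
print(model.predict_proba([[4, 80, 6, 0.6]]))
print(model.feature_importances_)  # numbers, not explanations
```
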
"What safeguards," I asked, "do you have
08:52
to make sure that your black box
isn't doing something shady?"
08:54
She looked at me as if I had
just stepped on 10 puppy tails.
09:00
(Laughter)
09:04
She stared at me and she said,
09:06
"I don't want to hear
another word about this."
09:08
And she turned around and walked away.
09:13
Mind you -- she wasn't rude.
09:16
It was clearly: what I don't know
isn't my problem, go away, death stare.
09:17
(Laughter)
09:23
Look, such a system
may even be less biased
09:25
than human managers in some ways.
09:29
And it could make monetary sense.
09:31
But it could also lead
09:34
to a steady but stealthy
shutting out of the job market
09:36
of people with higher risk of depression.
09:41
Is this the kind of society
we want to build,
09:43
without even knowing we've done this,
09:46
because we turned decision-making
to machines we don't totally understand?
09:48
Another problem is this:
09:53
these systems are often trained
on data generated by our actions,
09:55
human imprints.
09:59
Well, they could just be
reflecting our biases,
10:02
and these systems
could be picking up on our biases
10:06
and amplifying them
10:09
and showing them back to us,
10:10
while we're telling ourselves,
10:12
"We're just doing objective,
neutral computation."
10:13
Researchers found that on Google,
10:18
women are less likely than men
to be shown job ads for high-paying jobs.
10:22
And searching for African-American names
10:28
is more likely to bring up ads
suggesting criminal history,
10:31
even when there is none.
10:35
Such hidden biases
and black-box algorithms
10:38
that researchers uncover sometimes
but sometimes we don't know,
10:42
can have life-altering consequences.
10:46
In Wisconsin, a defendant
was sentenced to six years in prison
10:49
for evading the police.
10:54
You may not know this,
10:56
but algorithms are increasingly used
in parole and sentencing decisions.
10:58
He wanted to know:
How is this score calculated?
11:02
It's a commercial black box.
11:05
The company refused to have its algorithm
be challenged in open court.
11:07
But ProPublica, an investigative
nonprofit, audited that very algorithm
11:12
with what public data they could find,
11:17
and found that its outcomes were biased
11:19
and its predictive power
was dismal, barely better than chance,
11:22
and it was wrongly labeling
black defendants as future criminals
11:25
at twice the rate of white defendants.
11:30
So, consider this case:
11:35
This woman was late
picking up her godsister
11:38
from a school in Broward County, Florida,
11:41
running down the street
with a friend of hers.
11:44
They spotted an unlocked kid's bike
and a scooter on a porch
11:47
and foolishly jumped on it.
11:51
As they were speeding off,
a woman came out and said,
11:52
"Hey! That's my kid's bike!"
11:55
They dropped it, they walked away,
but they were arrested.
11:57
She was wrong, she was foolish,
but she was also just 18.
12:01
She had a couple of juvenile misdemeanors.
12:04
Meanwhile, that man had been arrested
for shoplifting in Home Depot --
12:07
85 dollars' worth of stuff,
a similar petty crime.
12:13
But he had two prior
armed robbery convictions.
12:16
But the algorithm scored her
as high risk, and not him.
12:21
Two years later, ProPublica found
that she had not reoffended.
12:26
It was just hard to get a job
for her with her record.
12:30
He, on the other hand, did reoffend
12:33
and is now serving an eight-year
prison term for a later crime.
12:35
Clearly, we need to audit our black boxes
12:40
and not have them have
this kind of unchecked power.
12:43
(Applause)
12:46
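
The shape of such an audit can be sketched in a few lines: for each group, count how often people who did not go on to reoffend were nonetheless labeled high risk (the false positive rate), then compare across groups. The records below are invented; only the method is the point.

```python
# Illustrative audit sketch with invented records: compare false positive
# rates (labeled high risk but did not reoffend) across groups.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended) -- hypothetical data.
records = [
    ("A", True, False), ("A", True, False), ("A", False, False), ("A", True, True),
    ("B", False, False), ("B", True, False), ("B", False, False), ("B", True, True),
]

flagged = defaultdict(int)  # non-reoffenders wrongly labeled high risk
total = defaultdict(int)    # all non-reoffenders in the group

for group, high_risk, reoffended in records:
    if not reoffended:
        total[group] += 1
        if high_risk:
            flagged[group] += 1

for group in sorted(total):
    rate = flagged[group] / total[group]
    print(f"group {group}: false positive rate = {rate:.0%}")
# A large gap between groups is the kind of disparity an audit should surface.
```
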
Audits are great and important,
but they don't solve all our problems.
12:50
Take Facebook's powerful
news feed algorithm --
12:54
you know, the one that ranks everything
and decides what to show you
12:57
from all the friends and pages you follow.
13:01
Should you be shown another baby picture?
13:04
(Laughter)
13:07
A sullen note from an acquaintance?
13:08
An important but difficult news item?
13:11
There's no right answer.
13:13
Facebook optimizes
for engagement on the site:
13:14
likes, shares, comments.
13:17
In August of 2014,
13:20
protests broke out in Ferguson, Missouri,
13:22
after the killing of an African-American
teenager by a white police officer,
13:25
under murky circumstances.
13:30
The news of the protests was all over
13:31
my algorithmically
unfiltered Twitter feed,
13:34
but nowhere on my Facebook.
13:36
Was it my Facebook friends?
13:39
I disabled Facebook's algorithm,
13:40
which is hard because Facebook
keeps wanting to make you
13:43
come under the algorithm's control,
13:46
and saw that my friends
were talking about it.
13:48
It's just that the algorithm
wasn't showing it to me.
13:50
I researched this and found
this was a widespread problem.
13:53
The story of Ferguson
wasn't algorithm-friendly.
13:56
It's not "likable."
14:00
Who's going to click on "like?"
14:01
It's not even easy to comment on.
14:03
Without likes and comments,
14:05
the algorithm was likely showing it
to even fewer people,
14:07
so we didn't get to see this.
14:10
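
As a toy illustration (invented weights and counts, not Facebook's actual formula): a feed that scores stories purely on likes, shares and comments will rank a hard-to-like story low no matter how important it is.

```python
# Toy sketch of engagement-based ranking (invented weights and counts):
# a story that is hard to like or comment on sinks, regardless of importance.
stories = [
    {"title": "Viral challenge clip",     "likes": 900, "shares": 300, "comments": 150},
    {"title": "Ferguson protest report",  "likes": 40,  "shares": 25,  "comments": 10},
    {"title": "Friend's baby picture",    "likes": 300, "shares": 5,   "comments": 60},
]

def engagement_score(story):
    # Hypothetical weights; the point is only that engagement is the objective.
    return story["likes"] + 2 * story["shares"] + 3 * story["comments"]

ranked = sorted(stories, key=engagement_score, reverse=True)
for story in ranked:
    print(engagement_score(story), story["title"])
# The difficult-but-important item lands at the bottom of the feed.
```
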
Instead, that week,
14:12
Facebook's algorithm highlighted this,
14:14
which is the ALS Ice Bucket Challenge.
14:16
Worthy cause; dump ice water,
donate to charity, fine.
14:18
But it was super algorithm-friendly.
14:22
The machine made this decision for us.
14:25
A very important
but difficult conversation
14:27
might have been smothered,
14:31
had Facebook been the only channel.
14:32
Now, finally, these systems
can also be wrong
14:36
in ways that don't resemble human systems.
14:39
Do you guys remember Watson,
IBM's machine-intelligence system
14:42
that wiped the floor
with human contestants on Jeopardy?
14:45
It was a great player.
14:49
But then, for Final Jeopardy,
Watson was asked this question:
14:50
"Its largest airport is named
for a World War II hero,
14:54
its second-largest
for a World War II battle."
14:57
(Hums Final Jeopardy music)
14:59
Chicago.
15:01
The two humans got it right.
15:02
Watson, on the other hand,
answered "Toronto" --
15:04
for a US city category!
15:09
The impressive system also made an error
15:11
that a human would never make,
a second-grader wouldn't make.
15:14
Our machine intelligence can fail
15:18
in ways that don't fit
error patterns of humans,
15:21
in ways we won't expect
and be prepared for.
15:25
It'd be lousy not to get a job
one is qualified for,
15:28
but it would triple suck
if it was because of stack overflow
15:31
in some subroutine.
15:35
(Laughter)
15:36
In May of 2010,
15:38
a flash crash on Wall Street
fueled by a feedback loop
15:41
in Wall Street's "sell" algorithm
15:45
wiped a trillion dollars
of value in 36 minutes.
15:48
I don't even want to think
what "error" means
15:53
in the context of lethal
autonomous weapons.
15:55
So yes, humans have always been biased.
16:01
Decision makers and gatekeepers,
16:05
in courts, in news, in war ...
16:07
they make mistakes;
but that's exactly my point.
16:11
We cannot escape
these difficult questions.
16:14
We cannot outsource
our responsibilities to machines.
16:18
(Applause)
16:22
Artificial intelligence does not give us
a "Get out of ethics free" card.
16:29
Data scientist Fred Benenson
calls this math-washing.
16:34
We need the opposite.
16:38
We need to cultivate algorithm suspicion,
scrutiny and investigation.
16:39
We need to make sure we have
algorithmic accountability,
16:45
auditing and meaningful transparency.
16:48
We need to accept
that bringing math and computation
16:51
to messy, value-laden human affairs
16:54
does not bring objectivity;
16:57
rather, the complexity of human affairs
invades the algorithms.
17:00
Yes, we can and we should use computation
17:04
to help us make better decisions.
17:07
But we have to own up
to our moral responsibility to judgment,
17:09
and use algorithms within that framework,
17:15
not as a means to abdicate
and outsource our responsibilities
17:17
to one another as human to human.
17:22
Machine intelligence is here.
17:25
That means we must hold on ever tighter
17:28
to human values and human ethics.
17:31
Thank you.
17:34
(Applause)
17:35


About the speaker:

Zeynep Tufekci - Techno-sociologist
Techno-sociologist Zeynep Tufekci asks big questions about our societies and our lives as they play out online.

Why you should listen

We've never had so many ways to express ourselves to the world, to break news, blast opinions, build communities. Zeynep Tufekci studies how online voices and online crowds -- using Facebook, Twitter and other social tools -- interact with traditional power. Her analysis of the Gezi Park demonstrations in her native Turkey broke new ground, and she's quickly become a must-follow on Medium for her sharp insights into news and events that are, more and more, influenced by spontaneous online social reaction.

An assistant professor at the School of Information and Library Science (SILS) at the University of North Carolina, Chapel Hill, she's a faculty associate at Harvard's Berkman Center and the co-editor of Inequity in the Technopolis, a 10-year longitudinal study of tech access in Austin, Texas.
