Zeynep Tufekci: Machine intelligence makes human morals more important
as a computer programmer
came down to where I was,
And why are we whispering?"
at the computer in the room.
an affair with the receptionist.
if you're lying."
the laugh's on me.
emotional states and even lying
are very interested.
crazy about math and science.
I'd learned about nuclear weapons,
with the ethics of science.
as soon as possible.
let me pick a technical field
where I could get a job easily,
with any troublesome questions of ethics.
All the laughs are on me.
are building platforms
people see every day.
that could decide who to run over.
war machines and weapons.
to make all sorts of decisions,
that have no single right answers,
should you be shown?"
likely to reoffend?"
should be recommended to people?"
computers for a while,
for such subjective decisions
for flying airplanes, building bridges,
Did the bridge sway and fall?
fairly clear benchmarks,
our software is getting more powerful,
transparent and more complex.
have made great strides.
defeat human players at chess and Go.
from a method called "machine learning."
than traditional programming,
detailed, exact, painstaking instructions.
and you feed it lots of data,
in our digital lives.
by churning through this data.
under a single-answer logic.
it's more probabilistic:
what you're looking for."
this method is really powerful.
what the system learned.
instructions to a computer;
a puppy-machine-creature
we don't really understand or control.
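As a minimal sketch of the difference described here, the snippet below contrasts a hand-written rule with a model trained on toy data; the example data and the use of scikit-learn are my own assumptions for illustration, not anything from the talk.

```python
# A minimal sketch (illustration only) of hand-written instructions versus
# a system trained on data, as described in the talk.
from sklearn.linear_model import LogisticRegression

# Traditional programming: we spell out the exact, painstaking rule ourselves.
def rule_based_flag(message: str) -> bool:
    return "free money" in message.lower()

# Machine learning: we feed the system examples and let it churn through them.
# The rule it ends up with is one we never wrote down.
examples = [[1, 0], [1, 1], [0, 1], [0, 0]]   # toy feature vectors
labels   = [1, 1, 0, 0]                        # 1 = flag, 0 = don't flag
model = LogisticRegression().fit(examples, labels)

# The answer comes back as "this is probably what you're looking for,"
# not as a single hard-coded yes or no.
print(model.predict_proba([[1, 0]]))
```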
intelligence system gets things wrong.
when it gets things right,
when it's a subjective problem.
We don't even know which is which.
using machine-learning systems.
on previous employees' data
high performers in the company.
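A hypothetical sketch of the setup described here: a classifier trained on past employees' records to score applicants as likely "high performers." The feature names and numbers are invented; the point is that nothing in the code reveals what the model is actually selecting on.

```python
# Invented example of training a hiring model on past employees' data.
from sklearn.ensemble import RandomForestClassifier

# Each row: [years_experience, commute_minutes, referred_by_employee]
past_employees = [
    [2, 15, 1],
    [7, 60, 0],
    [4, 30, 1],
    [10, 20, 1],
    [1, 45, 0],
    [6, 10, 0],
]
high_performer = [1, 1, 0, 1, 0, 1]   # label taken from past performance reviews

model = RandomForestClassifier(random_state=0).fit(past_employees, high_performer)

# The model now scores new applicants. Nothing here tells us whether it is
# quietly keying on variables that correlate with gender, health or other
# traits the employer never intended to select on.
applicant = [[3, 50, 0]]
print(model.predict_proba(applicant))
```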
human resources managers and executives,
more objective, less biased,
and minorities a better shot
In one of my early jobs as a programmer,
come down to where I was
or really late in the afternoon,
let's go to lunch!"
had not confessed to their higher-ups
for a serious job was a teen girl
I just looked wrong
a hiring system that doesn't take such things into account
it is more complicated, and here's why:
can infer all sorts of things about you
disclosed those things.
with high levels of accuracy.
you haven't even disclosed.
such computational systems
of clinical or postpartum depression
the likelihood of depression
for early intervention. Great!
used in the context of hiring.
managers conference,
in a very large company,
what if, unbeknownst to you,
with high future likelihood of depression?
just maybe in the future, more likely.
more likely to be pregnant
in the next year or two
but aren't pregnant now?
because that's your workplace culture?"
at gender breakdowns.
not traditional coding,
labeled "higher risk of depression,"
what your system is selecting on,
where to begin to look.
but you don't understand it.
isn't doing something shady?"
just stepped on 10 puppy tails.
another word about this."
isn't my problem, go away, death stare.
may even be less biased
in some ways,
shutting out of the job market
Is this the kind of society we want to build,
to machines we don't totally understand?
on data generated by our actions,
human imprints.
reflecting our biases,
could be picking up on our biases
neutral computation."
to be shown job ads for high-paying jobs less often than men.
suggesting criminal history,
and black-box algorithms
but sometimes we don't know,
was sentenced to six years in prison for evading the police.
in parole and sentencing decisions.
How is this score calculated?
be challenged in open court.
nonprofit, audited that very algorithm
was dismal, barely better than chance,
black defendants as future criminals
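As a rough illustration of the kind of check such an audit performs, here is a short sketch comparing false-positive rates across two groups; the numbers are invented for demonstration, not ProPublica's data.

```python
# Toy disparity check: how often non-reoffenders were labeled "high risk."
def false_positive_rate(predicted_high_risk, reoffended):
    """Share of people who did NOT reoffend but were labeled high risk."""
    fp = sum(p and not r for p, r in zip(predicted_high_risk, reoffended))
    negatives = sum(not r for r in reoffended)
    return fp / negatives

# Invented data: group A has 4 of 8 non-reoffenders flagged, group B has 2 of 8.
group_a_pred = [1, 1, 1, 1, 0, 0, 0, 0, 1, 1]
group_a_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
group_b_pred = [1, 1, 0, 0, 0, 0, 0, 0, 1, 1]
group_b_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]

print(false_positive_rate(group_a_pred, group_a_true))  # 0.5
print(false_positive_rate(group_b_pred, group_b_true))  # 0.25
```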
picking up her godsister from school
with a friend of hers.
a kid's bike and a scooter on a porch
a woman came out and said,
"Hey! That's my kid's bike!"
but they were arrested.
but she was also just 18.
for shoplifting in Home Depot --
85 dollars' worth of stuff,
a similar petty crime.
armed robbery convictions.
as high risk, and not him.
that she had not reoffended.
for her with her record.
prison term for a later crime.
this kind of unchecked power.
but they don't solve all our problems.
news feed algorithm --
and decides what to show you
from the friends you follow and the pages you visit.
for engagement on the site:
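As a rough sketch of what optimizing for engagement can mean in code, here is a toy ranking function; the post data, the class and the weights are my own invention, not Facebook's system.

```python
# Invented example: rank posts purely by predicted engagement.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_likes: float
    predicted_comments: float
    predicted_shares: float

def engagement_score(post: Post) -> float:
    # Invented weights; the objective is engagement, not importance or civic value.
    return post.predicted_likes + 2 * post.predicted_comments + 3 * post.predicted_shares

feed = [
    Post("ice bucket challenge video", 120, 30, 40),
    Post("news report on protests", 40, 25, 10),
]
for post in sorted(feed, key=engagement_score, reverse=True):
    print(post.text)
```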
protests broke out
after the killing of an African-American teenager by a white police officer,
unfiltered Twitter feed,
keeps wanting to make you
come back under the algorithm's control,
were talking about it.
wasn't showing it to me.
this was a widespread problem.
wasn't algorithm-friendly.
to even fewer people,
the Ice Bucket Challenge, which raised money for ALS.
donate to charity, fine.
but difficult conversation
might have been buried,
can also be wrong
Watson, IBM's machine-intelligence system
with human contestants on Jeopardy?
Watson was asked this question:
for a World War II hero,
for a World War II battle."
answered "Toronto" --
a second-grader wouldn't make.
error patterns of humans,
and be prepared for.
one is qualified for,
is really bad,
if it was because of stack overflow
it would be three times as bad.
fueled by a feedback loop
triggered a sudden stock-market crash
of value in 36 minutes.
what "error" means
in the context of autonomous weapons.
we all make mistakes;
but that's exactly my point.
these difficult questions.
our responsibilities to machines.
a "Get out of ethics free" card.
calls this math-washing.
scrutiny and investigation.
algorithmic accountability,
that bringing math and computation
to value-laden human affairs
invades the algorithms.
to our moral responsibility to judgment,
and outsource our responsibilities
ABOUT THE SPEAKER
Zeynep Tufekci - Techno-sociologist
Techno-sociologist Zeynep Tufekci asks big questions about our societies and our lives, as both algorithms and digital connectivity spread.
Why you should listen
We've entered an era of digital connectivity and machine intelligence. Complex algorithms are increasingly used to make consequential decisions about us. Many of these decisions are subjective and have no right answer: who should be hired, fired or promoted; what news should be shown to whom; which of your friends do you see updates from; which convict should be paroled. With increasing use of machine learning in these systems, we often don't even understand how exactly they are making these decisions. Zeynep Tufekci studies what this historic transition means for culture, markets, politics and personal life.
Tufekci is a contributing opinion writer at the New York Times, an associate professor at the School of Information and Library Science at the University of North Carolina, Chapel Hill, and a faculty associate at Harvard's Berkman Klein Center for Internet and Society.
Her book, Twitter and Tear Gas: The Power and Fragility of Networked Protest, was published in 2017 by Yale University Press. Her next book, from Penguin Random House, will be about algorithms that watch, judge and nudge us.