Zeynep Tufekci: Machine intelligence makes human morals more important
So, I started my first job as a computer programmer in my very first year of college, basically as a teenager. Soon after I started working, writing software in a company, a manager who worked at the company came down to where I was, and he whispered to me, "Can he tell if I'm lying?" There was nobody else in the room. "Can who tell if you're lying? And why are we whispering?" The manager pointed at the computer in the room. "Can he tell if I'm lying?" Well, that manager was having an affair with the receptionist. And I was still a teenager, so I whisper-shouted back to him, "Yes, the computer can tell if you're lying." Well, I laughed, but actually, the laugh's on me. Nowadays, there are computational systems that can suss out emotional states and even lying from processing human faces. Advertisers and even governments are very interested.
I had become a computer programmer because I was one of those kids crazy about math and science. But somewhere along the line I'd learned about nuclear weapons, and I'd gotten really concerned with the ethics of science. I was troubled. However, I also needed to start working as soon as possible. So I thought to myself, hey, let me pick a technical field where I can get a job easily and where I don't have to deal with any troublesome questions of ethics. So I picked computers. Well, ha, ha, ha! All the laughs are on me.
Nowadays, computer scientists are building platforms that control what a billion people see every day. They're developing cars that could decide who to run over. They're even building machines, weapons, that might kill human beings in war. It's ethics all the way down.
Machine intelligence is here. We're now using computation to make all sorts of decisions, but also new kinds of decisions. We're asking questions of computation that have no single right answers, that are subjective, open-ended and value-laden. We're asking questions like, "Who should the company hire?" "Which update from which friend should you be shown?" "Which convict is more likely to reoffend?" "Which news item or movie should be recommended to people?"
Look, yes, we've been using computers for a while, but this is different. This is a historical twist, because we cannot anchor computation for such subjective decisions the way we can anchor computation for flying airplanes, building bridges, going to the moon. Are airplanes safer? Did the bridge sway and fall? There, we have agreed-upon, fairly clear benchmarks, and we have laws of nature to guide us. We have no such anchors and benchmarks for decisions in messy human affairs.
To make things more complicated, our software is getting more powerful, but it's also getting less transparent and more complex. In the past decade, complex algorithms have made great strides. They can recognize human faces. They can detect tumors in medical imaging. They can beat humans at chess and Go.
Much of this progress comes from a method called "machine learning." Machine learning is different than traditional programming, where you give the computer detailed, exact, painstaking instructions. It's more like you take the system and you feed it lots of data, including unstructured data, like the kind we generate in our digital lives, and the system learns by churning through this data. And crucially, these systems don't operate under a single-answer logic. They don't produce a simple answer; it's more probabilistic: "This one is probably more like what you're looking for."
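To make that contrast concrete, here is a minimal sketch in Python (the spam example, the feature names and the toy data are my own invention, not from the talk): a hand-written rule versus a model that learns a probabilistic answer from labeled examples.

```python
# A minimal sketch contrasting the two approaches described above.
# The toy "spam" data and feature names are invented for illustration.

from sklearn.linear_model import LogisticRegression

# Traditional programming: a human writes the exact rule.
def is_spam_rule(num_links: int, has_greeting: bool) -> bool:
    # Painstaking, explicit instructions: flag mail with many links
    # and no personal greeting.
    return num_links > 3 and not has_greeting

# Machine learning: we supply examples and let the system
# churn through the data to find its own decision boundary.
X = [[0, 1], [1, 1], [5, 0], [7, 0], [2, 1], [6, 0]]  # [num_links, has_greeting]
y = [0, 0, 1, 1, 0, 1]                                # 0 = not spam, 1 = spam

model = LogisticRegression().fit(X, y)

# Note the difference in output: the rule returns a hard yes/no,
# while the learned model answers probabilistically, i.e.
# "this one is probably more like what you're looking for."
print(is_spam_rule(num_links=4, has_greeting=False))  # True
print(model.predict_proba([[4, 0]])[0][1])            # a probability, not a verdict
```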
Now, the upside is, this method is really powerful. The head of Google's machine learning systems called it "the unreasonable effectiveness of data." The downside is, we don't really understand what the system learned. In fact, that's its power. This is less like giving instructions to a computer; it's more like training a puppy-machine-creature we don't really understand or control. So this is our problem. It's a problem when this artificial intelligence system gets things wrong. It's also a problem when it gets things right, because we don't even know which is which when it's a subjective problem. We don't know what this thing is thinking.
So, consider a hiring algorithm, a system used to hire people, using machine-learning systems. Such a system would have been trained on previous employees' data and instructed to find and hire people like the existing high performers in the company. Sounds good. I once attended a conference that brought together human resources managers and executives, high-level people, using such systems in hiring. They were super excited. They thought that this would make hiring more objective, less biased, and give women and minorities a better shot against biased human managers. And look, human hiring is biased.
I know. In one of my early jobs as a programmer, my immediate manager would sometimes come down to where I was really early in the morning or really late in the afternoon, and she'd say, "Zeynep, let's go to lunch!" I'd be puzzled by the weird timing. It's 4pm. Lunch? I was broke, so free lunch, I always went. I later realized what was happening. My immediate managers had not confessed to their higher-ups that the programmer they hired for a serious job was a teen girl who wore jeans and sneakers to work. I was doing a good job, I just looked wrong and was the wrong age and gender.
So hiring in a gender- and race-blind way certainly sounds good to me. But with these systems, it is more complicated, and here's why: currently, computational systems can infer all sorts of things about you from your scattered digital crumbs, even if you have not disclosed those things, and they can do so with high levels of accuracy. Remember, for things you haven't even disclosed. This is inference. I have a friend who developed such computational systems to predict the likelihood of clinical or postpartum depression from social media data. The results are impressive. Her system can predict the likelihood of depression months before the onset of any symptoms. Months before. No symptoms, there's prediction. She hopes it will be used for early intervention. Great!
But now put this in the context of hiring. So at this human resources managers conference, I approached a high-level manager in a very large company, and I said to her, "Look, what if, unbeknownst to you, your system is weeding out people with high future likelihood of depression? They're not depressed now, just maybe in the future, more likely. What if it's weeding out women more likely to be pregnant in the next year or two but aren't pregnant now? What if it's hiring aggressive people because that's your workplace culture?" You can't tell this by looking at gender breakdowns. Those may be balanced.
And since this is machine learning, not traditional coding, there is no variable there labeled "higher risk of depression," "higher risk of pregnancy," "aggressive guy scale." Not only do you not know what your system is selecting on, you don't even know where to begin to look. It's a black box. It has predictive power, but you don't understand it.
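A small sketch of why you don't know where to begin to look (entirely hypothetical features and synthetic data; the talk names no specific system): a model trained on past hiring decisions never needs a column labeled with the sensitive trait, because correlated proxies carry the signal for it.

```python
# Hypothetical illustration: a hiring model trained on past outcomes.
# No feature is labeled "depression risk" or "pregnancy", yet the model
# can still select on such traits through correlated proxies.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# A sensitive attribute the employer never collects...
sensitive = rng.integers(0, 2, n)          # e.g. a future health status

# ...leaks into innocuous-looking features (posting hours, word choice).
proxy_a = sensitive * 0.8 + rng.normal(0, 0.3, n)
proxy_b = sensitive * 0.6 + rng.normal(0, 0.3, n)
skill = rng.normal(0, 1, n)

# Past managers' decisions were themselves tilted by the sensitive trait.
hired = ((skill - 1.5 * sensitive + rng.normal(0, 0.5, n)) > 0).astype(int)

X = np.column_stack([proxy_a, proxy_b, skill])
model = LogisticRegression().fit(X, hired)

# The model penalizes the proxies, and therefore the hidden trait,
# even though "sensitive" never appears as an input column.
print(dict(zip(["proxy_a", "proxy_b", "skill"], model.coef_[0].round(2))))
```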
"What safeguards," I asked, "do you have to make sure that your black box isn't doing something shady?" She looked at me as if I had just stepped on 10 puppy tails. She stared at me and she said, "I don't want to hear another word about this." And she turned around and walked away. Mind you, she wasn't rude. It was clearly: what I don't know isn't my problem, go away, death stare.
Look, such a system may even be less biased than human managers in some ways. But it could also lead to a steady but stealthy shutting out of the job market of people with higher risk of depression. Is this the kind of society we want to build, without even knowing we've done this, because we turned decision-making over to machines we don't totally understand?
Another problem is this: these systems are often trained on data generated by our actions, human imprints. Well, they could just be reflecting our biases, and these systems could be picking up on our biases, amplifying them and showing them back to us, while we're telling ourselves, "We're just doing objective, neutral computation." Researchers found that on Google, women are less likely than men to be shown job ads for high-paying jobs.
And searching for African-American names is more likely to bring up ads suggesting criminal history, even when there is none. Such hidden biases and black-box algorithms, which researchers sometimes uncover but sometimes we don't know about, can have life-altering consequences.
In Wisconsin, a defendant was sentenced to six years in prison for evading the police. You may not know this, but algorithms are increasingly used in parole and sentencing decisions. He wanted to know: How is this score calculated? It's a commercial black box, and the company refused to have its algorithm be challenged in open court. But ProPublica, an investigative nonprofit, audited that very algorithm, and found that its outcomes were biased and its predictive power was dismal, barely better than chance, and that it was wrongly labeling black defendants as future criminals at twice the rate of white defendants.
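The core of such an audit is simple to state. A hedged sketch on synthetic records (not ProPublica's data or its exact methodology): among people who did not reoffend, compare how often each group was wrongly scored high risk.

```python
# Sketch of a disparity audit on synthetic data (not ProPublica's dataset):
# among people who did NOT reoffend, how often was each group
# wrongly labeled "high risk"?

def false_positive_rate(labeled_high_risk, reoffended):
    innocent = [hr for hr, ro in zip(labeled_high_risk, reoffended) if not ro]
    return sum(innocent) / len(innocent)

# Toy records: (scored high risk?, reoffended within two years?, group)
records = [
    (True,  False, "black"), (True,  False, "black"), (False, False, "black"),
    (True,  True,  "black"), (False, True,  "black"),
    (False, False, "white"), (False, False, "white"), (True,  False, "white"),
    (True,  True,  "white"), (False, True,  "white"),
]

for group in ("black", "white"):
    rows = [(hr, ro) for hr, ro, g in records if g == group]
    fpr = false_positive_rate([hr for hr, _ in rows], [ro for _, ro in rows])
    print(group, "false positive rate:", round(fpr, 2))
```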
So, consider this case: a young woman was late picking up her godsister from a school in Broward County, Florida, running down the street with a friend of hers. They spotted an unlocked kid's bike and a scooter on a porch and foolishly jumped on them. As they were speeding off, a woman came out and said, "Hey! That's my kid's bike!" They dropped it, they walked away, but they were arrested. She was wrong, she was foolish, but she was also just 18, with a couple of juvenile misdemeanors on her record. Meanwhile, a man had been arrested for shoplifting in Home Depot, a similar petty crime, but he had two prior armed robbery convictions. Yet the algorithm scored her as high risk, and not him. Two years later, ProPublica found that she had not reoffended; it was just hard to find a job for her with her record. He, on the other hand, did reoffend and is now serving a prison term for a later crime. Clearly, we need to audit our black boxes and not let them have this kind of unchecked power.
Audits are great and important, but they don't solve all our problems. Take Facebook's powerful news feed algorithm, you know, the one that ranks everything and decides what to show you from all the friends and pages you follow. Should you be shown another baby picture? A sullen note from an acquaintance? An important but difficult news item? There's no right answer. Facebook optimizes for engagement on the site: likes, shares, comments.
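To see how a story can be buried without anyone deciding to bury it, consider this toy ranker (invented weights and posts; Facebook's real system is proprietary): when posts are ordered purely by predicted engagement, hard news that few people "like" simply sinks.

```python
# Toy feed ranker: scores posts only by expected engagement.
# Weights and posts are invented; real ranking systems are proprietary.

posts = [
    {"title": "baby photo",            "likes": 900, "shares": 40,  "comments": 120},
    {"title": "ice bucket challenge",  "likes": 800, "shares": 300, "comments": 200},
    {"title": "ferguson protest news", "likes": 15,  "shares": 10,  "comments": 8},
]

def engagement_score(post):
    # No editor decides the news is unimportant; the objective
    # function simply never asks about importance.
    return post["likes"] + 2 * post["shares"] + 3 * post["comments"]

for post in sorted(posts, key=engagement_score, reverse=True):
    print(engagement_score(post), post["title"])
```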
In August 2014, protests broke out in Ferguson, Missouri, after the killing of an African-American teenager by a white police officer, under murky circumstances. The news of the protests was all over my algorithmically unfiltered Twitter feed, but nowhere on my Facebook. Was it my Facebook friends not paying attention? I disabled Facebook's algorithm, which is hard, because Facebook keeps wanting to make you come back under the algorithm's control, and I saw that my friends were talking about it. It's just that the algorithm wasn't showing it to me. I researched this and found this was a widespread problem. The story of Ferguson wasn't algorithm-friendly. It's not "likable." Who's going to click "like" under an article like that? It's not even easy to comment on. Without likes and comments, the algorithm was likely showing it to even fewer people, so we didn't get to see this. Instead, that week, Facebook's algorithm highlighted the ALS Ice Bucket Challenge. Worthy cause; dump ice water, donate to charity, fine. But it was super algorithm-friendly. The machine made this decision for us. A very important but difficult conversation might have been smothered, had Facebook been the only channel.
Now, finally, these systems can also be wrong in ways that don't resemble human errors. Do you guys remember Watson, IBM's machine-intelligence system that wiped the floor with human contestants on Jeopardy? It was a great player. But then, for Final Jeopardy, Watson was asked this question: "Its largest airport is named for a World War II hero, its second-largest for a World War II battle." The answer is Chicago, and the two humans got it right. Watson, on the other hand, answered "Toronto," for a US city category. The impressive system also made an error that a human would never make, that a second-grader wouldn't make. Our machine intelligence can fail in ways that don't fit the error patterns of humans, in ways we won't expect and be prepared for.
It would be lousy not to get a job one is qualified for, but it would triple suck if it was because of stack overflow in some subroutine. In May of 2010, a flash crash on Wall Street, fueled by a feedback loop in a "sell" algorithm, wiped out around a trillion dollars of value in 36 minutes. I don't even want to think what "error" means in the context of lethal autonomous weapons.
So yes, humans have always made biased decisions and mistakes, but that's exactly my point. We cannot escape these difficult questions. We cannot outsource our responsibilities to machines. Artificial intelligence does not give us a "Get out of ethics free" card. Data scientist Fred Benenson calls this math-washing. We need the opposite. We need to cultivate algorithm suspicion, scrutiny and investigation. We need to make sure we have algorithmic accountability, auditing and meaningful transparency. We need to accept that bringing math and computation to messy, value-laden human affairs does not bring objectivity; rather, the complexity of human affairs invades the algorithms. Yes, we can and we should use computation to help us make better decisions. But we have to own up to our moral responsibility to judgment, and use algorithms within that framework, not as a means to abdicate and outsource our responsibilities to one another. Machine intelligence is here. That means we must hold on ever tighter to human values and human ethics. Thank you.
ABOUT THE SPEAKER
Zeynep Tufekci - Techno-sociologist
Techno-sociologist Zeynep Tufekci asks big questions about our societies and our lives, as both algorithms and digital connectivity spread.
Why you should listen
We've entered an era of digital connectivity and machine intelligence. Complex algorithms are increasingly used to make consequential decisions about us. Many of these decisions are subjective and have no right answer: who should be hired, fired or promoted; what news should be shown to whom; which of your friends do you see updates from; which convict should be paroled. With increasing use of machine learning in these systems, we often don't even understand how exactly they are making these decisions. Zeynep Tufekci studies what this historic transition means for culture, markets, politics and personal life.
Tufekci is a contributing opinion writer at the New York Times, an associate professor at the School of Information and Library Science at University of North Carolina, Chapel Hill, and a faculty associate at Harvard's Berkman Klein Center for Internet and Society.
Her book, Twitter and Tear Gas: The Power and Fragility of Networked Protest, was published in 2017 by Yale University Press. Her next book, from Penguin Random House, will be about algorithms that watch, judge and nudge us.