Zeynep Tufekci: Machine intelligence makes human morals more important
Techno-sociologist Zeynep Tufekci asks big questions about our societies and our lives, as both algorithms and digital connectivity spread.
as a computer programmer
came down to where I was,
And why are we whispering?"
at the computer in the room.
an affair with the receptionist.
if you're lying."
the laugh's on me.
emotional states and even lying
governments are very interested.
crazy about math and science.
I'd learned about nuclear weapons,
with the ethics of science.
as soon as possible.
let me pick a technical field
with any troublesome questions of ethics.
All the laughs are on me.
are building platforms
people see every day.
that could decide who to run over.
machines, weapons,
in war.
to make all sorts of decisions,
that have no single right answers,
should you be shown?"
likely to reoffend?"
should be recommended to people?"
computers for a while,
for such subjective decisions
for flying airplanes, building bridges,
Did the bridge sway and fall?
agreed-upon, fairly clear benchmarks,
to guide us.
such benchmarks
for messy human affairs.
our software is getting more powerful,
transparent and more complex.
have made great strides.
in medical imaging.
in chess and Go.
from a method called "machine learning."
than traditional programming,
detailed, exact, painstaking instructions.
and you feed it lots of data,
including unstructured data, the kind we generate
in our digital lives.
by churning through this data.
under a single-answer logic.
it's more probabilistic:
what you're looking for."
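[Editor's note] The contrast between traditional programming and machine learning can be made concrete with a minimal, self-contained Python sketch. Everything here is invented for illustration: a hand-written spam rule versus a tiny word-count model that is estimated from labeled examples and returns a probability rather than a single yes/no answer.

```python
# Toy contrast: hand-coded rule vs. a model "learned" from data.
# All training data below is made up for illustration.

from collections import Counter

# Traditional programming: a human writes the exact instruction.
def spam_rule(message):
    return "winner" in message.lower()

# Machine learning (sketch): count words in labeled examples...
def train(examples):
    spam_words, ham_words = Counter(), Counter()
    for text, is_spam in examples:
        (spam_words if is_spam else ham_words).update(text.lower().split())
    return spam_words, ham_words

# ...then score new messages probabilistically instead of yes/no.
def spam_probability(model, message):
    spam_words, ham_words = model
    spam_total = sum(spam_words.values()) or 1
    ham_total = sum(ham_words.values()) or 1
    score = 1.0
    for word in message.lower().split():
        # Laplace smoothing so unseen words don't zero out the score
        p_spam = (spam_words[word] + 1) / (spam_total + 2)
        p_ham = (ham_words[word] + 1) / (ham_total + 2)
        score *= p_spam / p_ham
    return score / (score + 1)  # squash the ratio into [0, 1]

examples = [
    ("claim your prize winner", True),
    ("winner winner free money", True),
    ("lunch at noon tomorrow", False),
    ("see you at the meeting", False),
]
model = train(examples)
print(spam_probability(model, "free prize money"))  # high: "probably spam"
print(spam_probability(model, "meeting tomorrow"))  # low: "probably not"
```

The point of the sketch is exactly the talk's: nobody typed in a rule about "prize" or "meeting"; the weights came out of the data, and the output is a likelihood, not a single right answer.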
this method is really powerful.
what the system learned.
instructions to a computer;
a puppy-machine-creature
or control.
intelligence system gets things wrong.
when it gets things right,
when it's a subjective problem.
what it's thinking.
using machine-learning systems.
on previous employees' data
high performers in the company.
human resources managers and executives,
more objective, less biased,
and minorities a better shot
human hiring is biased.
as a programmer,
come down to where I was
or really late in the afternoon,
let's go to lunch!"
I was puzzled by the weird timing.
So I always went.
what was going on.
had not confessed to their higher-ups
for a serious job was a teen girl
I just looked wrong
gender and race
it is more complicated, and here's why:
can infer all sorts of things about you
disclosed those things.
your sexual orientation,
with high levels of accuracy.
you haven't even disclosed.
such computational systems
of clinical or postpartum depression
the likelihood of depression
before any symptoms appear --
but there is a prediction.
for early intervention. Great!
in the context of hiring.
managers conference,
in a very large company,
what if, unbeknownst to you,
with high future likelihood of depression?
just maybe in the future, more likely.
more likely to be pregnant
but aren't pregnant now?
because that's your workplace culture?"
at gender breakdowns.
Those may look balanced.
not traditional coding,
labeled "higher risk of depression,"
what your system is selecting on,
where to begin to look.
but you don't understand it.
isn't doing something shady?"
just stepped on 10 puppy tails.
another word about this."
isn't my problem, go away, death stare.
may even be less biased
shutting out of the job market
we want to build,
to machines we don't totally understand?
on data generated by our actions,
reflecting our biases,
could be picking up on our biases
neutral computation."
to be shown job ads for high-paying jobs.
suggesting criminal history,
and black-box algorithms
but sometimes we don't know,
was sentenced to six years in prison
in parole and sentencing decisions.
How is this score calculated?
be challenged in open court.
nonprofit, audited that very algorithm
was dismal, barely better than chance,
black defendants as future criminals
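[Editor's note] The kind of audit described here can be sketched in a few lines: group the scored cases, and compare how often people who did not reoffend were nonetheless flagged as high risk. The records below are invented for illustration; they are not ProPublica's data.

```python
# Sketch of a fairness audit: compare false-positive rates across groups.
# Every record below is fabricated for illustration only.

def false_positive_rate(records):
    # FPR = flagged "high risk" among those who did NOT reoffend
    did_not_reoffend = [r for r in records if not r["reoffended"]]
    flagged = [r for r in did_not_reoffend if r["high_risk"]]
    return len(flagged) / len(did_not_reoffend)

records = [
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": True},
    {"group": "B", "high_risk": True,  "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": True},
]

for group in ("A", "B"):
    subset = [r for r in records if r["group"] == group]
    print(group, false_positive_rate(subset))
```

With these made-up records, group A's false-positive rate is twice group B's, even though a single overall accuracy number would hide the gap; that is the kind of disparity an external audit surfaces.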
picking up her godsister
with a friend of hers.
and a scooter on a porch
a woman came out and said,
but they were arrested.
but she was also just 18.
for shoplifting in Home Depot --
a similar petty crime.
armed robbery convictions.
as high risk, and not him.
that she had not reoffended.
for her with her record.
prison term for a later crime.
this kind of unchecked power.
but they don't solve all our problems.
news feed algorithm --
and decides what to show you
for engagement on the site:
teenager by a white police officer,
unfiltered Twitter feed,
keeps wanting to make you
were talking about it.
wasn't showing it to me.
this was a widespread problem.
wasn't algorithm-friendly.
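[Editor's note] The difference between a chronological feed and an engagement-optimized one can be shown in a tiny Python sketch (the posts and their like counts are invented for illustration): ranking by likes pushes the easy-to-like story to the top and buries the hard, important one.

```python
# Chronological feed vs. engagement-ranked feed.
# Posts and like counts are made up for illustration.

posts = [
    {"title": "ice bucket challenge", "likes": 900, "age_hours": 5},
    {"title": "ferguson protest report", "likes": 40, "age_hours": 1},
    {"title": "charity fundraiser", "likes": 300, "age_hours": 2},
]

# Unfiltered feed: newest first.
chronological = sorted(posts, key=lambda p: p["age_hours"])

# Engagement-optimized feed: most-liked first.
by_engagement = sorted(posts, key=lambda p: p["likes"], reverse=True)

print(chronological[0]["title"])   # "ferguson protest report"
print(by_engagement[0]["title"])   # "ice bucket challenge"
```

Same three posts, two orderings: the ranking objective alone decides whether the difficult story surfaces or sinks.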
to even fewer people,
donate to charity, fine.
but difficult conversation
can also be wrong
IBM's machine-intelligence system
with human contestants on Jeopardy?
Watson was asked this question:
for a World War II hero,
for a World War II battle."
answered "Toronto" --
a second-grader wouldn't make.
error patterns of humans,
and be prepared for.
one is qualified for,
if it was because of stack overflow
fueled by a feedback loop
of value in 36 minutes.
what "error" means
autonomous weapons.
but that's exactly my point.
these difficult questions.
our responsibilities to machines.
a "Get out of ethics free" card.
calls this math-washing.
scrutiny and investigation.
algorithmic accountability,
that bringing math and computation
invades the algorithms.
to our moral responsibility to judgment,
and outsource our responsibilities
ABOUT THE SPEAKER
Zeynep Tufekci - Techno-sociologist
Why you should listen
We've entered an era of digital connectivity and machine intelligence. Complex algorithms are increasingly used to make consequential decisions about us. Many of these decisions are subjective and have no right answer: who should be hired, fired or promoted; what news should be shown to whom; which of your friends do you see updates from; which convict should be paroled. With increasing use of machine learning in these systems, we often don't even understand how exactly they are making these decisions. Zeynep Tufekci studies what this historic transition means for culture, markets, politics and personal life.
Tufekci is a contributing opinion writer at the New York Times, an associate professor at the School of Information and Library Science at University of North Carolina, Chapel Hill, and a faculty associate at Harvard's Berkman Klein Center for Internet and Society.
Her book, Twitter and Tear Gas: The Power and Fragility of Networked Protest, was published in 2017 by Yale University Press. Her next book, from Penguin Random House, will be about algorithms that watch, judge and nudge us.