TEDxSiliconValley

Damon Horowitz: We need a "moral operating system"

May 14, 2011

Damon Horowitz reviews the enormous new powers that technology gives us: to know more -- and more about each other -- than ever before. Drawing the audience into a philosophical discussion, Horowitz invites us to pay new attention to the basic philosophy -- the ethical principles -- behind the burst of invention remaking our world. Where's the moral operating system that allows us to make sense of it? (Filmed at TEDxSiliconValley.)

Damon Horowitz - Philosopher, entrepreneur
Damon Horowitz explores what is possible at the boundaries of technology and the humanities.

Power. That is the word that comes to mind. We're the new technologists. We have a lot of data, so we have a lot of power. How much power do we have?

Scene from a movie: "Apocalypse Now" -- great movie. We've got to get our hero, Captain Willard, to the mouth of the Nung River so he can go pursue Colonel Kurtz. The way we're going to do this is fly him in and drop him off. So the scene: the sky is filled with this fleet of helicopters carrying him in. And there's this loud, thrilling music in the background, this wild music.

♫ Dum da ta da dum ♫
♫ Dum da ta da dum ♫
♫ Da ta da da ♫

That's a lot of power. That's the kind of power I feel in this room. That's the kind of power we have because of all of the data that we have.

Let's take an example. What can we do with just one person's data? What can we do with that guy's data?

I can look at your financial records. I can tell if you pay your bills on time. I know if you're good to give a loan to. I can look at your medical records; I can see if your pump is still pumping -- see if you're good to offer insurance to. I can look at your clicking patterns. When you come to my website, I actually know what you're going to do already, because I've seen you visit millions of websites before. And I'm sorry to tell you, you're like a poker player, you have a tell. I can tell with data analysis what you're going to do before you even do it. I know what you like. I know who you are, and that's even before I look at your mail or your phone. Those are the kinds of things we can do with the data that we have.
But I'm not actually here to talk about what we can do. I'm here to talk about what we should do. What's the right thing to do?

Now I see some puzzled looks like, "Why are you asking us what's the right thing to do? We're just building this stuff. Somebody else is using it." Fair enough.

But it brings me back. I think about World War II -- some of our great technologists then, some of our great physicists, studying nuclear fission and fusion -- just nuclear stuff. We gather together these physicists in Los Alamos to see what they'll build. We want the people building the technology thinking about what we should be doing with the technology.

So what should we be doing with that guy's data? Should we be collecting it, gathering it, so we can make his online experience better? So we can make money? So we can protect ourselves if he was up to no good? Or should we respect his privacy, protect his dignity and leave him alone? Which one is it?

How should we figure it out? I know: crowdsource. Let's crowdsource this.
So to get people warmed up, let's start with an easy question -- something I'm sure everybody here has an opinion about: iPhone versus Android. Let's do a show of hands -- iPhone. Uh huh. Android.

You'd think with a bunch of smart people we wouldn't be such suckers just for the pretty phones.

(Laughter)

Next question, a little bit harder. Should we be collecting all of that guy's data to make his experiences better and to protect ourselves in case he's up to no good? Or should we leave him alone? Collect his data. Leave him alone. You're safe. It's fine.

(Laughter)

Okay, last question -- harder question -- when trying to evaluate what we should do in this case, should we use a Kantian deontological moral framework, or should we use a Millian consequentialist one? Kant. Mill. Not as many votes.

(Laughter)

Yeah, that's a terrifying result. Terrifying, because we have stronger opinions about our hand-held devices than about the moral framework we should use to guide our decisions. How do we know what to do with all the power we have if we don't have a moral framework? We know more about mobile operating systems, but what we really need is a moral operating system.
What's a moral operating system? We all know right and wrong, right? You feel good when you do something right, you feel bad when you do something wrong. Our parents teach us that: praise with the good, scold with the bad. But how do we figure out what's right and wrong?

And from day to day, we have the techniques that we use. Maybe we just follow our gut. Maybe we take a vote -- we crowdsource. Or maybe we punt -- ask the legal department, see what they say. In other words, it's kind of random, kind of ad hoc, how we figure out what we should do.

And maybe, if we want to be on surer footing, what we really want is a moral framework that will help guide us there, that will tell us what kinds of things are right and wrong in the first place, and how we would know in a given situation what to do.
So let's get a moral framework. We're numbers people, living by numbers. How can we use numbers as the basis for a moral framework? I know a guy who did exactly that. A brilliant guy -- he's been dead 2,500 years. Plato, that's right. Remember him -- old philosopher? You were sleeping during that class.

And Plato, he had a lot of the same concerns that we did. He was worried about right and wrong. He wanted to know what is just. But he was worried that all we seem to be doing is trading opinions about this. He says something's just. She says something else is just. It's kind of convincing when he talks and when she talks too. I'm just going back and forth; I'm not getting anywhere. I don't want opinions; I want knowledge. I want to know the truth about justice -- like we have truths in math.

In math, we know the objective facts. Take a number, any number -- two. Favorite number. I love that number. There are truths about two. If you've got two of something, you add two more, you get four. That's true no matter what thing you're talking about. It's an objective truth about the form of two, the abstract form. When you have two of anything -- two eyes, two ears, two noses, just two protrusions -- those all partake of the form of two. They all participate in the truths that two has. They all have two-ness in them. And therefore, it's not a matter of opinion.
What if, Plato thought, ethics was like math? What if there were a pure form of justice? What if there are truths about justice, and you could just look around in this world and see which things participated, partook of that form of justice? Then you would know what was really just and what wasn't. It wouldn't be a matter of just opinion or just appearances.

That's a stunning vision. I mean, think about that. How grand. How ambitious. That's as ambitious as we are. He wants to solve ethics. He wants objective truths. If you think that way, you have a Platonist moral framework.
If you don't think that way, well, you have a lot of company in the history of Western philosophy, because the tidy idea, you know, people criticized it. Aristotle, in particular, he was not amused. He thought it was impractical. Aristotle said, "We should seek only so much precision in each subject as that subject allows." Aristotle thought ethics wasn't a lot like math. He thought ethics was a matter of making decisions in the here-and-now using our best judgment to find the right path.

If you think that, Plato's not your guy. But don't give up. Maybe there's another way that we can use numbers as the basis of our moral framework. How about this: What if in any situation you could just calculate, look at the choices, measure out which one's better and know what to do? That sound familiar?

That's a utilitarian moral framework. John Stuart Mill was a great advocate of this -- nice guy besides -- and only been dead 200 years. So basis of utilitarianism -- I'm sure you're familiar at least. The three people who voted for Mill before are familiar with this. But here's the way it works.
What if morals, what if what makes something moral is just a matter of whether it maximizes pleasure and minimizes pain? It's something intrinsic to the act. It's not about its relation to some abstract form. It's just a matter of the consequences. You just look at the consequences and see if, overall, it's for the good or for the worse. That would be simple. Then we know what to do.
Let's take an example. Suppose I go up and I say, "I'm going to take your phone." Not just because it rang earlier, but I'm going to take it because I made a little calculation. I thought, that guy looks suspicious. And what if he's been sending little messages to Bin Laden's hideout -- or whoever took over after Bin Laden -- and he's actually like a terrorist, a sleeper cell. I'm going to find that out, and when I find that out, I'm going to prevent a huge amount of damage that he could cause. That has a very high utility to prevent that damage. And compared to the little pain that it's going to cause -- because it's going to be embarrassing when I'm looking on his phone and seeing that he has a Farmville problem and that whole bit -- that's overwhelmed by the value of looking at the phone. If you feel that way, that's a utilitarian choice.
But maybe you don't feel that way either. Maybe you think, it's his phone. It's wrong to take his phone because he's a person and he has rights and he has dignity, and we can't just interfere with that. He has autonomy. It doesn't matter what the calculations are. There are things that are intrinsically wrong -- like lying is wrong, like torturing innocent children is wrong.

Kant was very good on this point, and he said it a little better than I'll say it. He said we should use our reason to figure out the rules by which we should guide our conduct, and then it is our duty to follow those rules. It's not a matter of calculation.
So let's stop. We're right in the thick of it, this philosophical thicket. And this goes on for thousands of years, because these are hard questions, and I've only got 15 minutes. So let's cut to the chase. How should we be making our decisions? Is it Plato, is it Aristotle, is it Kant, is it Mill? What should we be doing? What's the answer? What's the formula that we can use in any situation to determine what we should do, whether we should use that guy's data or not? What's the formula?

There's not a formula. There's not a simple answer. Ethics is hard. Ethics requires thinking. And that's uncomfortable.
I know; I spent a lot of my career in artificial intelligence, trying to build machines that could do some of this thinking for us, that could give us answers. But they can't. You can't just take human thinking and put it into a machine. We're the ones who have to do it. Happily, we're not machines, and we can do it. Not only can we think, we must.

Hannah Arendt said, "The sad truth is that most evil done in this world is not done by people who choose to be evil. It arises from not thinking." That's what she called the "banality of evil." And the response to that is that we demand the exercise of thinking from every sane person.
So let's do that. Let's think. In fact, let's start right now. Every person in this room, do this: think of the last time you had a decision to make where you were worried about doing the right thing, where you wondered, "What should I be doing?" Bring that to mind, and now reflect on that and say, "How did I come up with that decision? What did I do? Did I follow my gut? Did I have somebody vote on it? Or did I punt to legal?" Or now we have a few more choices. "Did I evaluate what would be the highest pleasure like Mill would? Or like Kant, did I use reason to figure out what was intrinsically right?"

Think about it. Really bring it to mind. This is important. It is so important we are going to spend 30 seconds of valuable TEDTalk time doing nothing but thinking about this. Are you ready? Go.

Stop. Good work. What you just did, that's the first step towards taking responsibility for what we should do with all of our power.
Now the next step -- try this. Go find a friend and explain to them how you made that decision. Not right now. Wait till I finish talking. Do it over lunch. And don't just find another technologist friend; find somebody different than you. Find an artist or a writer -- or, heaven forbid, find a philosopher and talk to them. In fact, find somebody from the humanities. Why? Because they think about problems differently than we do as technologists.

Just a few days ago, right across the street from here, there were hundreds of people gathered together. It was technologists and humanists at that big BiblioTech Conference. And they gathered together because the technologists wanted to learn what it would be like to think from a humanities perspective. You have someone from Google talking to someone who does comparative literature. You're thinking about the relevance of 17th century French theater -- how does that bear upon venture capital? Well, that's interesting. That's a different way of thinking. And when you think in that way, you become more sensitive to the human considerations, which are crucial to making ethical decisions.
So imagine that right now you went and you found your musician friend. And you're telling him what we're talking about, about our whole data revolution and all this -- maybe even hum a few bars of our theme music.

♫ Dum ta da da dum dum ta da da dum ♫

Well, your musician friend will stop you and say, "You know, the theme music for your data revolution, that's an opera, that's Wagner. It's based on Norse legend. It's gods and mythical creatures fighting over magical jewelry."

That's interesting. Now it's also a beautiful opera, and we're moved by that opera. We're moved because it's about the battle between good and evil, about right and wrong. And we care about right and wrong. We care what happens in that opera. We care what happens in "Apocalypse Now." And we certainly care what happens with our technologies.

We have so much power today, it is up to us to figure out what to do, and that's the good news. We're the ones writing this opera. This is our movie. We figure out what will happen with this technology. We determine how this will all end.

Thank you.

(Applause)

Damon Horowitz - Philosopher, entrepreneur
Damon Horowitz explores what is possible at the boundaries of technology and the humanities.

Why you should listen

Damon Horowitz is a philosophy professor and serial entrepreneur. He recently joined Google as In-House Philosopher / Director of Engineering, heading development of several initiatives involving social and search. He came to Google from Aardvark, the social search engine, where he was co-founder and CTO, overseeing product development and research strategy. Prior to Aardvark, Horowitz built several companies around applications of intelligent language processing. He co-founded Perspecta (acquired by Excite), was lead architect for Novation Biosciences (acquired by Agilent), and co-founded NewsDB (now Daylife).

Horowitz teaches courses in philosophy, cognitive science, and computer science at several institutions, including Stanford, NYU, University of Pennsylvania and San Quentin State Prison.

Get more information on the Prison University Project.

The original video is available on TED.com

Data provided by TED.

This website is owned and operated by Tokyo English Network.