TED2014

Del Harvey: The strangeness of scale at Twitter

March 12, 2014

When 500 million tweets are fired off every day, a one-in-a-million chance -- including unlikely-sounding scenarios that could harm users -- happens about 500 times a day. For Del Harvey, who heads Twitter’s Trust and Safety team, those odds aren’t good. The security maven spends her days thinking about how to prevent worst-case scenarios while giving voice to people around the globe. With deadpan humor, she offers a window into how she keeps 240 million users safe.

Del Harvey - Security maven
Del Harvey is the VP of Trust & Safety at Twitter.

My job at Twitter is to ensure user trust, protect user rights and keep users safe, both from each other and, at times, from themselves.
Let's talk about what scale looks like at Twitter. Back in January 2009, we saw more than two million new tweets each day on the platform. January 2014, more than 500 million. We were seeing two million tweets in less than six minutes. That's a 24,900-percent increase.
Now, the vast majority of activity on Twitter puts no one in harm's way. There's no risk involved. My job is to root out and prevent activity that might.
Sounds straightforward, right? You might even think it'd be easy, given that I just said the vast majority of activity on Twitter puts no one in harm's way. Why spend so much time searching for potential calamities in innocuous activities?
Given the scale that Twitter is at, a one-in-a-million chance happens 500 times a day. It's the same for other companies dealing at this sort of scale. For us, edge cases, those rare situations that are unlikely to occur, are more like norms.
Say 99.999 percent of tweets pose no risk to anyone. There's no threat involved. Maybe people are documenting travel landmarks like Australia's Heart Reef, or tweeting about a concert they're attending, or sharing pictures of cute baby animals. After you take out that 99.999 percent, the tiny percentage of tweets remaining works out to roughly 150,000 per month. The sheer scale of what we're dealing with makes for a challenge.
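The arithmetic behind these figures checks out, and it is worth spelling out, since the whole argument rests on it. A quick back-of-the-envelope verification using only the numbers stated in the talk:

```python
tweets_per_day = 500_000_000  # the January 2014 figure from the talk

# Growth from 2 million to 500 million tweets per day:
pct_increase = (tweets_per_day - 2_000_000) / 2_000_000 * 100
assert pct_increase == 24_900.0  # "a 24,900-percent increase"

# A one-in-a-million event at this volume:
assert tweets_per_day // 1_000_000 == 500  # "happens 500 times a day"

# If 99.999% of tweets are harmless, 0.001% (one in 100,000) remain:
risky_per_day = tweets_per_day // 100_000  # 5,000 per day
assert risky_per_day * 30 == 150_000  # "roughly 150,000 per month"
```

Even a vanishingly small failure rate, multiplied by half a billion events a day, produces thousands of cases to review.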
You know what else makes my role particularly challenging? People do weird things.

(Laughter)

And I have to figure out what they're doing, why, and whether or not there's risk involved, often without much in terms of context or background.
I'm going to show you some examples that I've run into during my time at Twitter -- these are all real examples -- of situations that at first seemed cut and dried, but the truth of the matter was something altogether different. The details have been changed to protect the innocent and sometimes the guilty. We'll start off easy.
["Yo bitch"]

If you saw a tweet that only said this, you might think to yourself, "That looks like abuse." After all, why would you want to receive the message "Yo, bitch"? Now, I try to stay relatively hip to the latest trends and memes, so I knew that "yo, bitch" was also often a common greeting between friends, as well as being a popular "Breaking Bad" reference. I will admit that I did not expect to encounter a fourth use case. It turns out it is also used on Twitter when people are role-playing as dogs.

(Laughter)

And in fact, in that case, it's not only not abusive, it's technically just an accurate greeting.

(Laughter)
So okay: determining whether or not something is abusive, without context? Definitely hard.
Let's look at spam. Here's an example of an account engaged in classic spammer behavior: sending the exact same message to thousands of people. While this is a mockup I put together using my account, we see accounts doing this all the time. Seems pretty straightforward. We should just automatically suspend accounts engaging in this kind of behavior. Turns out there are some exceptions to that rule. That message could also be a notification you signed up for, telling you that the International Space Station is passing overhead, because you wanted to go outside and see if you could spot it. You're not going to get that chance if we mistakenly suspend the account, thinking it's spam.
Okay. Let's make the stakes higher. Back to my account, again exhibiting classic behavior. This time it's sending the same message and link. This is often indicative of something called phishing: somebody trying to steal another person's account information by directing them to another website. That's pretty clearly not a good thing. We want to, and do, suspend accounts engaging in that kind of behavior.
So why are the stakes higher for this? Well, this could also be a bystander at a rally who managed to record a video of a police officer beating a non-violent protester, and who's trying to let the world know what's happening. We don't want to gamble on potentially silencing that crucial speech by classifying it as spam and suspending the account. That means we evaluate hundreds of parameters when looking at account behaviors, and even then, we can still get it wrong and have to reevaluate.
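The talk doesn't describe Twitter's actual classifiers, but the trap it describes is easy to reproduce. Below is a deliberately naive, hypothetical single-signal heuristic (the function name and thresholds are invented for illustration) that flags an account whose recent messages are dominated by one repeated text -- exactly the rule that would wrongly suspend an ISS-pass notifier or a protester sharing the same video link:

```python
from collections import Counter

def looks_like_spam(messages, dominance=0.9, min_sample=20):
    """Naive single-signal heuristic: flag an account if one exact
    message dominates its recent output. Illustrative only -- real
    systems, per the talk, weigh hundreds of parameters, because a
    legitimate notification bot trips this rule just as hard as a
    spammer does."""
    if len(messages) < min_sample:
        return False  # too little data to judge
    top_count = Counter(messages).most_common(1)[0][1]
    return top_count / len(messages) >= dominance

# A classic spammer and a legitimate ISS-pass bot look identical here:
spammer = ["Buy followers now!"] * 50
iss_bot = ["The ISS is passing overhead in 10 minutes."] * 50
```

Both lists trip the heuristic, which is the point: the message text alone cannot separate abuse from a service people opted into, so a real system needs many more signals before suspending anyone.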
Now, given the sorts of challenges I'm up against, it's crucial that I not only predict but also design protections for the unexpected. And that's not just an issue for me, or for Twitter; it's an issue for you. It's an issue for anybody who's building or creating something that you think is going to be amazing and will let people do awesome things.
So what do I do? I pause and I think, how could all of this go horribly wrong? I visualize catastrophe. And that's hard. There's a sort of inherent cognitive dissonance in doing that, like when you're writing your wedding vows at the same time as your prenuptial agreement.

(Laughter)

But you still have to do it, particularly if you're marrying 500 million tweets per day.
What do I mean by "visualize catastrophe"? I try to think of how something as benign and innocuous as a picture of a cat could lead to death, and what to do to prevent that. Which happens to be my next example.
This is my cat, Eli. We wanted to give users the ability to add photos to their tweets. A picture is worth a thousand words; you only get 140 characters. You add a photo to your tweet, and look at how much more content you've got now. There are all sorts of great things you can do by adding a photo to a tweet. My job isn't to think of those. It's to think of what could go wrong. How could this picture lead to my death? Well, here's one possibility.
There's more in that picture than just a cat. There's geodata. When you take a picture with your smartphone or digital camera, a lot of additional information is saved along with that image. In fact, this image also contains the equivalent of this -- more specifically, this. Sure, it's not likely that someone's going to try to track me down and do me harm based upon image data associated with a picture I took of my cat, but I start by assuming the worst will happen. That's why, when we launched photos on Twitter, we made the decision to strip that geodata out.

(Applause)
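The talk doesn't say how Twitter implemented the stripping, but the mechanics are straightforward: EXIF metadata, including GPS coordinates, lives in a JPEG's APP1 marker segment, and dropping that segment removes the geodata while leaving the image data untouched. A minimal sketch using only the Python standard library (segment layout per the JPEG specification; this is an illustration, not Twitter's code, and a production version would also handle XMP and multi-segment edge cases):

```python
def strip_exif(jpeg: bytes) -> bytes:
    """Drop APP1 (EXIF, including GPS geodata) segments from a JPEG."""
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i + 4 <= len(jpeg) and jpeg[i] == 0xFF:
        marker = jpeg[i + 1]
        if marker == 0xDA:  # start-of-scan: copy the image data verbatim
            out += jpeg[i:]
            return bytes(out)
        # Segment length covers the two length bytes plus the payload.
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        if marker != 0xE1:  # keep every segment except APP1
            out += jpeg[i:i + 2 + length]
        i += 2 + length
    out += jpeg[i:]
    return bytes(out)

# Fabricated minimal JPEG: header, APP0 (JFIF), APP1 (EXIF), scan data.
app0 = b"\xff\xe0" + (6).to_bytes(2, "big") + b"JFIF"
app1 = b"\xff\xe1" + (6).to_bytes(2, "big") + b"Exif"
sos = b"\xff\xda\x00\x04\x00\x00" + b"imagedata"
clean = strip_exif(b"\xff\xd8" + app0 + app1 + sos)
```

Running this leaves the JFIF segment and the scan data intact while the EXIF block, and any GPS coordinates inside it, disappears from the output.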
If I start by assuming the worst and work backwards, I can make sure that the protections we build work for both expected and unexpected use cases.
Given that I spend my days and nights imagining the worst that could happen, it wouldn't be surprising if my worldview were gloomy.

(Laughter)

It's not. The vast majority of interactions I see -- and I see a lot, believe me -- are positive: people reaching out to help, or to connect, or to share information with each other. It's just that for those of us dealing with scale, for those of us tasked with keeping people safe, we have to assume the worst will happen, because for us, a one-in-a-million chance is pretty good odds.

Thank you.

(Applause)



Why you should listen

At Twitter, Del Harvey works to ensure user safety and security, balancing Twitter's wide-open spaces against spammers, harassers and worse, to create a workable policy that lets the tweets flow. Prior to joining the booming social media site, she spent five years as the law enforcement liaison for a group fighting child exploitation, where she worked with agencies ranging from local police departments to the FBI, US Marshals and the Secret Service.

As Twitter grows, its ever-inventive users (who famously came up with many of its key features themselves) keep finding new ways to overshare, offend and pick on others. Harvey and her team's challenge is to weed out the worst while keeping the site feeling like a safe place for the new kind of conversation we're all having there.

The original video is available on TED.com
Data provided by TED.
