TEDxMidAtlantic

Avi Rubin: All your devices can be hacked

October 29, 2011

Could someone hack your pacemaker? At TEDxMidAtlantic, Avi Rubin explains how hackers are compromising cars, smartphones and medical devices, and warns us about the dangers of an increasingly hackable world. (Filmed at TEDxMidAtlantic.)

Avi Rubin - Computer security expert
Avi Rubin is a professor of computer science and director of the Health and Medical Security Lab at Johns Hopkins University. His research is focused on the security of electronic records, including medical and voting records.

I'm a computer science professor, and my area of expertise is computer and information security. When I was in graduate school, I had the opportunity to overhear my grandmother describing to one of her fellow senior citizens what I did for a living. Apparently, I was in charge of making sure that no one stole the computers from the university. (Laughter) And, you know, that's a perfectly reasonable thing for her to think, because I told her I was working in computer security, and it was interesting to get her perspective.

But that's not the most ridiculous thing I've ever heard anyone say about my work. The most ridiculous thing I ever heard was at a dinner party, where a woman heard that I work in computer security. She said her computer had been infected by a virus, and she was very concerned that she might get sick from it, that she could catch this virus. (Laughter) And I'm not a doctor, but I reassured her that it was very, very unlikely this would happen, but that if she felt more comfortable, she could feel free to use latex gloves when she was on the computer, and there would be no harm whatsoever in that.
I'm going to come back to this notion of being able to get a virus from your computer, in a serious way. What I'm going to talk to you about today are some hacks, some real-world cyberattacks that people in my community, the academic research community, have performed, which I don't think most people know about. I think they're very interesting and scary, and this talk is kind of a greatest hits of the academic security community's hacks. None of the work is my work. It's all work that my colleagues have done, and I actually asked them for their slides and incorporated them into this talk.
So the first ones I'm going to talk about are implanted medical devices. Medical devices have come a long way technologically. In 1926 the first pacemaker was invented. In 1960, the first internal pacemaker was implanted, hopefully a little smaller than the one you see there, and the technology has continued to move forward. In 2006, we hit an important milestone from the perspective of computer security. And why do I say that? Because that's when implanted devices inside of people started to have networking capabilities. One thing that brings this close to home is Dick Cheney's device: he had a device that pumped blood from the aorta to another part of the heart, and as you can see at the bottom there, it was controlled by a computer. If you ever thought that software liability was very important, get one of these inside of you.
Now, what a research team did was get their hands on what's called an ICD. This is a defibrillator, a device that goes into a person to control their heart rhythm, and these have saved many lives. Well, in order not to have to open up the person every time you want to reprogram their device or run some diagnostics on it, they made the thing able to communicate wirelessly, and what this research team did was reverse engineer the wireless protocol and build the device you see pictured here, with a little antenna, that could speak the protocol to the device and thus control it. To make their experiments somewhat realistic, and since they were unable to find any volunteers, they got some ground beef and some bacon, wrapped it all up to about the size of the region of a human body where the device would go, and stuck the device inside it.
They launched many, many successful attacks. One that I'll highlight here is changing the patient's name. I don't know why you would want to do that, but I sure wouldn't want it done to me. And they were able to change therapies, including disabling the device, and this is with a real, commercial, off-the-shelf device, simply by performing reverse engineering and sending wireless signals to it. There was also a piece on NPR reporting that some of these ICDs could have their performance disrupted simply by holding a pair of headphones up to them.
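To make the mechanics concrete, here is a sketch of the kind of tool being described: a program that transmits a reverse-engineered command frame over a short-range radio link. Everything in it is hypothetical; the radio_* functions are invented stand-ins for a software-defined-radio library, and the frame layout is made up for illustration, not the real ICD protocol.

```c
/* Hypothetical sketch of a protocol-replay tool in the spirit of the
 * one described. The radio_* interface and the frame layout are both
 * invented for illustration; this is not the real ICD protocol. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Stubs standing in for a software-defined-radio library. */
static int radio_open(double freq_hz) {
    printf("tuning to %.0f Hz\n", freq_hz);
    return 3;                          /* pretend device handle */
}
static int radio_send(int dev, const uint8_t *buf, size_t len) {
    printf("dev %d: transmitting %zu-byte frame\n", dev, len);
    return 0;
}

int main(void) {
    int dev = radio_open(175000.0);    /* placeholder frequency */
    if (dev < 0) return 1;

    uint8_t frame[16] = {0};
    frame[0] = 0xA5;                   /* invented preamble byte       */
    frame[1] = 0x02;                   /* invented "set name" opcode   */
    memcpy(&frame[2], "EVE", 3);       /* attacker-chosen patient name */
    /* A real frame would need the addressing and checksums recovered
     * by reverse engineering the wireless protocol, the hard step the
     * research team actually did. */
    return radio_send(dev, frame, sizeof frame) != 0;
}
```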
Now, wireless and the Internet can improve health care greatly. There are several examples up on the screen of situations where doctors are looking to implant devices inside of people, and it's now standard that all of these devices communicate wirelessly, and I think this is great. But without a full understanding of trustworthy computing, and without understanding what attackers can do and weighing the security risks from the beginning, there's a lot of danger in this.
Okay, let me shift gears and show you another target. I'm going to show you a few different targets like this, and that's my talk. So we'll look at automobiles. This is a car, and it has a lot of components, a lot of electronics in it today. In fact, it's got many, many different computers inside of it, more Pentiums than my lab had when I was in college, and they're connected by a wired network. There's also a wireless network in the car, which can be reached in many different ways. There's Bluetooth, there's FM and XM radio, there's actually Wi-Fi, and there are sensors in the wheels that wirelessly communicate the tire pressure to a controller on board. The modern car is a sophisticated multi-computer device.
And what happens if somebody wanted to attack it? Well, that's what the researchers I'm going to talk about today did. They basically put an attacker on the wired network and on the wireless network. That gave them two areas to attack. One is short-range wireless, where you can communicate with the device from nearby, either through Bluetooth or Wi-Fi, and the other is long-range, where you can communicate with the car through the cellular network or through one of the radio stations. Think about it: when a car receives a radio signal, it's processed by software. That software has to receive and decode the radio signal and then figure out what to do with it, even if it's just music it needs to play on the radio, and if that decoding software has any bugs in it, that could create a vulnerability for somebody to hack the car.
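That attack surface is easy to picture in code. Below is a deliberately buggy decoder of the sort that might parse a station-name field out of a broadcast: it trusts a length byte that arrived over the air and copies into a fixed buffer. The packet format and the bug are my own invention for illustration; this is not the actual vulnerability the researchers found.

```c
/* Illustrative only: an invented packet format with the classic flaw,
 * not the real bug the researchers exploited. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct station_info {
    char name[8];                  /* fixed-size station-name field */
};

/* Invented layout: [len][name bytes ...], straight off the air. */
void decode_station_name(const uint8_t *pkt, struct station_info *out) {
    uint8_t len = pkt[0];
    /* BUG: len is attacker-controlled and never checked against
     * sizeof out->name, so a malicious broadcast can overflow the
     * buffer and corrupt whatever memory lives after it. */
    memcpy(out->name, pkt + 1, len);
}

/* The fix is a single bounds check. */
void decode_station_name_safe(const uint8_t *pkt, struct station_info *out) {
    size_t len = pkt[0];
    if (len > sizeof out->name - 1)
        len = sizeof out->name - 1;
    memcpy(out->name, pkt + 1, len);
    out->name[len] = '\0';
}

int main(void) {
    /* 32 claimed bytes aimed at an 8-byte field: the unsafe version
     * would smash adjacent memory here; the safe one truncates. */
    uint8_t pkt[33] = {32, 'W', 'K', 'R', 'P'};
    struct station_info info;
    decode_station_name_safe(pkt, &info);
    printf("station: %s\n", info.name);
    return 0;
}
```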
The way the researchers did this work was to read the software out of the computer chips in the car, use sophisticated reverse engineering tools to figure out what that software did, find vulnerabilities in it, and then build exploits for them. They actually carried out their attack in real life. They bought two cars, and I guess they have better budgets than I do.
The first threat model was to see what someone could do if an attacker got access to the internal network of the car. Think of it this way: someone gets to your car, they get to mess around with it, and then they leave. Now, what kind of trouble are you in? The other threat model is that they contact you in real time over one of the wireless networks, like the cellular network, never having gotten physical access to your car.
This is what their setup looks like for the first model, where you get to have access to the car. They took a laptop, connected it to the diagnostic unit on the in-car network, and did all kinds of silly things, like this picture of the speedometer showing 140 miles an hour while the car's in park. Once you have control of the car's computers, you can do anything. Now you might say, "Okay, that's silly." Well, what if you make the car always report that it's going 20 miles an hour slower than it's actually going? You might produce a lot of speeding tickets.
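The computers he's describing talk to each other over a CAN bus, and on Linux the standard SocketCAN API is enough to sketch the kind of injection that a laptop on the diagnostic port makes possible. This is a minimal sketch, not the researchers' actual tool: the interface name "can0", the arbitration ID 0x244, and the payload bytes are placeholders, since real instrument-cluster IDs vary by make and model and have to be recovered by reverse engineering.

```c
/* Minimal SocketCAN injection sketch (Linux). The arbitration ID and
 * payload bytes are placeholders, not real vehicle values. */
#include <linux/can.h>
#include <linux/can/raw.h>
#include <net/if.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);     /* raw CAN socket */
    if (s < 0) { perror("socket"); return 1; }

    struct ifreq ifr = {0};
    strncpy(ifr.ifr_name, "can0", IFNAMSIZ);       /* CAN interface */
    if (ioctl(s, SIOCGIFINDEX, &ifr) < 0) { perror("ioctl"); return 1; }

    struct sockaddr_can addr = {0};
    addr.can_family  = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    if (bind(s, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("bind"); return 1;
    }

    struct can_frame frame = {0};
    frame.can_id  = 0x244;                         /* hypothetical cluster ID  */
    frame.can_dlc = 8;
    frame.data[0] = 0x8C;                          /* hypothetical speed bytes */

    /* Repeat the spoofed frame so it drowns out the legitimate one. */
    for (int i = 0; i < 100; i++) {
        if (write(s, &frame, sizeof frame) != sizeof frame) {
            perror("write"); break;
        }
        usleep(10000);                             /* 10 ms between frames */
    }
    close(s);
    return 0;
}
```

You can experiment with this harmlessly against a virtual bus (modprobe vcan, then ip link add dev vcan0 type vcan) rather than real hardware.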
Then they went out to an abandoned airstrip with two cars, the target victim car and the chase car, and they launched a bunch of other attacks. One of the things they were able to do from the chase car was apply the brakes on the other car, simply by hacking its computer. They were also able to disable the brakes, and to install malware that wouldn't kick in, wouldn't trigger, until the car was doing something like going over 20 miles an hour. The results were astonishing, and when they gave this talk, even though they gave it at a conference to a bunch of computer security researchers, everybody was gasping.
They were able to take over a bunch of critical computers inside the car: the brakes computer, the lighting computer, the engine, the dash, the radio, and so on, and they were able to carry these attacks out remotely, over the radio network, on real commercial cars that they had purchased. They were able to compromise every single one of the pieces of software that controlled every single one of the wireless capabilities of the car. All of these were implemented successfully. How would you steal a car in this model? Well, you compromise the car via a buffer overflow vulnerability in its software, something like that. You use the GPS in the car to locate it. You remotely unlock the doors through the computer that controls them, start the engine, bypass the anti-theft system, and you've got yourself a car.
Surveillance was really interesting. The authors of the study have a video where they show themselves taking over a car, turning on the microphone in the car, and listening in on it while tracking it via GPS on a map, something the drivers of the car would never know was happening. Am I scaring you yet? I've got a few more of these interesting ones. These are ones where I went to a conference and my mind was just blown, and I said, "I have to share this with other people."
This was Fabian Monrose's lab at the University of North Carolina, and what they did was something intuitive once you see it, but kind of surprising. They videotaped people on a bus, and then they post-processed the video. What you see here in number one is the reflection, in somebody's glasses, of the smartphone they're typing on. They wrote software to stabilize the image of the phone, even though they were shooting on a bus and someone might be holding their phone at an angle, and to process it. You may know that on your smartphone, when you type a password, the keys pop out a little bit, and they were able to use that to reconstruct what the person was typing, backed by a language model for detecting typing. What was interesting is that, by videotaping on a bus, they were able to reproduce exactly what people on their smartphones were typing. Then they had a surprising result: their software had not only done it for their target, but for other people who accidentally happened to be in the picture; they were able to recover what those people had been typing too, which was kind of an accidental artifact of what their software was doing.
I'll show you two more. One is P25 radios. P25 radios are used by law enforcement and all kinds of government agencies, and by people in combat, to communicate, and there's an encryption option on them. This is what the device looks like. It's not really a phone; it's more of a two-way radio. Motorola makes the most widely used one, and you can see that they're used by the Secret Service and in combat; it's a very, very common standard in the U.S. and elsewhere. So one question the researchers asked themselves is, could you block this thing? Could you run a denial-of-service attack, given that these are first responders? Would a terrorist organization want to black out the ability of police and fire to communicate during an emergency? They found that there's a GirlTech device used for texting that happens to operate at the same exact frequency as the P25, and they built what they called My First Jammer. (Laughter)
If you look closely at this device, it's got a switch for encryption or cleartext. Let me advance the slide, and now I'll go back. You see the difference? This is plain text; this is encrypted. The difference is one little dot that shows up on the screen, and one tiny turn of the switch. And so the researchers asked themselves, "I wonder how many times very secure, important, sensitive conversations are happening on these two-way radios where they forget to encrypt and don't notice that they didn't encrypt?"
So they bought a scanner. These are perfectly legal, and they run at the frequencies of the P25. What they did was hop around frequencies, running software they wrote to listen in. If they found encrypted communication, they stayed on that channel and noted it down: that's a channel these people, these law enforcement agencies, communicate on. They went to 20 metropolitan areas and listened in on conversations happening at those frequencies.
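The scanning logic itself is simple enough to sketch. The scanner_* interface below is hypothetical, a stand-in for whatever commodity scanner hardware was actually used, but the hop-listen-log loop captures the method as described: cycle through candidate channels, remember the ones where encrypted agency traffic appears, and record anything heard in the clear on those channels.

```c
/* Hypothetical sketch of the hop-and-log loop. The scanner_* calls are
 * invented stand-ins for a commodity scanner's control interface. */
#include <stdbool.h>
#include <stdio.h>

#define NCHAN 64   /* candidate P25 channels in one metro area */

/* Stubs so the sketch compiles; a real tool would drive hardware. */
static void scanner_tune(int chan)         { (void)chan; }
static bool scanner_hears_traffic(void)    { return false; }
static bool scanner_traffic_is_clear(void) { return false; }
static void scanner_record(int chan)       { (void)chan; }

int main(void) {
    bool agency_channel[NCHAN] = {false};

    for (int pass = 0; pass < 1000; pass++) {
        for (int c = 0; c < NCHAN; c++) {
            scanner_tune(c);
            if (!scanner_hears_traffic())
                continue;
            if (!scanner_traffic_is_clear()) {
                /* Encrypted traffic marks this as an agency channel. */
                agency_channel[c] = true;
            } else if (agency_channel[c]) {
                /* Cleartext on a known agency channel: the failure
                 * mode the study measured. */
                printf("cleartext on channel %d\n", c);
                scanner_record(c);
            }
        }
    }
    return 0;
}
```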
They found that in every metropolitan area, they would capture over 20 minutes a day of cleartext communication. And what kind of things were people talking about? Well, they found the names of and information about confidential informants. They found information that was being recorded in wiretaps, a bunch of crimes being discussed, sensitive information. It was mostly law enforcement and criminal matters. They went and reported this to the law enforcement agencies, after anonymizing it, and the vulnerability here is simply that the user interface wasn't good enough. If you're talking about something really secure and sensitive, it should be really clear to you that the conversation is encrypted. That one's pretty easy to fix.
The last one I thought was really, really cool, and I just had to show it to you. It's probably not something you're going to lose sleep over like the cars or the defibrillators, but it's stealing keystrokes. Now, we've all looked at smartphones upside down. Every security expert wants to hack a smartphone, and we tend to look at the USB port, the GPS for tracking, the camera, the microphone, but no one up to this point had looked at the accelerometer. The accelerometer is the thing that determines the vertical orientation of the smartphone. So they had a simple setup. They put a smartphone next to a keyboard and had people type, and their goal was to use the vibrations created by typing, as measured by changes in the accelerometer reading, to determine what the person had been typing.
Now, when they tried this on an iPhone 3GS, this is a graph of the perturbations created by the typing, and you can see that it's very difficult to tell when somebody was typing, or what they were typing. But the iPhone 4 greatly improved the accelerometer, and the same measurement produced this graph. That gave them a lot of information while someone was typing, and what they did then was use advanced artificial intelligence techniques, called machine learning, to have a training phase. So they got, most likely, grad students to type in a whole lot of things, and had the system use the available machine learning tools to learn what it was the people were typing and to match that up with the measurements from the accelerometer. And then there's the attack phase, where you get somebody to type something in, you don't know what it was, but you use the model you created in the training phase to figure out what they were typing.
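As a toy illustration of that train-then-attack split, here is about the simplest machine-learning tool there is, a one-nearest-neighbor classifier, in C. The feature values and key labels are invented for illustration; the real system used far richer features, better models, and a language model on top.

```c
/* Toy illustration of the train/attack split with 1-nearest-neighbor.
 * Feature values and labels are invented; the real system was far
 * more sophisticated. */
#include <float.h>
#include <stddef.h>
#include <stdio.h>

#define NFEAT 3   /* e.g., burst energy, duration, axis ratio */

struct sample { double f[NFEAT]; char label; };

/* "Training phase": accelerometer features captured while someone
 * types *known* text, so each vibration burst carries its true key. */
static const struct sample train[] = {
    {{0.82, 0.11, 0.40}, 'a'},
    {{0.30, 0.09, 0.95}, 'l'},
    {{0.78, 0.14, 0.45}, 's'},
    {{0.33, 0.10, 0.90}, 'k'},
};
#define NTRAIN (sizeof train / sizeof train[0])

static double dist2(const double *a, const double *b) {
    double d = 0;
    for (int i = 0; i < NFEAT; i++)
        d += (a[i] - b[i]) * (a[i] - b[i]);
    return d;
}

/* "Attack phase": classify a burst from *unknown* typing by finding
 * the closest training example. */
static char classify(const double *f) {
    double best = DBL_MAX;
    char label = '?';
    for (size_t i = 0; i < NTRAIN; i++) {
        double d = dist2(f, train[i].f);
        if (d < best) { best = d; label = train[i].label; }
    }
    return label;
}

int main(void) {
    double unknown[NFEAT] = {0.80, 0.12, 0.42}; /* burst from victim */
    printf("best guess: '%c'\n", classify(unknown));
    return 0;
}
```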
They had pretty good success. This is an article from USA Today. They typed in, "The Illinois Supreme Court has ruled that Rahm Emanuel is eligible to run for Mayor of Chicago" (see, I tied it in to the last talk) "and ordered him to stay on the ballot." Now, the system is interesting, because it produced "Illinois Supreme" and then it wasn't sure. The model produced a bunch of options, and this is the beauty of some of the A.I. techniques: computers are good at some things, humans are good at other things, so take the best of both and let the humans solve this one. Don't waste computer cycles. A human's not going to think it's "Supreme might." It's "Supreme Court," right? And so, together, we're able to reproduce typing simply by measuring the accelerometer.
Why does this matter? Well, on the Android platform, for example, developers have a manifest where every device capability they're going to use, the microphone and so on, has to be declared, so that hackers can't take it over, but nobody controls access to the accelerometer. So what's the point? You can leave your iPhone next to someone's keyboard, just leave the room, and later recover what they typed, even without using the microphone. If someone is able to put malware on your iPhone, they could then get the typing that you do whenever you put your iPhone next to your keyboard.
There are several other notable attacks that, unfortunately, I don't have time to go into, but the one I wanted to point out was by a group from the University of Michigan, who were able to take voting machines, the Sequoia AVC Edge DREs that were going to be used in New Jersey in the election and that had been left in a hallway, and put Pac-Man on them. So they ran the Pac-Man game.
What does this all mean? Well, I think that society tends to adopt technology really quickly. I love the next coolest gadget. But it's very important, and these researchers are showing it, that the developers of these things need to take security into account from the very beginning, and need to realize that they may have a threat model, but the attackers may not be nice enough to limit themselves to that threat model, so you need to think outside the box. What we can do is be aware that devices can be compromised, and that anything that has software in it is going to be vulnerable. It's going to have bugs. Thank you very much. (Applause)
Translator: Joseph Geni
Reviewer: Morton Bast


Avi Rubin - Computer security expert

Why you should listen

Along with running the Health and Medical Security Lab, Avi Rubin is also the technical director of the JHU Information Security Institute. From 1997 to 2002, Avi was a researcher in AT&T’s Secure Systems Department, where he focused on cryptography and network security. He is also the founder of Harbor Labs, which provides expert testimony and review in legal cases related to high tech security. Avi has authored several books related to electronic security, including Brave New Ballot, published in 2006.

The original video is available on TED.com


Data provided by TED.
