TED@Intel

Jennifer Healey: If cars could talk, accidents might be avoidable

April 8, 2013

When we drive, we get into a glass bubble, lock the doors and press the accelerator, relying on our eyes to guide us -- even though we can only see the few cars ahead of and behind us. But what if cars could share data with each other about their position and velocity, and use predictive models to calculate the safest routes for everyone on the road? Jennifer Healey imagines a world without accidents. (Filmed at TED@Intel.)

Jennifer Healey - Research scientist
A research scientist at Intel, Jennifer Healey develops the mobile internet devices of the future.

00:12
Let's face it: driving is dangerous. It's one of the things that we don't like to think about, but the fact that religious icons and good luck charms show up on dashboards around the world betrays the fact that we know this to be true. Car accidents are the leading cause of death in people ages 16 to 19 in the United States -- the leading cause of death -- and 75 percent of these accidents have nothing to do with drugs or alcohol.

00:49
So what happens? No one can say for sure, but I remember my first accident. I was a young driver out on the highway, and the car in front of me -- I saw the brake lights go on. I'm like, "Okay, all right, this guy is slowing down, I'll slow down too." I step on the brake. But no, this guy isn't slowing down. This guy is stopping, dead stop, dead stop on the highway. He was just going 65 -- to zero? I slammed on the brakes. I felt the ABS kick in, and the car is still going, and it's not going to stop, and I know it's not going to stop, and the air bag deploys, the car is totaled, and fortunately, no one was hurt. But I had no idea that car was stopping, and I think we can do a lot better than that.

01:36
I think we can transform the driving experience by letting our cars talk to each other.

01:44
I just want you to think a little bit about what the experience of driving is like now. Get into your car. Close the door. You're in a glass bubble. You can't really directly sense the world around you. You're in this extended body. You're tasked with navigating it down partially seen roadways, in and amongst other metal giants, at superhuman speeds. Okay? And all you have to guide you are your two eyes. Okay, so that's all you have -- eyes that weren't really designed for this task -- but then people ask you to do things like make a lane change, and what's the first thing they ask you to do? Take your eyes off the road. That's right. Stop looking where you're going, turn, check your blind spot, and drive down the road without looking where you're going. You and everyone else. This is the safe way to drive.

02:35
Why do we do this? Because we have to. We have to make a choice: do I look here, or do I look here? What's more important? And usually we do a fantastic job picking and choosing what we attend to on the road. But occasionally we miss something. Occasionally we sense something wrong, or too late. In countless accidents, the driver says, "I didn't see it coming." And I believe that. I believe that. We can only watch so much. But the technology exists now that can help us improve that.

03:12
In the future, with cars exchanging data with each other, we will be able to see not just three cars ahead and three cars behind, to the right and left, all at the same time, bird's eye view -- we will actually be able to see into those cars. We will be able to see the velocity of the car in front of us, to see how fast that guy's going or stopping. If that guy's going down to zero, I'll know. And with computation and algorithms and predictive models, we will be able to see the future.

03:45
You may think that's impossible. How can you predict the future? That's really hard. Actually, no. With cars, it's not impossible. Cars are three-dimensional objects that have a well-defined position and velocity. They travel down roads. Often they travel on pre-published routes. It's really not that hard to make reasonable predictions about where a car's going to be in the near future.

04:09
Even when you're in your car and some motorcyclist comes -- bshoom! -- 85 miles an hour, lane-splitting -- I know you've had this experience -- that guy didn't "just come out of nowhere." That guy's probably been on the road for the last half hour.
(Laughter)
Right? I mean, somebody's seen him. Ten, 20, 30 miles back, someone's seen that guy, and as soon as one car sees him and puts him on the map, he's on the map -- position, velocity, and a good estimate that he'll continue going 85 miles an hour. You'll know, because your car will know, because that other car will have whispered something in its ear, like, "By the way, five minutes, motorcyclist, watch out." You can make reasonable predictions about how cars behave. I mean, they're Newtonian objects. That's what's very nice about them.

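Her point about Newtonian objects is easy to make concrete. Here is a minimal sketch of that kind of prediction, assuming constant speed and heading; the function name and numbers are illustrative, not from the talk:

    import math

    def predict_position(x, y, speed_mph, heading_deg, seconds_ahead):
        # Extrapolate assuming the vehicle keeps its current speed and heading.
        speed_ms = speed_mph * 0.44704           # mph -> meters per second
        heading = math.radians(heading_deg)      # 0 = north, clockwise positive
        dx = speed_ms * seconds_ahead * math.sin(heading)
        dy = speed_ms * seconds_ahead * math.cos(heading)
        return x + dx, y + dy                    # predicted position in meters

    # The motorcyclist: 85 mph due north. Five minutes out, he is ~11.4 km away.
    print(predict_position(0.0, 0.0, 85.0, 0.0, 300.0))
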
04:57
So how do we get there? We can start with something as simple as sharing our position data between cars -- just sharing GPS. If I have a GPS and a camera in my car, I have a pretty precise idea of where I am and how fast I'm going. With computer vision, I can estimate where the cars around me are -- sort of -- and where they're going. And it's the same with the other cars. They can have a precise idea of where they are, and a sort of vague idea of where the other cars are. What happens if two cars share that data, if they talk to each other? I can tell you exactly what happens: both models improve. Everybody wins.

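"Both models improve" is the standard result of fusing two noisy estimates of the same quantity. A toy sketch, assuming Gaussian errors and inverse-variance weighting (a textbook fusion rule; the numbers are made up):

    def fuse(estimate_a, var_a, estimate_b, var_b):
        # Inverse-variance weighting: trust the more precise source more.
        # The fused variance is smaller than either input variance.
        w_a = 1.0 / var_a
        w_b = 1.0 / var_b
        fused = (w_a * estimate_a + w_b * estimate_b) / (w_a + w_b)
        fused_var = 1.0 / (w_a + w_b)
        return fused, fused_var

    # My GPS knows my position well (variance 1.0); your camera's view of
    # me is fuzzy (variance 25.0). Sharing still tightens the picture.
    print(fuse(100.0, 1.0, 104.0, 25.0))   # -> (100.15..., 0.96...)
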
05:36
Professor Bob Wang and his team have done computer simulations of what happens when fuzzy estimates combine, even in light traffic, when cars just share GPS data, and we've moved this research out of computer simulation and onto robot test beds equipped with the actual sensors that are in cars now: stereo cameras, GPS, and the two-dimensional laser range finders that are common in backup systems. We also attach a dedicated short-range communication radio, and the robots talk to each other.

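The packets such radios exchange are small. Here is a hypothetical sketch of the position/velocity payload one robot might broadcast to another -- the field names are assumptions, loosely inspired by DSRC-style safety messages, not Intel's actual format:

    import json
    import time

    def make_safety_message(car_id, x, y, speed_ms, heading_deg):
        # Build the small position/velocity packet one car broadcasts.
        return json.dumps({
            "id": car_id,            # anonymized sender identifier
            "t": time.time(),        # timestamp of the fix
            "pos": [x, y],           # position in meters, local frame
            "speed": speed_ms,       # meters per second
            "heading": heading_deg,  # degrees clockwise from north
        })

    packet = make_safety_message("a3f9", 120.5, 48.2, 17.9, 92.0)
    print(packet)
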
06:08
When these robots come at each other, they track each other's position precisely, and they can avoid each other. We're now adding more and more robots into the mix, and we've encountered some problems.

06:20
One of the problems: when you get too much chatter, it's hard to process all the packets, so you have to prioritize, and that's where the predictive model helps you. If your robot cars are all tracking their predicted trajectories, you don't pay as much attention to those packets. You prioritize the one guy who seems to be going a little off course. That guy could be a problem. And you can predict his new trajectory. So you don't only know that he's going off course; you know how. And you know which drivers you need to alert to get out of the way.

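A sketch of that prioritization rule, assuming each incoming packet is scored by how far its sender has strayed from the trajectory we predicted for it (the structure and data below are illustrative):

    def deviation(reported_pos, predicted_pos):
        # Euclidean distance between where a car is and where we expected it.
        dx = reported_pos[0] - predicted_pos[0]
        dy = reported_pos[1] - predicted_pos[1]
        return (dx * dx + dy * dy) ** 0.5

    def prioritize(packets, predictions):
        # Process the most off-course cars first; they are the potential problems.
        return sorted(
            packets,
            key=lambda p: deviation(p["pos"], predictions[p["id"]]),
            reverse=True,
        )

    packets = [{"id": "car1", "pos": (10.0, 0.0)}, {"id": "car2", "pos": (55.0, 9.0)}]
    predictions = {"car1": (10.2, 0.1), "car2": (50.0, 0.0)}
    print([p["id"] for p in prioritize(packets, predictions)])   # car2 first
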
06:50
And we wanted to know: how can we best alert everyone? How can these cars whisper, "You need to get out of the way"? Well, it depends on two things: one, the ability of the car, and two, the ability of the driver. If one guy has a really great car, but they're on their phone or, you know, doing something, they're probably not in the best position to react in an emergency. So we started a separate line of research doing driver state modeling.

07:16
And now, using a series of three cameras, we can detect if a driver is looking forward, looking away, looking down, on the phone, or having a cup of coffee. We can predict the accident, and we can predict who -- which cars -- are in the best position to move out of the way, so we can calculate the safest route for everyone.

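A toy version of that decision, assuming each car reports a capability score and a driver-attention score in [0, 1] -- both invented quantities for illustration, not measurements from her system:

    def best_car_to_alert(cars):
        # Pick the car most able to move: a capable car with an attentive driver.
        # Each entry is (car_id, car_ability, driver_attention); the product is
        # a simple stand-in for the joint ability to react.
        return max(cars, key=lambda c: c[1] * c[2])[0]

    cars = [
        ("sedan",  0.9, 0.2),   # great car, driver on the phone
        ("pickup", 0.6, 0.9),   # average car, driver looking forward
    ]
    print(best_car_to_alert(cars))   # -> "pickup"
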
07:39
Fundamentally, these technologies exist today. I think the biggest problem that we face is our own willingness to share our data. I think it's a very disconcerting notion, this idea that our cars will be watching us, talking about us to other cars, that we'll be going down the road in a sea of gossip.

08:01
But I believe it can be done in a way that protects our privacy, just like right now, when I look at your car from the outside, I don't really know about you. If I look at your license plate number, I don't really know who you are. I believe our cars can talk about us behind our backs.
(Laughter)
And I think it's going to be a great thing.

08:25
I want you to consider for a moment: do you really not want the distracted teenager behind you to know that you're braking, that you're coming to a dead stop? By sharing our data willingly, we can do what's best for everyone. So let your car gossip about you. It's going to make the roads a lot safer. Thank you.
(Applause)

Translator: Joseph Geni
Reviewer: Morton Bast


Jennifer Healey - Research scientist

Why you should listen

Jennifer Healey imagines a future where computers and smartphones can sense human emotions and cars can talk to each other, keeping their drivers out of accidents. A scientist at Intel Corporation Research Labs, she researches the devices and systems that would make these innovations possible.

Healey holds a PhD from MIT in electrical engineering and computer science. While there, she pioneered "Affective Computing" with Rosalind Picard and developed the first wearable computer with physiological sensors and a video camera, which allowed the wearer to track their daily activities and how they felt while doing them. From there, she moved to IBM, where she worked on the next generation of multi-modal interactive smartphones and helped architect the "Interaction Mark-Up language," which allows users to switch seamlessly between voice and other input modes.

Healey has also applied her interest in embedded devices to healthcare. While an instructor at Harvard Medical School and at Beth Israel Deaconess Medical Center, she worked on new ways to use heart rate to predict cardiac health. She then joined HP Research in Cambridge to further develop wearable sensors for health monitoring, and continued this research when she joined Intel Digital Health.



Data provided by TED.

This website is owned and operated by Tokyo English Network.