TEDMED 2011

Sheila Nirenberg: A prosthetic eye to treat blindness

At TEDMED, Sheila Nirenberg shows a bold way to create sight in people with certain kinds of blindness: by hooking into the optic nerve and sending signals from a camera directly to the brain.

I study how the brain processes information. That is, how it takes information in from the outside world and converts it into patterns of electrical activity, and then how it uses those patterns to allow you to do things -- to see, to hear, to reach for an object. So I'm really a basic scientist, not a clinician, but in the last year and a half I've started to switch over, to use what we've been learning about these patterns of activity to develop prosthetic devices, and what I wanted to do today is show you an example of this. It's really our first foray into this: the development of a prosthetic device for treating blindness.

So let me start in on that problem. There are 10 million people in the U.S., and many more worldwide, who are blind or are facing blindness due to diseases of the retina, diseases like macular degeneration, and there's little that can be done for them. There are some drug treatments, but they're only effective on a small fraction of the population. And so, for the vast majority of patients, their best hope for regaining sight is through prosthetic devices. The problem is that current prosthetics don't work very well; they're still very limited in the vision they can provide. So, for example, with these devices, patients can see simple things like bright lights and high-contrast edges, but not very much more -- nothing close to normal vision has been possible.

So what I'm going to tell you about today is a device that we've been working on that I think has the potential to make a difference, to be much more effective, and what I wanted to do is show you how it works. Okay, so let me back up a little bit and show you how a normal retina works first, so you can see the problem that we were trying to solve.

Here you have a retina. So you have an image, a retina, and a brain. When you look at something, like this image of this baby's face, it goes into your eye and lands on your retina, on the front-end cells here, the photoreceptors. Then the retinal circuitry, the middle part, goes to work on it: it performs operations on it, it extracts information from it, and it converts that information into a code. And the code is in the form of these patterns of electrical pulses that get sent up to the brain, so the key thing is that the image ultimately gets converted into a code. And when I say code, I do literally mean code. This pattern of pulses here actually means "baby's face," and so when the brain gets this pattern of pulses, it knows that what was out there was a baby's face. If it got a different pattern, it would know that what was out there was, say, a dog, or another pattern would be a house. Anyway, you get the idea. And, of course, in real life, it's all dynamic, meaning that it's changing all the time: the patterns of pulses are changing all the time, because the world you're looking at is changing all the time too. So it's a complicated thing. You have these patterns of pulses coming out of your eye every millisecond, telling your brain what it is that you're seeing.

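To make the idea of a code concrete, here is a toy sketch of how such pulse patterns are often represented in practice: each cell's pulses become a row of 0s and 1s over time bins, and the joint pattern across cells is the message. Everything here (the patterns, the cell count, the lookup table) is invented for illustration; it is not the actual retinal code.

```python
import numpy as np

# Toy "retinal code": each row is one output cell, each column a
# 1-ms time bin; 1 means the cell fired a pulse in that bin.
# All values are invented for illustration.
baby_face_pattern = np.array([
    [0, 1, 0, 0, 1, 0, 1, 0],  # cell 1
    [1, 0, 0, 1, 0, 0, 0, 1],  # cell 2
    [0, 0, 1, 0, 0, 1, 0, 0],  # cell 3
])
dog_pattern = np.array([
    [1, 1, 0, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1, 0, 1],
    [0, 1, 0, 0, 0, 0, 1, 0],
])

# Loosely speaking, the brain's job is pattern lookup: different joint
# patterns across cells stand for different things out in the world.
codebook = {
    baby_face_pattern.tobytes(): "baby's face",
    dog_pattern.tobytes(): "dog",
}

incoming = baby_face_pattern            # the pattern arriving at the brain
print(codebook[incoming.tobytes()])     # -> baby's face
```

In reality the mapping is dynamic and probabilistic rather than a literal lookup table, but the table captures the talk's point: it is the pattern of pulses, not the raw light, that the brain reads.
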
So what happens when a person gets a retinal degenerative disease like macular degeneration? What happens is that the front-end cells, the photoreceptors, die, and over time all the cells and the circuits connected to them die too, until the only things you have left are these cells here, the output cells, the ones that send the signals to the brain. But because of all that degeneration, they aren't sending any signals anymore. They aren't getting any input, so the person's brain no longer gets any visual information -- that is, he or she is blind.

So a solution to the problem, then, would be to build a device that could mimic the actions of that front-end circuitry and send signals to the retina's output cells, so that they can go back to doing their normal job of sending signals to the brain. This is what we've been working on, and this is what our prosthetic does. It consists of two parts, what we call an encoder and a transducer. The encoder does just what I was saying: it mimics the actions of the front-end circuitry, so it takes images in and converts them into the retina's code. The transducer then makes the output cells send the code on up to the brain, and the result is a retinal prosthetic that can produce normal retinal output. So a completely blind retina, even one with no front-end circuitry at all, no photoreceptors, can now send out normal signals, signals that the brain can understand. No other device has been able to do this.

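Just as a structural sketch, here is how the two parts chain together. The function names and shapes are invented placeholders; in the real device the transducer physically drives the retina's output cells rather than returning data, and a sketch of what the encoder might compute appears a bit further below.

```python
import numpy as np

def encoder(image: np.ndarray) -> np.ndarray:
    """Image -> the retina's code (placeholder; see the encoder sketch below)."""
    return np.zeros((10, 3))  # dummy pulse pattern: 10 time bins x 3 cells

def transducer(code: np.ndarray) -> np.ndarray:
    """Make the output cells fire the code (placeholder).

    In the actual device this stage acts on the retina's output cells
    themselves; here it just passes the pattern along.
    """
    return code

def prosthetic(image: np.ndarray) -> np.ndarray:
    """Two-stage pipeline: image -> code -> output-cell firing."""
    return transducer(encoder(image))
```
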
Okay, so I just want to take a sentence or two to say something about the encoder and what it's doing, because it's really the key part, and it's sort of interesting and kind of cool. I'm not sure "cool" is really the right word, but you know what I mean. What it's doing is replacing the retinal circuitry, really the guts of the retinal circuitry, with a set of equations, a set of equations that we can implement on a chip. So it's just math. In other words, we're not literally replacing the components of the retina. It's not like we're making a little mini-device for each of the different cell types. We've just abstracted what the retina's doing with a set of equations. And so, in a way, the equations are serving as a sort of codebook. An image comes in, goes through the set of equations, and out come streams of electrical pulses, just like a normal retina would produce.

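The talk doesn't spell the equations out, but encoding models of this general kind are often written as a linear-nonlinear cascade: each model cell filters the incoming movie, the filtered value passes through a nonlinearity to set a firing rate, and pulses are drawn from that rate. Here is a minimal sketch under that assumption; the filters, gain, and nonlinearity are placeholders, not the device's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(movie, filters, dt=0.001):
    """Toy linear-nonlinear encoder: movie -> spike trains.

    movie:   array of shape (time, height, width), pixel intensities
    filters: array of shape (n_cells, height, width), one spatial filter
             per model cell (placeholders; a real model would use fitted
             spatiotemporal filters)
    """
    t, h, w = movie.shape
    n_cells = filters.shape[0]
    # Linear stage: project each frame onto each cell's filter.
    drive = movie.reshape(t, h * w) @ filters.reshape(n_cells, h * w).T
    # Nonlinear stage: map drive to a non-negative firing rate (Hz).
    rate = 50.0 * np.log1p(np.exp(drive))   # softplus with a made-up gain
    # Spike generation: Poisson pulses in each time bin.
    return rng.poisson(rate * dt)           # shape (time, n_cells)

# Usage with random stand-ins for a movie and filters:
movie = rng.standard_normal((1000, 8, 8))       # 1 s at 1-ms resolution
filters = rng.standard_normal((4, 8, 8)) * 0.1  # 4 model cells
spikes = encode(movie, filters)
print(spikes.shape, spikes.sum())               # (1000, 4), total pulse count
```
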
Now let me put my money where my mouth is and show you that we can actually produce normal output, and what the implications of this are. Here are three sets of firing patterns. The top one is from a normal animal, the middle one is from a blind animal that's been treated with this encoder-transducer device, and the bottom one is from a blind animal treated with a standard prosthetic. So the bottom one is the state-of-the-art device that's out there right now, which is basically made up of light detectors but no encoder. What we did was present movies of everyday things -- people, babies, park benches, you know, regular things happening -- and we recorded the responses from the retinas of these three groups of animals.

Now just to orient you, each box is showing the firing patterns of several cells, and just as in the previous slides, each row is a different cell; I just made the pulses a little bit smaller and thinner so I could show you a long stretch of data. So as you can see, the firing patterns from the blind animal treated with the encoder-transducer really do very closely match the normal firing patterns -- it's not perfect, but it's pretty good -- while for the blind animal treated with the standard prosthetic, the responses really don't. So with the standard method, the cells do fire, they just don't fire in the normal firing patterns, because they don't have the right code.

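The talk shows the match visually and doesn't say how similarity was scored. One simple, standard way to quantify how closely two sets of firing patterns agree is to smooth the spike trains and correlate them, as in this illustrative sketch; the data here are simulated stand-ins, and this metric is not necessarily the one used in the actual study.

```python
import numpy as np

def pattern_similarity(spikes_a, spikes_b, window=10):
    """Mean per-cell correlation between two smoothed spike rasters.

    spikes_a, spikes_b: arrays of shape (time_bins, n_cells) of 0/1 pulses.
    window: width of the boxcar smoothing kernel, in bins.
    """
    kernel = np.ones(window) / window
    corrs = []
    for cell in range(spikes_a.shape[1]):
        ra = np.convolve(spikes_a[:, cell], kernel, mode="same")
        rb = np.convolve(spikes_b[:, cell], kernel, mode="same")
        if ra.std() > 0 and rb.std() > 0:
            corrs.append(np.corrcoef(ra, rb)[0, 1])
    return float(np.mean(corrs))

# Simulated stand-ins: the "encoder-like" raster tracks the normal raster
# (about 1% of bins flipped), the "standard-like" raster is unrelated.
rng = np.random.default_rng(1)
normal = (rng.random((1000, 5)) < 0.05).astype(float)
jitter = (rng.random((1000, 5)) < 0.01).astype(float)
encoder_like = np.clip(normal + jitter - 2 * normal * jitter, 0, 1)
standard_like = (rng.random((1000, 5)) < 0.05).astype(float)

print(pattern_similarity(normal, encoder_like))   # high
print(pattern_similarity(normal, standard_like))  # near zero
```
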
How important is this? What's the potential impact on a patient's ability to see? I'm just going to show you one bottom-line experiment that answers this, and of course I've got a lot of other data, so if you're interested I'm happy to show more. The experiment is called a reconstruction experiment. We took a moment in time from these recordings and asked: what was the retina seeing at that moment? Can we reconstruct what the retina was seeing from the responses, from the firing patterns? We did this for responses from both the standard method and our encoder and transducer.

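The talk doesn't give the reconstruction method. A common baseline for this kind of experiment is a linear decoder, fit by regressing stimulus pixels on the recorded responses; here is a minimal sketch of that idea. Everything below is simulated, and the actual study may well have used a more sophisticated (for example, Bayesian) decoder.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated stand-ins: stimuli (flattened image frames) and the spike
# counts they evoke through some unknown encoding.
n_frames, n_pixels, n_cells = 500, 64, 20
stimuli = rng.standard_normal((n_frames, n_pixels))
true_mapping = rng.standard_normal((n_pixels, n_cells)) * 0.3
responses = rng.poisson(np.log1p(np.exp(stimuli @ true_mapping)))

# Fit a linear decoder: find W minimizing ||responses @ W - stimuli||^2.
W, *_ = np.linalg.lstsq(responses, stimuli, rcond=None)

# Reconstruct what the retina "saw" at one moment from its responses.
frame = 123
reconstruction = responses[frame] @ W
error = np.mean((reconstruction - stimuli[frame]) ** 2)
print(f"mean squared reconstruction error: {error:.3f}")
```
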
So let me show you, and I'm going to start with the standard method first. You can see that it's pretty limited: because the firing patterns aren't in the right code, they're very limited in what they can tell you about what's out there. You can see that there's something there, but it's not so clear what that something is. This just circles back to what I was saying in the beginning, that with the standard method, patients can see high-contrast edges and they can see light, but it doesn't easily go further than that. So what was the image? It was a baby's face.

So what about with our approach, adding the code? You can see that it's much better. Not only can you tell that it's a baby's face, but you can tell that it's this baby's face, which is a really challenging task. On the left is the reconstruction from the encoder alone, and on the right is from an actual blind retina, so the encoder and the transducer together. But the key one really is the encoder alone, because we can team up the encoder with different transducers; this is just the first one that we tried. I just wanted to say something about the standard method. When it first came out, it was a really exciting thing, the idea that you could even make a blind retina respond at all. But there was this limiting factor, the issue of the code: how to make the cells respond better, produce normal responses. And so this was our contribution.

Now I just want to wrap up, and as I was mentioning earlier, of course I have a lot of other data if you're interested, but I just wanted to give this sort of basic idea of being able to communicate with the brain in its language, and the potential power of being able to do that. It's different from the motor prosthetics, where you're communicating from the brain to a device. Here we have to communicate from the outside world into the brain and be understood by the brain.

And then the last thing I wanted to say, really, is to emphasize that the idea generalizes. The same strategy that we used to find the code for the retina we can also use to find the code for other areas, for example, the auditory system and the motor system, so for treating deafness and motor disorders. Just the same way that we were able to jump over the damaged circuitry in the retina to get to the retina's output cells, we can jump over the damaged circuitry in the cochlea to get to the auditory nerve, or jump over damaged areas in the cortex, in the motor cortex, to bridge the gap produced by a stroke.

I just want to end with a simple message: understanding the code is really, really important, and if we can understand the code, the language of the brain, things become possible that didn't seem obviously possible before. Thank you.

(Applause)

About the Speaker:

Sheila Nirenberg - Neuroscientist
Sheila Nirenberg studies how the brain encodes information -- possibly allowing us to decode it, and maybe develop prosthetic sensory devices.

Why you should listen

Sheila Nirenberg is a neuroscientist and professor at Weill Medical College of Cornell University, where she studies neural coding -- that is, how the brain takes information from the outside world and encodes it in patterns of electrical activity. The idea is to be able to decode the activity: to look at a pattern of electrical pulses and know what an animal is seeing or thinking or feeling. Recently, she's been using this work to develop new kinds of prosthetic devices, particularly ones for treating blindness.

