ABOUT THE SPEAKER
Sheila Nirenberg - Neuroscientist
Sheila Nirenberg studies how the brain encodes information -- possibly allowing us to decode it, and maybe develop prosthetic sensory devices.

Why you should listen

Sheila Nirenberg is a neuroscientist/professor at Weill Medical College of Cornell University, where she studies neural coding – that is, how the brain takes information from the outside world and encodes it in patterns of electrical activity. The idea is to be able to decode the activity, to look at a pattern of electrical pulses and know what an animal is seeing or thinking or feeling.  Recently, she’s been using this work to develop new kinds of prosthetic devices, particularly ones for treating blindness.


More profile about the speaker
Sheila Nirenberg | Speaker | TED.com
TEDMED 2011

Sheila Nirenberg: A prosthetic eye to treat blindness

470,530 views

At TEDMED, Sheila Nirenberg shows a bold way to create sight in people with certain kinds of blindness: by hooking into the optic nerve and sending signals from a camera direct to the brain.


00:15
I study how the brain processes information. That is, how it takes information in from the outside world, and converts it into patterns of electrical activity, and then how it uses those patterns to allow you to do things -- to see, hear, to reach for an object. So I'm really a basic scientist, not a clinician, but in the last year and a half I've started to switch over, to use what we've been learning about these patterns of activity to develop prosthetic devices, and what I wanted to do today is show you an example of this. It's really our first foray into this. It's the development of a prosthetic device for treating blindness.

00:50
So let me start in on that problem. There are 10 million people in the U.S. and many more worldwide who are blind or are facing blindness due to diseases of the retina, diseases like macular degeneration, and there's little that can be done for them. There are some drug treatments, but they're only effective on a small fraction of the population. And so, for the vast majority of patients, their best hope for regaining sight is through prosthetic devices. The problem is that current prosthetics don't work very well. They're still very limited in the vision that they can provide. And so, you know, for example, with these devices, patients can see simple things like bright lights and high contrast edges, not very much more, so nothing close to normal vision has been possible.

01:31
So what I'm going to tell you about today is a device that we've been working on that I think has the potential to make a difference, to be much more effective, and what I wanted to do is show you how it works. Okay, so let me back up a little bit and show you how a normal retina works first so you can see the problem that we were trying to solve. Here you have a retina. So you have an image, a retina, and a brain. So when you look at something, like this image of this baby's face, it goes into your eye and it lands on your retina, on the front-end cells here, the photoreceptors. Then what happens is the retinal circuitry, the middle part, goes to work on it, and what it does is it performs operations on it, it extracts information from it, and it converts that information into a code. And the code is in the form of these patterns of electrical pulses that get sent up to the brain, and so the key thing is that the image ultimately gets converted into a code. And when I say code, I do literally mean code. Like this pattern of pulses here actually means "baby's face," and so when the brain gets this pattern of pulses, it knows that what was out there was a baby's face, and if it got a different pattern it would know that what was out there was, say, a dog, or another pattern would be a house. Anyway, you get the idea. And, of course, in real life, it's all dynamic, meaning that it's changing all the time, so the patterns of pulses are changing all the time because the world you're looking at is changing all the time too. So, you know, it's sort of a complicated thing. You have these patterns of pulses coming out of your eye every millisecond telling your brain what it is that you're seeing.

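The idea that a pattern of pulses literally serves as a code can be made concrete with a toy sketch. Everything below is invented for illustration (the stimuli, the cell count, the 0/1 patterns): each stimulus maps to a characteristic pulse pattern, and a decoder recovers the stimulus by finding the closest stored pattern.

```python
import numpy as np

# Toy "codebook": each stimulus maps to a characteristic pulse pattern
# (1 = pulse, 0 = no pulse) across a few cells (rows) and time bins (columns).
# These patterns are invented for illustration only.
codebook = {
    "baby's face": np.array([[1, 0, 1, 1, 0, 0],
                             [0, 1, 0, 1, 1, 0],
                             [1, 1, 0, 0, 0, 1]]),
    "dog":         np.array([[0, 1, 1, 0, 1, 0],
                             [1, 0, 0, 1, 0, 1],
                             [0, 0, 1, 1, 1, 0]]),
    "house":       np.array([[1, 1, 0, 0, 1, 1],
                             [0, 0, 1, 0, 1, 0],
                             [1, 0, 1, 1, 0, 0]]),
}

def decode(observed_pattern):
    """Return the stimulus whose stored pattern best matches the observed pulses."""
    return min(codebook,
               key=lambda stim: np.sum(observed_pattern != codebook[stim]))

# A slightly corrupted "baby's face" pattern still decodes correctly.
noisy = codebook["baby's face"].copy()
noisy[0, 2] = 0  # flip one pulse
print(decode(noisy))  # -> "baby's face"
```

In a real retina the code is a set of continuous-time spike trains that change from millisecond to millisecond, but the logic is the same in spirit: different patterns stand for different things out in the world.
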
02:59
So what happens when a person gets a retinal degenerative disease like macular degeneration? What happens is that the front-end cells die, the photoreceptors die, and over time, all the cells and the circuits that are connected to them, they die too. Until the only things that you have left are these cells here, the output cells, the ones that send the signals to the brain, but because of all that degeneration they aren't sending any signals anymore. They aren't getting any input, so the person's brain no longer gets any visual information -- that is, he or she is blind.

03:32
So, a solution to the problem, then, would be to build a device that could mimic the actions of that front-end circuitry and send signals to the retina's output cells, and they can go back to doing their normal job of sending signals to the brain. So this is what we've been working on, and this is what our prosthetic does. So it consists of two parts, what we call an encoder and a transducer. And so the encoder does just what I was saying: it mimics the actions of the front-end circuitry -- so it takes images in and converts them into the retina's code. And then the transducer makes the output cells send the code on up to the brain, and the result is a retinal prosthetic that can produce normal retinal output. So a completely blind retina, even one with no front-end circuitry at all, no photoreceptors, can now send out normal signals, signals that the brain can understand. So no other device has been able to do this.

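To make the two-part architecture concrete, here is a minimal structural sketch in Python. Everything in it is assumed for illustration: the class names, the dummy encoder that simply fires more for brighter frames, and the transducer stub that prints spike times in place of actually stimulating the output cells.

```python
import numpy as np

rng = np.random.default_rng(1)

class Encoder:
    """Image frame -> the retina's code: one array of spike times per output cell (dummy)."""
    def encode(self, frame):
        n_cells = 4
        brightness = float(frame.mean())  # dummy drive: brighter frames -> more spikes
        return [np.sort(rng.uniform(0.0, 0.03, size=rng.poisson(5 * brightness)))
                for _ in range(n_cells)]

class Transducer:
    """Stand-in for whatever makes the real output cells fire at the encoded times."""
    def drive(self, spike_trains):
        for cell, times in enumerate(spike_trains):
            print(f"cell {cell}: fire at {np.round(times, 3)} s")

def prosthetic_step(frame, encoder, transducer):
    """One cycle of the prosthetic: image in, code out, output cells driven."""
    code = encoder.encode(frame)
    transducer.drive(code)
    return code

prosthetic_step(rng.random((8, 8)), Encoder(), Transducer())
```

The point of the split is captured by `prosthetic_step`: the encoder produces the retina's code, and the transducer only has to deliver it to the output cells.
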
04:26
Okay, so I just want to take a sentence or two to say something about the encoder and what it's doing, because it's really the key part and it's sort of interesting and kind of cool. I'm not sure "cool" is really the right word, but you know what I mean. So what it's doing is, it's replacing the retinal circuitry, really the guts of the retinal circuitry, with a set of equations, a set of equations that we can implement on a chip. So it's just math. In other words, we're not literally replacing the components of the retina. It's not like we're making a little mini-device for each of the different cell types. We've just abstracted what the retina's doing with a set of equations. And so, in a way, the equations are serving as sort of a codebook. An image comes in, goes through the set of equations, and out come streams of electrical pulses, just like a normal retina would produce.

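One common way such a "set of equations" is written for retinal cells is a linear-nonlinear cascade: filter the image for each cell, pass the filtered value through a nonlinearity to get a firing rate, and draw pulses from that rate. The sketch below implements that generic form with invented filters and an assumed exponential nonlinearity; it illustrates the kind of math involved, not the device's actual equations or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal linear-nonlinear-Poisson (LNP) encoder sketch: image patch in,
# streams of pulses out. Filters and constants are made up for illustration.
n_cells, patch = 4, 8                                  # model output cells, patch size
filters = rng.normal(size=(n_cells, patch, patch))     # one spatial filter per cell (invented)

def encode_frame(image_patch, dt=0.005):
    """Map an image patch to spike counts for each model cell over one time bin."""
    drive = np.tensordot(filters, image_patch, axes=([1, 2], [0, 1]))  # linear stage
    rate = np.exp(0.5 * drive)                         # static nonlinearity -> firing rate (spikes/s)
    return rng.poisson(rate * dt)                      # pulse generation for this time bin

# Usage: a stream of frames in, a stream of pulse counts out, cell by cell.
frames = rng.random(size=(200, patch, patch))          # stand-in for a short movie
spikes = np.array([encode_frame(f) for f in frames])   # shape (time bins, cells)
print(spikes.shape, spikes.sum(axis=0))
```
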
05:16
Now let me put my money where my mouth is and show you that we can actually produce normal output, and what the implications of this are. Here are three sets of firing patterns. The top one is from a normal animal, the middle one is from a blind animal that's been treated with this encoder-transducer device, and the bottom one is from a blind animal treated with a standard prosthetic. So the bottom one is the state-of-the-art device that's out there right now, which is basically made up of light detectors, but no encoder. So what we did was we presented movies of everyday things -- people, babies, park benches, you know, regular things happening -- and we recorded the responses from the retinas of these three groups of animals. Now just to orient you, each box is showing the firing patterns of several cells, and just as in the previous slides, each row is a different cell, and I just made the pulses a little bit smaller and thinner so I could show you a long stretch of data. So as you can see, the firing patterns from the blind animal treated with the encoder-transducer really do very closely match the normal firing patterns -- and it's not perfect, but it's pretty good -- and the blind animal treated with the standard prosthetic, the responses really don't. And so with the standard method, the cells do fire, they just don't fire in the normal firing patterns because they don't have the right code.

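How closely two sets of firing patterns "match" can be quantified in many ways. As one hypothetical example, the sketch below bins each cell's spike train into small time windows and correlates the binned counts between conditions; this is a generic textbook measure, not necessarily the analysis behind the figures described here.

```python
import numpy as np

def binned(spike_times, t_end, bin_size=0.01):
    """Bin a spike train (times in seconds) into counts per time bin."""
    edges = np.arange(0.0, t_end + bin_size, bin_size)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts

def pattern_similarity(trains_a, trains_b, t_end):
    """Mean per-cell correlation between two sets of spike trains (one array per cell)."""
    corrs = []
    for a, b in zip(trains_a, trains_b):
        ca, cb = binned(a, t_end), binned(b, t_end)
        if ca.std() > 0 and cb.std() > 0:
            corrs.append(np.corrcoef(ca, cb)[0, 1])
    return float(np.mean(corrs)) if corrs else 0.0

# Tiny synthetic check: a train compared with a jittered copy of itself should
# score higher than against an unrelated train.
rng = np.random.default_rng(0)
base = [np.sort(rng.uniform(0, 30.0, 200)) for _ in range(5)]
jittered = [t + rng.normal(0, 0.002, t.size) for t in base]
unrelated = [np.sort(rng.uniform(0, 30.0, 200)) for _ in range(5)]
print(pattern_similarity(base, jittered, 30.0), pattern_similarity(base, unrelated, 30.0))
```
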
06:36
How important is this? What's the potential impact on a patient's ability to see? So I'm just going to show you one bottom-line experiment that answers this, and of course I've got a lot of other data, so if you're interested I'm happy to show more. So the experiment is called a reconstruction experiment. So what we did is we took a moment in time from these recordings and asked, what was the retina seeing at that moment? Can we reconstruct what the retina was seeing from the responses, from the firing patterns? So we did this for responses from the standard method and from our encoder and transducer.

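A reconstruction experiment of this general kind can be sketched as a decoding problem: learn a mapping from population firing patterns back to pixel intensities, then apply it to the responses recorded at the chosen moment. The ridge-regression decoder below is one generic, assumed choice (the shapes and the `ridge` parameter are illustrative); the reconstruction method actually used for the images in the talk may well differ.

```python
import numpy as np

def fit_decoder(spike_counts, pixels, ridge=1.0):
    """Learn linear weights mapping spike counts (n_samples, n_cells) to pixels (n_samples, n_pixels)."""
    X = np.hstack([spike_counts, np.ones((len(spike_counts), 1))])      # add a bias column
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ pixels)

def reconstruct(spike_counts_at_moment, W):
    """Reconstruct pixel values from one vector of spike counts."""
    x = np.append(spike_counts_at_moment, 1.0)
    return x @ W

# Tiny synthetic demo (random data, just to show the shapes involved):
rng = np.random.default_rng(0)
counts_train = rng.poisson(2.0, size=(500, 20))   # 500 moments, 20 cells
pixels_train = rng.random(size=(500, 64))         # 500 moments, 8x8 images flattened
W = fit_decoder(counts_train, pixels_train)
print(reconstruct(counts_train[0], W).shape)      # -> (64,)
```

A sharper reconstruction from encoder-driven responses than from standard-prosthetic responses would mirror the comparison described next.
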
07:16
So let me show you, and I'm going to start with the standard method first. So you can see that it's pretty limited, and because the firing patterns aren't in the right code, they're very limited in what they can tell you about what's out there. So you can see that there's something there, but it's not so clear what that something is, and this just sort of circles back to what I was saying in the beginning, that with the standard method, patients can see high-contrast edges, they can see light, but it doesn't easily go further than that. So what was the image? It was a baby's face. So what about with our approach, adding the code? And you can see that it's much better. Not only can you tell that it's a baby's face, but you can tell that it's this baby's face, which is a really challenging task. So on the left is the encoder alone, and on the right is from an actual blind retina, so the encoder and the transducer. But the key one really is the encoder alone, because we can team up the encoder with different transducers. This is just actually the first one that we tried.

08:13
I just wanted to say something about the standard method. When this first came out, it was just a really exciting thing, the idea that you could even make a blind retina respond at all. But there was this limiting factor, the issue of the code, and how to make the cells respond better, produce normal responses, and so this was our contribution.

08:33
Now I just want to wrap up, and as I was mentioning earlier, of course I have a lot of other data if you're interested, but I just wanted to give this sort of basic idea of being able to communicate with the brain in its language, and the potential power of being able to do that. So it's different from the motor prosthetics, where you're communicating from the brain to a device. Here we have to communicate from the outside world into the brain and be understood, and be understood by the brain.

09:03
And then the last thing I wanted to say, really, is to emphasize that the idea generalizes. So the same strategy that we used to find the code for the retina we can also use to find the code for other areas, for example, the auditory system and the motor system, so for treating deafness and for motor disorders. So just the same way that we were able to jump over the damaged circuitry in the retina to get to the retina's output cells, we can jump over the damaged circuitry in the cochlea to get to the auditory nerve, or jump over damaged areas in the cortex, in the motor cortex, to bridge the gap produced by a stroke.

09:40
I just want to end with a simple message, that understanding the code is really, really important, and if we can understand the code, the language of the brain, things become possible that didn't seem obviously possible before. Thank you.

(Applause)
