TEDMED 2012

Miguel Nicolelis: A monkey that controls a robot with its thoughts. No, really.

April 11, 2012

Can we use our brains to directly control machines? Miguel Nicolelis suggests yes, showing how a clever monkey in the US learned to control a robot arm in Japan purely with its thoughts. The research has big implications for quadriplegic people -- and in fact, it powered the exoskeleton that kicked off the 2014 World Cup.

Miguel Nicolelis - Neuroscientist
Miguel Nicolelis explores the limits of the brain-machine interface.

The kind of neuroscience that I do and my colleagues do
00:15
is almost like the weatherman.
00:18
We are always chasing storms.
00:20
We want to see and measure storms -- brainstorms, that is.
00:24
And we all talk about brainstorms in our daily lives,
00:29
but we rarely see or listen to one.
00:31
So I always like to start these talks
00:35
by actually introducing you to one of them.
00:36
Actually, the first time we recorded more than one neuron --
00:39
a hundred brain cells simultaneously --
00:43
we could measure the electrical sparks
00:45
of a hundred cells in the same animal,
00:48
this is the first image we got,
00:50
the first 10 seconds of this recording.
00:52
So we got a little snippet of a thought,
00:54
and we could see it in front of us.
00:58
I always tell the students
01:01
that we could also call neuroscientists a kind of astronomer,
01:02
because we are dealing with a system
01:06
that is only comparable in terms of number of cells
01:07
to the number of galaxies that we have in the universe.
01:10
And here we are, out of billions of neurons,
01:13
just recording, 10 years ago, a hundred.
01:16
We are doing a thousand now.
01:19
And we hope to understand something fundamental about our human nature.
01:21
Because, if you don't know yet,
01:26
everything that we use to define what human nature is comes from these storms,
01:28
comes from these storms that roll over the hills and valleys of our brains
01:33
and define our memories, our beliefs,
01:38
our feelings, our plans for the future.
01:42
Everything that we ever do,
01:44
everything that every human has ever done, does or will do,
01:47
requires the toil of populations of neurons producing these kinds of storms.
01:52
And the sound of a brainstorm, if you've never heard one,
01:57
is somewhat like this.
02:00
You can turn it up if you like.
02:03
My son calls this "making popcorn while listening to a badly-tuned A.M. station."
02:06
This is a brain.
02:13
This is what happens when you route these electrical storms to a loudspeaker
02:14
and you listen to a hundred brain cells firing,
02:18
your brain will sound like this -- my brain, any brain.
02:20
And what we want to do as neuroscientists in this time
02:25
is to actually listen to these symphonies, these brain symphonies,
02:29
and try to extract from them the messages they carry.
02:34
In particular, about 12 years ago
02:38
we created a preparation that we named brain-machine interfaces.
02:40
And you have a scheme here that describes how it works.
02:44
The idea is, let's have some sensors that listen to these storms, this electrical firing,
02:46
and see if, in the same amount of time that it takes
02:52
for this storm to leave the brain and reach the legs or the arms of an animal --
02:55
about half a second --
03:00
let's see if we can read these signals,
03:03
extract the motor messages that are embedded in them,
03:05
translate them into digital commands
03:08
and send it to an artificial device
03:11
that will reproduce the voluntary motor will of that brain in real time.
03:13
And see if we can measure how well we can translate that message
03:19
when we compare it to the way the body does it.
03:22
And if we can actually provide feedback,
03:26
sensory signals that go back from this robotic, mechanical, computational actuator
03:29
that is now under the control of the brain,
03:34
back to the brain,
03:37
we can see how the brain deals with
03:38
receiving messages from an artificial piece of machinery.
03:40
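The loop just described -- sample the storm, extract the motor messages, translate them into digital commands for an artificial device -- can be sketched in a few lines. This is an illustrative sketch only: the linear decoder, the bin size, and every name here are assumptions for illustration, not the lab's actual pipeline.

```python
import numpy as np

# Illustrative brain-machine-interface decoding step: map the firing of
# a recorded neural population to a motor command for an artificial
# device. A linear readout is one common choice in early BMI work.

N_NEURONS = 100            # cells recorded simultaneously, as in the talk
rng = np.random.default_rng(0)

# Decoder weights; in a real system these would be fit by regression
# against observed arm movements during a training phase.
W = rng.normal(size=(2, N_NEURONS))
b = np.zeros(2)

def decode_velocity(firing_rates: np.ndarray) -> np.ndarray:
    """Translate one bin of population activity into a 2-D velocity command."""
    return W @ firing_rates + b

# One pass of the closed loop: sample rates, decode, send to the actuator.
rates = rng.poisson(lam=5.0, size=N_NEURONS).astype(float)  # stand-in for real spikes
command = decode_velocity(rates)    # this vector would drive the robot arm
```

The point of the sketch is the timing budget: each pass is a single matrix multiply, so the decode step fits easily inside the roughly half-second window the body itself needs.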
And that's exactly what we did 10 years ago.
03:45
We started with a superstar monkey called Aurora
03:47
that became one of the superstars of this field.
03:50
And Aurora liked to play video games.
03:53
As you can see here,
03:55
she likes to use a joystick, like any one of us, any of our kids, to play this game.
03:56
And as a good primate, she even tries to cheat before she gets the right answer.
04:01
So even before a target appears that she's supposed to cross
04:06
with the cursor that she's controlling with this joystick,
04:10
Aurora is trying to find the target, no matter where it is.
04:13
And she's doing that
04:17
because every time she crosses that target with the little cursor,
04:19
she gets a drop of Brazilian orange juice.
04:22
And I can tell you, any monkey will do anything for you
04:25
if you give it a little drop of Brazilian orange juice.
04:28
Actually any primate will do that.
04:31
Think about that.
04:34
Well, while Aurora was playing this game, as you saw,
04:35
and doing a thousand trials a day
04:38
and getting 97 percent correct and 350 milliliters of orange juice,
04:41
we were recording the brainstorms that were being produced in her head
04:45
and sending them to a robotic arm
04:48
that was learning to reproduce the movements that Aurora was making.
04:50
Because the idea was to actually turn on this brain-machine interface
04:54
and have Aurora play the game just by thinking,
04:57
without interference of her body.
05:02
Her brainstorms would control an arm
05:05
that would move the cursor and cross the target.
05:08
And to our shock, that's exactly what Aurora did.
05:10
She played the game without moving her body.
05:14
So every trajectory that you see of the cursor now,
05:18
this is the exact first moment she got that.
05:20
That's the exact first moment
05:23
a brain intention was liberated from the physical domains of a body of a primate
05:25
and could act outside, in that outside world,
05:32
just by controlling an artificial device.
05:35
And Aurora kept playing the game, kept finding the little target
05:38
and getting the orange juice that she wanted, that she craved.
05:43
Well, she did that because she, at that time, had acquired a new arm.
05:47
The robotic arm that you see moving here, 30 days
05:57
after the first video that I showed you,
05:57
is under the control of Aurora's brain
06:00
and is moving the cursor to get to the target.
06:02
And Aurora now knows that she can play the game with this robotic arm,
06:05
but she has not lost the ability to use her biological arms to do what she pleases.
06:09
She can scratch her back, she can scratch one of us, she can play another game.
06:15
For all intents and purposes,
06:19
Aurora's brain has incorporated that artificial device
06:21
as an extension of her body.
06:25
The model of the self that Aurora had in her mind
06:28
has been expanded to include one more arm.
06:31
Well, we did that 10 years ago.
06:35
Just fast forward 10 years.
06:38
Just last year we realized that you don't even need to have a robotic device.
06:40
You can just build a computational body, an avatar, a monkey avatar.
06:45
And you can actually have our monkeys either interact with it,
06:51
or you can train them to assume in a virtual world
06:55
the first-person perspective of that avatar
07:00
and use their brain activity to control the movements of the avatar's arms or legs.
07:03
And what we did basically was to train the animals
07:08
to learn how to control these avatars
07:11
and explore objects that appear in the virtual world.
07:14
And these objects are visually identical,
07:18
but when the avatar crosses the surface of these objects,
07:20
they send an electrical message that is proportional to the microtactile texture of the object
07:24
that goes back directly to the monkey's brain,
07:31
informing the brain of what the avatar is touching.
07:35
And in just four weeks, the brain learns to process this new sensation
07:40
and acquires a new sensory pathway -- like a new sense.
07:44
And you truly liberate the brain now
07:51
because you are allowing the brain to send motor commands to move this avatar.
07:53
And the feedback that comes from the avatar is being processed directly by the brain
07:58
without the interference of the skin.
08:03
So what you see here is the design of the task.
08:05
You're going to see an animal basically touching these three targets.
08:08
And he has to select one because only one carries the reward,
08:12
the orange juice that he wants to get.
08:16
And he has to select it by touch using a virtual arm, an arm that doesn't exist.
08:18
And that's exactly what they do.
08:24
This is a complete liberation of the brain
08:26
from the physical constraints of the body in a motor and perceptual task.
08:29
The animal is controlling the avatar to touch the targets.
08:33
And he's sensing the texture by receiving an electrical message directly in the brain.
08:38
And the brain is deciding what is the texture associated with the reward.
08:43
The captions that you see in the movie don't appear for the monkey.
08:47
And by the way, they don't read English anyway,
08:51
so they are here just for you to know that the correct target is shifting position.
08:53
And yet, they can find them by tactile discrimination,
08:59
and they can press it and select it.
09:03
So when we look at the brains of these animals,
09:06
on the top panel you see the alignment of 125 cells
09:08
showing what happens with the brain activity, the electrical storms,
09:12
of this sample of neurons in the brain
09:16
when the animal is using a joystick.
09:18
And that's a picture that every neurophysiologist knows.
09:21
The basic alignment shows that these cells are coding for all possible directions.
09:23
The bottom picture is what happens when the body stops moving
09:28
and the animal starts controlling either a robotic device or a computational avatar.
09:34
As fast as we can reset our computers,
09:40
the brain activity shifts to start representing this new tool,
09:43
as if this too was a part of that primate's body.
09:49
The brain is assimilating that too, as fast as we can measure.
09:54
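The "coding for all possible directions" picture is classically modeled with cosine tuning and a population vector: each cell fires most for one preferred direction, and summing the preferred directions weighted by firing rate recovers the intended movement. A minimal sketch, with evenly spaced preferred directions invented for illustration:

```python
import numpy as np

# Population-vector sketch over 125 cells (the number in the panel
# described above). Tuning parameters are invented for illustration.

n_cells = 125
preferred = np.linspace(0.0, 2.0 * np.pi, n_cells, endpoint=False)

def population_vector(movement_angle: float) -> float:
    # Cosine tuning: a cell's rate peaks when the movement matches
    # its preferred direction.
    rates = 1.0 + np.cos(movement_angle - preferred)
    # Sum each cell's preferred direction weighted by its firing rate.
    x = np.sum(rates * np.cos(preferred))
    y = np.sum(rates * np.sin(preferred))
    return float(np.arctan2(y, x))

estimate = population_vector(np.pi / 4)   # recovers the movement angle
```

The same readout works whether the rates come from a moving arm, a robotic device, or an avatar -- which is why the shift in the bottom panel can happen without changing the decoding scheme.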
So that suggests to us that our sense of self
09:59
does not end at the last layer of the epithelium of our bodies,
10:03
but it ends at the last layer of electrons of the tools that we're commanding with our brains.
10:07
Our violins, our cars, our bicycles, our soccer balls, our clothing --
10:12
they all become assimilated by this voracious, amazing, dynamic system called the brain.
10:17
How far can we take it?
10:24
Well, in an experiment that we ran a few years ago, we took this to the limit.
10:26
We had an animal running on a treadmill
10:30
at Duke University on the East Coast of the United States,
10:32
producing the brainstorms necessary to move.
10:35
And we had a robotic device, a humanoid robot,
10:37
in Kyoto, Japan at ATR Laboratories
10:42
that had been dreaming its entire life of being controlled by a brain,
10:44
a human brain, or a primate brain.
10:50
What happens here is that the brain activity that generated the movements in the monkey
10:53
was transmitted to Japan and made this robot walk
10:58
while footage of this walking was sent back to Duke,
11:01
so that the monkey could see the legs of this robot walking in front of her.
11:05
So she could be rewarded, not by what her body was doing
11:11
but for every correct step of the robot on the other side of the planet
11:15
controlled by her brain activity.
11:20
Funny thing, that round trip around the globe took 20 milliseconds less
11:22
than it takes for that brainstorm to leave its head, the head of the monkey,
11:29
and reach its own muscle.
11:34
The monkey was moving a robot that was six times bigger, across the planet.
11:37
This is one of the experiments in which that robot was able to walk autonomously.
11:43
This is CB1 fulfilling its dream in Japan
11:50
under the control of the brain activity of a primate.
11:55
So where are we taking all this?
11:59
What are we going to do with all this research,
12:01
besides studying the properties of this dynamic universe that we have between our ears?
12:03
Well the idea is to take all this knowledge and technology
12:09
and try to address one of the most severe neurological problems that we have in the world.
12:14
Millions of people have lost the ability to translate these brainstorms
12:19
into action, into movement.
12:24
Although their brains continue to produce those storms and code for movements,
12:26
they cannot cross a barrier that was created by a lesion on the spinal cord.
12:31
So our idea is to create a bypass:
12:39
to use these brain-machine interfaces to read these signals,
12:39
larger-scale brainstorms that contain the desire to move again,
12:43
bypass the lesion using computational microengineering
12:47
and send it to a new body, a whole body called an exoskeleton,
12:51
a whole robotic suit that will become the new body of these patients.
12:58
And you can see an image produced by this consortium.
13:03
This is a nonprofit consortium called the Walk Again Project
13:08
that is putting together scientists from Europe,
13:12
from here in the United States, and in Brazil
13:14
to work to actually get this new body built --
13:16
a body that, we believe, will be controlled through the same plastic mechanisms
13:21
that allowed Aurora and other monkeys to use these tools through a brain-machine interface,
13:24
and that allow us to incorporate the tools that we produce and use in our daily lives.
13:30
This same mechanism, we hope, will allow these patients,
13:36
not only to imagine again the movements that they want to make
13:39
and translate them into movements of this new body,
13:43
but for this body to be assimilated as the new body that the brain controls.
13:46
So I was told about 10 years ago
13:53
that this would never happen, that this was close to impossible.
13:57
And I can only tell you that as a scientist,
14:02
I grew up in southern Brazil in the mid-'60s
14:04
watching a few crazy guys telling [us] that they would go to the Moon.
14:07
And I was five years old,
14:12
and I never understood why NASA didn't hire Captain Kirk and Spock to do the job;
14:14
after all, they were very proficient --
14:18
but just seeing that as a kid
14:20
made me believe, as my grandmother used to tell me,
14:24
that "impossible is just the possible
14:27
that someone has not put in enough effort to make it come true."
14:29
So they told me that it's impossible to make someone walk.
14:33
I think I'm going to follow my grandmother's advice.
14:36
Thank you.
14:40
(Applause)
14:41
Translator: Timothy Covell
Reviewer: Morton Bast


Miguel Nicolelis - Neuroscientist
Miguel Nicolelis explores the limits of the brain-machine interface.

Why you should listen

At the Nicolelis Laboratory at Duke University, Miguel Nicolelis is best known for pioneering studies in neuronal population coding, brain-machine interfaces (BMI) and neuroprosthetics in human patients and non-human primates. His lab's work was seen, famously though a bit too briefly, when a brain-controlled exoskeleton from his lab helped Juliano Pinto, a paraplegic man, kick the first ball at the 2014 World Cup.

But his lab is thinking even bigger. They've developed an integrative approach to studying neurological disorders, including Parkinson's disease and epilepsy. The approach, they hope, will allow the integration of molecular, cellular, systems and behavioral data in the same animal, producing a more complete understanding of the neurophysiological alterations associated with these disorders. He's the author of the books Beyond Boundaries and The Relativistic Brain.

Miguel was honored as one of Foreign Policy's 2015 Global Thinkers.



Data provided by TED.

This website is owned and operated by Tokyo English Network.