TEDGlobal 2010

Tan Le: A headset that reads your brainwaves

Tan Le's astonishing new computer interface reads its user's brainwaves, making it possible to control virtual objects, and even physical electronics, with mere thoughts (and a little concentration). She demos the headset and talks about its far-reaching applications.

Tan Le - Entrepreneur
Tan Le is the founder & CEO of Emotiv, a bioinformatics company that's working on identifying biomarkers for mental and other neurological conditions using electroencephalography (EEG).

Up until now, our communication with machines has always been limited to conscious and direct forms. Whether it's something simple like turning on the lights with a switch, or even as complex as programming robotics, we have always had to give a command to a machine, or even a series of commands, in order for it to do something for us.

Communication between people, on the other hand, is far more complex and a lot more interesting, because we take into account so much more than what is explicitly expressed. We observe facial expressions, body language, and we can intuit feelings and emotions from our dialogue with one another. This actually forms a large part of our decision-making process.

Our vision is to introduce this whole new realm of human interaction into human-computer interaction, so that computers can understand not only what you direct them to do, but also respond to your facial expressions and emotional experiences. And what better way to do this than by interpreting the signals naturally produced by our brain, our center for control and experience?

Well, it sounds like a pretty good idea, but this task, as Bruno mentioned, isn't an easy one, for two main reasons. First, the detection algorithms. Our brain is made up of billions of active neurons, around 170,000 km of combined axon length. When these neurons interact, the chemical reaction emits an electrical impulse, which can be measured.

The majority of our functional brain is distributed over the outer surface layer of the brain, and to increase the area that's available for mental capacity, the brain surface is highly folded. Now this cortical folding presents a significant challenge for interpreting surface electrical impulses. Each individual's cortex is folded differently, very much like a fingerprint. So even though a signal may come from the same functional part of the brain, by the time the structure has been folded, its physical location is very different between individuals, even identical twins. There is no longer any consistency in the surface signals.

Our breakthrough was to create an algorithm that unfolds the cortex, so that we can map the signals closer to their source, and therefore make it capable of working across a mass population.

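The talk doesn't spell out the unfolding algorithm, and Emotiv's version is proprietary. A standard building block for the same goal -- mapping scalp signals back toward their cortical sources -- is a regularized linear inverse, often called a minimum-norm estimate. Here is a minimal Python sketch, assuming a precomputed forward "lead field" matrix relating sources to electrodes (the shapes and the regularization weight are illustrative assumptions):

```python
import numpy as np

def minimum_norm_estimate(eeg, leadfield, reg=0.1):
    """Map scalp EEG back toward cortical source activity.

    eeg       : (n_channels, n_samples) surface potentials
    leadfield : (n_channels, n_sources) forward model mapping
                source activity to scalp measurements (assumed known)
    reg       : Tikhonov regularization weight (illustrative value)
    """
    n_channels = leadfield.shape[0]
    gram = leadfield @ leadfield.T                 # (n_channels, n_channels)
    # Regularize relative to average channel power for numerical stability.
    gram += reg * (np.trace(gram) / n_channels) * np.eye(n_channels)
    # Inverse operator: L^T (L L^T + lambda*I)^{-1}
    inverse_op = leadfield.T @ np.linalg.inv(gram)
    return inverse_op @ eeg                        # (n_sources, n_samples)
```

The point of moving to source space is that signals become comparable across individually folded cortices, which is what working "across a mass population" requires.
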
The second challenge is the actual device for observing brainwaves. EEG measurements typically involve a hairnet with an array of sensors, like the one that you can see here in the photo. A technician will put the electrodes onto the scalp using a conductive gel or paste, usually after preparing the scalp by light abrasion. Now this is quite time consuming, and it isn't the most comfortable process. And on top of that, these systems actually cost in the tens of thousands of dollars.

So with that, I'd like to invite onstage Evan Grant, who is one of last year's speakers, who's kindly agreed to help me to demonstrate what we've been able to develop.

(Applause)

So the device that you see is a 14-channel, high-fidelity EEG acquisition system. It doesn't require any scalp preparation, no conductive gel or paste. It only takes a few minutes to put on and for the signals to settle. It's also wireless, so it gives you the freedom to move around. And compared to the tens of thousands of dollars for a traditional EEG system, this headset only costs a few hundred dollars.

Now on to the detection algorithms. So facial expressions, as with the emotional experiences I mentioned before, are actually designed to work out of the box, with some sensitivity adjustments available for personalization. But with the limited time we have available, I'd like to show you the cognitive suite, which is the ability for you to basically move virtual objects with your mind.

Now, Evan is new to this system, so what we have to do first is create a new profile for him. He's obviously not Joanne -- so we'll "add user." Evan. Okay.

So the first thing we need to do with the cognitive suite is to start with training a neutral signal. With neutral, there's nothing in particular that Evan needs to do. He just hangs out. He's relaxed. And the idea is to establish a baseline or normal state for his brain, because every brain is different. It takes eight seconds to do this, and now that that's done, we can choose a movement-based action.

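The software's internals aren't shown in the talk, so the following is purely an illustrative sketch of the training flow visible on stage: record one eight-second window per mental state, reduce it to a per-channel power signature, and later label live windows by the nearest trained signature. All names, shapes, and the feature choice are assumptions:

```python
import numpy as np

def band_power_features(window):
    """Reduce an EEG window to average power per channel.

    window : (n_channels, n_samples) segment, e.g. 8 s of data
    """
    return np.mean(window ** 2, axis=1)

class CognitiveProfile:
    """Per-user profile: one trained signature per mental state."""

    def __init__(self):
        self.signatures = {}

    def train(self, state_name, window):
        # One eight-second example per state, as in the demo:
        # "neutral" first, then an action such as "pull".
        self.signatures[state_name] = band_power_features(window)

    def classify(self, window):
        # Nearest trained signature wins; "neutral" acts as the
        # do-nothing fallback when no action thought is present.
        feats = band_power_features(window)
        return min(self.signatures,
                   key=lambda s: np.linalg.norm(self.signatures[s] - feats))
```

Emotiv's actual detection pipeline is certainly richer than a nearest-signature comparison; the sketch only mirrors why the neutral baseline is recorded before any action.
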
So Evan, choose something that you can visualize clearly in your mind.

Evan Grant: Let's do "pull."

Tan Le: Okay, so let's choose "pull." So the idea here now is that Evan needs to imagine the object coming forward into the screen, and there's a progress bar that will scroll across the screen while he's doing that. The first time, nothing will happen, because the system has no idea how he thinks about "pull." But maintain that thought for the entire duration of the eight seconds. So: one, two, three, go. Okay.

So once we accept this, the cube is live. So let's see if Evan can actually try and imagine pulling. Ah, good job!

(Applause)

That's really amazing.

(Applause)

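Once "neutral" and at least one action are trained, the live behavior seen here -- the cube moving only while Evan holds the thought -- could be approximated by a polling loop. A minimal sketch; the read_window and move_cube callables are placeholders, not a real SDK:

```python
import time

def control_loop(profile, read_window, move_cube, poll_hz=4):
    """Drive the on-screen cube from live EEG classifications.

    profile     : trained CognitiveProfile (see sketch above)
    read_window : callable returning the latest EEG window (placeholder)
    move_cube   : callable applying an action such as "pull" (placeholder)
    """
    while True:
        state = profile.classify(read_window())
        if state != "neutral":
            # Apply the detected action, e.g. pull the cube inward.
            move_cube(state)
        # On "neutral" the cube is simply left alone.
        time.sleep(1.0 / poll_hz)
```
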
So we have a little bit of time available, so I'm going to ask Evan to do a really difficult task. And this one is difficult because it's all about being able to visualize something that doesn't exist in our physical world. This is "disappear." With movement-based actions, we do them all the time, so you can visualize them; but with "disappear," there's really no analogy. So Evan, what you want to do here is to imagine the cube slowly fading out, okay? Same sort of drill. So: one, two, three, go. Okay. Let's try that.

Oh, my goodness. He's just too good. Let's try that again.

EG: Losing concentration.

(Laughter)

TL: But we can see that it actually works, even though you can only hold it for a little bit of time. As I said, it's a very difficult process to imagine this. And the great thing about it is that we've only given the software one instance of how he thinks about "disappear." As there is a machine learning algorithm in this --

(Applause)

Thank you. Good job. Good job.

(Applause)

Thank you, Evan, you're a wonderful, wonderful example of the technology.

So, as you saw before, there is a leveling system built into this software, so that as Evan, or any user, becomes more familiar with the system, they can continue to add more and more detections, so that the system begins to differentiate between distinct thoughts. And once you've trained up the detections, these thoughts can be assigned or mapped to any computing platform, application or device.

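One natural realization of "assigned or mapped to any computing platform, application or device" is a dispatch table from detection names to arbitrary callbacks. The bindings below are invented examples, not part of any shipped SDK:

```python
# Hypothetical bindings from trained detections to platform actions.
bindings = {
    "pull":      lambda: print("drag the selected object toward the viewer"),
    "disappear": lambda: print("fade the selected object out"),
    "lift":      lambda: print("raise helicopter throttle"),
}

def on_detection(state):
    """Fire whatever action the current application bound to a thought."""
    action = bindings.get(state)
    if action is not None:
        action()
```
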
So I'd like to show you a few examples, because there are many possible applications for this new interface. In games and virtual worlds, for example, your facial expressions can naturally and intuitively be used to control an avatar or virtual character. Obviously, you can experience the fantasy of magic and control the world with your mind. And also, colors, lighting, sound and effects can dynamically respond to your emotional state to heighten the experience that you're having, in real time.

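As an illustration of that claim only: if the headset exposed a continuous excitement score, a game could blend its presentation against it each frame. The score and the parameter names here are assumptions, not Emotiv's actual outputs:

```python
def adapt_scene(excitement):
    """Blend scene parameters against a 0..1 excitement score (assumed input)."""
    excitement = max(0.0, min(1.0, excitement))
    return {
        "light_warmth": 0.3 + 0.7 * excitement,       # warmer light when engaged
        "music_tempo_bpm": 80 + int(60 * excitement),  # faster score when excited
        "particle_density": excitement,                # heavier effects at high arousal
    }
```
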
And moving on to some applications developed by developers and researchers around the world, with robots and simple machines, for example -- in this case, flying a toy helicopter simply by thinking "lift" with your mind. The technology can also be applied to real-world applications -- in this example, a smart home: from the user interface of the control system, to opening or closing the curtains, and of course to the lighting -- turning the lights on or off.

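For a continuous device like the toy helicopter, a detection is more useful as a strength than as an on/off event. A hedged sketch, assuming a 0..1 "lift" strength from the detection layer and a normalized throttle on the receiving end:

```python
def lift_to_throttle(strength, deadzone=0.2):
    """Convert a 0..1 'lift' detection strength into a throttle command.

    Below the deadzone the throttle stays at zero, so a briefly
    wandering mind doesn't twitch the rotors.
    """
    if strength < deadzone:
        return 0.0
    return (strength - deadzone) / (1.0 - deadzone)
```
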
And finally, to real life-changing applications, such as being able to control an electric wheelchair. In this example, facial expressions are mapped to the movement commands.

Man: Now blink right to go right. Now blink left to turn back left. Now smile to go straight.

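The narration above gives the complete expression-to-motion table for the wheelchair. A minimal sketch of that mapping, with an explicit stop as the safe default (the expression labels and motor interface are assumed):

```python
# Expression-to-motion table, as narrated in the demo.
EXPRESSION_COMMANDS = {
    "blink_right": "turn_right",
    "blink_left":  "turn_left",
    "smile":       "forward",
}

def drive(expression, send_motor_command):
    """Translate a detected facial expression into a wheelchair command.

    Anything unrecognized stops the chair, so the failure mode is
    standing still rather than moving blindly.
    """
    send_motor_command(EXPRESSION_COMMANDS.get(expression, "stop"))
```
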
TL: We really -- Thank you.

(Applause)

We are really only scratching the surface of what is possible today, and with the community's input, and also with the involvement of developers and researchers from around the world, we hope that you can help us to shape where the technology goes from here. Thank you so much.

About the Speaker:

Tan Le - Entrepreneur

Why you should listen

Tan Le is the co-founder and president of Emotiv. Before this, she headed a firm that worked on a new form of remote control that uses brainwaves to control digital devices and digital media. It's long been a dream to bypass the mechanical (mouse, keyboard, clicker) and have our digital devices respond directly to what we think. Emotiv's EPOC headset uses 16 sensors to listen to activity across the entire brain. Software "learns" what each user's brain activity looks like when one, for instance, imagines a left turn or a jump.

Le herself has an extraordinary story -- a refugee from Vietnam at age 4, she entered college at 16 and has since become a vital young leader in her adopted country, Australia.
