TEDxBeaconStreet

Alex Wissner-Gross: A new equation for intelligence

Is there an equation for intelligence? Yes. It’s F = T ∇ S_τ. In a fascinating and informative talk, physicist and computer scientist Alex Wissner-Gross explains what in the world that means. (Filmed at TEDxBeaconStreet.)

Intelligence -- what is it?
00:12
If we take a look back at the history
00:16
of how intelligence has been viewed,
00:18
one seminal example has been
00:21
Edsger Dijkstra's famous quote that
00:25
"the question of whether a machine can think
00:28
is about as interesting
00:31
as the question of whether a submarine
00:32
can swim."
00:35
Now, Edsger Dijkstra, when he wrote this,
00:37
intended it as a criticism
00:41
of the early pioneers of computer science,
00:43
like Alan Turing.
00:46
However, if you take a look back
00:48
and think about what have been
00:50
the most empowering innovations
00:52
that enabled us to build
00:54
artificial machines that swim
00:56
and artificial machines that [fly],
00:58
you find that it was only through understanding
01:01
the underlying physical mechanisms
01:05
of swimming and flight
01:07
that we were able to build these machines.
01:10
And so, several years ago,
01:13
I undertook a program to try to understand
01:15
the fundamental physical mechanisms
01:19
underlying intelligence.
01:21
Let's take a step back.
01:24
Let's first begin with a thought experiment.
01:26
Pretend that you're an alien race
01:29
that doesn't know anything about Earth biology
01:32
or Earth neuroscience or Earth intelligence,
01:35
but you have amazing telescopes
01:38
and you're able to watch the Earth,
01:40
and you have amazingly long lives,
01:43
so you're able to watch the Earth
01:45
over millions, even billions of years.
01:46
And you observe a really strange effect.
01:50
You observe that, over the course of the millennia,
01:53
Earth is continually bombarded with asteroids
01:57
up until a point,
02:02
and that at some point,
02:04
corresponding roughly to our year 2000 AD,
02:05
asteroids that are on
02:09
a collision course with the Earth
02:11
that otherwise would have collided
02:13
mysteriously get deflected
02:15
or they detonate before they can hit the Earth.
02:17
Now of course, as earthlings,
02:20
we know the reason would be
02:23
that we're trying to save ourselves.
02:24
We're trying to prevent an impact.
02:26
But if you're an alien race
02:29
who doesn't know any of this,
02:31
doesn't have any concept of Earth intelligence,
02:32
you'd be forced to put together
02:34
a physical theory that explains how,
02:36
up until a certain point in time,
02:39
asteroids that would demolish the surface of a planet
02:41
mysteriously stop doing that.
02:46
And so I claim that this is the same question
02:49
as understanding the physical nature of intelligence.
02:53
So in this program that I
undertook several years ago,
02:57
I looked at a variety of different threads
03:01
across science, across a variety of disciplines,
03:04
that were pointing, I think,
03:07
towards a single, underlying mechanism
03:09
for intelligence.
03:12
In cosmology, for example,
03:13
there have been a variety of
different threads of evidence
03:16
that our universe appears to be finely tuned
03:18
for the development of intelligence,
03:22
and, in particular, for the development
03:24
of universal states
03:26
that maximize the diversity of possible futures.
03:28
In game play, for example, in Go --
03:32
everyone remembers in 1997
03:35
when IBM's Deep Blue beat
Garry Kasparov at chess --
03:38
fewer people are aware
03:42
that in the past 10 years or so,
03:43
the game of Go,
03:45
arguably a much more challenging game
03:46
because it has a much higher branching factor,
03:48
has also started to succumb
03:51
to computer game players
03:53
for the same reason:
03:54
the best techniques right now
for computers playing Go
03:56
are techniques that try to maximize future options
03:59
during game play.
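
To make "maximize future options" concrete, here is a minimal Python sketch of that idea for any turn-based game. It is a toy proxy, not the Monte Carlo tree search methods actual Go programs use, and the legal_moves and apply_move helpers are hypothetical stand-ins for a real game implementation.

```python
# Toy sketch of "maximize future options" in game play.
# legal_moves(state) and apply_move(state, move) are hypothetical
# helpers that a concrete game implementation would supply.

def count_futures(state, depth, legal_moves, apply_move):
    """Count positions reachable within `depth` further moves."""
    if depth == 0:
        return 1
    moves = legal_moves(state)
    if not moves:
        return 1  # a dead end counts as a single (trapped) future
    return sum(count_futures(apply_move(state, m), depth - 1,
                             legal_moves, apply_move)
               for m in moves)

def choose_move(state, legal_moves, apply_move, horizon=2):
    """Pick the move that leaves the most reachable future positions."""
    return max(legal_moves(state),
               key=lambda m: count_futures(apply_move(state, m),
                                           horizon - 1,
                                           legal_moves, apply_move))
```
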
04:02
Finally, in robotic motion planning,
04:04
there have been a variety of recent techniques
04:08
that have tried to take advantage
04:10
of abilities of robots to maximize
04:12
future freedom of action
04:15
in order to accomplish complex tasks.
04:17
And so, taking all of these different threads
04:20
and putting them together,
04:22
I asked, starting several years ago,
04:24
is there an underlying mechanism for intelligence
04:27
that we can factor out
04:29
of all of these different threads?
04:31
Is there a single equation for intelligence?
04:33
And the answer, I believe, is yes.
["F = T ∇ Sτ"]
04:37
What you're seeing is probably
04:41
the closest equivalent to an E = mc²
04:43
for intelligence that I've seen.
04:46
So what you're seeing here
04:49
is a statement of correspondence
04:51
that intelligence is a force, F,
04:53
that acts so as to maximize future freedom of action.
04:58
It acts to maximize future freedom of action,
05:02
or keep options open,
05:05
with some strength T,
05:06
with the diversity of possible accessible futures, S,
05:08
up to some future time horizon, tau.
05:13
In short, intelligence doesn't like to get trapped.
05:16
Intelligence tries to maximize
future freedom of action
05:19
and keep options open.
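
Spelled out in the notation the talk just defined, the slide's equation reads as follows; treating S as an entropy over the futures reachable within the horizon τ is an assumption of this sketch, consistent with the description above and with the speaker's "causal entropic forces" framing.

```latex
% The slide's equation, with symbols as defined in the talk:
%   F      -- the "intelligence" force acting on the present state
%   T      -- a strength (temperature-like) constant
%   S_tau  -- the diversity (entropy) of possible accessible futures
%             up to the time horizon tau
\[
  \mathbf{F} \;=\; T \, \nabla S_{\tau}
\]
% Read as: the force points in the direction (the gradient) that most
% increases the diversity of futures still reachable before tau.
```
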
05:22
And so, given this one equation,
05:25
it's natural to ask, so what can you do with this?
05:27
How predictive is it?
05:30
Does it predict human-level intelligence?
05:31
Does it predict artificial intelligence?
05:33
So I'm going to show you now a video
05:36
that will, I think, demonstrate
05:38
some of the amazing applications
05:41
of just this single equation.
05:44
(Video) Narrator: Recent research in cosmology
05:46
has suggested that universes that produce
05:48
more disorder, or "entropy," over their lifetimes
05:50
should tend to have more favorable conditions
05:54
for the existence of intelligent
beings such as ourselves.
05:56
But what if that tentative cosmological connection
05:59
between entropy and intelligence
06:02
hints at a deeper relationship?
06:04
What if intelligent behavior doesn't just correlate
06:05
with the production of long-term entropy,
06:08
but actually emerges directly from it?
06:10
To find out, we developed a software engine
06:12
called Entropica, designed to maximize
06:14
the production of long-term entropy
06:17
of any system that it finds itself in.
06:19
Amazingly, Entropica was able to pass
06:21
multiple animal intelligence
tests, play human games,
06:23
and even earn money trading stocks,
06:27
all without being instructed to do so.
06:29
Here are some examples of Entropica in action.
06:31
Just like a human standing
upright without falling over,
06:34
here we see Entropica
06:37
automatically balancing a pole using a cart.
06:38
This behavior is remarkable in part
06:41
because we never gave Entropica a goal.
06:43
It simply decided on its own to balance the pole.
06:45
This balancing ability will have applications
06:48
for humanoid robotics
06:51
and human assistive technologies.
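
The narration does not show how such a controller could work, so here is a toy Python sketch, emphatically not Entropica itself: it balances a simulated cart-pole by always choosing the push whose random rollouts leave the widest spread of surviving future states, a crude stand-in for the long-term entropy maximization described above. The dynamics constants and the variance-as-diversity proxy are illustrative assumptions.

```python
import math
import random

# Toy sketch (not Entropica): at each step, apply the push whose
# simulated futures are the most diverse. Spread of surviving rollout
# end-states stands in for "entropy of accessible futures".

GRAVITY, CART_MASS, POLE_MASS, POLE_LEN, DT = 9.8, 1.0, 0.1, 0.5, 0.02
PUSH = 10.0  # magnitude of the force applied to the cart

def step(state, force):
    """One Euler step of the classic cart-pole dynamics."""
    x, x_dot, theta, theta_dot = state
    total_mass = CART_MASS + POLE_MASS
    sin_t, cos_t = math.sin(theta), math.cos(theta)
    temp = (force + POLE_MASS * POLE_LEN * theta_dot ** 2 * sin_t) / total_mass
    theta_acc = (GRAVITY * sin_t - cos_t * temp) / (
        POLE_LEN * (4.0 / 3.0 - POLE_MASS * cos_t ** 2 / total_mass))
    x_acc = temp - POLE_MASS * POLE_LEN * theta_acc * cos_t / total_mass
    return (x + DT * x_dot, x_dot + DT * x_acc,
            theta + DT * theta_dot, theta_dot + DT * theta_acc)

def alive(state):
    """A future where the pole falls or the cart leaves the track is 'trapped'."""
    x, _, theta, _ = state
    return abs(x) < 2.4 and abs(theta) < 0.7

def future_diversity(state, action, horizon=20, rollouts=30):
    """Spread of surviving end states reached by random rollouts."""
    ends = []
    for _ in range(rollouts):
        s = step(state, action)
        for _ in range(horizon - 1):
            if not alive(s):
                break
            s = step(s, random.choice((-PUSH, PUSH)))
        if alive(s):
            ends.append(s)
    if not ends:
        return 0.0
    means = [sum(e[i] for e in ends) / len(ends) for i in range(4)]
    return sum(sum((e[i] - m) ** 2 for e in ends) / len(ends)
               for i, m in enumerate(means))

state = (0.0, 0.0, 0.05, 0.0)  # cart at rest, pole slightly tilted
for t in range(500):
    push = max((-PUSH, PUSH), key=lambda a: future_diversity(state, a))
    state = step(state, push)
    if not alive(state):
        print(f"pole fell at step {t}")
        break
else:
    print("pole still balanced after 500 steps")
```
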
06:52
Just as some animals can use objects
06:55
in their environments as tools
06:57
to reach into narrow spaces,
06:58
here we see that Entropica,
07:00
again on its own initiative,
07:02
was able to move a large
disk representing an animal
07:04
around so as to cause a small disk,
07:07
representing a tool, to reach into a confined space
07:09
holding a third disk
07:12
and release the third disk
from its initially fixed position.
07:13
This tool use ability will have applications
07:16
for smart manufacturing and agriculture.
07:18
In addition, just as some other animals
07:21
are able to cooperate by pulling
opposite ends of a rope
07:23
at the same time to release food,
07:25
here we see that Entropica is able to accomplish
07:27
a model version of that task.
07:30
This cooperative ability has interesting implications
07:32
for economic planning and a variety of other fields.
07:34
Entropica is broadly applicable
07:38
to a variety of domains.
07:40
For example, here we see it successfully
07:42
playing a game of pong against itself,
07:44
illustrating its potential for gaming.
07:47
Here we see Entropica orchestrating
07:49
new connections on a social network
07:51
where friends are constantly falling out of touch
07:53
and successfully keeping
the network well connected.
07:56
This same network orchestration ability
07:58
also has applications in health care,
08:01
energy, and intelligence.
08:03
Here we see Entropica directing the paths
08:06
of a fleet of ships,
08:08
successfully discovering and
utilizing the Panama Canal
08:10
to globally extend its reach from the Atlantic
08:13
to the Pacific.
08:15
By the same token, Entropica
08:17
is broadly applicable to problems
08:19
in autonomous defense, logistics and transportation.
08:20
Finally, here we see Entropica
08:26
spontaneously discovering and executing
08:28
a buy-low, sell-high strategy
08:30
on a simulated range-traded stock,
08:32
successfully growing assets under management
08:35
exponentially.
08:37
This risk management ability
08:38
will have broad applications in finance
08:40
and insurance.
08:42
Alex Wissner-Gross: So what you've just seen
08:46
is that a variety of signature human intelligent
08:48
cognitive behaviors
08:52
such as tool use and walking upright
08:54
and social cooperation
08:57
all follow from a single equation,
08:59
which drives a system
09:02
to maximize its future freedom of action.
09:04
Now, there's a profound irony here.
09:07
Going back to the beginning
09:10
of the usage of the term robot,
09:12
the play "RUR,"
09:16
there was always a concept
09:19
that if we developed machine intelligence,
09:21
there would be a cybernetic revolt.
09:24
The machines would rise up against us.
09:27
One major consequence of this work
09:31
is that maybe all of these decades,
09:33
we've had the whole concept of cybernetic revolt
09:36
in reverse.
09:39
It's not that machines first become intelligent
09:41
and then megalomaniacal
09:44
and try to take over the world.
09:46
It's quite the opposite,
09:48
that the urge to take control
09:50
of all possible futures
09:53
is a more fundamental principle
09:55
than that of intelligence,
09:57
that general intelligence may in fact emerge
09:58
directly from this sort of control-grabbing,
10:02
rather than vice versa.
10:06
Another important consequence is goal seeking.
10:10
I'm often asked, how does the ability to seek goals
10:14
follow from this sort of framework?
10:18
And the answer is, the ability to seek goals
10:20
will follow directly from this
10:23
in the following sense:
10:24
just like you would travel through a tunnel,
10:26
a bottleneck in your future path space,
10:29
in order to achieve many other
10:32
diverse objectives later on,
10:34
or just like you would invest
10:36
in a financial security,
10:38
reducing your short-term liquidity
10:40
in order to increase your wealth over the long term,
10:42
goal seeking emerges directly
10:44
from a long-term drive
10:47
to increase future freedom of action.
10:48
Finally, Richard Feynman, famous physicist,
10:52
once wrote that if human civilization were destroyed
10:56
and you could pass only a single concept
11:00
on to our descendants
11:02
to help them rebuild civilization,
11:03
that concept should be
11:05
that all matter around us
11:07
is made out of tiny elements
11:09
that attract each other when they're far apart
11:11
but repel each other when they're close together.
11:14
My equivalent of that statement
11:17
to pass on to descendants
11:19
to help them build artificial intelligences
11:20
or to help them understand human intelligence,
11:23
is the following:
11:26
Intelligence should be viewed
11:27
as a physical process
11:29
that tries to maximize future freedom of action
11:30
and avoid constraints in its own future.
11:33
Thank you very much.
11:37
(Applause)
11:38


About the Speaker:

Alex Wissner-Gross - Scientist, entrepreneur, inventor
Alex Wissner-Gross applies science and engineering principles to big (and diverse) questions, like: "What is the equation for intelligence?" and "What's the best way to raise awareness about climate change?"

Why you should listen

Alex Wissner-Gross is a serial big-picture thinker. He applies physics and computer science principles to a wide variety of topics, like human intelligence, climate change and financial trading.

Lately Wissner-Gross started wondering: Why have we searched for so long to understand intelligence? Can it really be this elusive? His latest work posits that intelligence can indeed be defined physically, as a dynamic force, rather than a static property. He explains intelligence in terms of causal entropic forces, ultimately defining it as "a force to maximize future freedom of action."

Wissner-Gross is a fellow at the Harvard Institute for Applied Computational Science and a research affiliate at the MIT Media Lab. He has a Ph.D. in physics from Harvard and bachelor's degrees in physics, electrical science and engineering, and mathematics from MIT.
