TEDxPortland

Maurice Conti: The incredible inventions of intuitive AI

April 9, 2016

What do you get when you give a design tool a digital nervous system? Computers that improve our ability to think and imagine, and robotic systems that come up with (and build) radical new designs for bridges, cars, drones and much more -- all by themselves. Take a tour of the Augmented Age with futurist Maurice Conti and preview a time when robots and humans will work side-by-side to accomplish things neither could do alone.

Maurice Conti - Designer, futurist
Maurice Conti explores new partnerships between technology, nature and humanity.

How many of you are creatives,
00:12
designers, engineers,
entrepreneurs, artists,
00:14
or maybe you just have
a really big imagination?
00:18
Show of hands? (Cheers)
00:21
That's most of you.
00:22
I have some news for us creatives.
00:25
Over the course of the next 20 years,
00:28
more will change around
the way we do our work
00:33
than has happened in the last 2,000.
00:37
In fact, I think we're at the dawn
of a new age in human history.
00:40
Now, there have been four major historical
eras defined by the way we work.
00:45
The Hunter-Gatherer Age
lasted several million years.
00:51
And then the Agricultural Age
lasted several thousand years.
00:55
The Industrial Age lasted
a couple of centuries.
00:59
And now the Information Age
has lasted just a few decades.
01:02
And now today, we're on the cusp
of our next great era as a species.
01:06
Welcome to the Augmented Age.
01:13
In this new era, your natural human
capabilities are going to be augmented
01:15
by computational systems
that help you think,
01:19
robotic systems that help you make,
01:22
and a digital nervous system
01:24
that connects you to the world
far beyond your natural senses.
01:26
Let's start with cognitive augmentation.
01:31
How many of you are augmented cyborgs?
01:33
(Laughter)
01:36
I would actually argue
that we're already augmented.
01:38
Imagine you're at a party,
01:42
and somebody asks you a question
that you don't know the answer to.
01:43
If you have one of these,
in a few seconds, you can know the answer.
01:47
But this is just a primitive beginning.
01:51
Even Siri is just a passive tool.
01:54
In fact, for the last
three-and-a-half million years,
01:58
the tools that we've had
have been completely passive.
02:01
They do exactly what we tell them
and nothing more.
02:06
Our very first tool only cut
where we struck it.
02:09
The chisel only carves
where the artist points it.
02:13
And even our most advanced tools
do nothing without our explicit direction.
02:17
In fact, to date, and this
is something that frustrates me,
02:22
we've always been limited
02:26
by this need to manually
push our wills into our tools --
02:27
like, manual,
literally using our hands,
02:31
even with computers.
02:33
But I'm more like Scotty in "Star Trek."
02:35
(Laughter)
02:38
I want to have a conversation
with a computer.
02:40
I want to say, "Computer,
let's design a car,"
02:42
and the computer shows me a car.
02:45
And I say, "No, more fast-looking,
and less German,"
02:47
and bang, the computer shows me an option.
02:49
(Laughter)
02:51
That conversation might be
a little ways off,
02:54
probably less than many of us think,
02:56
but right now,
02:59
we're working on it.
03:00
Tools are making this leap
from being passive to being generative.
03:02
Generative design tools
use a computer and algorithms
03:06
to synthesize geometry
03:10
to come up with new designs
all by themselves.
03:12
All they need are your goals
and your constraints.
03:15
I'll give you an example.
03:18
In the case of this aerial drone chassis,
03:20
all you would need to do
is tell it something like,
03:22
it has four propellers,
03:25
you want it to be
as lightweight as possible,
03:26
and you need it to be
aerodynamically efficient.
03:29
Then what the computer does
is it explores the entire solution space:
03:31
every single possibility that solves
the problem and meets your criteria --
03:36
millions of them.
03:40
It takes big computers to do this.
03:41
But it comes back to us with designs
03:43
that we, by ourselves,
never could've imagined.
03:45
And the computer's coming up
with this stuff all by itself --
03:49
no one ever drew anything,
03:52
and it started completely from scratch.
03:53
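The loop Conti is describing, sample the design space, discard anything that violates the constraints, keep whatever scores best, can be sketched as a toy random search. Every parameter name and range below is invented for illustration; this is not Autodesk's actual algorithm, just the simplest possible instance of the idea:

```python
import random

def sample_design():
    """Randomly sample one candidate drone chassis.
    All parameters are made-up stand-ins for real geometry."""
    return {
        "arm_length_mm": random.uniform(80, 200),
        "arm_thickness_mm": random.uniform(2, 10),
        "body_radius_mm": random.uniform(30, 80),
    }

def weight(d):
    # Toy weight model: material volume grows with the dimensions.
    return d["arm_thickness_mm"] * d["arm_length_mm"] * 4 + d["body_radius_mm"] ** 2

def meets_constraints(d):
    # Toy stand-ins for the spoken goals: arms long enough to fit
    # four propellers, body small enough to keep drag down.
    return d["arm_length_mm"] >= 100 and d["body_radius_mm"] <= 60

def generate(n_candidates=100_000, seed=0):
    """Explore the solution space; keep the lightest feasible design."""
    random.seed(seed)
    best = None
    for _ in range(n_candidates):
        d = sample_design()
        if meets_constraints(d) and (best is None or weight(d) < weight(best)):
            best = d
    return best

best = generate()
```

Real generative-design systems replace the random sampling with far smarter search (which is why, as he says, it takes big computers), but the contract is the same: goals and constraints in, unexpected geometry out.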
And by the way, it's no accident
03:56
that the drone body looks just like
the pelvis of a flying squirrel.
03:59
(Laughter)
04:03
It's because the algorithms
are designed to work
04:05
the same way evolution does.
04:08
What's exciting is we're starting
to see this technology
04:10
out in the real world.
04:13
We've been working with Airbus
for a couple of years
04:14
on this concept plane for the future.
04:16
It's a ways out still.
04:18
But just recently we used
a generative-design AI
04:20
to come up with this.
04:24
This is a 3D-printed cabin partition
that's been designed by a computer.
04:27
It's stronger than the original
yet half the weight,
04:32
and it will be flying
in the Airbus A320 later this year.
04:35
So computers can now generate;
04:39
they can come up with their own solutions
to our well-defined problems.
04:40
But they're not intuitive.
04:46
They still have to start from scratch
every single time,
04:47
and that's because they never learn.
04:51
Unlike Maggie.
04:54
(Laughter)
04:56
Maggie's actually smarter
than our most advanced design tools.
04:57
What do I mean by that?
05:01
If her owner picks up that leash,
05:02
Maggie knows with a fair
degree of certainty
05:04
it's time to go for a walk.
05:06
And how did she learn?
05:07
Well, every time the owner picked up
the leash, they went for a walk.
05:09
And Maggie did three things:
05:12
she had to pay attention,
05:14
she had to remember what happened
05:16
and she had to retain and create
a pattern in her mind.
05:18
Interestingly, that's exactly what
05:23
computer scientists
have been trying to get AIs to do
05:25
for the last 60 or so years.
05:27
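Maggie's three steps, pay attention, remember what happened, retain a pattern, map onto the simplest form of machine learning: frequency-based association. A minimal sketch, assuming nothing about any real AI system:

```python
from collections import Counter

class AssociativeLearner:
    """Toy learner: attend to a cue, remember whether the outcome
    followed, and build a predictive pattern from the counts."""
    def __init__(self):
        self.seen = Counter()      # times each cue was observed
        self.followed = Counter()  # times the outcome followed the cue

    def observe(self, cue, outcome_happened):
        self.seen[cue] += 1
        if outcome_happened:
            self.followed[cue] += 1

    def confidence(self, cue):
        """Estimated probability that the outcome follows the cue."""
        if self.seen[cue] == 0:
            return 0.0
        return self.followed[cue] / self.seen[cue]

maggie = AssociativeLearner()
for _ in range(10):
    maggie.observe("leash picked up", outcome_happened=True)
maggie.observe("leash picked up", outcome_happened=False)  # one false alarm

print(maggie.confidence("leash picked up"))  # high, but not certainty
```

Which is exactly "a fair degree of certainty": the pattern is probabilistic, strengthened by repetition, and never absolute.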
Back in 1952,
05:30
they built this computer
that could play Tic-Tac-Toe.
05:31
Big deal.
05:36
Then 45 years later, in 1997,
05:38
Deep Blue beats Kasparov at chess.
05:41
2011, Watson beats these two
humans at Jeopardy,
05:45
which is much harder for a computer
to play than chess is.
05:50
In fact, rather than working
from predefined recipes,
05:53
Watson had to use reasoning
to overcome his human opponents.
05:57
And then a couple of weeks ago,
06:02
DeepMind's AlphaGo beats
the world's best human at Go,
06:04
which is the most difficult
game that we have.
06:09
In fact, in Go, there are more
possible moves
06:11
than there are atoms in the universe.
06:14
So in order to win,
06:18
what AlphaGo had to do
was develop intuition.
06:19
And in fact, at some points,
AlphaGo's programmers didn't understand
06:22
why AlphaGo was doing what it was doing.
06:27
And things are moving really fast.
06:31
I mean, consider --
in the space of a human lifetime,
06:33
computers have gone from a child's game
06:36
to what's recognized as the pinnacle
of strategic thought.
06:39
What's basically happening
06:43
is computers are going
from being like Spock
06:46
to being a lot more like Kirk.
06:49
(Laughter)
06:51
Right? From pure logic to intuition.
06:55
Would you cross this bridge?
07:00
Most of you are saying, "Oh, hell no!"
07:02
(Laughter)
07:04
And you arrived at that decision
in a split second.
07:06
You just sort of knew
that bridge was unsafe.
07:08
And that's exactly the kind of intuition
07:11
that our deep-learning systems
are starting to develop right now.
07:13
Very soon, you'll literally be able
07:17
to show something you've made,
you've designed,
07:19
to a computer,
07:21
and it will look at it and say,
07:22
"Sorry, homie, that'll never work.
You have to try again."
07:24
Or you could ask it if people
are going to like your next song,
07:27
or your next flavor of ice cream.
07:31
Or, much more importantly,
07:35
you could work with a computer
to solve a problem
07:38
that we've never faced before.
07:40
For instance, climate change.
07:42
We're not doing a very
good job on our own,
07:43
so we could certainly use
all the help we can get.
07:45
That's what I'm talking about,
07:47
technology amplifying
our cognitive abilities
07:49
so we can imagine and design things
that were simply out of our reach
07:51
as plain old un-augmented humans.
07:55
So what about making
all of this crazy new stuff
07:59
that we're going to invent and design?
08:02
I think the era of human augmentation
is as much about the physical world
08:05
as it is about the virtual,
intellectual realm.
08:09
How will technology augment us?
08:13
In the physical world, robotic systems.
08:16
OK, there's certainly a fear
08:19
that robots are going to take
jobs away from humans,
08:21
and that is true in certain sectors.
08:23
But I'm much more interested in this idea
08:26
that humans and robots working together
are going to augment each other,
08:28
and start to inhabit a new space.
08:34
This is our applied research lab
in San Francisco,
08:36
where one of our areas of focus
is advanced robotics,
08:38
specifically, human-robot collaboration.
08:41
And this is Bishop, one of our robots.
08:44
As an experiment, we set it up
08:47
to help a person working in construction
doing repetitive tasks --
08:49
tasks like cutting out holes for outlets
or light switches in drywall.
08:53
(Laughter)
08:58
So, Bishop's human partner
can tell it what to do in plain English
09:01
and with simple gestures,
09:04
kind of like talking to a dog,
09:06
and then Bishop executes
on those instructions
09:07
with perfect precision.
09:09
We're using the human
for what the human is good at:
09:11
awareness, perception and decision making.
09:14
And we're using the robot
for what it's good at:
09:17
precision and repetitiveness.
09:19
Here's another cool project
that Bishop worked on.
09:22
The goal of this project,
which we called the HIVE,
09:24
was to prototype the experience
of humans, computers and robots
09:27
all working together to solve
a highly complex design problem.
09:31
The humans acted as labor.
09:35
They cruised around the construction site,
they manipulated the bamboo --
09:37
which, by the way,
because it's an irregular, non-uniform material,
09:40
is super hard for robots to deal with.
09:43
But then the robots
did this fiber winding,
09:45
which was almost impossible
for a human to do.
09:47
And then we had an AI
that was controlling everything.
09:49
It was telling the humans what to do,
telling the robots what to do
09:53
and keeping track of thousands
of individual components.
09:56
What's interesting is,
09:59
building this pavilion
was simply not possible
10:00
without human, robot and AI
augmenting each other.
10:04
OK, I'll share one more project.
This one's a little bit crazy.
10:09
We're working with Amsterdam-based artist
Joris Laarman and his team at MX3D
10:13
to generatively design
and robotically print
10:17
the world's first autonomously
manufactured bridge.
10:20
So, Joris and an AI are designing
this thing right now, as we speak,
10:24
in Amsterdam.
10:27
And when they're done,
we're going to hit "Go,"
10:29
and robots will start 3D printing
in stainless steel,
10:31
and then they're going to keep printing,
without human intervention,
10:34
until the bridge is finished.
10:38
So, as computers are going
to augment our ability
10:40
to imagine and design new stuff,
10:43
robotic systems are going to help us
build and make things
10:46
that we've never been able to make before.
10:49
But what about our ability
to sense and control these things?
10:52
What about a nervous system
for the things that we make?
10:56
Our nervous system,
the human nervous system,
11:00
tells us everything
that's going on around us.
11:03
But the nervous system of the things
we make is rudimentary at best.
11:06
For instance, a car doesn't tell
the city's public works department
11:09
that it just hit a pothole at the corner
of Broadway and Morrison.
11:13
A building doesn't tell its designers
11:16
whether or not the people inside
like being there,
11:18
and the toy manufacturer doesn't know
11:21
if a toy is actually being played with --
11:24
how and where and whether
or not it's any fun.
11:26
Look, I'm sure that the designers
imagined this lifestyle for Barbie
11:29
when they designed her.
11:33
(Laughter)
11:34
But what if it turns out that Barbie's
actually really lonely?
11:36
(Laughter)
11:39
If the designers had known
11:43
what was really happening
in the real world
11:44
with their designs -- the road,
the building, Barbie --
11:46
they could've used that knowledge
to create an experience
11:49
that was better for the user.
11:51
What's missing is a nervous system
11:53
connecting us to all of the things
that we design, make and use.
11:55
What if all of you had that kind
of information flowing to you
11:59
from the things you create
in the real world?
12:03
With all of the stuff we make,
12:07
we spend a tremendous amount
of money and energy --
12:08
in fact, last year,
about two trillion dollars --
12:11
convincing people to buy
the things we've made.
12:13
But if you had this connection
to the things that you design and create
12:16
after they're out in the real world,
12:19
after they've been sold
or launched or whatever,
12:21
we could actually change that,
12:25
and go from making people want our stuff,
12:26
to just making stuff that people
want in the first place.
12:30
The good news is, we're working
on digital nervous systems
12:33
that connect us to the things we design.
12:36
We're working on one project
12:40
with a couple of guys down in Los Angeles
called the Bandito Brothers
12:41
and their team.
12:45
And one of the things these guys do
is build insane cars
12:47
that do absolutely insane things.
12:50
These guys are crazy --
12:54
(Laughter)
12:56
in the best way.
12:57
And what we're doing with them
13:00
is taking a traditional race-car chassis
13:02
and giving it a nervous system.
13:05
So we instrumented it
with dozens of sensors,
13:06
put a world-class driver behind the wheel,
13:09
took it out to the desert
and drove the hell out of it for a week.
13:12
And the car's nervous system
captured everything
13:15
that was happening to the car.
13:18
We captured four billion data points;
13:19
all of the forces
that it was subjected to.
13:22
And then we did something crazy.
13:24
We took all of that data,
13:27
and plugged it into a generative-design AI
we call "Dreamcatcher."
13:28
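The pipeline he describes, billions of raw telemetry points boiled down into loads a design tool can reason about, can be sketched like this. The sensor names, force values, and the reduction to per-component peaks are all assumptions for illustration, not the actual Dreamcatcher workflow:

```python
import random

def read_sensors():
    """Stand-in for one telemetry frame from an instrumented chassis.
    Sensor names and load distributions are invented."""
    return {
        "front_left_strut_N": random.gauss(1200, 400),
        "front_right_strut_N": random.gauss(1200, 400),
        "rear_axle_N": random.gauss(2500, 700),
    }

def collect(n_frames=10_000, seed=42):
    """Drive the car; log every frame; reduce the raw data to
    per-component design loads: the peak force each part saw."""
    random.seed(seed)
    frames = [read_sensors() for _ in range(n_frames)]
    return {
        sensor: max(frame[sensor] for frame in frames)
        for sensor in frames[0]
    }

design_loads = collect()
# These peak loads become constraints for a generative-design tool:
# "every member must survive at least this force."
```

The point of the nervous system is the last two lines: the design tool no longer guesses what the car will endure, it optimizes against what the car actually endured.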
So what do you get when you give
a design tool a nervous system,
13:33
and you ask it to build you
the ultimate car chassis?
13:37
You get this.
13:40
This is something that a human
could never have designed.
13:44
Except a human did design this,
13:48
but it was a human that was augmented
by a generative-design AI,
13:50
a digital nervous system
13:54
and robots that can actually
fabricate something like this.
13:56
So if this is the future,
the Augmented Age,
13:59
and we're going to be augmented
cognitively, physically and perceptually,
14:03
what will that look like?
14:07
What is this wonderland going to be like?
14:09
I think we're going to see a world
14:12
where we're moving
from things that are fabricated
14:14
to things that are farmed.
14:17
Where we're moving from things
that are constructed
14:20
to that which is grown.
14:23
We're going to move from being isolated
14:26
to being connected.
14:28
And we'll move away from extraction
14:30
to embrace aggregation.
14:32
I also think we'll shift
from craving obedience from our things
14:35
to valuing autonomy.
14:39
Thanks to our augmented capabilities,
14:42
our world is going to change dramatically.
14:44
We're going to have a world
with more variety, more connectedness,
14:47
more dynamism, more complexity,
14:50
more adaptability and, of course,
14:53
more beauty.
14:55
The shape of things to come
14:57
will be unlike anything
we've ever seen before.
14:58
Why?
15:01
Because what will be shaping those things
is this new partnership
15:02
between technology, nature and humanity.
15:05
That, to me, is a future
well worth looking forward to.
15:11
Thank you all so much.
15:15
(Applause)
15:16
Translator: Leslie Gauthier
Reviewer: Camille Martínez



Why you should listen

Maurice is a designer, futurist and innovator. He's worked with startups, government agencies, artists and corporations to explore the things that will matter to us in the future, and to design solutions to get them there.

Conti is currently Director of Applied Research & Innovation at Autodesk. He also leads Autodesk's Applied Research Lab, which he built from the ground up. Conti and his team are responsible for exploring the trends and technologies that will shape our future and to begin building the solutions that can help make our world a better place.

His team's research focuses on advanced robotics, applied machine learning, the Internet of Things and climate change/sea level rise.

Conti is also an explorer of geographies and cultures. He has circumnavigated the globe once and been half-way around twice. In 2009 he was awarded the Medal for Exceptional Bravery at Sea by the United Nations, the New Zealand Bravery Medal and a U.S. Coast Guard Citation for Bravery for saving the lives of three shipwrecked sailors.

Conti lives in Muir Beach, CA, where he serves his local community as a volunteer firefighter.



Data provided by TED.
