
TEDxCERN

Sean Follmer: Shape-shifting tech will change work as we know it


What will the world look like when we move beyond the keyboard and mouse? Interaction designer Sean Follmer is building a future with machines that bring information to life under your fingers as you work with it. In this talk, check out prototypes for a 3D shape-shifting table, a phone that turns into a wristband, a deformable game controller and more that may change the way we live and work.

- Human-computer interaction researcher and designer
Sean Follmer designs shape-changing and deformable interfaces that take advantage of our natural dexterity and spatial abilities.

We've evolved with tools,
and tools have evolved with us.
00:12
Our ancestors created these
hand axes 1.5 million years ago,
00:16
shaping them to not only
fit the task at hand
00:21
but also their hand.
00:24
However, over the years,
00:26
tools have become
more and more specialized.
00:28
These sculpting tools
have evolved through their use,
00:31
and each one has a different form
which matches its function.
00:35
And they leverage
the dexterity of our hands
00:38
in order to manipulate things
with much more precision.
00:41
But as tools have become
more and more complex,
00:45
we need more complex controls
to operate them.
00:48
And so designers have become
very adept at creating interfaces
00:52
that allow you to manipulate parameters
while you're attending to other things,
00:57
such as taking a photograph
and changing the focus
01:00
or the aperture.
01:03
But the computer has fundamentally
changed the way we think about tools
01:05
because computation is dynamic.
01:10
So it can do a million different things
01:12
and run a million different applications.
01:14
However, computers have
the same static physical form
01:17
for all of these different applications
01:20
and the same static
interface elements as well.
01:22
And I believe that this
is fundamentally a problem,
01:25
because it doesn't really allow us
to interact with our hands
01:28
and capture the rich dexterity
that we have in our bodies.
01:31
And my belief is that
we need new types of interfaces
01:36
that can capture these
rich abilities that we have
01:40
and that can physically adapt to us
01:44
and allow us to interact in new ways.
01:46
And so that's what I've been doing
at the MIT Media Lab
01:49
and now at Stanford.
01:51
So with my colleagues,
Daniel Leithinger and Hiroshi Ishii,
01:53
we created inFORM,
01:57
where the interface can actually
come off the screen
01:58
and you can physically manipulate it.
02:01
Or you can visualize
3D information physically
02:03
and touch it and feel it
to understand it in new ways.
02:06
Or you can interact through gestures
and direct deformations
02:15
to sculpt digital clay.
02:19
Or interface elements can arise
out of the surface
02:26
and change on demand.
02:29
And the idea is that for each
individual application,
02:30
the physical form can be matched
to the application.
02:33
And I believe this represents a new way
02:37
that we can interact with information,
02:39
by making it physical.
02:41
So the question is, how can we use this?
02:43
Traditionally, urban planners
and architects build physical models
02:45
of cities and buildings
to better understand them.
02:49
So with Tony Tang at the Media Lab,
we created an interface built on inFORM
02:52
to allow urban planners
to design and view entire cities.
02:56
And now you can walk around it,
but it's dynamic, it's physical,
03:01
and you can also interact directly.
03:05
Or you can look at different views,
03:07
such as population or traffic information,
03:09
but it's made physical.
03:12
We also believe that these dynamic
shape displays can really change
03:14
the ways that we remotely
collaborate with people.
03:18
So when we're working together in person,
03:21
I'm not only looking at your face
03:24
but I'm also gesturing
and manipulating objects,
03:25
and that's really hard to do
when you're using tools like Skype.
03:28
And so using inFORM,
you can reach out from the screen
03:33
and manipulate things at a distance.
03:36
So we used the pins of the display
to represent people's hands,
03:39
allowing them to actually touch
and manipulate objects at a distance.
03:42
And you can also manipulate
and collaborate on 3D data sets as well,
03:50
so you can gesture around them
as well as manipulate them.
03:54
And that allows people to collaborate
on these new types of 3D information
03:58
in a richer way than might
be possible with traditional tools.
04:02
And so you can also
bring in existing objects,
04:07
and those will be captured on one side
and transmitted to the other.
04:10
Or you can have an object that's linked
between two places,
04:13
so as I move a ball on one side,
04:16
the ball moves on the other as well.
04:18
And so we do this by capturing
the remote user
04:22
using a depth-sensing camera
like a Microsoft Kinect.
04:25
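The remote-manipulation idea described here, rendering a remote user's hands as pin heights driven by a depth camera, can be sketched roughly as follows. This is an illustrative sketch only: the grid size matches inFORM's 900 pins, but the function names, travel range, and working volume are assumptions, not the actual system's code.

```python
# Hypothetical sketch: mapping a depth frame from a remote user's camera
# onto a grid of actuated pins. Names and ranges are illustrative.

PIN_ROWS, PIN_COLS = 30, 30      # a 30x30 grid gives inFORM's 900 actuators
MAX_PIN_HEIGHT_MM = 100.0        # assumed travel range of each pin

def depth_to_pin_heights(depth_frame, near_mm=500, far_mm=1500):
    """Downsample a depth image to one height per pin.

    depth_frame: 2D list of millimetre depth readings (e.g. from a Kinect).
    Closer objects (such as the remote user's hand) produce taller pins.
    """
    rows, cols = len(depth_frame), len(depth_frame[0])
    heights = [[0.0] * PIN_COLS for _ in range(PIN_ROWS)]
    for r in range(PIN_ROWS):
        for c in range(PIN_COLS):
            # Sample the depth pixel nearest to this pin's grid position.
            d = depth_frame[r * rows // PIN_ROWS][c * cols // PIN_COLS]
            # Clamp to the working volume, then invert: near -> tall pin.
            d = min(max(d, near_mm), far_mm)
            heights[r][c] = (far_mm - d) / (far_mm - near_mm) * MAX_PIN_HEIGHT_MM
    return heights
```

In practice the real pipeline would also segment the hands from the background and smooth the pin motion over time, but the core mapping, from depth to height, is this simple inversion.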
Now, you might be wondering
how this all works,
04:28
and essentially, what it is,
is 900 linear actuators
04:31
that are connected to these
mechanical linkages
04:35
that allow motion down here
to be propagated to these pins above.
04:37
So it's not that complex
compared to what's going on at CERN,
04:41
but it did take a long time
for us to build it.
04:45
And so we started with a single motor,
04:47
a single linear actuator,
04:49
and then we had to design
a custom circuit board to control them.
04:51
And then we had to make a lot of them.
04:55
And so the problem with having
900 of something
04:57
is that you have to do
every step 900 times.
05:00
And so that meant that we had
a lot of work to do.
05:03
So we sort of set up
a mini-sweatshop in the Media Lab
05:06
and brought undergrads in and convinced
them to do "research" --
05:09
(Laughter)
05:13
and had late nights
watching movies, eating pizza
05:14
and screwing in thousands of screws.
05:17
You know -- research.
05:19
(Laughter)
05:20
But anyway, I think that we were
really excited by the things
05:22
that inFORM allowed us to do.
05:25
Increasingly, we're using mobile devices,
and we interact on the go.
05:27
But mobile devices, just like computers,
05:31
are used for so many
different applications.
05:34
So you use them to talk on the phone,
05:36
to surf the web, to play games,
to take pictures
05:38
or a million other things.
05:41
But again, they have the same
static physical form
05:43
for each of these applications.
05:46
And so we wanted to know how we could take
some of the same interactions
05:48
that we developed for inFORM
05:52
and bring them to mobile devices.
05:53
So at Stanford, we created
this haptic edge display,
05:56
which is a mobile device
with an array of linear actuators
06:00
that can change shape,
06:03
so you can feel in your hand
where you are as you're reading a book.
06:04
Or you can feel in your pocket
new types of tactile sensations
06:09
that are richer than the vibration.
06:12
Or buttons can emerge from the side
that allow you to interact
06:14
where you want them to be.
06:17
Or you can play games
and have actual buttons.
06:21
And so we were able to do this
06:25
by embedding 40 small, tiny
linear actuators inside the device,
06:27
which allow you not only to touch them
06:32
but also to back-drive them.
06:34
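One of the uses mentioned, feeling where you are in a book, can be sketched as a tactile scrollbar: a small bump of raised pins travels along the edge as you read. Everything here (pin travel, bump shape, function names) is an assumption for illustration; only the count of 40 actuators comes from the talk.

```python
# Hypothetical sketch of a haptic edge display as a tactile scrollbar:
# a bump of raised pins marks your position in a book. Names are assumed.

NUM_EDGE_PINS = 40               # the device embeds 40 small linear actuators
MAX_EXTENSION_MM = 2.0           # assumed per-pin travel

def progress_to_edge_pins(progress, bump_width=3):
    """Raise a small bump of pins at the position matching reading progress.

    progress: fraction of the book read, in [0, 1].
    Returns one extension value (mm) per edge pin.
    """
    center = round(progress * (NUM_EDGE_PINS - 1))
    pins = []
    for i in range(NUM_EDGE_PINS):
        # Extension falls off linearly with distance from the bump's centre.
        dist = abs(i - center)
        pins.append(max(0.0, 1 - dist / bump_width) * MAX_EXTENSION_MM)
    return pins
```

Because the actuators are back-drivable, the same pins could also act as input: pressing the bump down could, say, jump to a bookmark.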
But we've also looked at other ways
to create more complex shape change.
06:36
So we've used pneumatic actuation
to create a morphing device
06:41
where you can go from something
that looks a lot like a phone ...
06:44
to a wristband on the go.
06:48
And so together with Ken Nakagaki
at the Media Lab,
06:51
we created this new
high-resolution version
06:54
that uses an array of servomotors
to change from interactive wristband
06:57
to a touch-input device
07:03
to a phone.
07:06
(Laughter)
07:07
And we're also interested
in looking at ways
07:10
that users can actually
deform the interfaces
07:12
to shape them into the devices
that they want to use.
07:14
So you can make something
like a game controller,
07:17
and then the system will understand
what shape it's in
07:20
and change to that mode.
07:22
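The step where "the system will understand what shape it's in" is, at heart, a classification problem: match the current deformation readings against stored shape templates. A minimal sketch, assuming bend sensors and nearest-template matching (the talk does not specify the sensing or the algorithm, so all names and values below are illustrative):

```python
# Hypothetical sketch: recognising which device the user has deformed the
# interface into, via nearest-template matching on bend-sensor readings.
# Templates, sensor layout, and values are illustrative assumptions.

SHAPE_TEMPLATES = {
    "flat_phone":      [0.0, 0.0, 0.0, 0.0],
    "game_controller": [0.8, 0.1, 0.1, 0.8],   # grips folded up at both ends
    "wristband":       [0.9, 0.9, 0.9, 0.9],   # curled all the way round
}

def classify_shape(sensor_readings):
    """Return the name of the template closest to the current bend readings."""
    def distance(template):
        # Squared Euclidean distance between readings and template.
        return sum((s - t) ** 2 for s, t in zip(sensor_readings, template))
    return min(SHAPE_TEMPLATES, key=lambda name: distance(SHAPE_TEMPLATES[name]))
```

Once the shape is recognised, the system can switch modes, for example mapping the folded grips to button and joystick regions when it sees "game_controller".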
So, where does this lead?
07:26
How do we move forward from here?
07:27
I think, really, where we are today
07:29
is in this new age
of the Internet of Things,
07:32
where we have computers everywhere --
07:34
they're in our pockets,
they're in our walls,
07:36
they're in almost every device
that you'll buy in the next five years.
07:38
But what if we stopped
thinking about devices
07:42
and think instead about environments?
07:45
And so how can we have smart furniture
07:47
or smart rooms or smart environments
07:50
or cities that can adapt to us physically,
07:53
and allow us to collaborate
with people in new ways
07:56
and do new types of tasks?
08:01
So for the Milan Design Week,
we created TRANSFORM,
08:03
which is an interactive table-scale
version of these shape displays,
08:06
which can move physical objects
on the surface; for example,
08:10
reminding you to take your keys.
08:13
But it can also transform
to fit different ways of interacting.
08:16
So if you want to work,
08:20
then it can change to sort of
set up your work system.
08:21
And so as you bring a device over,
08:24
it creates all the affordances you need
08:26
and brings other objects
to help you accomplish those goals.
08:29
So, in conclusion,
08:37
I really think that we need to think
about a new, fundamentally different way
08:38
of interacting with computers.
08:42
We need computers
that can physically adapt to us
08:45
and adapt to the ways
that we want to use them
08:48
and really harness the rich dexterity
that we have in our hands,
08:51
and our ability to think spatially
about information by making it physical.
08:55
But looking forward, I think we need
to go beyond this, beyond devices,
09:00
to really think about new ways
that we can bring people together,
09:04
and bring our information into the world,
09:08
and think about smart environments
that can adapt to us physically.
09:11
So with that, I will leave you.
09:15
Thank you very much.
09:16
(Applause)
09:17


About the speaker:

Sean Follmer - Human-computer interaction researcher and designer

Why you should listen

Sean Follmer is a human-computer interaction researcher and designer. He is an Assistant Professor of Mechanical Engineering at Stanford University, where he teaches the design of smart and connected devices and leads research at the intersection between human-computer interaction (HCI) and robotics.

Follmer received a Ph.D. and a master's degree from the MIT Media Lab in 2015 and 2011, respectively, and a B.S. in Engineering from Stanford University. He has worked at Nokia Research and Adobe Research on projects exploring the frontiers of HCI.

Follmer has received numerous awards for his research and design work, including best paper awards and nominations from premier academic conferences in HCI (ACM UIST and CHI), Fast Company Innovation By Design Awards, a Red Dot Design Award and a Laval Virtual Award.
