ABOUT THE SPEAKER
Sean Follmer - Human-computer interaction researcher and designer
Sean Follmer designs shape-changing and deformable interfaces that take advantage of our natural dexterity and spatial abilities.

Why you should listen

Sean Follmer is a human-computer interaction researcher and designer. He is an Assistant Professor of Mechanical Engineering at Stanford University, where he teaches the design of smart and connected devices and leads research at the intersection of human-computer interaction (HCI) and robotics.

Follmer received a Ph.D. and a master's degree from the MIT Media Lab in 2015 and 2011, respectively, and a BS in Engineering from Stanford University. He has worked at Nokia Research and Adobe Research on projects exploring the frontiers of HCI.

Follmer has received numerous awards for his research and design work, including best paper awards and nominations from premier academic conferences in HCI (ACM UIST and CHI), Fast Company Innovation By Design Awards, a Red Dot Design Award and a Laval Virtual Award.

TEDxCERN

Sean Follmer: Shape-shifting tech will change work as we know it

1,541,392 views

What will the world look like when we move beyond the keyboard and mouse? Interaction designer Sean Follmer is building a future with machines that bring information to life under your fingers as you work with it. In this talk, check out prototypes for a 3D shape-shifting table, a phone that turns into a wristband, a deformable game controller and more that may change the way we live and work.


00:12
We've evolved with tools,
and tools have evolved with us.
00:16
Our ancestors created these
hand axes 1.5 million years ago,
00:21
shaping them to not only
fit the task at hand
00:24
but also their hand.
00:26
However, over the years,
00:28
tools have become
more and more specialized.
00:31
These sculpting tools
have evolved through their use,
00:35
and each one has a different form
which matches its function.
00:38
And they leverage
the dexterity of our hands
00:41
in order to manipulate things
with much more precision.
00:45
But as tools have become
more and more complex,
00:48
we need more complex controls
to control them.
00:52
And so designers have become
very adept at creating interfaces
00:57
that allow you to manipulate parameters
while you're attending to other things,
01:00
such as taking a photograph
and changing the focus
01:03
or the aperture.
01:05
But the computer has fundamentally
changed the way we think about tools
01:10
because computation is dynamic.
01:12
So it can do a million different things
01:14
and run a million different applications.
01:17
However, computers have
the same static physical form
01:20
for all of these different applications
01:22
and the same static
interface elements as well.
01:25
And I believe that this
is fundamentally a problem,
01:28
because it doesn't really allow us
to interact with our hands
01:31
and capture the rich dexterity
that we have in our bodies.
01:36
And so my belief is that
we need new types of interfaces
01:40
that can capture these
rich abilities that we have
01:44
and that can physically adapt to us
01:46
and allow us to interact in new ways.
01:49
And so that's what I've been doing
at the MIT Media Lab
01:51
and now at Stanford.
01:53
So with my colleagues,
Daniel Leithinger and Hiroshi Ishii,
01:57
we created inFORM,
01:58
where the interface can actually
come off the screen
02:01
and you can physically manipulate it.
02:03
Or you can visualize
3D information physically
02:06
and touch it and feel it
to understand it in new ways.
02:15
Or you can interact through gestures
and direct deformations
02:19
to sculpt digital clay.
02:26
Or interface elements can arise
out of the surface
02:29
and change on demand.
02:30
And the idea is that for each
individual application,
02:33
the physical form can be matched
to the application.
02:37
And I believe this represents a new way
02:39
that we can interact with information,
02:41
by making it physical.
02:43
So the question is, how can we use this?
02:45
Traditionally, urban planners
and architects build physical models
02:49
of cities and buildings
to better understand them.
02:52
So with Tony Tang at the Media Lab,
we created an interface built on inFORM
02:56
to allow urban planners
to design and view entire cities.
03:01
And now you can walk around it,
but it's dynamic, it's physical,
03:05
and you can also interact directly.
03:07
Or you can look at different views,
03:09
such as population or traffic information,
03:12
but it's made physical.
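
Those switchable physical views come down to a small data-to-pins mapping. Here is a minimal sketch, not the actual inFORM software: the layer names, the 30x30 grid, and the placeholder data are all assumptions; only the idea of rendering a selected layer as pin heights comes from the talk.

```python
import numpy as np

GRID = (30, 30)  # assumed pin resolution

# Placeholder data; a real system would load GIS layers for the city here.
layers = {
    "elevation":  np.random.rand(*GRID),
    "population": np.random.rand(*GRID),
    "traffic":    np.random.rand(*GRID),
}

def render_layer(name: str) -> np.ndarray:
    """Normalize the chosen layer to [0, 1] pin heights."""
    data = layers[name]
    span = data.max() - data.min()
    return (data - data.min()) / span if span else np.zeros_like(data)

heights = render_layer("traffic")  # stream these to the pin actuators
```
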
03:14
We also believe that these dynamic
shape displays can really change
03:18
the ways that we remotely
collaborate with people.
03:21
So when we're working together in person,
03:24
I'm not only looking at your face
03:25
but I'm also gesturing
and manipulating objects,
03:28
and that's really hard to do
when you're using tools like Skype.
03:33
And so using inFORM,
you can reach out from the screen
03:36
and manipulate things at a distance.
03:39
So we used the pins of the display
to represent people's hands,
03:42
allowing them to actually touch
and manipulate objects at a distance.
03:50
And you can also manipulate
and collaborate on 3D data sets as well,
03:54
so you can gesture around them
as well as manipulate them.
03:58
And that allows people to collaborate
on these new types of 3D information
04:02
in a richer way than might
be possible with traditional tools.
04:07
And so you can also
bring in existing objects,
04:10
and those will be captured on one side
and transmitted to the other.
04:13
Or you can have an object that's linked
between two places,
04:16
so as I move a ball on one side,
04:18
the ball moves on the other as well.
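
That linked-ball behavior boils down to mirroring object state between two displays. The sketch below is a hypothetical stand-in, not a real inFORM API: the Peer class and message format are invented; only the mirror-the-move behavior is from the talk.

```python
# Hypothetical sketch of the "linked object" behavior: when either side
# moves the ball, the new position is mirrored to the other display.

class Peer:
    """Stand-in for a network connection to the remote shape display."""
    def __init__(self, remote):
        self.remote = remote
    def send(self, message):
        self.remote.on_remote_move(message)

class LinkedObject:
    def __init__(self):
        self.peers = []
        self.position = (0, 0)   # pin-grid coordinates of the ball

    def on_local_move(self, new_position):
        """The user physically pushed the ball on this display."""
        self.position = new_position
        for peer in self.peers:
            peer.send({"event": "move", "position": new_position})

    def on_remote_move(self, message):
        """The other display moved the ball: actuate pins to match."""
        self.position = tuple(message["position"])

a, b = LinkedObject(), LinkedObject()
a.peers.append(Peer(b))
a.on_local_move((4, 7))
assert b.position == (4, 7)   # the remote ball mirrors the local one
```
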
04:22
And so we do this by capturing
the remote user
04:25
using a depth-sensing camera
like a Microsoft Kinect.
04:28
Now, you might be wondering
how does this all work,
04:31
and essentially, what it is,
is 900 linear actuators
04:35
that are connected to these
mechanical linkages
04:37
that allow motion down here
to be propagated in these pins above.
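
The pipeline he describes, a depth camera on one end driving 900 pins on the other, can be sketched in a few lines. This is a hedged illustration, not the actual inFORM code: the 30x30 arrangement (consistent with the 900 actuators mentioned), the pin travel range, and the depth cutoffs are assumptions.

```python
import numpy as np

PINS_X, PINS_Y = 30, 30        # 900 pins total
PIN_TRAVEL_MM = 100.0          # assumed maximum pin extension

def depth_to_pin_heights(depth_mm: np.ndarray,
                         near_mm: float = 500.0,
                         far_mm: float = 1500.0) -> np.ndarray:
    """Downsample a depth frame (e.g., 480x640 from a Kinect) to the pin
    grid and map distance to height: nearer objects push pins higher."""
    h, w = depth_mm.shape
    # Average depth over the patch of pixels that each pin covers.
    patches = depth_mm[:h - h % PINS_Y, :w - w % PINS_X].reshape(
        PINS_Y, h // PINS_Y, PINS_X, w // PINS_X)
    coarse = patches.mean(axis=(1, 3))
    # Normalize: near -> 1.0 (fully raised), far -> 0.0 (flush).
    t = np.clip((far_mm - coarse) / (far_mm - near_mm), 0.0, 1.0)
    return t * PIN_TRAVEL_MM

# Each frame, the resulting 30x30 height map would be streamed to the
# actuator control boards (one setpoint per pin).
heights = depth_to_pin_heights(np.full((480, 640), 900.0))
```
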
04:41
So it's not that complex
compared to what's going on at CERN,
04:45
but it did take a long time
for us to build it.
04:47
And so we started with a single motor,
04:49
a single linear actuator,
04:51
and then we had to design
a custom circuit board to control them.
04:55
And then we had to make a lot of them.
04:57
And so the problem with having
900 of something
05:00
is that you have to do
every step 900 times.
05:03
And so that meant that we had
a lot of work to do.
05:06
So we sort of set up
a mini-sweatshop in the Media Lab
05:09
and brought undergrads in and convinced
them to do "research" --
05:13
(Laughter)
05:14
and had late nights
watching movies, eating pizza
05:17
and screwing in thousands of screws.
05:19
You know -- research.
05:20
(Laughter)
05:22
But anyway, I think that we were
really excited by the things
05:25
that inFORM allowed us to do.
05:27
Increasingly, we're using mobile devices,
and we interact on the go.
05:31
But mobile devices, just like computers,
05:34
are used for so many
different applications.
05:36
So you use them to talk on the phone,
05:38
to surf the web, to play games,
to take pictures
05:41
or even a million different things.
05:43
But again, they have the same
static physical form
05:46
for each of these applications.
05:48
And so we wanted to know how can we take
some of the same interactions
05:52
that we developed for inFORM
05:53
and bring them to mobile devices.
05:56
So at Stanford, we created
this haptic edge display,
06:00
which is a mobile device
with an array of linear actuators
06:03
that can change shape,
06:04
so you can feel in your hand
where you are as you're reading a book.
06:09
Or you can feel in your pocket
new types of tactile sensations
06:12
that are richer than the vibration.
06:14
Or buttons can emerge from the side
that allow you to interact
06:17
where you want them to be.
06:21
Or you can play games
and have actual buttons.
06:25
And so we were able to do this
06:27
by embedding 40 tiny
linear actuators inside the device,
06:32
and they allow you not only to touch them
06:34
but also back-drive them as well.
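
Two of those interactions, marking reading progress with a raised pin and sensing when a user pushes a pin off its setpoint (back-driving), are easy to sketch. The threshold value and the commanded-versus-measured comparison below are assumptions, not the device's real firmware.

```python
NUM_PINS = 40                # the device embeds 40 linear actuators
BACKDRIVE_THRESHOLD = 0.15   # assumed: displacement that counts as a press

def progress_pin(page: int, total_pages: int) -> int:
    """Map reading progress onto one of the 40 edge pins."""
    return min(NUM_PINS - 1, page * NUM_PINS // total_pages)

def detect_backdrive(commanded, measured):
    """Return indices of pins the user has displaced from their setpoints."""
    return [i for i, (c, m) in enumerate(zip(commanded, measured))
            if abs(c - m) > BACKDRIVE_THRESHOLD]

print(progress_pin(page=120, total_pages=300))    # -> pin 16
print(detect_backdrive([0.0, 0.5], [0.0, 0.1]))   # -> [1]
```
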
06:36
But we've also looked at other ways
to create more complex shape change.
06:41
So we've used pneumatic actuation
to create a morphing device
06:44
where you can go from something
that looks a lot like a phone ...
06:48
to a wristband on the go.
06:51
And so together with Ken Nakagaki
at the Media Lab,
06:54
we created this new
high-resolution version
06:57
that uses an array of servomotors
to change from interactive wristband
07:03
to a touch-input device
07:06
to a phone.
07:07
(Laughter)
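
One plausible way to drive such a servomotor array is a target pose (one angle per joint) for each form factor. The joint count, angles, and Servo class below are invented for illustration; the talk only establishes that an array of servomotors changes the device among the three shapes.

```python
class Servo:
    """Stand-in for a servomotor driver; a real device would command
    hardware here."""
    def __init__(self):
        self.angle = 180.0
    def set_angle(self, angle: float):
        self.angle = angle

POSES = {
    "wristband": [170] * 12,               # curl around the wrist
    "touchpad":  [180] * 12,               # flatten into a slab
    "phone":     [180] * 10 + [120, 120],  # assumed slight fold for grip
}

def morph_to(servos, pose_name: str):
    """Drive every joint toward its target angle for the chosen form."""
    for servo, angle in zip(servos, POSES[pose_name]):
        servo.set_angle(angle)

servos = [Servo() for _ in range(12)]
morph_to(servos, "wristband")
```
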
07:10
And we're also interested
in looking at ways
07:12
that users can actually
deform the interfaces
07:14
to shape them into the devices
that they want to use.
07:17
So you can make something
like a game controller,
07:20
and then the system will understand
what shape it's in
07:22
and change to that mode.
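
Recognizing what shape the device has been bent into can be sketched as a nearest-template match over bend-sensor readings. The sensor layout and template values here are invented; the talk says only that the system understands the shape and switches modes.

```python
SHAPE_TEMPLATES = {
    "game_controller": [0.9, 0.1, 0.1, 0.9],  # assumed bend-sensor profile
    "flat_phone":      [0.0, 0.0, 0.0, 0.0],
    "wristband":       [0.8, 0.8, 0.8, 0.8],
}

def classify_shape(readings):
    """Pick the template closest to the current bend-sensor readings."""
    def dist(template):
        return sum((r - v) ** 2 for r, v in zip(readings, template))
    return min(SHAPE_TEMPLATES, key=lambda name: dist(SHAPE_TEMPLATES[name]))

mode = classify_shape([0.85, 0.15, 0.05, 0.88])  # -> "game_controller"
```
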
07:26
So, where does this point?
07:27
How do we move forward from here?
07:29
I think, really, where we are today
07:32
is in this new age
of the Internet of Things,
07:34
where we have computers everywhere --
07:36
they're in our pockets,
they're in our walls,
07:38
they're in almost every device
that you'll buy in the next five years.
07:42
But what if we stopped
thinking about devices
07:45
and thought instead about environments?
07:47
And so how can we have smart furniture
07:50
or smart rooms or smart environments
07:53
or cities that can adapt to us physically,
07:56
and allow us to find new ways
of collaborating with people
08:01
and doing new types of tasks?
08:03
So for the Milan Design Week,
we created TRANSFORM,
08:06
which is an interactive table-scale
version of these shape displays,
08:10
which can move physical objects
on the surface; for example,
08:13
reminding you to take your keys.
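
Sliding a physical object like your keys across the table reduces to stepping it one grid cell at a time, raising the pins just behind it so it tips forward. Below is a minimal path sketch under the assumption of an 8-directional grid; the actual TRANSFORM actuation is left as a comment.

```python
def path_to(pos, target):
    """Cells the object passes through as pins nudge it toward the target.
    At each step, the display would raise the pins just behind the object
    so it tips into the next cell (that actuation is omitted here)."""
    path = []
    while pos != target:
        x, y = pos
        tx, ty = target
        pos = (x + (tx > x) - (tx < x), y + (ty > y) - (ty < y))
        path.append(pos)
    return path

print(path_to((0, 0), (3, 2)))  # -> [(1, 1), (2, 2), (3, 2)]
```
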
08:16
But it can also transform
to fit different ways of interacting.
08:20
So if you want to work,
08:21
then it can change to sort of
set up your work system.
08:24
And so as you bring a device over,
08:26
it creates all the affordances you need
08:29
and brings other objects
to help you accomplish those goals.
08:37
So, in conclusion,
08:38
I really think that we need to think
about a new, fundamentally different way
08:42
of interacting with computers.
08:45
We need computers
that can physically adapt to us
08:48
and adapt to the ways
that we want to use them
08:51
and really harness the rich dexterity
that we have in our hands,
08:55
and our ability to think spatially
about information by making it physical.
09:00
But looking forward, I think we need
to go beyond this, beyond devices,
09:04
to really think about new ways
that we can bring people together,
09:08
and bring our information into the world,
09:11
and think about smart environments
that can adapt to us physically.
09:15
So with that, I will leave you.
09:16
Thank you very much.
09:17
(Applause)
