ABOUT THE SPEAKER
Golan Levin - Experimental audio-visual artist
Half performance artist, half software engineer, Golan Levin manipulates the computer to create improvised soundscapes with dazzling corresponding visuals. He is at the forefront of defining new parameters for art.

Why you should listen

Having worked as an academic at MIT and a researcher specializing in computer technology and software engineering, Golan Levin now spends most of his time working as a performance artist. Rest assured his education hasn't gone to waste, however, as Levin blends high tech and customized software programs to create his own extraordinary audio and visual compositions. The results are inordinately experimental sonic and visual extravaganzas from the furthest left of the field.

Many of his pieces call for audience participation, such as Dialtones: A Telesymphony, a 2001 concert composed entirely of the choreographed ringtones of his audience's mobile phones. Regularly exhibiting pieces in galleries around the world, and also working as an Assistant Professor of Electronic Time-Based Art at Carnegie Mellon University, Levin is unapologetically pushing boundaries to define a brave new world of what is possible.

His latest piece, Double-Taker (Snout), is installed at the Pittsburgh Museum of Art.

TED2009

Golan Levin: Art that looks back at you

823,350 views

Golan Levin, an artist and engineer, uses modern tools -- robotics, new software, cognitive research -- to make artworks that surprise and delight. Watch as sounds become shapes, bodies create paintings, and a curious eye looks back at the curious viewer.


00:12
Hello! My name is Golan Levin. I'm an artist and an engineer, which is, increasingly, a more common kind of hybrid. But I still fall into this weird crack where people don't seem to understand me. And I was looking around and I found this wonderful picture. It's a letter from "Artforum" in 1967 saying "We can't imagine ever doing a special issue on electronics or computers in art." And they still haven't. And lest you think that you all, as the digerati, are more enlightened, I went to the Apple iPhone app store the other day. Where's art? I got productivity. I got sports. And somehow the idea that one would want to make art for the iPhone, which my friends and I are doing now, is still not reflected in our understanding of what computers are for.

01:00
So, from both directions, there is kind of, I think, a lack of understanding about what it could mean to be an artist who uses the materials of his own day, or her own day, which I think artists are obliged to do: to really explore the expressive potential of the new tools that we have.
01:12
In my own case, I'm an artist, and I'm really interested in expanding the vocabulary of human action, and basically empowering people through interactivity. I want people to discover themselves as actors, as creative actors, by having interactive experiences.

01:28
A lot of my work is about trying to get away from this. This is a photograph of the desktop of a student of mine. And when I say desktop, I don't just mean the actual desk where his mouse has worn away the surface of the desk. If you look carefully, you can even see a hint of the Apple menu, up here in the upper left, where the virtual world has literally punched through to the physical. So this is, as Joy Mountford once said, "The mouse is probably the narrowest straw you could try to suck all of human expression through."

01:55
(Laughter)

01:58
And the thing I'm really trying to do is enable people to have richer kinds of interactive experiences. How can we get away from the mouse and use our full bodies as a way of exploring aesthetic experiences, not necessarily utilitarian ones?
02:10
So I write software. And that's how I do it. And a lot of my experiences resemble mirrors in some way. Because this is, in some sense, the first way that people discover their own potential as actors, and discover their own agency. By saying, "Who is that person in the mirror? Oh, it's actually me."

02:26
And so, to give an example, this is a project from last year, which is called the Interstitial Fragment Processor. And it allows people to explore the negative shapes that they create when they're just going about their everyday business.
02:53
So as people make shapes with their hands or their heads and so forth, or with each other, these shapes literally produce sounds and drop out of thin air -- basically taking what's often this kind of unseen space, or this undetected space, and making it something real, that people then can appreciate and become creative with. So again, people discover their creative agency in this way. And their own personalities come out in totally unique ways.
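To make the idea concrete, here is a minimal sketch of how such negative-space tracking could be approximated with an off-the-shelf webcam pipeline. This is not Levin's implementation: the background subtractor, the area threshold, and the area-to-pitch mapping are illustrative stand-ins, and the real installation also gives the shapes physics and sound synthesis that this sketch only hints at with a print statement.

```python
# Illustrative sketch (not the installation's code): find "negative shapes"
# enclosed by silhouettes in a webcam feed and map each one to a pitch.
import cv2

cap = cv2.VideoCapture(0)                       # webcam
bg = cv2.createBackgroundSubtractorMOG2()       # crude silhouette extraction

def area_to_pitch_hz(area, frame_area):
    """Hypothetical mapping: bigger negative shapes get lower tones (110-990 Hz)."""
    rel = min(area / frame_area, 1.0)
    return 110.0 + (1.0 - rel) * 880.0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = bg.apply(frame)                        # white = moving people, black = background
    fg = cv2.medianBlur(fg, 5)
    _, fg = cv2.threshold(fg, 127, 255, cv2.THRESH_BINARY)

    # "Interstitial fragments" = holes fully enclosed by the foreground:
    # with RETR_CCOMP, contours that have a parent are exactly those holes.
    contours, hierarchy = cv2.findContours(fg, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
    if hierarchy is not None:
        for i, c in enumerate(contours):
            if hierarchy[0][i][3] != -1 and cv2.contourArea(c) > 500:
                pitch = area_to_pitch_hz(cv2.contourArea(c), frame.shape[0] * frame.shape[1])
                cv2.drawContours(frame, [c], -1, (0, 255, 0), 2)
                # a real installation would synthesize this tone; here we just report it
                print(f"fragment: area={cv2.contourArea(c):.0f}px  pitch={pitch:.0f}Hz")

    cv2.imshow("interstitial-fragments-sketch", frame)
    if cv2.waitKey(1) == 27:                    # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```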
03:18
So in addition to using full-body input, something that I've explored now, for a while, has been the use of the voice, which is an immensely expressive system for us, vocalizing. Song is one of our oldest ways of making ourselves heard and understood.

03:34
And I came across this fantastic research by Wolfgang Köhler, the so-called father of gestalt psychology, from 1927, who submitted to an audience like yourselves the following two shapes. And he said one of them is called Maluma. And one of them is called Taketa. Which is which? Anyone want to hazard a guess? Maluma is on top. Yeah. So. As he says here, most people answer without any hesitation.

03:57
So what we're really seeing here is a phenomenon called phonaesthesia, which is a kind of synesthesia that all of you have. And so, whereas Dr. Oliver Sacks has talked about how perhaps one person in a million actually has true synesthesia, where they hear colors or taste shapes, and things like this, phonaesthesia is something we can all experience to some extent. It's about mappings between different perceptual domains, like hardness, sharpness, brightness and darkness, and the phonemes that we're able to speak with.
04:21
So 70 years on, there's been some research where cognitive psychologists have actually sussed out the extent to which, you know, L, M and B are more associated with shapes that look like this, and P, T and K are perhaps more associated with shapes like this. And here we suddenly begin to have a mapping between curvature that we can exploit numerically, a relative mapping between curvature and shape.
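The phoneme-shape associations he cites lend themselves to exactly this kind of numeric treatment. The toy sketch below scores a word's "roundness" from its consonants; the phoneme groupings loosely follow the Maluma/Taketa findings, but the letter sets, numeric scores, and averaging are invented here purely for illustration.

```python
# Toy sketch: score how "round" a word sounds, following the phonaesthetic idea
# that L/M/B-like sounds pair with curvy shapes and P/T/K-like sounds with spiky ones.
# The letter groupings and scores below are illustrative, not taken from the cited research.

ROUND_CONSONANTS = set("lmbnw")   # tend toward "Maluma"-like, curvy associations
SPIKY_CONSONANTS = set("ptkz")    # tend toward "Taketa"-like, angular associations

def roundness(word: str) -> float:
    """Return a score in [-1, 1]: +1 = maximally round, -1 = maximally spiky."""
    votes = []
    for ch in word.lower():
        if ch in ROUND_CONSONANTS:
            votes.append(+1.0)
        elif ch in SPIKY_CONSONANTS:
            votes.append(-1.0)
    return sum(votes) / len(votes) if votes else 0.0

if __name__ == "__main__":
    for w in ["maluma", "taketa", "bouba", "kiki"]:
        print(f"{w:8s} roundness = {roundness(w):+.2f}")
```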
04:42
So it occurred to me, what happens if we could run these backwards? And thus was born the project called Remark, which is a collaboration with Zachary Lieberman and the Ars Electronica Futurelab. And this is an interactive installation which presents the fiction that speech casts visible shadows. So the idea is you step into a kind of a magic light. And as you do, you see the shadows of your own speech. And they sort of fly away, out of your head. If a computer speech recognition system is able to recognize what you're saying, then it spells it out. And if it isn't, then it produces a shape which is very phonaesthetically tightly coupled to the sounds you made. So let's bring up a video of that.
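As a rough illustration of the decision logic described here (recognized speech gets spelled out, unrecognized speech falls back to a phonaesthetically matched shape), the following sketch uses the SpeechRecognition package and a zero-crossing-rate heuristic as crude stand-ins for the installation's actual recognizer and shape synthesis; the 0.1 spikiness threshold is an arbitrary illustrative value.

```python
# Illustrative sketch of the decision logic (not Remark's actual code): if the
# recognizer understands you, spell the word; otherwise estimate a crude
# "spikiness" from the audio and describe the phonaesthetic shape it would cast.
import numpy as np
import speech_recognition as sr        # assumes the SpeechRecognition package + a microphone

def spikiness(clip: sr.AudioData) -> float:
    """Zero-crossing rate as a rough proxy: plosive/hissy sounds cross zero more often."""
    pcm = np.frombuffer(clip.get_raw_data(convert_rate=16000, convert_width=2), dtype=np.int16)
    if pcm.size < 2:
        return 0.0
    signs = np.signbit(pcm).astype(np.int8)
    return float(np.mean(np.abs(np.diff(signs))))

recognizer = sr.Recognizer()
with sr.Microphone() as mic:
    print("Say something...")
    clip = recognizer.listen(mic, phrase_time_limit=3)

try:
    text = recognizer.recognize_google(clip)       # needs network access
    print(f"Recognized: the shadow spells out '{text}'")
except sr.UnknownValueError:
    s = spikiness(clip)
    kind = "spiky, Taketa-like" if s > 0.1 else "blobby, Maluma-like"   # 0.1 is arbitrary
    print(f"Unrecognized: cast a {kind} shape (spikiness={s:.2f})")
```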
06:03
(Applause)
113
351000
2000
06:05
Thanks. So. And this project here,
114
353000
3000
06:08
I was working with the great abstract vocalist, Jaap Blonk.
115
356000
3000
06:11
And he is a world expert in performing "The Ursonate,"
116
359000
3000
06:14
which is a half-an-hour nonsense poem
117
362000
2000
06:16
by Kurt Schwitters, written in the 1920s,
118
364000
2000
06:18
which is half an hour of very highly patterned nonsense.
119
366000
4000
06:22
And it's almost impossible to perform.
120
370000
2000
06:24
But Jaap is one of the world experts in performing it.
121
372000
3000
06:27
And in this project we've developed
122
375000
2000
06:29
a form of intelligent real-time subtitles.
123
377000
3000
06:32
So these are our live subtitles,
124
380000
3000
06:35
that are being produced by a computer that knows the text of "The Ursonate" --
125
383000
3000
06:38
fortunately Jaap does too, very well --
126
386000
3000
06:41
and it is delivering that text at the same time as Jaap is.
127
389000
5000
06:53
So all the text you're going to see
128
401000
2000
06:55
is real-time generated by the computer,
129
403000
2000
06:57
visualizing what he's doing with his voice.
130
405000
3000
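A "computer that knows the text" and delivers it in time with the performer is essentially a score follower. The sketch below shows one minimal way such live subtitles could track a known text from a stream of syllable guesses; the Ursonate excerpt is a rough ASCII transliteration, and the matching window, similarity threshold, and upstream recognizer are all assumptions made for illustration.

```python
# Minimal sketch of "intelligent subtitles" that follow a known score. Assumes some
# upstream recognizer yields one syllable guess at a time; the window and threshold
# are invented for illustration, not taken from the installation.
from difflib import SequenceMatcher

# Rough ASCII transliteration of the poem's opening, used only as sample data.
URSONATE_OPENING = "fumms bo wo taa zaa uu pogiff kwii ee".split()

class ScoreFollower:
    def __init__(self, score, window=4):
        self.score = score
        self.pos = 0            # index of the next expected syllable
        self.window = window    # how far ahead the performer is allowed to jump

    def hear(self, syllable):
        """Advance along the score to the best match near the current position."""
        best_i, best_sim = None, 0.0
        for i in range(self.pos, min(self.pos + self.window, len(self.score))):
            sim = SequenceMatcher(None, syllable, self.score[i]).ratio()
            if sim > best_sim:
                best_i, best_sim = i, sim
        if best_i is not None and best_sim > 0.5:
            self.pos = best_i + 1
            return self.score[best_i]          # display this word as the live subtitle
        return None                            # no confident match; hold the subtitle

follower = ScoreFollower(URSONATE_OPENING)
for heard in ["fums", "bo", "woh", "ta", "zaa"]:   # simulated recognizer output
    print(heard, "->", follower.hear(heard))
```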
08:10
Here you can see the set-up, where there is a screen with the subtitles behind him.

08:34
Okay. So ...

08:36
(Applause)

08:41
The full videos are online if you are interested. I got a split reaction to that during the live performance, because there are some people who understand that live subtitles are a kind of an oxymoron, because usually there is someone making them afterwards. And then a bunch of people who were like, "What's the big deal? I see subtitles all the time on television." You know? They don't imagine the person in the booth, typing it all.
09:00
So in addition to the full body, and in addition to the voice, another thing that I've been really interested in, most recently, is the use of the eyes, or the gaze, in terms of how people relate to each other. There's a really profound amount of nonverbal information that's communicated with the eyes. And it's one of the most interesting technical challenges that's currently very active in computer science: being able to have a camera that can understand, from a fairly big distance away, how these little tiny balls are actually pointing in one way or another, to reveal what you're interested in, and where your attention is directed. So there is a lot of emotional communication that happens there.

09:33
And so I've been beginning, with a variety of different projects, to understand how people can relate to machines with their eyes. And basically to ask the questions: What if art was aware that we were looking at it? How could it respond, in a way, to acknowledge or subvert the fact that we're looking at it? And what could it do if it could look back at us? And so those are the questions that are happening in the next projects.
09:58
In the first one, which I'm going to show you, called Eyecode, it's a piece of interactive software in which, if we read this little circle, "the trace left by the looking of the previous observer looks at the trace left by the looking of the previous observer." The idea is that it's an image wholly constructed from its own history of being viewed by different people in an installation.

10:17
So let me just switch over so we can do the live demo. So let's run this and see if it works. Okay. Ah, there is lots of nice bright video. There is just a little test screen that shows that it's working. And what I'm just going to do is -- I'm going to hide that. And you can see here that what it's doing is it's recording my eyes every time I blink.

10:44
Hello? And I can ... hello ... okay. And no matter where I am, what's really going on here is that it's an eye-tracking system that tries to locate my eyes. And if I get really far away, I'm blurry. You know, you're going to have these kind of blurry spots like this that maybe only resemble eyes in a very, very abstract way. But if I come up really close and stare directly at the camera on this laptop, then you'll see these nice crisp eyes. You can think of it as a way of, sort of, typing, with your eyes. And what you're typing are recordings of your eyes as you're looking at other people's eyes. So each person is looking at the looking of everyone else before them. And this exists in larger installations where there are thousands and thousands of eyes that people could be staring at, as you see who's looking at the people looking at the people looking before them. So I'll just add a couple more. Blink. Blink. And you can see, just once again, how it's sort of finding my eyes and doing its best to estimate when it's blinking.

11:37
Alright. Let's leave that. So that's this kind of recursive observation system.

11:42
(Applause)

11:44
Thank you.
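For readers curious how an Eyecode-style loop might hang together, here is a rough sketch using OpenCV's stock Haar eye detector, with a "blink" approximated as the eyes vanishing for a couple of frames. The detector, the frame-gap heuristic, and the way crops are accumulated are illustrative guesses rather than the piece's actual eye tracker.

```python
# Rough sketch of an Eyecode-like loop (assumptions: a Haar cascade stands in for
# the piece's eye tracker, and a "blink" is just the eyes vanishing for a few frames).
import cv2

eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
cap = cv2.VideoCapture(0)

history = []            # eye crops recorded at each blink, i.e. the growing image
last_eyes = None        # most recent good detection: ((x, y, w, h), frame)
missing_frames = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=6)

    if len(eyes) > 0:
        if missing_frames >= 2 and last_eyes is not None:
            # Eyes just reappeared after a short gap: treat the gap as a blink.
            x, y, w, h = last_eyes[0]
            history.append(last_eyes[1][y:y+h, x:x+w].copy())
            print(f"blink #{len(history)} recorded")
        last_eyes = (eyes[0], frame)
        missing_frames = 0
    else:
        missing_frames += 1

    cv2.imshow("eyecode-sketch", frame)
    if cv2.waitKey(1) == 27:      # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```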
11:46
The last couple pieces I'm going to show are basically in the new realm of robotics -- for me, new for me. It's called Opto-Isolator. And I'm going to show a video of the older version of it, which is just a minute long. Okay.

12:06
In this case, the Opto-Isolator is blinking in response to one's own blinks. So it blinks one second after you do. This is a device which is intended to reduce the phenomenon of gaze down to the simplest possible materials: just one eye, looking at you, and eliminating everything else about a face, but just to consider gaze in an isolated way, as a kind of, as an element. And at the same time, it attempts to engage in what you might call familiar psycho-social gaze behaviors, like looking away if you look at it too long because it gets shy, or things like that.
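The behaviors described (blinking one second after the viewer, looking away when stared at too long) amount to a small state machine. The toy simulation below captures that logic in software only: the one-second delay comes from the talk, while the five-second shyness threshold and the simulated viewer input are invented for illustration, and the real piece of course drives hardware.

```python
# Toy simulation of Opto-Isolator's described behaviors (the shyness threshold and
# the simulated viewer are guesses for illustration; the real piece drives hardware).

BLINK_DELAY = 1.0        # from the talk: it blinks one second after you do
SHY_AFTER = 5.0          # hypothetical: look away after 5 s of sustained eye contact

class OptoIsolator:
    def __init__(self):
        self.pending_blink_at = None
        self.gaze_started_at = None

    def update(self, now, viewer_blinked, viewer_is_staring):
        actions = []
        if viewer_blinked:
            self.pending_blink_at = now + BLINK_DELAY
        if self.pending_blink_at is not None and now >= self.pending_blink_at:
            actions.append("blink")
            self.pending_blink_at = None
        if viewer_is_staring:
            self.gaze_started_at = self.gaze_started_at or now
            if now - self.gaze_started_at > SHY_AFTER:
                actions.append("look away (shy)")
                self.gaze_started_at = None
        else:
            self.gaze_started_at = None
        return actions

# Simulated interaction: the viewer blinks at t=0.2 s and stares throughout.
eye = OptoIsolator()
for step in range(70):
    t = step * 0.1
    acts = eye.update(t, viewer_blinked=(step == 2), viewer_is_staring=True)
    for a in acts:
        print(f"t={t:.1f}s  {a}")
```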
12:41
Okay. So the last project I'm going to show is this new one called Snout.

12:47
(Laughter)

12:49
It's an eight-foot snout, with a googly eye.

12:53
(Laughter)

12:54
And inside it's got an 800-pound robot arm that I borrowed,

12:59
(Laughter)

13:00
from a friend.

13:02
(Laughter)

13:03
It helps to have good friends. I'm at Carnegie Mellon; we've got a great Robotics Institute there. I'd like to show you this thing called Snout, which is -- The idea behind this project is to make a robot that appears as if it's continually surprised to see you.

13:16
(Laughter)

13:20
The idea is that basically -- if it's constantly like, "Huh? ... Huh?" That's why its other name is Doubletaker, Taker of Doubles. It's always kind of doing a double take: "What?" And the idea is basically, can it look at you and make you feel as if like, "What? Is it my shoes?" "Got something on my hair?" Here we go. Alright.

14:10
Checking him out ...

14:20
For you nerds, here's a little behind-the-scenes. It's got a computer vision system, and it tries to look at the people who are moving around the most.

14:39
Those are its targets.

14:42
Up there is the skeleton, which is actually what it's trying to do.
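The "look at whoever is moving the most" behavior can be sketched with simple frame differencing. The example below picks the most active point in a webcam image and smooths it so the gaze doesn't jitter; the blur size, noise threshold, and smoothing factor are illustrative, and a real installation would feed the target to the robot arm rather than draw a circle.

```python
# Sketch of the "look at whoever moves the most" idea described for Snout.
# Frame differencing on a webcam; the constants here are illustrative only.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
target = None            # (x, y) point the creature is currently attending to

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    motion = cv2.absdiff(gray, prev_gray)            # where did pixels change?
    motion = cv2.GaussianBlur(motion, (21, 21), 0)
    prev_gray = gray

    # Attend to the single most active spot, with a little smoothing so the
    # "gaze" doesn't jitter from frame to frame.
    _, max_val, _, max_loc = cv2.minMaxLoc(motion)
    if max_val > 25:                                 # ignore sensor noise
        if target is None:
            target = np.array(max_loc, dtype=float)
        target = 0.9 * target + 0.1 * np.array(max_loc, dtype=float)

    if target is not None:
        cv2.circle(frame, (int(target[0]), int(target[1])), 20, (0, 0, 255), 2)
    cv2.imshow("snout-sketch", frame)
    if cv2.waitKey(1) == 27:                         # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```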
14:54
It's really about trying to create a novel body language for a new creature. Hollywood does this all the time, of course. But also to have the body language communicate something to the person who is looking at it. This language is communicating that it is surprised to see you, and it's interested in looking at you.

15:08
(Laughter)

15:10
(Applause)

15:19
Thank you very much. That's all I've got for today. And I'm really happy to be here. Thank you so much.

15:24
(Applause)
