TED2009

Golan Levin: Art that looks back at you

Views 773,248

Golan Levin, an artist and engineer, uses modern tools -- robotics, new software, cognitive research -- to make artworks that surprise and delight. Watch as sounds become shapes, bodies create paintings, and a curious eye looks back at the curious viewer.

Golan Levin - Experimental audio-visual artist
Half performance artist, half software engineer, Golan Levin manipulates the computer to create improvised soundscapes with dazzling corresponding visuals. He is at the forefront of defining new parameters for art.

Hello! My name is Golan Levin.
00:12
I'm an artist and an engineer,
00:15
which is, increasingly, a more common kind of hybrid.
00:17
But I still fall into this weird crack
00:19
where people don't seem to understand me.
00:22
And I was looking around and I found this wonderful picture.
00:24
It's a letter from "Artforum" in 1967
00:28
saying "We can't imagine ever doing a special issue
00:31
on electronics or computers in art." And they still haven't.
00:34
And lest you think that you all, as the digerati, are more enlightened,
00:37
I went to the Apple iPhone app store the other day.
00:42
Where's art? I got productivity. I got sports.
00:45
And somehow the idea that one would want to make art for the iPhone,
00:49
which my friends and I are doing now,
00:53
is still not reflected in our understanding
00:55
of what computers are for.
00:58
So, from both directions, I think there is a lack of understanding
01:00
about what it could mean to be an artist who uses the materials
01:02
of his or her own day,
01:04
which I think artists are obliged to do:
01:06
to really explore the expressive potential of the new tools that we have.
01:08
In my own case, I'm an artist,
01:12
and I'm really interested in
01:14
expanding the vocabulary of human action,
01:16
and basically empowering people through interactivity.
01:18
I want people to discover themselves as actors,
01:21
as creative actors, by having interactive experiences.
01:24
A lot of my work is about trying to get away from this.
01:28
This is a photograph of the desktop of a student of mine.
01:31
And when I say desktop, I don't just mean
01:33
the actual desk, where his mouse has worn away its surface.
01:35
If you look carefully, you can even see
01:38
a hint of the Apple menu, up here in the upper left,
01:40
where the virtual world has literally
01:43
punched through to the physical.
01:45
So this is, as Joy Mountford once said,
01:47
"The mouse is probably the narrowest straw
01:51
you could try to suck all of human expression through."
01:53
(Laughter)
01:55
And the thing I'm really trying to do is enable people to have richer
01:58
kinds of interactive experiences.
02:01
How can we get away from the mouse and use our full bodies
02:03
as a way of exploring aesthetic experiences,
02:05
not necessarily utilitarian ones?
02:08
So I write software. And that's how I do it.
02:10
And a lot of my experiences
02:13
resemble mirrors in some way.
02:15
Because this is, in some sense, the first way
02:17
that people discover their own potential as actors
02:19
and discover their own agency,
02:21
by saying, "Who is that person in the mirror? Oh, it's actually me."
02:23
And so, to give an example,
02:26
this is a project from last year,
02:28
which is called the Interstitial Fragment Processor.
02:30
And it allows people to explore the negative shapes that they create
02:32
when they're just going about their everyday business.
02:36
So as people make shapes with their hands or their heads
02:53
and so forth, or with each other,
02:55
these shapes literally produce sounds and drop out of thin air --
02:57
basically taking what's often this, kind of, unseen space,
03:00
or this undetected space, and making it something real,
03:04
that people then can appreciate and become creative with.
03:07
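Conceptually, those "negative shapes" could be found with a flood fill: treat the camera image as a binary silhouette mask and look for background regions that are fully enclosed by the body. This is only a sketch of the idea, not the installation's actual software; the function name and mask format are invented for illustration.

```python
# Find "negative shapes": background regions fully enclosed by a silhouette.
# mask: 2D list where 1 = body pixel, 0 = background pixel.
def negative_shapes(mask):
    """Return the number of background regions that never touch the border."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]

    def flood(r, c):
        # Iterative flood fill over 4-connected background pixels.
        stack, touches_border = [(r, c)], False
        while stack:
            y, x = stack.pop()
            if not (0 <= y < h and 0 <= x < w):
                continue
            if seen[y][x] or mask[y][x]:
                continue
            seen[y][x] = True
            if y in (0, h - 1) or x in (0, w - 1):
                touches_border = True  # open region, not a negative shape
            stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
        return touches_border

    count = 0
    for r in range(h):
        for c in range(w):
            if not mask[r][c] and not seen[r][c] and not flood(r, c):
                count += 1  # fully enclosed: one negative shape
    return count

# Arms forming a ring: the hole in the middle is one negative shape.
ring = [[0, 1, 1, 1, 0],
        [0, 1, 0, 1, 0],
        [0, 1, 1, 1, 0]]
print(negative_shapes(ring))  # -> 1
```

In the installation these regions would then be given physics and sound; here the sketch stops at detecting them.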
So again, people discover their creative agency in this way.
03:10
And their own personalities come out
03:13
in totally unique ways.
03:15
So in addition to using full-body input,
03:18
something that I've explored now, for a while,
03:21
has been the use of the voice,
03:23
which is an immensely expressive system for us.
03:25
Song is one of our oldest ways
03:29
of making ourselves heard and understood.
03:31
And I came across this fantastic research by Wolfgang Köhler,
03:34
the so-called father of gestalt psychology, from 1927,
03:36
who submitted to an audience like yourselves
03:40
the following two shapes.
03:42
And he said one of them is called Maluma.
03:44
And one of them is called Taketa. Which is which?
03:46
Anyone want to hazard a guess?
03:48
Maluma is on top. Yeah. So.
03:52
As he says here, most people answer without any hesitation.
03:54
So what we're really seeing here is a phenomenon
03:57
called phonaesthesia,
03:59
which is a kind of synesthesia that all of you have.
04:01
And so, whereas Dr. Oliver Sacks has talked about
04:03
how perhaps one person in a million
04:05
actually has true synesthesia,
04:07
where they hear colors or taste shapes, and things like this,
04:09
phonaesthesia is something we can all experience to some extent.
04:11
It's about mappings between different perceptual domains,
04:13
like hardness, sharpness, brightness and darkness,
04:16
and the phonemes that we're able to speak with.
04:19
So 70 years on, there's been some research where
04:21
cognitive psychologists have actually sussed out
04:23
the extent to which, you know,
04:25
L, M and B are more associated with shapes that look like this,
04:27
and P, T and K are perhaps more associated with shapes like this.
04:31
And here we suddenly begin to have a mapping between sound and curvature
04:35
that we can exploit numerically,
04:37
a relative mapping between phoneme and shape.
04:39
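Exploiting that mapping numerically could be as simple as a per-letter score. The sketch below is a hypothetical illustration of the principle, not the model from the research he cites: L, M and B push a word toward "round", P, T and K toward "spiky", and the weights are invented.

```python
# Hypothetical phonaesthesia scorer: maps a word to a roundness value.
ROUND = set("lmb")  # consonants associated with curved shapes (per the talk)
SPIKY = set("ptk")  # consonants associated with jagged shapes

def curvature_score(word: str) -> float:
    """Return a score in [-1, 1]: +1 = maximally round, -1 = maximally spiky."""
    letters = [c for c in word.lower() if c.isalpha()]
    score = sum((c in ROUND) - (c in SPIKY) for c in letters)
    scored = sum(c in ROUND or c in SPIKY for c in letters) or 1
    return score / scored

print(curvature_score("maluma"))  # -> 1.0  (reads as a rounded blob)
print(curvature_score("taketa"))  # -> -1.0 (reads as a spiky star)
```

Running the mapping "backwards", as the next project does, would mean going from a score like this to a generated contour.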
So it occurred to me: what would happen if we could run these backwards?
04:42
And thus was born the project called Remark,
04:45
which is a collaboration with Zachary Lieberman
04:47
and the Ars Electronica Futurelab.
04:49
And this is an interactive installation which presents
04:51
the fiction that speech casts visible shadows.
04:53
So the idea is you step into a kind of a magic light.
04:55
And as you do, you see the shadows of your own speech.
04:58
And they sort of fly away, out of your head.
05:01
If a computer speech recognition system
05:03
is able to recognize what you're saying, then it spells it out.
05:06
And if it isn't, then it produces a shape which is phonaesthetically
05:10
tightly coupled to the sounds you made.
05:12
So let's bring up a video of that.
05:14
(Applause)
06:03
Thanks. So. For this project here,
06:05
I was working with the great abstract vocalist, Jaap Blonk.
06:08
And he is a world expert in performing "The Ursonate,"
06:11
which is a half-hour nonsense poem
06:14
by Kurt Schwitters, written in the 1920s --
06:16
half an hour of very highly patterned nonsense.
06:18
It's almost impossible to perform,
06:22
but Jaap is one of the few people in the world who can.
06:24
And in this project we've developed
06:27
a form of intelligent real-time subtitles.
06:29
So these are our live subtitles,
06:32
that are being produced by a computer that knows the text of "The Ursonate" --
06:35
fortunately Jaap does too, very well --
06:38
and it is delivering that text at the same time as Jaap is.
06:41
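A "computer that knows the text" suggests a score-following approach: rather than open-vocabulary recognition, the system only has to track its position in a known script. A minimal sketch under that assumption -- the class name and the ASCII-ised opening words of "The Ursonate" are my own approximation, not the project's code:

```python
# Score following for live subtitles: the script is known in advance,
# so each recognized fragment only needs to advance a position pointer.
SCRIPT = "Fumms bo wo taa zaa Uu pogiff kwii Ee".split()

class ScoreFollower:
    def __init__(self, script):
        self.script = script
        self.pos = 0  # index of the next expected word

    def hear(self, fragment: str) -> str:
        """Match a recognized fragment against a small lookahead window,
        advance past it if found, and return the subtitle text so far."""
        window = self.script[self.pos:self.pos + 3]
        for offset, word in enumerate(window):
            if word.lower().startswith(fragment.lower()):
                self.pos += offset + 1
                break  # on no match, hold position instead of guessing
        return " ".join(self.script[:self.pos])

follower = ScoreFollower(SCRIPT)
print(follower.hear("fumms"))  # -> "Fumms"
print(follower.hear("bo"))     # -> "Fumms bo"
```

The lookahead window is what makes it robust to the recognizer missing a word: the follower can skip ahead a little without losing its place.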
So all the text you're going to see
06:53
is real-time generated by the computer,
06:55
visualizing what he's doing with his voice.
06:57
Here you can see the set-up where there is a screen with the subtitles behind him.
08:10
Okay. So ...
08:34
(Applause)
08:36
The full videos are online if you are interested.
08:41
I got a split reaction to that during the live performance,
08:43
because there are some people who understand that
08:45
live subtitles are a kind of oxymoron,
08:47
because usually there is someone making them afterwards.
08:49
And then a bunch of people who were like, "What's the big deal?
08:52
I see subtitles all the time on television."
08:55
You know? They don't imagine the person in the booth, typing it all.
08:57
So in addition to the full body, and in addition to the voice,
09:00
another thing that I've been really interested in,
09:03
most recently, is the use of the eyes,
09:05
or the gaze, in terms of how people relate to each other.
09:07
A really profound amount of nonverbal information
09:13
is communicated with the eyes.
09:13
And it's one of the most interesting technical challenges
09:15
that's currently very active in computer science:
09:17
being able to have a camera that can understand,
09:19
from a fairly big distance away,
09:21
how these little tiny balls are actually pointing in one way or another
09:23
to reveal what you're interested in,
09:26
and where your attention is directed.
09:28
So there is a lot of emotional communication that happens there.
09:30
And so I've been beginning, with a variety of different projects,
09:33
to understand how people can relate to machines with their eyes.
09:37
And basically to ask the questions:
09:40
What if art was aware that we were looking at it?
09:43
How could it respond, in a way,
09:48
to acknowledge or subvert the fact that we're looking at it?
09:50
And what could it do if it could look back at us?
09:53
And so those are the questions driving the next projects.
09:56
The first one I'm going to show you, called Eyecode,
09:58
is a piece of interactive software
10:01
in which, if we read this little circle:
10:03
"the trace left by the looking of the previous observer
10:05
looks at the trace left by the looking of the previous observer."
10:08
The idea is that it's an image wholly constructed
10:11
from its own history of being viewed
10:13
by different people in an installation.
10:15
So let me just switch over so we can do the live demo.
10:17
So let's run this and see if it works.
10:22
Okay. Ah, there is lots of nice bright video.
10:26
There is just a little test screen that shows that it's working.
10:29
And what I'm just going to do is -- I'm going to hide that.
10:31
And you can see here that what it's doing
10:33
is it's recording my eyes every time I blink.
10:35
Hello? And I can ... hello ... okay.
10:44
And no matter where I am, what's really going on here
10:48
is that it's an eye-tracking system that tries to locate my eyes.
10:50
And if I get really far away I'm blurry.
10:53
You know, you're going to have these kinds of blurry spots like this
10:55
that maybe only resemble eyes in a very, very abstract way.
10:57
But if I come up really close and stare directly at the camera
11:00
on this laptop then you'll see these nice crisp eyes.
11:03
You can think of it as a way of, sort of, typing, with your eyes.
11:05
And what you're typing are recordings of your eyes
11:09
as you're looking at other peoples' eyes.
11:11
So each person is looking at the looking
11:13
of everyone else before them.
11:16
And this exists in larger installations
11:18
where there are thousands and thousands of eyes
11:20
that people could be staring at,
11:22
as you see who's looking at the people looking
11:24
at the people looking before them.
11:26
So I'll just add a couple more. Blink. Blink.
11:28
And you can see, just once again, how it's sort of finding my eyes
11:31
and doing its best to estimate when it's blinking.
11:34
Alright. Let's leave that.
11:37
So that's this kind of recursive observation system.
11:39
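The behavior demonstrated -- record the eyes each time they blink -- could reduce to detecting short gaps in an eye tracker's per-frame detections. This is an assumed sketch of that logic, not the piece's real code; thresholds and names are invented.

```python
# Blink detection over an eye tracker's output stream.
# eye_visible: per-frame booleans, True = eyes located in that frame.
def detect_blinks(eye_visible, max_gap=5):
    """Yield the frame index at which each blink ends.
    A blink = eyes vanish for 1..max_gap consecutive frames, then return;
    longer gaps are treated as tracking loss, not blinks."""
    gap = 0
    for i, visible in enumerate(eye_visible):
        if not visible:
            gap += 1
        else:
            if 1 <= gap <= max_gap:
                yield i  # eyes reappeared: a snapshot would be saved here
            gap = 0

# 1 = eyes found, 0 = eyes lost (closed, or the tracker failed)
frames = [1, 1, 0, 0, 1, 1, 1, 0, 1, 1]
print(list(detect_blinks(frames)))  # -> [4, 8]
```

In the installation, each yielded index would trigger saving a short clip of the eyes, which becomes one tile in the mosaic of previous observers.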
(Applause)
11:42
Thank you.
11:44
The last couple of pieces I'm going to show
11:46
are basically in the realm of robotics -- new for me, anyway.
11:48
It's called Opto-Isolator.
11:50
And I'm going to show a video of the older version of it,
11:52
which is just a minute long. Okay.
11:55
In this case, the Opto-Isolator is blinking
12:06
in response to one's own blinks.
12:08
So it blinks one second after you do.
12:10
This is a device which is intended to reduce
12:13
the phenomenon of gaze down to the simplest possible materials:
12:16
just one eye,
12:19
looking at you, eliminating everything else about a face,
12:21
so as to consider gaze in an isolated way,
12:23
as an element.
12:26
And at the same time, it attempts to engage in what you might call
12:29
familiar psycho-social gaze behaviors.
12:32
Like looking away if you look at it too long
12:34
because it gets shy,
12:36
or things like that.
12:38
Okay. So the last project I'm going to show
12:41
is this new one called Snout.
12:44
(Laughter)
12:47
It's an eight-foot snout,
12:49
with a googly eye.
12:51
(Laughter)
12:53
And inside it's got an 800-pound robot arm
12:54
that I borrowed,
12:57
(Laughter)
12:59
from a friend.
13:00
(Laughter)
13:02
It helps to have good friends.
13:03
I'm at Carnegie Mellon; we've got a great Robotics Institute there.
13:05
I'd like to show you this thing called Snout, which is --
13:08
The idea behind this project is to
13:10
make a robot that appears as if it's continually surprised to see you.
13:12
(Laughter)
13:16
The idea is that basically --
13:20
if it's constantly like "Huh? ... Huh?"
13:22
That's why its other name is Doubletaker, Taker of Doubles.
13:24
It's always kind of doing a double take: "What?"
13:28
And the idea is basically, can it look at you
13:30
and make you feel as if,
13:32
"What? Is it my shoes?"
13:34
"Got something on my hair?" Here we go. Alright.
13:36
Checking him out ...
14:10
For you nerds, here's a little behind-the-scenes.
14:20
It's got a computer vision system,
14:22
and it tries to look at the people who are moving around the most.
14:24
Those are its targets.
14:39
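Targeting "the people who are moving around the most" is classically done by frame differencing: subtract consecutive camera frames and steer toward the largest change. A toy sketch of that idea -- illustrative only; the real vision system presumably does far more:

```python
# Pick a gaze target by frame differencing: the grid cell whose brightness
# changed the most between two frames is where the most motion happened.
def pick_target(prev_frame, curr_frame):
    """Frames are equally sized 2D lists of brightness values.
    Return the (row, col) of the cell with the largest absolute change."""
    best, target = -1, (0, 0)
    for r, (prev_row, curr_row) in enumerate(zip(prev_frame, curr_frame)):
        for c, (p, q) in enumerate(zip(prev_row, curr_row)):
            motion = abs(q - p)
            if motion > best:
                best, target = motion, (r, c)
    return target

prev = [[10, 10, 10],
        [10, 10, 10]]
curr = [[10, 10, 10],
        [10, 90, 10]]  # someone moved in the middle of the bottom row
print(pick_target(prev, curr))  # -> (1, 1)
```

The chosen cell would then be fed to the robot arm's controller as the point for the googly eye to stare at.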
Up there is the skeleton,
14:42
which shows what it's actually trying to do.
14:44
It's really about trying to create a novel body language for a new creature.
14:54
Hollywood does this all the time, of course.
14:57
But the goal is also to have the body language communicate something
14:59
to the person who is looking at it.
15:01
This language is communicating that it is surprised to see you,
15:03
and it's interested in looking at you.
15:05
(Laughter)
15:08
(Applause)
15:10
Thank you very much. That's all I've got for today.
15:19
And I'm really happy to be here. Thank you so much.
15:21
(Applause)
15:24


About the speaker:

Golan Levin - Experimental audio-visual artist

Why you should listen

Having worked as an academic at MIT and a researcher specializing in computer technology and software engineering, Golan Levin now spends most of his time working as a performance artist. Rest assured his education hasn't gone to waste, however, as Levin blends high tech and customized software programs to create his own extraordinary audio and visual compositions. The results are inordinately experimental sonic and visual extravaganzas from the furthest left of the field.

Many of his pieces force audience participation, such as Dialtones: A Telesymphony, a concert from 2001 entirely composed of the choreographed ringtones of his audience. Regularly exhibiting pieces in galleries around the world, and also working as an Assistant Professor of Electronic Time-Based Art at Carnegie Mellon University, Levin is unapologetically pushing boundaries to define a brave new world of what is possible.

His latest piece, Double-Taker (Snout), is installed at the Pittsburgh Museum of Art.
