ABOUT THE SPEAKER
Tom Chatfield - Gaming theorist
Tom Chatfield thinks about games -- what we want from them, what we get from them, and how we might use our hard-wired desire for a gamer's reward to change the way we learn.

Why you should listen

It can be difficult to wrap one's mind around the size and the reach of modern video- and online-game culture. But gaming is not only outstripping more-traditional media in revenue (it overtook music in 2008), it's become a powerful lens to re-examine our culture at large. Tom Chatfield, a longtime gamer, is the arts and books editor at the UK current-affairs magazine Prospect. In his book Fun Inc., he argues that games, with their immersive quests and deeply satisfying (and carefully designed) virtual rewards, are a great place to test new approaches to real-world systems that need a reboot.

More than a game journalist, Chatfield is a game theorist, looking at neurological research on how games engage our pleasure centers -- and then looking at a world where millions of videogame-veteran Generation Z'ers are entering the workforce and the voters' rolls. They're good with complex rule sets; they're used to forming ad hoc groups to reach a goal; and they love to tweak and mod existing systems. What if society harnessed that energy to redefine learning? Or voting? Understanding the psychology of the videogame reward schedule, Chatfield believes, is not only important for understanding the world of our children -- it's a stepping stone to improving our world right now.

Tom Chatfield | Speaker | TED.com
TEDGlobal 2010

Tom Chatfield: 7 ways games reward the brain

1,288,061 views

We're bringing gameplay into more aspects of our lives, spending countless hours -- and real money -- exploring virtual worlds for imaginary treasures. Why? As Tom Chatfield shows, games are perfectly tuned to dole out rewards that engage the brain and keep us questing for more.


00:15
I love video games. I'm also slightly in awe of them. I'm in awe of their power in terms of imagination, in terms of technology, in terms of concept. But I think, above all, I'm in awe of their power to motivate, to compel us, to transfix us, like really nothing else we've ever invented has quite done before. And I think that we can learn some pretty amazing things by looking at how we do this. And in particular, I think we can learn things about learning.
00:51
Now the video games industry is far and away the fastest growing of all modern media. From about 10 billion in 1990, it's worth 50 billion dollars globally today, and it shows no sign of slowing down. In four years' time, it's estimated it'll be worth over 80 billion dollars. That's about three times the recorded music industry. This is pretty stunning, but I don't think it's the most telling statistic of all. The thing that really amazes me is that, today, people spend about eight billion real dollars a year buying virtual items that only exist inside video games.
01:34
This is a screenshot from the virtual game world, Entropia Universe. Earlier this year, a virtual asteroid in it sold for 330,000 real dollars. And this is a Titan class ship in the space game, EVE Online. And this virtual object takes 200 real people about 56 days of real time to build, plus countless thousands of hours of effort before that. And yet, many of these get built.
02:07
At the other end of the scale, the game Farmville, which you may well have heard of, has 70 million players around the world, and most of these players are playing it almost every day. This may all sound really quite alarming to some people, an index of something worrying or wrong in society.
02:28
But we're here for the good news, and the good news is that I think we can explore why this very real human effort, this very intense generation of value, is occurring. And by answering that question, I think we can take something extremely powerful away. And I think the most interesting way to think about how all this is going on is in terms of rewards. And specifically, it's in terms of the very intense emotional rewards that playing games offers to people both individually and collectively.
03:04
Now if we look at what's going on in someone's head when they are being engaged, two quite different processes are occurring. On the one hand, there's the wanting processes. This is a bit like ambition and drive -- I'm going to do that. I'm going to work hard. On the other hand, there's the liking processes: fun and affection and delight and an enormous flying beast with an orc on the back. It's a really great image. It's pretty cool. It's from the game World of Warcraft, which has more than 10 million players globally, one of whom is me, another of whom is my wife. And this kind of a world, this vast flying beast you can ride around, shows why games are so very good at doing both the wanting and the liking. Because it's very powerful. It's pretty awesome. It gives you great powers. Your ambition is satisfied, but it's very beautiful. It's a very great pleasure to fly around. And so these combine to form a very intense emotional engagement. But this isn't the really interesting stuff.
03:59
The really interesting stuff about virtuality is what you can measure with it. Because what you can measure in virtuality is everything. Every single thing that every single person who's ever played in a game has ever done can be measured. The biggest games in the world today are measuring more than one billion points of data about their players, about what everybody does -- far more detail than you'd ever get from any website. And this allows something very special to happen in games. It's something called the reward schedule. And by this, I mean looking at what millions upon millions of people have done and carefully calibrating the rate, the nature, the type, the intensity of rewards in games to keep them engaged over staggering amounts of time and effort.
04:46
Now, to try and explain this in sort of real terms, I want to talk about a kind of task that might fall to you in so many games: go and get a certain amount of a certain little game-y item. Let's say, for the sake of argument, my mission is to get 15 pies, and I can get 15 pies by killing these cute, little monsters. Simple game quest. Now you can think about this, if you like, as a problem about boxes. I've got to keep opening boxes. I don't know what's inside them until I open them. And I go around opening box after box until I've got 15 pies.
05:22
Now, if you take a game like Warcraft, you can think about it, if you like, as a great box-opening effort. The game's just trying to get people to open about a million boxes, getting better and better stuff in them. This sounds immensely boring, but games are able to make this process incredibly compelling. And the way they do this is through a combination of probability and data.
05:48
Let's think about probability. If we want to engage someone in the process of opening boxes to try and find pies, we want to make sure it's neither too easy, nor too difficult, to find a pie. So what do you do? Well, you look at a million people -- no, 100 million people, 100 million box openers -- and you work out, if you make the pie rate about 25 percent, that's neither too frustrating, nor too easy. It keeps people engaged. But of course, that's not all you do -- there's 15 pies. Now, I could make a game called Piecraft, where all you had to do was get a million pies or a thousand pies. That would be very boring. Fifteen is a pretty optimal number. You find that, you know, between five and 20 is about the right number for keeping people going.
06:31
But we don't just have pies in the boxes. There's 100 percent up here. And what we do is make sure that every time a box is opened, there's something in it, some little reward that keeps people progressing and engaged. In most adventure games, it's a little bit of in-game currency, a little bit of experience. But we don't just do that either. We also say there's going to be loads of other items of varying qualities and levels of excitement. There's going to be a 10 percent chance you get a pretty good item. There's going to be a 0.1 percent chance you get an absolutely awesome item. And each of these rewards is carefully calibrated to the item. And also, we say, "Well, how many monsters? Should I have the entire world full of a billion monsters?" No, we want one or two monsters on the screen at any one time. So I'm drawn on. It's not too easy, not too difficult.
07:15
So all this is very powerful. But we're in virtuality. These aren't real boxes. So we can do some rather amazing things. We notice, looking at all these people opening boxes, that when people get to about 13 out of 15 pies, their perception shifts, they start to get a bit bored, a bit testy. They're not rational about probability. They think this game is unfair. It's not giving me my last two pies. I'm going to give up. If they're real boxes, there's not much we can do, but in a game we can just say, "Right, well. When you get to 13 pies, you've got a 75 percent chance of getting a pie now." That keeps you engaged. Look at what people do -- adjust the world to match their expectation. Our games don't always do this. And one thing they certainly do at the moment is, if you got a 0.1 percent awesome item, they make very sure another one doesn't appear for a certain length of time, to keep the value, to keep it special.
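The adjustment described here -- quietly raising the drop rate once a player is close to finishing -- can be sketched in a few lines. The 25 and 75 percent figures and the 13-pie threshold come from the talk; everything else is a hypothetical illustration.

```python
import random

def pie_chance(pies_so_far: int) -> float:
    """Drop rate that bends to match the player's expectation:
    25% normally, but 75% once they're within two pies of the goal."""
    return 0.75 if pies_so_far >= 13 else 0.25

def run_quest(rng: random.Random, pies_needed: int = 15) -> int:
    """Boxes opened to finish the quest under the adjusted drop rate."""
    pies = boxes = 0
    while pies < pies_needed:
        boxes += 1
        if rng.random() < pie_chance(pies):
            pies += 1
    return boxes

rng = random.Random(1)
avg = sum(run_quest(rng) for _ in range(1000)) / 1000
print(avg)  # roughly 13/0.25 + 2/0.75, i.e. about 55 boxes
```

The boost shaves the frustrating tail off the quest: the last two pies arrive in about three boxes instead of eight, which is exactly the window where players were giving up.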
08:04
And the point is really that we evolved to be satisfied by the world in particular ways. Over tens and hundreds of thousands of years, we evolved to find certain things stimulating, and as very intelligent, civilized beings, we're enormously stimulated by problem solving and learning. But now, we can reverse engineer that and build worlds that expressly tick our evolutionary boxes.
08:27
So what does all this mean in practice? Well, I've come up with seven things that, I think, show how you can take these lessons from games and use them outside of games.
08:40
The first one is very simple: experience bars measuring progress -- something that's been talked about brilliantly by people like Jesse Schell earlier this year. It's already been done at the University of Indiana in the States, among other places. It's the simple idea that instead of grading people incrementally in little bits and pieces, you give them one profile character avatar which is constantly progressing in tiny, tiny, tiny little increments which they feel are their own. And everything comes towards that, and they watch it creeping up, and they own that as it goes along.
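A minimal sketch of the experience-bar idea: instead of a handful of graded marks, every small action feeds one continuously creeping total that only ever goes up. The XP amounts and level curve here are invented for illustration.

```python
class ExperienceBar:
    """One avatar, one total, progressing in tiny increments."""

    def __init__(self, xp_per_level: int = 100):
        self.xp = 0
        self.xp_per_level = xp_per_level

    def credit(self, amount: int) -> None:
        """Reward effort: XP only ever increases."""
        self.xp += max(0, amount)

    @property
    def level(self) -> int:
        return self.xp // self.xp_per_level

    @property
    def progress(self) -> float:
        """Fraction of the bar filled toward the next level."""
        return (self.xp % self.xp_per_level) / self.xp_per_level

bar = ExperienceBar()
for action in ["question", "question", "on time", "collaborated"]:
    bar.credit(30)   # every little bit of effort gets credit
print(bar.level, bar.progress)  # 1 0.2
```

The key design choice, as the talk notes, is that the number is cumulative and personal: there is no operation for taking XP away, only for creeping forward.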
09:09
Second, multiple long and short-term aims -- 5,000 pies, boring; 15 pies, interesting. So, you give people lots and lots of different tasks. You say, it's about doing 10 of these questions, but another task is turning up to 20 classes on time, but another task is collaborating with other people, another task is showing your working five times, another task is hitting this particular target. You break things down into these calibrated slices that people can choose and do in parallel to keep them engaged, and that you can use to point them towards individually beneficial activities.
09:48
Third, you reward effort. It's your 100 percent factor. Games are brilliant at this. Every time you do something, you get credit; you get a credit for trying. You don't punish failure. You reward every little bit of effort -- a little bit of gold, a little bit of credit. You've done 20 questions -- tick. It all feeds in as minute reinforcement.
10:05
Fourth, feedback. This is absolutely crucial, and virtuality is dazzling at delivering this. If you look at some of the most intractable problems in the world today that we've been hearing amazing things about, it's very, very hard for people to learn if they cannot link consequences to actions. Pollution, global warming, these things -- the consequences are distant in time and space. It's very hard to learn, to feel a lesson. But if you can model things for people, if you can give things to people that they can manipulate and play with and where the feedback comes, then they can learn a lesson, they can see, they can move on, they can understand.
10:39
And fifth, the element of uncertainty. Now this is the neurological goldmine, if you like, because a known reward excites people, but what really gets them going is the uncertain reward, the reward pitched at the right level of uncertainty, that they didn't quite know whether they were going to get it or not. The 25 percent. This lights the brain up. And if you think about using this in testing, in just introducing controlled elements of randomness in all forms of testing and training, you can transform the levels of people's engagement by tapping into this very powerful evolutionary mechanism.
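A sketch of what "controlled randomness" in testing and training could look like: a correct answer always earns base credit (the 100 percent tier), and with 25 percent probability -- the uncertainty level the talk singles out -- a surprise bonus on top. The point values and function name are invented for illustration.

```python
import random

def score_answer(correct: bool, rng: random.Random,
                 base: int = 10, bonus: int = 25,
                 p_bonus: float = 0.25) -> int:
    """Always reward effort; sometimes add an uncertain bonus."""
    if not correct:
        return 0          # no credit, but no penalty either
    points = base
    if rng.random() < p_bonus:
        points += bonus   # the uncertain reward that lights the brain up
    return points

rng = random.Random(2)
scores = [score_answer(True, rng) for _ in range(1000)]
print(sum(scores) / len(scores))  # near 10 + 0.25 * 25 = 16.25
```

The average payout is predictable, but any single answer is not -- which is the property the talk claims keeps people coming back.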
11:18
When we don't quite predict something perfectly, we get really excited about it. We just want to go back and find out more. As you probably know, the neurotransmitter associated with learning is called dopamine. It's associated with reward-seeking behavior. And something very exciting is just beginning to happen in places like the University of Bristol in the U.K., where we are beginning to be able to model mathematically dopamine levels in the brain. And what this means is we can predict learning, we can predict enhanced engagement, these windows, these windows of time, in which the learning is taking place at an enhanced level.
11:52
And two things really flow from this. The first has to do with memory, that we can find these moments. When someone is more likely to remember, we can give them a nugget in a window. And the second thing is confidence, that we can see how game-playing and reward structures make people braver, make them more willing to take risks, more willing to take on difficulty, harder to discourage.
12:13
This can all seem very sinister, you know, sort of "our brains have been manipulated; we're all addicts." The word "addiction" is thrown around. There are real concerns there. But the biggest neurological turn-on for people is other people. This is what really excites us. In reward terms, it's not money; it's not being given cash -- that's nice -- it's doing stuff with our peers, watching us, collaborating with us.
12:37
And I want to tell you a quick story about 1999 -- a video game called EverQuest. And in this video game, there were two really big dragons, and you had to team up to kill them -- up to 42 people to kill these big dragons. That's a problem, because they dropped two or three decent items. So players addressed this problem by spontaneously coming up with a system to motivate each other, fairly and transparently. What happened was, they paid each other a virtual currency they called "dragon kill points." And every time you turned up to go on a mission, you got paid in dragon kill points. They tracked these on a separate website. So they tracked their own private currency, and then players could bid afterwards for cool items they wanted -- all organized by the players themselves.
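The system the players invented boils down to a small shared ledger: everyone who turns up is paid points, and loot goes to whoever bids the most points they can afford. A minimal sketch, with invented player names and point values -- real DKP systems varied widely in their rules:

```python
class DKPLedger:
    """A toy 'dragon kill points' ledger: attendance pay plus auctions."""

    def __init__(self):
        self.points: dict[str, int] = {}

    def pay_attendance(self, raiders: list[str], points: int) -> None:
        """Every attendee earns points just for turning up."""
        for name in raiders:
            self.points[name] = self.points.get(name, 0) + points

    def auction(self, item: str, bids: dict[str, int]) -> str:
        """Highest affordable bid wins; the winner's points are spent."""
        valid = {n: b for n, b in bids.items() if b <= self.points.get(n, 0)}
        winner = max(valid, key=valid.get)
        self.points[winner] -= valid[winner]
        return winner

ledger = DKPLedger()
for _ in range(3):                      # three raids' worth of attendance
    ledger.pay_attendance(["Ana", "Bo", "Cy"], 10)
winner = ledger.auction("dragon scale", {"Ana": 25, "Bo": 30, "Cy": 15})
print(winner, ledger.points)
```

The point the talk makes is that this is self-enforcing: because the ledger is public and bids are spent, showing up reliably is what buys you loot, with no central authority needed.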
13:23
Now the staggering thing is not just that this worked in EverQuest, but that today, a decade on, every single video game in the world with this kind of task uses a version of this system -- tens of millions of people. And the success rate is close to 100 percent. This is a player-developed, self-enforcing, voluntary currency, and it's incredibly sophisticated player behavior.
13:50
And I just want to end by suggesting a few ways in which these principles could fan out into the world. Let's start with business. I mean, we're beginning to see some of the big problems around something like business are recycling and energy conservation. We're beginning to see the emergence of wonderful technologies like real-time energy meters. And I just look at this, and I think, yes, we could take that so much further: by allowing people to set targets, by setting calibrated targets, by using elements of uncertainty, by using these multiple targets, by using a grand, underlying reward and incentive system, by setting people up to collaborate in terms of groups, in terms of streets, to collaborate and compete, to use these very sophisticated group and motivational mechanics we see.
14:35
In terms of education, perhaps most obviously of all, we can transform how we engage people. We can offer people the grand continuity of experience and personal investment. We can break things down into highly calibrated small tasks. We can use calculated randomness. We can reward effort consistently as everything feeds together. And we can use the kind of group behaviors that we see evolving when people are at play together, these really quite unprecedentedly complex cooperative mechanisms.
15:08
Government: well, one thing that comes to mind is that the U.S. government, among others, is literally starting to pay people to lose weight. So we're seeing financial reward being used to tackle the great issue of obesity. But again, those rewards could be calibrated so precisely if we were able to use the vast expertise of gaming systems to just jack up that appeal, to take the data, to take the observations of millions of human hours and plow that feedback into increasing engagement.
15:40
And in the end, it's this word, "engagement," that I want to leave you with. It's about how individual engagement can be transformed by the psychological and the neurological lessons we can learn from watching people that are playing games. But it's also about collective engagement, and about the unprecedented laboratory for observing what makes people tick and work and play and engage on a grand scale in games.
16:08
And if we can look at these things and learn from them and see how to turn them outwards, then I really think we have something quite revolutionary on our hands.

16:16
Thank you very much.

(Applause)
