TED2011

Deb Roy: The birth of a word

デブ・ロイ「初めて言えた時」

March 4, 2011

MIT researcher Deb Roy wired his house with cameras to work out how his infant son acquires language. He recorded (with a few exceptions) every moment of his child's daily life, sifted through 90,000 hours of home video, and lets us hear how "gaa" slowly turned into "water" over time. How do we learn? He presents research findings overflowing with astonishing data, woven together with deep insight.

Deb Roy - Cognitive scientist
Deb Roy studies how children learn language, and designs machines that learn to communicate in human-like ways. On sabbatical from MIT Media Lab, he's working with the AI company Bluefin Labs. Full bio

Imagine if you could record your life --
人生を記録できるとしたらどうでしょう
00:15
everything you said, everything you did,
どんな発言も どんな振る舞いも
00:19
available in a perfect memory store at your fingertips,
手近な記憶装置に残しておけたら
00:22
so you could go back
過去に戻って
00:25
and find memorable moments and relive them,
心に残る思い出を 再生したり
00:27
or sift through traces of time
時間の流れを精査することで
00:30
and discover patterns in your own life
見過ごしていた
00:33
that previously had gone undiscovered.
生活パターンを見つけたりできます
00:35
Well that's exactly the journey
それこそ まさに
00:38
that my family began
私たち一家が
00:40
five and a half years ago.
5年半前に始めた旅なのです
00:42
This is my wife and collaborator, Rupal.
妻と 協力者ルーパルです
00:44
And on this day, at this moment,
この日 この時
00:47
we walked into the house with our first child,
第一子を迎え入れました
00:49
our beautiful baby boy.
かわいい男の子です
00:51
And we walked into a house
家には かなり特殊な
00:53
with a very special home video recording system.
ホームビデオの撮影機材を取り付けました
00:56
(Video) Man: Okay.
「じゃあ撮るよ」
01:07
Deb Roy: This moment
ほかにも
01:10
and thousands of other moments special for us
我が家の貴重な瞬間を
01:11
were captured in our home
たっぷり撮影しました
01:14
because in every room in the house,
どの部屋にも
01:16
if you looked up, you'd see a camera and a microphone,
見上げればカメラとマイクがあって
01:18
and if you looked down,
そこから下を眺めれば
01:21
you'd get this bird's-eye view of the room.
部屋を一望できます
01:23
Here's our living room,
リビング
01:25
the baby bedroom,
赤ちゃんの寝室
01:28
kitchen, dining room
キッチン ダイニング
01:31
and the rest of the house.
残りの部屋も
01:33
And all of these fed into a disc array
ディスク記録装置に全部送って
01:35
that was designed for a continuous capture.
継続的に記録しました
01:38
So here we are flying through a day in our home
一日の流れを見渡せます
01:41
as we move from sunlit morning
日の差す明け方から
01:44
through incandescent evening
燃えるような夕刻を迎え
01:47
and, finally, lights out for the day.
消灯して 一日を終えます
01:49
Over the course of three years,
3年間
01:53
we recorded eight to 10 hours a day,
毎日 8から10時間
01:56
amassing roughly a quarter-million hours
合計 約25万時間に及ぶ
01:58
of multi-track audio and video.
マルチトラックの音声と映像の記録です
02:01
So you're looking at a piece of what is by far
史上初の 壮大な
02:04
the largest home video collection ever made.
ホームビデオ全集なのです
02:06
(Laughter)
(笑い)
02:08
And what this data represents
私たち家族は
02:11
for our family at a personal level,
意義ある記録だと感じています
02:13
the impact has already been immense,
大きな衝撃を受けながら
02:17
and we're still learning its value.
今も 真価を探っています
02:19
Countless moments
意識せず 自然体で
02:22
of unsolicited natural moments, not posed moments,
身構えもしない膨大な時間が
02:24
are captured there,
記録されていますので
02:27
and we're starting to learn how to discover them and find them.
どう調べるか検討を始めたところです
02:29
But there's also a scientific reason that drove this project,
この取り組みには 科学的理由もあります
02:32
which was to use this natural longitudinal data
時系列に連なる この生データから
02:35
to understand the process
子どもの言語習得の過程を
02:39
of how a child learns language --
把握したいのです
02:41
that child being my son.
対象は 私の息子です
02:43
And so with many privacy provisions put in place
プライバシー保護規定を設けて
02:45
to protect everyone who was recorded in the data,
被写体のプライバシーを守ったうえで
02:49
we made elements of the data available
データを 部分的に参照可能にして
02:52
to my trusted research team at MIT
MITの信頼できる研究チームに開放しました
02:55
so we could start teasing apart patterns
こうして 一連の膨大なデータから
02:58
in this massive data set,
パターンを抽出できるようになり
03:01
trying to understand the influence of social environments
言語習得過程における社会環境の影響を探る―
03:04
on language acquisition.
挑戦が始まりました
03:07
So we're looking here
ご覧いただいているのは
03:09
at one of the first things we started to do.
初めて手をつけた解析です
03:11
This is my wife and I cooking breakfast in the kitchen,
キッチンで妻と私が朝食を作っています
03:13
and as we move through space and through time,
空間的・時間的に眺めると
03:17
a very everyday pattern of life in the kitchen.
キッチンでの毎日の生活パターンが現れます
03:20
In order to convert
この とらえ所のない
03:23
this opaque, 90,000 hours of video
9万時間の映像を
03:25
into something that we could start to see,
理解可能にするため
03:28
we use motion analysis to pull out,
動作分析の手法を使って
03:30
as we move through space and through time,
空間軸と時間軸でとらえ
03:32
what we call space-time worms.
「時空の虫」を描き出しました
03:34
And this has become part of our toolkit
このツールを使うと
03:37
for being able to look and see
活動が生じたデータ位置を
03:40
where the activities are in the data,
把握できます
03:43
and with it, trace the pattern of, in particular,
特に 子どもの動き回るパターンを
03:45
where my son moved throughout the home,
追跡できるので
03:48
so that we could focus our transcription efforts,
書き起こしに専念できました
03:50
all of the speech environment around my son --
息子を取り巻く会話環境
03:53
all of the words that he heard from myself, my wife, our nanny,
私 妻 おばあさんが口にした言葉
03:56
and over time, the words he began to produce.
やがて息子の口から出る言葉 すべてです
03:59
So with that technology and that data
この技術があって データがあって
04:02
and the ability to, with machine assistance,
装置も使って
04:05
transcribe speech,
会話を書き起こせたので
04:07
we've now transcribed
家族の発した言葉を
04:09
well over seven million words of our home transcripts.
700万以上書き起こせました
04:11
And with that, let me take you now
その成果を使って 初めて
04:14
for a first tour into the data.
データの旅へご案内いたします
04:16
So you've all, I'm sure,
低速度撮影の映像は
04:19
seen time-lapse videos
経験があると思います
04:21
where a flower will blossom as you accelerate time.
早送りで開花を見せる映像です
04:23
I'd like you to now experience
今回は 言葉の開花に
04:26
the blossoming of a speech form.
立ち会っていただきます
04:28
My son, soon after his first birthday,
息子は1歳になってすぐ
04:30
would say "gaga" to mean water.
ウォーター (water) をガガ (gaga)と言い
04:32
And over the course of the next half-year,
それから半年かけて ゆっくりと
04:35
he slowly learned to approximate
大人のように正確な
04:38
the proper adult form, "water."
ウォーター (water)に近づきました
04:40
So we're going to cruise through half a year
では 半年間を
04:43
in about 40 seconds.
40秒で体験します
04:45
No video here,
映像はありません
04:47
so you can focus on the sound, the acoustics,
初となる 音の軌跡に
04:49
of a new kind of trajectory:
耳を澄ませてください
04:52
gaga to water.
ガガ (gaga) から ウォーター (water) へ
04:54
(Audio) Baby: Gagagagagaga
(赤ちゃん)「Gagagagagaga」
04:56
Gaga gaga gaga
「Gaga gaga gaga」
05:08
guga guga guga
「guga guga guga」
05:12
wada gaga gaga guga gaga
「wada gaga gaga guga gaga」
05:17
wader guga guga
「wader guga guga」
05:22
water water water
「water water water」
05:26
water water water
「water water water」
05:29
water water
「water water」
05:35
water.
「water.」
05:39
DR: He sure nailed it, didn't he?
大成功です
05:41
(Applause)
(拍手)
05:43
So he didn't just learn water.
ウォーターだけではありません
05:50
Over the course of the 24 months,
24ヶ月 つまり
05:52
the first two years that we really focused on,
最初の2年間に絞って
05:54
this is a map of every word he learned in chronological order.
習得した言葉を 年代順に並べました
05:57
And because we have full transcripts,
一字一句 書き起こして
06:01
we've identified each of the 503 words
2歳になるまでに 口にした
06:04
that he learned to produce by his second birthday.
全503語を確認しました
06:06
He was an early talker.
息子は 早い方です
06:08
And so we started to analyze why.
こんな分析も始めました
06:10
Why were certain words born before others?
なぜ ある語が ほかの語より先なのか?
06:13
This is one of the first results
こちらは 1年少し前の
06:16
that came out of our study a little over a year ago
初期の研究成果の一つです
06:18
that really surprised us.
驚きの結果です
06:20
The way to interpret this apparently simple graph
一見シンプルですが 説明しますと
06:22
is, on the vertical is an indication
縦軸は 周囲の人が話す
06:25
of how complex caregiver utterances are
会話の複雑さです
06:27
based on the length of utterances.
基準は会話の長さです
06:30
And the [horizontal] axis is time.
横軸は時間です
06:32
And all of the data,
データはすべて
06:35
we aligned based on the following idea:
こんな手法で導き出しました
06:37
Every time my son would learn a word,
習得した言葉ごとに
06:40
we would trace back and look at all of the language he heard
その言葉を使った過去の会話を
06:43
that contained that word.
すべて洗い出して
06:46
And we would plot the relative length of the utterances.
会話文の相対的な長さをプロットしたのです
06:48
And what we found was this curious phenomenon,
すると 面白いことに
06:52
that caregiver speech would systematically dip to a minimum,
きまって 大人は語数を減らし
06:55
making language as simple as possible,
できるだけ短くしてから
06:58
and then slowly ascend back up in complexity.
ゆっくりと複雑な言葉遣いに戻します
07:01
And the amazing thing was
驚いたことに
07:04
that bounce, that dip,
折り返し地点は
07:06
lined up almost precisely
その言葉を口にした時期と
07:08
with when each word was born --
ほぼ一致するのです
07:10
word after word, systematically.
どの言葉も 同じです
07:12
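(The alignment itself is straightforward to picture. A hedged sketch of the analysis, assuming a table of caregiver utterances with timestamps and a "birth" date per word; column names are illustrative, not from the actual corpus.)

```python
# Rough sketch: for each word, realign caregiver utterances that contain it
# onto a "days relative to the word's birth" axis, then average utterance
# length across words. The reported dip would show up as a minimum near day 0.
import pandas as pd

def aligned_utterance_length(utterances: pd.DataFrame, word_births: dict):
    """utterances: columns ['timestamp', 'text']; word_births: word -> birth date."""
    rows = []
    for word, birth in word_births.items():
        mentions = utterances[utterances["text"].str.contains(rf"\b{word}\b", case=False)]
        for _, u in mentions.iterrows():
            rows.append({
                "word": word,
                "days_from_birth": (u["timestamp"] - birth).days,
                "utterance_length": len(u["text"].split()),  # proxy for complexity
            })
    aligned = pd.DataFrame(rows)
    # Mean caregiver utterance length at each relative day, pooled over words.
    return aligned.groupby("days_from_birth")["utterance_length"].mean()
```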
So it appears that all three primary caregivers --
つまり 3人の大人たち―
07:14
myself, my wife and our nanny --
私 妻 おばあさん のだれもが
07:16
were systematically and, I would think, subconsciously
いつも 何げなく
07:19
restructuring our language
言葉遣いを調整してあげて
07:22
to meet him at the birth of a word
息子がその言葉を口にしたら
07:24
and bring him gently into more complex language.
もっと複雑な言葉へ そっと導いていたのです
07:27
And the implications of this -- there are many,
示唆に富む結果ですが
07:31
but one I just want to point out,
伝えたいのは一点
07:33
is that there must be amazing feedback loops.
見事なフィードバック・ループが形成されていること
07:35
Of course, my son is learning
息子が 周りの言語環境から
07:38
from his linguistic environment,
学習している一方で
07:40
but the environment is learning from him.
言語環境も 息子から学習しているのです
07:42
That environment, people, are in these tight feedback loops
言語環境 つまり人が 堅固なループに加わり
07:45
and creating a kind of scaffolding
足場を固めているのです
07:48
that has not been noticed until now.
今まで 気付いた人はいません
07:50
But that's looking at the speech context.
以上は 話の環境です
07:54
What about the visual context?
視覚的な環境はどうか?
07:56
We're not looking at --
模型のように
07:58
think of this as a dollhouse cutaway of our house.
我が家をのぞいてみます
08:00
We've taken those circular fish-eye lens cameras,
魚眼レンズで撮影した映像を
08:02
and we've done some optical correction,
光学的に修正して
08:05
and then we can bring it into three-dimensional life.
三次元の世界に仕上げました
08:07
So welcome to my home.
ようこそ 我が家へ
08:11
This is a moment,
これは ある瞬間を
08:13
one moment captured across multiple cameras.
複数台で横断的にとらえた映像です
08:15
The reason we did this is to create the ultimate memory machine,
過去に戻って 自由に飛び回れる
08:18
where you can go back and interactively fly around
究極の記憶装置を
08:21
and then breathe video-life into this system.
実現させたかったのです
08:24
What I'm going to do
今から
08:27
is give you an accelerated view of 30 minutes,
30分を早送りで見ていただきます
08:29
again, of just life in the living room.
リビングの様子です
08:32
That's me and my son on the floor.
私と息子が 床にいます
08:34
And there's video analytics
映像解析で
08:37
that are tracking our movements.
二人の動きを追います
08:39
My son is leaving red ink. I am leaving green ink.
息子は赤 私は緑
08:41
We're now on the couch,
ソファーに移りました
08:44
looking out through the window at cars passing by.
窓から 車が走るのを眺めています
08:46
And finally, my son playing in a walking toy by himself.
最後は一人で歩行器に乗りました
08:49
Now we freeze the action, 30 minutes,
30分の動きを一つにまとめて
08:52
we turn time into the vertical axis,
上下に時間軸をとれば
08:55
and we open up for a view
ふれあいの軌跡を
08:57
of these interaction traces we've just left behind.
見ることができます
08:59
And we see these amazing structures --
驚くような形状が現れました
09:02
these little knots of two colors of thread
二色が絡み合っているのは
09:05
we call "social hot spots."
ふれあいの強い場所です
09:08
The spiral thread
らせんになっているのは
09:10
we call a "solo hot spot."
単独行動の場所です
09:12
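(The precise definitions of these hot spots aren't given in the talk; a minimal sketch of the idea, assuming two position traces sampled at the same rate, with thresholds chosen purely for illustration.)

```python
# Illustrative sketch: label moments as "social" or "solo" hot spots from two
# position traces (e.g. child and caregiver), using simple distance and speed
# thresholds. These are assumptions, not the project's actual definitions.
import numpy as np

def hot_spots(child_xy: np.ndarray, adult_xy: np.ndarray, near=1.0, still=0.2):
    """child_xy, adult_xy: arrays of shape (T, 2), one position per time step."""
    dist = np.linalg.norm(child_xy - adult_xy, axis=1)
    child_speed = np.r_[0.0, np.linalg.norm(np.diff(child_xy, axis=0), axis=1)]
    labels = np.full(len(child_xy), "none", dtype=object)
    labels[dist < near] = "social"                           # the two threads knot together
    labels[(dist >= near) & (child_speed < still)] = "solo"  # child lingering alone
    return labels
```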
And we think that these affect the way language is learned.
これは言語学習過程に影響を及ぼすと
09:14
What we'd like to do
考えられます
09:17
is start understanding
このパターンと
09:19
the interaction between these patterns
息子の耳に入る言葉との
09:21
and the language that my son is exposed to
関係を探ることによって
09:23
to see if we can predict
言葉を耳にする時期が
09:25
how the structure of when words are heard
習得時期に及ぼす影響を
09:27
affects when they're learned --
予測できるか探りたいのです
09:29
so in other words, the relationship
つまり 言葉と
09:31
between words and what they're about in the world.
言葉が表す現実との関係です
09:33
So here's how we're approaching this.
やり方を紹介しましょう
09:37
In this video,
この映像でも
09:39
again, my son is being traced out.
息子を追います
09:41
He's leaving red ink behind.
赤い線です
09:43
And there's our nanny by the door.
おばあさんはドアの前です
09:45
(Video) Nanny: You want water? (Baby: Aaaa.)
「ウォーター(water)いる?」「アァー(Aaaa)」
09:47
Nanny: All right. (Baby: Aaaa.)
「いいわよ」「アー(Aaaa)」
09:50
DR: She offers water,
水が欲しいか尋ねてから
09:53
and off go the two worms
二つの「虫」が
09:55
over to the kitchen to get water.
キッチンまでつながりました
09:57
And what we've done is use the word "water"
そこで「ウォーター」の語を
09:59
to tag that moment, that bit of activity.
この瞬間と結びつけました
10:01
And now we take the power of data
データの力を活用して
10:03
and take every time my son
ウォーターを耳にした
10:05
ever heard the word water
時と場所に
10:08
and the context he saw it in,
注目して
10:10
and we use it to penetrate through the video
映像全体から
10:12
and find every activity trace
ウォーターに絡む行動の軌跡を
10:15
that co-occurred with an instance of water.
すべて抽出しました
10:18
And what this data leaves in its wake
データ処理後に残る―
10:21
is a landscape.
ランドスケープ(地形)が
10:23
We call these wordscapes.
ワードスケープ(言葉の地形)です
10:25
This is the wordscape for the word water,
ウォーターの場合
10:27
and you can see most of the action is in the kitchen.
キッチンを中心に動いています
10:29
That's where those big peaks are over to the left.
左奥の高いピークの所です
10:31
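(A wordscape can be pictured as a 2-D histogram over the floor plan, accumulated at the moments a word was heard. The following is a simplified sketch with assumed data structures; the real system works from the video itself.)

```python
# Sketch of a "wordscape": a 2-D histogram over the floor plan of where the
# child was whenever a given word was heard nearby in time.
import numpy as np

def wordscape(word, utterances, positions, bins=50, window=5.0):
    """
    utterances: list of (timestamp, text); positions: list of (timestamp, x, y).
    Returns a (bins x bins) array whose peaks form the word's "terrain".
    """
    times = [t for t, text in utterances if word in text.lower().split()]
    xs, ys = [], []
    for pt, x, y in positions:
        # keep positions that co-occur (within `window` seconds) with the word
        if any(abs(pt - t) <= window for t in times):
            xs.append(x)
            ys.append(y)
    if not xs:
        return np.zeros((bins, bins))
    heights, _, _ = np.histogram2d(xs, ys, bins=bins)
    return heights
```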
And just for contrast, we can do this with any word.
ほかの言葉と対比することもできます
10:34
We can take the word "bye"
グッバーイ(good bye)の
10:37
as in "good bye."
バーイ(bye)はどうでしょう
10:39
And we're now zoomed in over the entrance to the house.
今回は玄関を見てみます
10:41
And we look, and we find, as you would expect,
予想どおりのランドスケープが現れます
10:43
a contrast in the landscape
バーイ(bye)の軌跡は
10:46
where the word "bye" occurs much more in a structured way.
一層わかりやすい形をしています
10:48
So we're using these structures
これらの形状を参考に
10:51
to start predicting
言葉の習得順序を
10:53
the order of language acquisition,
予測することにしました
10:55
and that's ongoing work now.
これは現在進行中です
10:58
In my lab, which we're peering into now, at MIT --
これは今の 私の研究室です
11:00
this is at the media lab.
MITメディアラボ内です
11:03
This has become my favorite way
どこでも こうやって
11:05
of videographing just about any space.
撮影してしまいます
11:07
Three of the key people in this project,
プロジェクトの主要メンバーは
11:09
Philip DeCamp, Rony Kubat and Brandon Roy are pictured here.
フィリップ・ド・キャンプ ロニー・クバート ブランドン・ロイです
11:11
Philip has been a close collaborator
フィリップが協力してくれたのは
11:14
on all the visualizations you're seeing.
先ほどの視覚化です
11:16
And Michael Fleischman
マイケル・フライシュマンは
11:18
was another Ph.D. student in my lab
博士課程の学生で
11:21
who worked with me on this home video analysis,
映像解析を手伝っています
11:23
and he made the following observation:
マイケルはこんな予測を立てました
11:26
that "just the way that we're analyzing
「共通基盤であるイベントに
11:29
how language connects to events
言葉がどう関係するか分析するという
11:31
which provide common ground for language,
この手法は
11:34
that same idea we can take out of your home, Deb,
家庭内だけでなく
11:36
and we can apply it to the world of public media."
公共のメディアにも応用できる」
11:40
And so our effort took an unexpected turn.
研究は 思わぬ方向に転がりました
11:43
Think of mass media
共通基盤を提供するのは
11:46
as providing common ground
マスメディアです
11:48
and you have the recipe
これまでのアイデアを
11:50
for taking this idea to a whole new place.
新境地に応用するやり方は分かっています
11:52
We've started analyzing television content
テレビの番組コンテンツを 同じ原理で
11:55
using the same principles --
解析しました
11:58
analyzing event structure of a TV signal --
テレビ信号で送られる要素 つまり
12:00
episodes of shows,
ドラマのストーリーや
12:03
commercials,
コマーシャルなど
12:05
all of the components that make up the event structure.
イベント構造を構築する全要素を解析します
12:07
And we're now, with satellite dishes, pulling and analyzing
アメリカで視聴可能な大半の番組は
12:10
a good part of all the TV being watched in the United States.
アンテナで拾えます
12:13
And you don't have to now go and instrument living rooms with microphones
会話は リビングのマイクで録音する必要など
12:16
to get people's conversations,
ありません
12:19
you just tune into publicly available social media feeds.
だれでも見られるソーシャルメディアから拾うだけです
12:21
So we're pulling in
月に30億のコメントを
12:24
about three billion comments a month,
入手しています すると
12:26
and then the magic happens.
不思議な結果が出ました
12:28
You have the event structure,
イベント構造 つまり
12:30
the common ground that the words are about,
言葉が関与する共通基盤を
12:32
coming out of the television feeds;
テレビ放送から取り出して
12:34
you've got the conversations
そのトピックに関連のある
12:37
that are about those topics;
会話を抽出しました
12:39
and through semantic analysis --
さらに 意味解析を通じて―
12:41
and this is actually real data you're looking at
こちらは実際に処理した
12:44
from our data processing --
データです
12:46
each yellow line is showing a link being made
黄色の線がつなぐのは
12:48
between a comment in the wild
生のコメントと
12:51
and a piece of event structure coming out of the television signal.
テレビ信号で送られるイベント構造です
12:54
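(Bluefin's actual semantic analysis has not been published, so the following is only a stand-in: it links each comment to the TV event whose text, for example its closed captions, it most resembles under TF-IDF cosine similarity.)

```python
# Hypothetical linking step: match each social-media comment to the most
# similar piece of TV event text. A toy substitute for the real system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def link_comments_to_events(comments, events, min_similarity=0.2):
    """comments, events: lists of strings (e.g. caption text per event)."""
    vec = TfidfVectorizer(stop_words="english")
    matrix = vec.fit_transform(events + comments)
    event_vecs, comment_vecs = matrix[: len(events)], matrix[len(events):]
    sims = cosine_similarity(comment_vecs, event_vecs)
    links = []  # (comment index, best-matching event index, similarity score)
    for i, row in enumerate(sims):
        j = row.argmax()
        if row[j] >= min_similarity:
            links.append((i, int(j), float(row[j])))
    return links
```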
And the same idea now
先ほどのアイデアを
12:57
can be built up.
発展させて こんな―
12:59
And we get this wordscape,
ワードスケープを作りました
13:01
except now words are not assembled in my living room.
言葉の出所はリビングではありません
13:03
Instead, the context, the common ground activities,
番組コンテンツが 環境つまり共通基盤となって
13:06
are the content on television that's driving the conversations.
会話を引き出しているのです
13:10
And what we're seeing here, these skyscrapers now,
この高層ビル群を作るのは
13:13
are commentary
番組コンテンツに関する
13:16
that are linked to content on television.
コメントです
13:18
Same concept,
コンセプトは同じですが
13:20
but looking at communication dynamics
先ほどとは全く別領域の
13:22
in a very different sphere.
コミュニケーション・ダイナミクスが見えます
13:24
And so fundamentally, rather than, for example,
基本的には 例えば
13:26
measuring content based on how many people are watching,
番組コンテンツの評価に視聴者数を使いません
13:28
this gives us the basic data
番組コンテンツへの関心度
13:31
for looking at engagement properties of content.
に焦点をあてた基本データを使います
13:33
And just like we can look at feedback cycles
家庭で起こるフィードバック・サイクルや
13:36
and dynamics in a family,
ダイナミクスと同じようなものです
13:39
we can now open up the same concepts
今や 同じコンセプトを発展させて
13:42
and look at much larger groups of people.
もっと大きな集団を対象にできるのです
13:45
This is a subset of data from our database --
これは データベースの一部―
13:48
just 50,000 out of several million --
数百万のうち5万人分のデータを
13:51
and the social graph that connects them
公の情報源に基づいて
13:54
through publicly available sources.
結びつけるソーシャル・グラフです
13:56
And if you put them on one plane,
第一の平面にそれを置き
13:59
a second plane is where the content lives.
第二の平面に番組コンテンツをのせます
14:01
So we have the programs
番組 例えば
14:04
and the sporting events
スポーツ大会
14:07
and the commercials,
コマーシャル
14:09
and all of the link structures that tie them together
すべてを結ぶリンク構造が
14:11
make a content graph.
コンテンツ・グラフです
14:13
And then the important third dimension.
重要なのは第三の次元です
14:15
Each of the links that you're seeing rendered here
浮かび上がる各リンクは
14:19
is an actual connection made
だれかの発言と
14:21
between something someone said
番組コンテンツとを
14:23
and a piece of content.
結んでいます
14:26
And there are, again, now tens of millions of these links
数千万のリンクが
14:28
that give us the connective tissue of social graphs
ソーシャル・グラフの結合組織となって
14:31
and how they relate to content.
番組コンテンツに結びついています
14:34
And we can now start to probe the structure
面白いやり方で
14:37
in interesting ways.
この構造を探検できます
14:39
So if we, for example, trace the path
コメントを引き出す番組コンテンツからのびる―
14:41
of one piece of content
経路を
14:44
that drives someone to comment on it,
たどりたいときは
14:46
and then we follow where that comment goes,
コメントの流れを追って
14:48
and then look at the entire social graph that becomes activated
ソーシャル・グラフ全体の活動を見渡して
14:51
and then trace back to see the relationship
ソーシャル・グラフと番組コンテンツとの関係性を
14:54
between that social graph and content,
再確認します
14:57
a very interesting structure becomes visible.
すると 興味深い構造―
14:59
We call this a co-viewing clique,
共視聴グループが現れます
15:01
a virtual living room if you will.
疑似リビングといえるかもしれません
15:03
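(As an illustration only, since the real platform is proprietary: a co-viewing clique can be approximated as the subgraph of people activated by one piece of content, assuming a follow graph and a map from content to its commenters.)

```python
# Sketch of a "co-viewing clique": the commenters on one piece of content plus
# their direct social ties who also talk about TV, cut out of the social graph.
import networkx as nx

def co_viewing_clique(social_graph: nx.Graph, commenters_by_content: dict, content_id):
    seeds = set(commenters_by_content.get(content_id, ()))
    everyone_active = set().union(*commenters_by_content.values()) if commenters_by_content else set()
    activated = set(seeds)
    for person in seeds:
        if person not in social_graph:
            continue
        for neighbour in social_graph.neighbors(person):
            if neighbour in everyone_active:  # a tie who also comments on TV
                activated.add(neighbour)
    # Induced subgraph: the "virtual living room" around this piece of content.
    return social_graph.subgraph(activated).copy()
```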
And there are fascinating dynamics at play.
そこに興味深いダイナミクスが現れます
15:06
It's not one way.
一方向ではなく
15:08
A piece of content, an event, causes someone to talk.
番組コンテンツつまりイベントから会話が生まれ
15:10
They talk to other people.
別の人に広がって
15:13
That drives tune-in behavior back into mass media,
また マスメディアにチャンネルを合わせます
15:15
and you have these cycles
全体の行動を誘発する―
15:18
that drive the overall behavior.
サイクルがあるのです
15:20
Another example -- very different --
ほかにも例があります
15:22
another actual person in our database --
データベースには数千とまではいきませんが
15:24
and we're finding at least hundreds, if not thousands, of these.
ある種の人が 数百人います
15:27
We've given this person a name.
その名も
15:30
This is a pro-amateur, or pro-am media critic
プロ・アマチュア(プロアマ)・メディア批評家です
15:32
who has this high fan-out rate.
幅広く影響力が及ぶので
15:35
So a lot of people are following this person -- very influential --
批評を聞くと たくさんの人が
15:38
and they have a propensity to talk about what's on TV.
番組の話をします
15:41
So this person is a key link
マスメディアとソーシャルメディアとの
15:43
in connecting mass media and social media together.
架け橋になる人たちです
15:46
One last example from this data:
次は 最後の例となります
15:49
Sometimes it's actually a piece of content that is special.
番組コンテンツには特殊なものもあります
15:52
So if we go and look at this piece of content,
例えば ほんの数週間前の
15:55
President Obama's State of the Union address
オバマ大統領の
15:59
from just a few weeks ago,
一般教書演説です
16:02
and look at what we find in this same data set,
データ群から何が分かるか
16:04
at the same scale,
同じ視点で見ると
16:07
the engagement properties of this piece of content
コンテンツへの関心度は
16:10
are truly remarkable.
飛び抜けているのが分かります
16:12
A nation exploding in conversation
放送に反応して
16:14
in real time
リアルタイムに
16:16
in response to what's on the broadcast.
会話が広がっています
16:18
And of course, through all of these lines
このすべての線を通じて
16:21
are flowing unstructured language.
雑多な話が飛び交っています
16:23
We can X-ray
世界の脈動を
16:25
and get a real-time pulse of a nation,
リアルタイムに見通せて
16:27
real-time sense
ソーシャル・グラフ内で
16:29
of the social reactions in the different circuits in the social graph
番組コンテンツに誘発された様々な集団の
16:31
being activated by content.
社会的反応をリアルタイムに見通せます
16:34
So, to summarize, the idea is this:
まとめると
16:37
As our world becomes increasingly instrumented
この世界に手段が増えていき
16:40
and we have the capabilities
人々の発言と
16:43
to collect and connect the dots
その発言の背景との
16:45
between what people are saying
結びつきを
16:47
and the context they're saying it in,
把握できるようになって
16:49
what's emerging is an ability
見えてきたのは
16:51
to see new social structures and dynamics
今までにない新たな社会構造や
16:53
that have previously not been seen.
ダイナミクスです
16:56
It's like building a microscope or telescope
顕微鏡や望遠鏡を作り
16:58
and revealing new structures
コミュニケーションに絡む行動に
17:00
about our own behavior around communication.
新たな構造を見い出すようなものです
17:02
And I think the implications here are profound,
ここには 深い意味があると思っています
17:05
whether it's for science,
科学的にも
17:08
for commerce, for government,
商業的にも 政治的にも
17:10
or perhaps most of all,
そして特に
17:12
for us as individuals.
個人にとっても深い意味があります
17:14
And so just to return to my son,
息子の話に戻りますが
17:17
when I was preparing this talk, he was looking over my shoulder,
講演の準備中に 息子が肩越しにのぞくので
17:20
and I showed him the clips I was going to show to you today,
一連の動画を見せて
17:23
and I asked him for permission -- granted.
息子の許可を伺いました
17:25
And then I went on to reflect,
それからよく考えて
17:28
"Isn't it amazing,
「すごくない?
17:30
this entire database, all these recordings,
これだけのデータベースと記録を
17:33
I'm going to hand off to you and to your sister" --
君や妹に残すんだよ」
17:36
who arrived two years later --
2歳下の妹のことです
17:38
"and you guys are going to be able to go back and re-experience moments
「過去に戻って 追体験できるんだ
17:41
that you could never, with your biological memory,
生物学的には絶対記憶できないことを
17:44
possibly remember the way you can now?"
覚えておけるんだよ」
17:47
And he was quiet for a moment.
息子は口を開きません
17:49
And I thought, "What am I thinking?
「まだ5歳だった
17:51
He's five years old. He's not going to understand this."
分かるわけないな」
17:53
And just as I was having that thought, he looked up at me and said,
そう思っていると 息子は口を開きました
17:55
"So that when I grow up,
「ぼくが大きくなったら
17:58
I can show this to my kids?"
子どもに見せられるね」
18:00
And I thought, "Wow, this is powerful stuff."
これはすごいことだと思いました
18:02
So I want to leave you
では 最後に
18:05
with one last memorable moment
私たち家族にとって
18:07
from our family.
思い出に残る場面をご覧いただきます
18:09
This is the first time our son
息子が初めて
18:12
took more than two steps at once --
2歩以上歩いた瞬間を
18:14
captured on film.
とらえた映像です
18:16
And I really want you to focus on something
ぜひ注目してください
18:18
as I take you through.
これは 雑然とした―
18:21
It's a cluttered environment; it's natural life.
日常生活の様子です
18:23
My mother's in the kitchen, cooking,
母がキッチンで料理中
18:25
and, of all places, in the hallway,
私は廊下にいて
18:27
I realize he's about to do it, about to take more than two steps.
歩きそうだと気づきました
18:29
And so you hear me encouraging him,
息子に声援を送り
18:32
realizing what's happening,
これから起こる事を予感し
18:34
and then the magic happens.
感動の瞬間を迎えます
18:36
Listen very carefully.
注意して聞いてください
18:38
About three steps in,
三つの段階があります
18:40
he realizes something magic is happening,
素晴らしい瞬間を目にして
18:42
and the most amazing feedback loop of all kicks in,
最高のフィードバック・ループが始まり
18:44
and he takes a breath in,
一呼吸置いて
18:47
and he whispers "wow"
「やった」とつぶやきます
18:49
and instinctively I echo back the same.
無意識に何度も繰り返します
18:51
And so let's fly back in time
では 思い出のあの瞬間へ
18:56
to that memorable moment.
飛び立ちましょう
18:59
(Video) DR: Hey.
「よ~し」
19:05
Come here.
「おいで」
19:07
Can you do it?
「できるかな?」
19:09
Oh, boy.
「いい子だ」
19:13
Can you do it?
「できるかな?」
19:15
Baby: Yeah.
(赤ちゃん)「ヤァ~」
19:18
DR: Ma, he's walking.
「お~い 歩いてるよ~」
19:20
(Laughter)
(笑い)
19:24
(Applause)
(拍手)
19:26
DR: Thank you.
ありがとうございました
19:28
(Applause)
(拍手)
19:30
Translator: Satoshi Tatsuhara
Reviewer: Lily Yichen Shi


Deb Roy - Cognitive scientist

Why you should listen

Deb Roy directs the Cognitive Machines group at the MIT Media Lab, where he studies how children learn language, and designs machines that learn to communicate in human-like ways. To enable this work, he has pioneered new data-driven methods for analyzing and modeling human linguistic and social behavior. He has authored numerous scientific papers on artificial intelligence, cognitive modeling, human-machine interaction, data mining, and information visualization.

Deb Roy co-founded and serves as CEO of Bluefin Labs, a venture-backed technology company. Built upon deep machine learning principles developed in his research over the past 15 years, Bluefin has created a technology platform that analyzes social media commentary to measure real-time audience response to TV ads and shows.

Follow Deb Roy on Twitter

Roy adds some relevant papers:

Deb Roy. (2009). New Horizons in the Study of Child Language Acquisition. Proceedings of Interspeech 2009. Brighton, England. bit.ly/fSP4Qh

Brandon C. Roy, Michael C. Frank and Deb Roy. (2009). Exploring word learning in a high-density longitudinal corpus. Proceedings of the 31st Annual Meeting of the Cognitive Science Society. Amsterdam, Netherlands. bit.ly/e1qxej

Plenty more papers on our research, including technology and methodology, can be found here, together with other research from my lab at MIT: bit.ly/h3paSQ

The work that I mentioned on relationships between television content and the social graph is being done at Bluefin Labs (www.bluefinlabs.com). Details of this work have not been published. The social structures we are finding (and that I highlighted in my TED talk) are indeed new. The social media communication channels that are leading to their formation did not even exist a few years ago, and Bluefin's technology platform for discovering these kinds of structures is the first of its kind. We'll certainly have more to say about all this as we continue to dig into this fascinating new kind of data, and as new social structures continue to evolve!

The original video is available on TED.com


Data provided by TED.
