ABOUT THE SPEAKER
Blaise Agüera y Arcas - Software architect
Blaise Agüera y Arcas works on machine learning at Google. Previously a Distinguished Engineer at Microsoft, he has worked on augmented reality, mapping, wearable computing and natural user interfaces.

Why you should listen

Blaise Agüera y Arcas is principal scientist at Google, where he leads a team working on machine intelligence for mobile devices. His group works extensively with deep neural nets for machine perception and distributed learning, and it also investigates so-called "connectomics" research, assessing maps of connections within the brain.

Agüera y Arcas' background is as multidimensional as the visions he helps create. In the 1990s, he authored patents on both video compression and 3D visualization techniques, and in 2001, he made an influential computational discovery that cast doubt on Gutenberg's role as the father of movable type.

He also created Seadragon (acquired by Microsoft in 2006), the visualization technology that gives Photosynth its amazingly smooth digital rendering and zoom capabilities. Photosynth itself is a vastly powerful piece of software capable of taking a wide variety of images, analyzing them for similarities, and grafting them together into an interactive three-dimensional space. This seamless patchwork of images can be viewed from multiple angles and magnifications, allowing us to look around corners or “fly” in for a (much) closer look. Simply put, it could utterly transform the way we experience digital images.

He joined Microsoft when Seadragon was acquired by Live Labs in 2006. Shortly after the acquisition of Seadragon, Agüera y Arcas directed his team in a collaboration with Microsoft Research and the University of Washington, leading to the first public previews of Photosynth several months later. His TED Talk on Seadragon and Photosynth in 2007 is rated one of TED's "most jaw-dropping." He returned to TED in 2010 to demo Bing’s augmented reality maps.

Fun fact: According to the author, Agüera y Arcas is the inspiration for the character Elgin in the 2012 best-selling novel Where'd You Go, Bernadette?

Blaise Agüera y Arcas | Speaker | TED.com
TED2007

Blaise Agüera y Arcas: How PhotoSynth can connect the world's images

A demonstration of Blaise Agüera y Arcas's "Photosynth" technology

Filmed:
5,831,957 views

Blaise Agüera y Arcas gives a dazzling demo of Photosynth, software that could transform the way we look at digital images. Using still photos culled from the web, Photosynth builds breathtaking dreamscapes and lets us navigate through them freely.


00:25  What I'm going to show you first, as quickly as I can,
00:27  is some foundational work, some new technology
00:31  that we brought to Microsoft as part of an acquisition
00:34  almost exactly a year ago. This is Seadragon,
00:37  and it's an environment in which you can either locally or remotely
00:40  interact with vast amounts of visual data.
00:43  We're looking at many, many gigabytes of digital photos here
00:46  and kind of seamlessly and continuously zooming in,
00:50  panning through the thing, rearranging it in any way we want.
00:52  And it doesn't matter how much information we're looking at,
00:56  how big these collections are or how big the images are.
00:59  Most of them are ordinary digital camera photos,
01:01  but this one, for example, is a scan from the Library of Congress,
01:05  and it's in the 300 megapixel range.
01:08  It doesn't make any difference
01:09  because the only thing that ought to limit the performance
01:12  of a system like this one is the number of pixels on your screen
01:15  at any given moment. It's also very flexible architecture.
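[Editor's note: the claim that performance is bounded only by on-screen pixels is the essence of tiled, multi-resolution image pyramids, the general approach behind Deep Zoom-style viewers. Below is a minimal Python sketch of how a viewer might pick a pyramid level purely from the viewport size; the halving scheme and example dimensions are illustrative assumptions, not Seadragon's actual format.]

```python
import math

def level_for_viewport(image_w, image_h, view_w, view_h):
    """Choose the coarsest pyramid level (0 = full resolution, each level
    halves the image) that still covers the viewport, so decoding work is
    bounded by the pixels on screen rather than by the source image size."""
    # How much the image must be shrunk to fit the viewport.
    scale = max(image_w / view_w, image_h / view_h, 1.0)
    # Each level halves the resolution; round down to stay at least as
    # sharp as the screen requires.
    return int(math.floor(math.log2(scale))) if scale > 1 else 0

# A roughly 300-megapixel scan (about 20000 x 15000 pixels) viewed on a
# 1920 x 1080 screen only needs tiles from level 3 (1/8 resolution).
print(level_for_viewport(20000, 15000, 1920, 1080))  # -> 3
```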
01:18  This is an entire book, so this is an example of non-image data.
01:22  This is "Bleak House" by Dickens. Every column is a chapter.
01:27  To prove to you that it's really text, and not an image,
01:31  we can do something like so, to really show
01:33  that this is a real representation of the text; it's not a picture.
01:37  Maybe this is a kind of an artificial way to read an e-book.
01:39  I wouldn't recommend it.
01:40  This is a more realistic case. This is an issue of The Guardian.
01:43  Every large image is the beginning of a section.
01:45  And this really gives you the joy and the good experience
01:48  of reading the real paper version of a magazine or a newspaper,
01:54  which is an inherently multi-scale kind of medium.
01:56  We've also done a little something
01:57  with the corner of this particular issue of The Guardian.
02:00  We've made up a fake ad that's very high resolution --
02:03  much higher than you'd be able to get in an ordinary ad --
02:05  and we've embedded extra content.
02:07  If you want to see the features of this car, you can see it here.
02:10  Or other models, or even technical specifications.
02:15  And this really gets at some of these ideas
02:18  about really doing away with those limits on screen real estate.
02:22  We hope that this means no more pop-ups
02:24  and other kind of rubbish like that -- shouldn't be necessary.
02:27  Of course, mapping is one of those really obvious applications
02:29  for a technology like this.
02:31  And this one I really won't spend any time on,
02:33  except to say that we have things to contribute to this field as well.
02:37  But those are all the roads in the U.S.
02:39  superimposed on top of a NASA geospatial image.
02:44  So let's pull up, now, something else.
02:46  This is actually live on the Web now; you can go check it out.
02:49  This is a project called Photosynth,
02:51  which really marries two different technologies.
02:52  One of them is Seadragon
02:54  and the other is some very beautiful computer vision research
02:57  done by Noah Snavely, a graduate student at the University of Washington,
03:00  co-advised by Steve Seitz at U.W.
03:02  and Rick Szeliski at Microsoft Research. A very nice collaboration.
03:07  And so this is live on the Web. It's powered by Seadragon.
03:09  You can see that when we kind of do these sorts of views,
03:12  where we can dive through images
03:14  and have this kind of multi-resolution experience.
03:16  But the spatial arrangement of the images here is actually meaningful.
03:20  The computer vision algorithms have registered these images together
03:23  so that they correspond to the real space in which these shots --
03:27  all taken near Grassi Lakes in the Canadian Rockies --
03:31  all these shots were taken. So you see elements here
03:33  of stabilized slide-show or panoramic imaging,
03:40  and these things have all been related spatially.
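[Editor's note: for readers curious what "registering images together" involves, the research behind Photosynth (Snavely's Photo Tourism work) matches local features across photos and then recovers camera positions with structure-from-motion. The snippet below is only a toy sketch of the pairwise matching step using OpenCV's ORB features; the file names are placeholders, and the real pipeline is considerably more involved.]

```python
import cv2
import numpy as np

# Two photos of the same scene (placeholder file names).
img1 = cv2.imread("photo_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo_b.jpg", cv2.IMREAD_GRAYSCALE)

# Detect local features and compute descriptors in each image.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors between the two images.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Estimate a geometric relationship between the matched points; RANSAC
# discards matches that are not consistent with a single scene.
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# A large number of geometrically consistent matches is evidence that the
# two photos show the same place and can share a spatial arrangement.
print("consistent matches:", int(inliers.sum()))
```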
03:42  I'm not sure if I have time to show you any other environments.
03:45  There are some that are much more spatial.
03:47  I would like to jump straight to one of Noah's original data-sets --
03:50  and this is from an early prototype of Photosynth
03:52  that we first got working in the summer --
03:54  to show you what I think
03:55  is really the punch line behind this technology,
03:59  the Photosynth technology. And it's not necessarily so apparent
04:01  from looking at the environments that we've put up on the website.
04:04  We had to worry about the lawyers and so on.
04:07  This is a reconstruction of Notre Dame Cathedral
04:09  that was done entirely computationally
04:11  from images scraped from Flickr. You just type Notre Dame into Flickr,
04:14  and you get some pictures of guys in t-shirts, and of the campus
04:17  and so on. And each of these orange cones represents an image
04:22  that was discovered to belong to this model.
04:26  And so these are all Flickr images,
04:28  and they've all been related spatially in this way.
04:31  And we can just navigate in this very simple way.
04:35  (Applause)
04:44  You know, I never thought that I'd end up working at Microsoft.
04:46  It's very gratifying to have this kind of reception here.
04:50  (Laughter)
04:53  I guess you can see
04:56  this is lots of different types of cameras:
04:58  it's everything from cell phone cameras to professional SLRs,
05:02  quite a large number of them, stitched
05:03  together in this environment.
05:04  And if I can, I'll find some of the sort of weird ones.
05:08  So many of them are occluded by faces, and so on.
05:13  Somewhere in here there are actually
05:15  a series of photographs -- here we go.
05:17  This is actually a poster of Notre Dame that registered correctly.
05:21  We can dive in from the poster
05:24  to a physical view of this environment.
05:31  What the point here really is is that we can do things
05:34  with the social environment. This is now taking data from everybody --
05:39  from the entire collective memory
05:40  of, visually, of what the Earth looks like --
05:43  and link all of that together.
05:44  All of those photos become linked together,
05:46  and they make something emergent
05:47  that's greater than the sum of the parts.
05:49  You have a model that emerges of the entire Earth.
05:51  Think of this as the long tail to Stephen Lawler's Virtual Earth work.
05:56  And this is something that grows in complexity
05:58  as people use it, and whose benefits become greater
06:01  to the users as they use it.
06:03  Their own photos are getting tagged with meta-data
06:05  that somebody else entered.
06:07  If somebody bothered to tag all of these saints
06:10  and say who they all are, then my photo of Notre Dame Cathedral
06:13  suddenly gets enriched with all of that data,
06:15  and I can use it as an entry point to dive into that space,
06:18  into that meta-verse, using everybody else's photos,
06:21  and do a kind of a cross-modal
06:25  and cross-user social experience that way.
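[Editor's note: the enrichment described here, one user's tags surfacing on everyone else's photos of the same place, amounts to propagating labels across the graph of spatially linked images. A small illustrative Python sketch; the photo names and tags are invented for the example.]

```python
from collections import defaultdict

# Photos that the registration step decided show the same scene
# (hypothetical identifiers).
links = [("my_notre_dame.jpg", "tourist_shot_01.jpg"),
         ("tourist_shot_01.jpg", "facade_closeup.jpg")]

# Tags entered by whoever bothered to annotate their own photo.
tags = defaultdict(set)
tags["facade_closeup.jpg"].update({"Notre Dame", "St. Denis statue"})

# Build an undirected link graph between the photos.
graph = defaultdict(set)
for a, b in links:
    graph[a].add(b)
    graph[b].add(a)

def enriched_tags(photo):
    """Collect tags from every photo reachable through the link graph,
    so one person's annotations benefit everybody else's photos."""
    seen, stack, collected = set(), [photo], set()
    while stack:
        p = stack.pop()
        if p in seen:
            continue
        seen.add(p)
        collected |= tags[p]
        stack.extend(graph[p] - seen)
    return collected

print(enriched_tags("my_notre_dame.jpg"))
# -> {'Notre Dame', 'St. Denis statue'}, even though that photo was
#    never tagged directly.
```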
06:28  And of course, a by-product of all of that
06:30  is immensely rich virtual models
06:32  of every interesting part of the Earth, collected
06:35  not just from overhead flights and from satellite images
06:38  and so on, but from the collective memory.
06:40  Thank you so much.
06:42  (Applause)
06:53  Chris Anderson: Do I understand this right? That what your software is going to allow,
06:58  is that at some point, really within the next few years,
07:01  all the pictures that are shared by anyone across the world
07:05  are going to basically link together?
07:07  BAA: Yes. What this is really doing is discovering.
07:09  It's creating hyperlinks, if you will, between images.
07:12  And it's doing that
07:13  based on the content inside the images.
07:14  And that gets really exciting when you think about the richness
07:17  of the semantic information that a lot of those images have.
07:19  Like when you do a web search for images,
07:22  you type in phrases, and the text on the web page
07:24  is carrying a lot of information about what that picture is of.
07:27  Now, what if that picture links to all of your pictures?
07:29  Then the amount of semantic interconnection
07:31  and the amount of richness that comes out of that
07:32  is really huge. It's a classic network effect.
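[Editor's note: the "hyperlinks between images" can be pictured as a graph built by thresholding the pairwise content matches sketched earlier, and the network effect follows from the fact that n photos offer n(n-1)/2 candidate links. The match counts, file names, and threshold below are invented for illustration.]

```python
from itertools import combinations

# Hypothetical output of the pairwise matching step sketched earlier:
# how many geometrically consistent feature matches each pair has.
match_counts = {
    ("poster.jpg", "facade_day.jpg"): 412,
    ("facade_day.jpg", "facade_night.jpg"): 87,
    ("poster.jpg", "tourist_selfie.jpg"): 3,
}

def build_hyperlinks(photos, threshold=30):
    """Create a 'hyperlink' between two photos whenever their content
    overlaps enough. With n photos there are n * (n - 1) / 2 candidate
    pairs, so the web of links (and its usefulness) grows much faster
    than the number of photos: a classic network effect."""
    links = []
    for a, b in combinations(sorted(photos), 2):
        count = match_counts.get((a, b)) or match_counts.get((b, a)) or 0
        if count >= threshold:
            links.append((a, b))
    return links

photos = {"poster.jpg", "facade_day.jpg", "facade_night.jpg", "tourist_selfie.jpg"}
print(build_hyperlinks(photos))
# -> [('facade_day.jpg', 'facade_night.jpg'), ('facade_day.jpg', 'poster.jpg')]
```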
07:35  CA: Blaise, that is truly incredible. Congratulations.
07:37  BAA: Thanks so much.
Translated by Wang Qian
Reviewed by Bill Hsiung
