Saturday, November 24, 2007

Blaise Aguera y Arcas & his Photosynth demo

Saw this video via Spacecake's post:



Or watch the video here.

More about the software at this Microsoft Live Labs site.

The demo clearly blew the audience away (myself included, 'cos I couldn't help grinning as it unfolded).

I liked how the demonstrated technology allowed the user to take in a very high-level overview of thumbnail images (very much like how we can look at several book covers on a shelf, for instance) and then effortlessly zoom down to a very minute level -- close enough to read a scanned page like a book.
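As an aside, here's my guess at how that effortless zooming is done -- this is my own sketch, not anything from the talk. The usual trick in such "deep zoom" viewers is to pre-cut each image into a multi-resolution pyramid of small tiles, so the viewer only ever loads the handful of tiles covering what's on screen. A toy version in Python (the tile size and file names are made up):

```python
# A minimal sketch of a multi-resolution image pyramid -- my own
# illustration, not Photosynth's actual code. Tile size and file
# names are assumptions.
from PIL import Image

TILE = 256  # tile edge in pixels (a common choice, but assumed here)

def build_pyramid(path):
    """Yield (level, image) pairs, halving the resolution at each
    level until the whole image fits in a single tile."""
    img = Image.open(path)
    level = 0
    while True:
        yield level, img
        if max(img.size) <= TILE:
            break
        img = img.resize((max(1, img.width // 2),
                          max(1, img.height // 2)))
        level += 1

def save_tiles(path):
    """Cut each pyramid level into TILE x TILE pieces. A viewer then
    fetches only the tiles covering the current viewport, so zooming
    stays fast no matter how large the source image is."""
    for level, img in build_pyramid(path):
        for top in range(0, img.height, TILE):
            for left in range(0, img.width, TILE):
                box = (left, top,
                       min(left + TILE, img.width),
                       min(top + TILE, img.height))
                img.crop(box).save(f"level{level}_{left}_{top}.png")

if __name__ == "__main__":
    save_tiles("scanned_page.png")  # hypothetical input file
```

At the coarsest level the whole page is one thumbnail-sized tile; at the finest, you're reading the scan at full resolution.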

The other part of the demo showed how the technology collated individual Flickr images of Notre Dame to build a "multi-dimensional", zoomable view of the structure.
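Again, just my guess at what's under the hood rather than anything the talk spelled out: to link strangers' photos of the same building, you'd start by finding local features the photos share, then recover camera positions and a 3-D point cloud from those matches (the "structure-from-motion" part, which this sketch doesn't attempt). A toy version of just the matching step, using OpenCV's SIFT features; the file names are hypothetical:

```python
# A rough sketch of feature matching between two photos -- the first
# step in stitching strangers' photos together. This is my own
# illustration of the general technique, not Photosynth's pipeline.
import cv2

def shared_features(path_a, path_b, ratio=0.75):
    """Return the number of plausible feature matches between two
    images, using SIFT descriptors and Lowe's ratio test."""
    sift = cv2.SIFT_create()
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    _, desc_a = sift.detectAndCompute(img_a, None)
    _, desc_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher()
    good = []
    for pair in matcher.knnMatch(desc_a, desc_b, k=2):
        # Keep a match only if it is clearly better than the runner-up.
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return len(good)

print(shared_features("notre_dame_1.jpg", "notre_dame_2.jpg"))
```

Photos that share lots of features were probably taken of the same spot, so they can be linked into one navigable view.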

You have to watch the video to see what I mean.

The social-media, educational and cultural aspects of such an application are just fantastic.

The speaker, Blaise Aguera y Arcas, said as much near the end of his presentation. About building "Collective Memories".

I can imagine how it would work.

Let's say you visit a nature reserve. You snap a few pictures. A tree, a frond, a monkey in a tree. Not from a researcher's point of view. Merely that you felt it was a "Kodak moment".

Later, you upload one or two images to a photo-sharing site.

It's just one or two photos from you. Nothing particularly fantastic.

If I happen to come across what you've uploaded, I see a 2-D view of your snapshots. I might look at other similar photos from others (perhaps with similar tags) but that's about it.

They are merely individual pictures.

Then let's say the software and technology are made widely available for use (Open Source, or freeware -- one does hope). Someone (with some free time, perhaps) constructs a multi-dimensional view using the photos of the nature reserve. Maybe even videos. I'd imagine the content having been deposited under a Creative Commons license, so there's no copyright clearance hassle.

Now the individual and separate pictures (and videos) begin to form a wider view of things.

Context is formed.

A bigger story is presented.

When you uploaded your photo of that monkey in a tree, it didn't really say much, from a scientific point of view.

But suppose a researcher views the multidimensional construct and is able to identify, and later track, the locations of certain species.

Or a student doing a project on conservation could relate a recent field trip, or information read from a book or Internet resource, to the multidimensional view.

And other people could then add to this multidimensional view and build a picture over time. Those who make further contributions don't really have to know how or why. To them, they are just adding one more photo. But the software could automatically add to the view.
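A toy sketch of that incremental step -- reusing the hypothetical shared_features() helper from the matching sketch above, with a match threshold I've simply guessed at:

```python
# Sketch of how new uploads could be folded in automatically. Each new
# photo is matched against the existing collection, and a link is added
# wherever enough features line up. shared_features() is the hypothetical
# helper defined earlier; MIN_MATCHES is a guessed threshold.
MIN_MATCHES = 30

def add_photo(match_graph, photos, new_photo):
    """match_graph maps each photo to the set of photos it overlaps
    with; photos is the list of everything uploaded so far."""
    match_graph[new_photo] = set()
    for photo in photos:
        if shared_features(photo, new_photo) >= MIN_MATCHES:
            match_graph[new_photo].add(photo)
            match_graph[photo].add(new_photo)
    photos.append(new_photo)
```

The contributor just uploads a photo; the web of connections grows on its own.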

It's even possible to track missing persons (or track a person). Well, naturally the implications for individual privacy also widen.

I'm reminded of a SciFi story published in Asimov's Science Fiction -- The Green Leopard Plague by Walter Jon Williams (you can read the online version here).

In the story, the protagonist was hired to track down a person. She eventually identified his location by following clues from images and video that had been uploaded in electronic formats (BTW, the whole story isn't about the technology per se. It's a lot more than that, and worth a read).

Well, looks like Walter Jon Williams was spot on.

And maybe his story needs to be classified as "non-fiction" rather than SciFi. lol

1 comment:

  1. I saw a demo of this earlier this year at the Microsoft Imagine Cup Singapore finals.

    I agree that it's very, very cool. It'd be great to know how they actually made something like this. It's crazy!


