Synthetically-Generated Interactive 3-D Environments

[Figures from the paper: orbital paths and path planning]

The University of Washington and Microsoft Research presented a paper at SIGGRAPH 2008 describing their new technology for creating interactive 3-D environments from collections of photos. Titled “Finding Paths through the World’s Photos,” the paper describes software techniques for an interface that lets viewers move interactively through photo collections in a simulated 3-D environment. While similar to the previously demoed PhotoSynth software from Microsoft Research, the “Finding Paths” software is built on a new rendering engine that introduces a number of features advancing the techniques illustrated in PhotoSynth.

The new software detects and generates “orbital paths” and panoramic views of objects, allowing you, for example, to move around the exterior of a building as a three-dimensional object or pan across its interior space.
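The paper’s detection algorithm isn’t spelled out in the demo, but the core idea is straightforward to sketch: given camera positions recovered by structure from motion, fit a circle to them and check that the cameras look inward at its center. Here is a minimal illustration in Python; the function, its tolerance, and the Kasa circle fit are my own hypothetical stand-ins, not the paper’s method.

```python
import numpy as np

def fit_orbit(camera_centers, view_dirs, angle_tol_deg=20.0):
    """Detect an "orbital path": fit a circle to camera positions
    (projected onto the ground plane) and verify that the cameras
    roughly look toward the circle's center.

    camera_centers: (N, 2) array of camera x/y positions
    view_dirs:      (N, 2) array of unit viewing directions
    Returns (center, radius) if the set looks like an orbit, else None.
    """
    x, y = camera_centers[:, 0], camera_centers[:, 1]
    # Kasa circle fit: solve [x y 1] @ [a b c]^T = x^2 + y^2 by least squares.
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    center = np.array([a / 2, b / 2])
    radius = np.sqrt(c + center @ center)

    # An orbit requires the cameras to look roughly at the fitted center.
    to_center = center - camera_centers
    to_center /= np.linalg.norm(to_center, axis=1, keepdims=True) + 1e-12
    cos_angles = np.sum(to_center * view_dirs, axis=1)
    angles = np.degrees(np.arccos(np.clip(cos_angles, -1.0, 1.0)))
    return (center, radius) if angles.max() < angle_tol_deg else None
```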

The transitions between views are smoother than with the previous PhotoSynth techniques, and “path planning” provides real-world travel between two points by following the trail of the available photos (rather than flying around on an unconstrained path).
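Following the trail of available photos maps naturally onto shortest-path search over a graph whose nodes are photos and whose edges connect pairs with sufficient visual overlap. A toy sketch of that idea follows; the graph, its weights, and the photo ids are hypothetical, and the paper’s real cost function presumably also accounts for viewpoint and appearance.

```python
import heapq

def plan_path(photo_graph, start, goal):
    """Dijkstra over a "photo graph": nodes are photos, edges link
    photos with visual overlap, and edge weights penalize large
    viewpoint jumps so transitions stay smooth.

    photo_graph: dict mapping photo id -> list of (neighbor, cost)
    Returns the list of photo ids from start to goal, or None.
    """
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in photo_graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier,
                               (cost + edge_cost, neighbor, path + [neighbor]))
    return None

# Hypothetical example: five photos along a street, plus one big jump.
graph = {
    "img01": [("img02", 1.0)],
    "img02": [("img03", 1.0), ("img05", 4.0)],
    "img03": [("img04", 1.0)],
    "img04": [("img05", 1.0)],
}
print(plan_path(graph, "img01", "img05"))
# ['img01', 'img02', 'img03', 'img04', 'img05'] -- the gentler route wins
```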

“Appearance stabilization” creates smooth, cinematic transitions between photos by selecting images with similar lighting and then compensating for color variations to avoid jarring shifts. The demo shows the software generating both day and night views based on common color schemes.
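The paper pairs image selection with color compensation. As a rough stand-in for both steps, here is a simple per-channel mean and standard-deviation transfer toward a reference photo, plus a crude mean-color filter for picking similarly lit images; these are generic tricks under my own assumptions, not the paper’s actual algorithm.

```python
import numpy as np

def match_color_stats(image, reference):
    """Shift each channel's mean and spread to match a reference photo,
    damping color variation between neighboring views.

    image, reference: float arrays of shape (H, W, 3), values in [0, 1]
    """
    out = np.empty_like(image)
    for c in range(3):
        mu, sigma = image[..., c].mean(), image[..., c].std()
        ref_mu, ref_sigma = reference[..., c].mean(), reference[..., c].std()
        out[..., c] = (image[..., c] - mu) * (ref_sigma / (sigma + 1e-8)) + ref_mu
    return np.clip(out, 0.0, 1.0)

def similar_lighting(photos, reference, max_dist=0.1):
    """Keep photos whose average color is close to the reference's,
    a crude proxy for "shot under similar lighting" (e.g. day vs. night)."""
    ref_mean = reference.mean(axis=(0, 1))
    return [p for p in photos
            if np.linalg.norm(p.mean(axis=(0, 1)) - ref_mean) < max_dist]
```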

Rather than reading about what the software can do, take a look at the demo on YouTube.

The techniques demonstrated in this software open up a wide range of possibilities for generating 3-D worlds from other source materials. Imagine Google Maps’ “Street View” as a smooth, immersive 3-D environment, free of the awkward, herky-jerky transitions between still images.

And, as I mentioned some time ago, I’ve long wondered whether the perspective shifts of a moving camera might allow you to capture sufficient data to create a three-dimensional model. In that earlier post, I cited the example of the 14-minute continuous traveling shot of San Francisco’s Market Street filmed in 1905:

[Still from the 1905 Market Street film]

As I wrote in that post:

Would the change in perspective as the camera travels down the street be sufficient to use a tool like [VideoTrace] to develop a 3-D model of the street scene? Imagine strolling down a virtual model of San Francisco’s Market Street in 1905 — one year before the quake and subsequent fire destroyed most of the buildings in this scene. (Although, thankfully, not the Flood Building seen near the middle of the left-hand side of the street.)

Somewhere in the vaults of Hollywood studios must be miles of similar footage used for rear projection shots displayed in the back window of vehicles to make them appear to be moving. Could we use these to reconstruct Park Avenue in the 1950s or 42nd Street in the 1970s and so on? The mind reels at the possibilities.
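The reconstruction idea behind that speculation is now standard computer vision: track features between frames of the traveling shot, recover the relative camera motion, and triangulate 3-D points. Below is a toy two-frame sketch using OpenCV; the intrinsics matrix K is a loud assumption, since for century-old footage the camera parameters would themselves have to be estimated.

```python
import cv2
import numpy as np

def reconstruct_from_pair(frame_a, frame_b, K):
    """Rough two-frame structure from motion: track corners between two
    frames, recover relative pose, and triangulate a sparse point cloud.
    A toy sketch of the idea, nothing like a full VideoTrace pipeline.

    frame_a, frame_b: grayscale uint8 frames from the traveling shot
    K: 3x3 camera intrinsics matrix (assumed known here)
    """
    # Track corners from the first frame into the second.
    pts_a = cv2.goodFeaturesToTrack(frame_a, maxCorners=2000,
                                    qualityLevel=0.01, minDistance=7)
    pts_b, status, _ = cv2.calcOpticalFlowPyrLK(frame_a, frame_b, pts_a, None)
    good_a = pts_a[status.ravel() == 1].reshape(-1, 2)
    good_b = pts_b[status.ravel() == 1].reshape(-1, 2)

    # Relative pose from the essential matrix; RANSAC rejects outliers.
    E, inliers = cv2.findEssentialMat(good_a, good_b, K,
                                      method=cv2.RANSAC, threshold=1.0)
    _, R, t, pose_mask = cv2.recoverPose(E, good_a, good_b, K, mask=inliers)

    # Triangulate the surviving correspondences (3-D up to global scale).
    P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P1 = K @ np.hstack([R, t])
    keep = pose_mask.ravel() > 0
    pts4d = cv2.triangulatePoints(P0, P1, good_a[keep].T, good_b[keep].T)
    return (pts4d[:3] / pts4d[3]).T  # N x 3 sparse point cloud
```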

These new rendering techniques from the University of Washington and Microsoft Research may move us one step closer to making this a reality.

[Via istartedsomething]

One thought on “Synthetically-Generated Interactive 3-D Environments”

  1. That technology is pretty amazing. I’m wondering about the possibility of getting something similar but for the “micro” scale (microchips, the insides of blood vessels or organs, etc.).
    I guess what’d be missing is the whole “collection” aspect, since those pictures aren’t generally available on Flickr or Picasa.
    Still, something interesting to speculate over.

