Computational Cameras @ SIGGRAPH

Refocussing a single light-field photograph

Last time I blogged about the interesting talk given by Will Wright at this year’s SIGGRAPH conference. Will’s talk was a spell-binding overview of current visualization and entertainment trends, and the role that human psychology and aesthetics play in them. However, SIGGRAPH also has a more facts-and-numbers side – there actually is an academic conference hiding behind all the glitz and glamour. (The one with technical papers.)

Now when I say “conference” I immediately conjure up images of badly dressed nerds in exotic locations, getting all excited about mind-numbingly boring topics, and then getting drunk and trying to hit on the only attractive girl among the several hundred conference-goers. But in SIGGRAPH’s case, at least, the topics are not all that mind-numbingly boring. Because in addition to the academic quality enforced by the reviewers, there is also a strong emphasis on originality, and a “what can this be used for, and does it look cool?” criterion.

There were several very interesting sessions, some of which focused on:

  • Realistic simulation of fluid, fire, and computer animation physics
  • Human-computer interaction (Holographic teleconferencing with eye tracking, 3D with focused-ultrasound force-feedback)
  • Smart image editing (Resizing images without visibly cropping or distorting the content, and automatic black-and-white to colour conversion)
  • Making deformable 3D computer meshes, and then doing crazy things to them – like squashing a bunny and ripping a cow in two
  • 3D visualization of scientific data – complete with 3D glasses! (watching the Sun’s erupting surface, or travelling through an Egyptian mummy)

For this blog post, however, I want to talk about the “computational cameras” session. These talks might give you a glimpse into the future of photography…

Invertible motion blur

Deblurring images of a moving vehicle

I always thought it should be quite possible to deblur an image if you knew how your subject moved – for example, if you photographed a car driving past while using a slow shutter speed. Turns out it’s not that easy, since the blur actually destroys data you cannot recover. Unless you’re clever, and take more than one photograph with different amounts of blur, and combine them.
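The intuition is easy to demo in a few lines (a toy sketch, not the authors’ actual method): a uniform motion blur acts like a box filter, and a box filter’s frequency response has exact zeros – whatever the scene contained at those frequencies is simply gone. Two blurs of different lengths, however, don’t share zeros:

```python
import numpy as np

# Frequency response of a horizontal box blur of length n (a crude model of
# uniform motion blur). The 63-point DFT grid is chosen so the 9-pixel
# blur's zeros land exactly on grid frequencies.
def box_kernel_response(n, size=63):
    k = np.zeros(size)
    k[:n] = 1.0 / n
    return np.abs(np.fft.fft(k))

r9 = box_kernel_response(9)
r13 = box_kernel_response(13)

print(r9.min())                   # essentially 0: those frequencies are
                                  # gone for good in a single exposure
print(np.maximum(r9, r13).min())  # clearly above 0: the two blurs share no
                                  # zeros, so together the two exposures
                                  # preserve every frequency
```

That shared-zero argument is why combining photographs with different amounts of blur lets you invert what a single exposure never can.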

Dark flash photography

UV + IR Flash. "Smile"!

We all know (right?) that using a flash spoils the mood and the lighting, and temporarily blinds the poor people looking at the camera. But at poorly lit venues, using natural light alone gives horribly noisy images. Solution? Take one photograph with a barely visible ultraviolet+infra-red flash, and paint in the colours using a noisy natural-light photograph. The result looks almost natural!
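Here’s a minimal sketch of the fusion idea on synthetic data – a deliberately crude version, not the authors’ algorithm: keep only the low-frequency colour from the noisy natural-light frame, and put the sharp luminance from the (grayscale) flash frame back on top.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a sharp but colourless "dark flash" frame and a
# noisy natural-light colour frame of the same scene (values in [0, 1]).
h, w = 64, 64
scene = np.zeros((h, w, 3))
scene[:, w // 2:] = [0.8, 0.3, 0.2]      # a red-ish wall on the right
flash_gray = scene.mean(axis=2)          # sharp detail, no colour
noisy_rgb = np.clip(scene + rng.normal(0, 0.15, scene.shape), 0, 1)

def box_blur(img, k=7):
    # Simple box blur: average each pixel over a k-by-k neighbourhood.
    pad = k // 2
    p = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + h, dx:dx + w]
    return out / (k * k)

colour = box_blur(noisy_rgb)             # heavy blur keeps colour, kills noise
lum = colour.mean(axis=2, keepdims=True)
# Swap the blurred luminance for the flash frame's sharp luminance.
result = np.clip(colour - lum + flash_gray[..., None], 0, 1)
```

Away from edges, `result` is both sharp (from the flash frame) and far less noisy than the natural-light frame, while keeping that frame’s colours.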

4D lens for universal depth-of-field

A new type of optimized plenoptic camera lens

These researchers used clever frequency-domain mathematics to physically design and build a new lens (by cutting up an expensive Canon L lens and shuffling around the pieces). This Frankenstein’s creation makes a blurry photo (duh), but with some clever processing you can get an in-focus image at any (or all) focal depths of your choice! The guys (and girl) also showed that “traditional” designs that did the same thing were suboptimal, since you only need a 3D subspace of the 4D “light field” data. This lens focuses all its energy into that 3D sub-domain, thereby getting better performance than previous designs. Does this sound like abstract gobbledygook? It actually works!
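The refocusing half of the story is easy to sketch. In a plenoptic camera, each aperture position u sees the scene from a slightly different angle, so a point at the “wrong” depth gets sheared across the views; shifting the views to undo the shear and then averaging refocuses the image at that depth. A 1-D toy version (made-up data, just the standard shift-and-add trick, not this paper’s optimized design):

```python
import numpy as np

# 1-D toy light field: L[i, x] records the ray through aperture position u
# hitting sensor position x. A point at a non-trivial depth shows up
# sheared across the aperture views.
n_u, n_x = 9, 64
apertures = range(-4, 5)
depth_slope = 2                        # shear of our toy point source
L = np.zeros((n_u, n_x))
for i, u in enumerate(apertures):
    L[i, 32 + depth_slope * u] = 1.0   # one ray per aperture position

def refocus(L, slope):
    # Shift each aperture view by -slope*u to undo the shear, then average.
    return np.mean([np.roll(row, -slope * u)
                    for row, u in zip(L, apertures)], axis=0)

sharp = refocus(L, depth_slope)  # all energy stacks up at x = 32
blurry = refocus(L, 0)           # wrong depth: energy smeared over 9 pixels
print(sharp.max(), blurry.max())
```

Repeating `refocus` with different slopes gives the “any focal depth you like, after the fact” behaviour the talk demonstrated.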

Bokodes

Bokode: Visual Tags for Camera-based Interaction from a Distance

Getting its name from “Bokeh” + “code”, this is a really brilliant concept, which made me think: “now why couldn’t I think of something as elegant as this!?” Well done, MIT!

The idea is to place a tiny special reflector / LED light in an object. When you then take an out-of-focus photograph of it, you’ll get a sharp (in-focus) image of a barcode or data matrix superimposed on the blurry background scene! The tiny little object will appear large in your view-finder, even over moderate distances. Say hello to ultra-small bar codes!
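A back-of-the-envelope calculation shows why the tag looks so big (the numbers below are made-up illustrative values, not from the paper): the pattern sits one focal length behind the bokode’s tiny lenslet, so each pattern point leaves as a parallel ray bundle – it appears “at infinity”. A camera focused at infinity (i.e. out of focus for the nearby tag) then images it with magnification f_camera / f_lenslet, independent of distance:

```python
# All numbers are assumed for illustration only.
f_lenslet_mm = 5.0   # bokode lenslet focal length
f_camera_mm = 50.0   # ordinary camera lens, focused at infinity
pattern_mm = 2.5     # size of the printed data-matrix pattern

# The lenslet turns the pattern into angles (pattern_mm / f_lenslet_mm);
# a camera focused at infinity maps angles back to sizes via f_camera_mm.
image_mm = pattern_mm * f_camera_mm / f_lenslet_mm
print(image_mm)  # 25.0 – a 2.5 mm tag covers a full-frame sensor,
                 # no matter how far away it is
```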

The authors then went on to show some really cool applications for this technology – e.g. you can use bokodes to accurately track an object in 3D with a single camera, replace RFID tags (simpler and cheaper), identify audience members in an interactive classroom, or pick up electronic business cards as you drive by in your car.

In conclusion

Innovation in photography – even fundamental innovation – is far from dead. I can’t wait to see what they’ll dream up for next year!