The rise and fall of the digital SLR

During the latter days of film photography, almost every photographer used 35mm film, and almost every serious photographer used a 35mm film SLR. In the early days of digital photography the SLR was almost forgotten, only to make a huge comeback from 2003 onwards. There are signs that this is about to change again. Why? This blog post will try to sum it up.

SLR is an acronym for “single lens reflex”: the photographer physically looks through the lens by means of a rather complex mirror-and-prism setup. In the film days this was the only way to accurately show a photographer how his picture was going to look, since his eye saw the very same image that would be projected onto the film when the shutter was pressed.

The complex path that light travels in a (digital) SLR camera

Compact “point and shoot” cameras had separate optical viewfinders through which the user framed the picture, but suffered from parallax error (especially when looking at close subjects), and could not properly show focus, depth of field or exposure.

A 35mm point and shoot camera

All of this changed with the advent of digital photography.

It was a Nobel cause

A CCD sensor

The Royal Swedish Academy of Sciences has decided to award one half of the 2009 Nobel Prize in Physics to Willard S. Boyle and George E. Smith “for the invention of an imaging semiconductor circuit – the CCD sensor”. Boyle and Smith developed the first CCD sensor 40 years ago while working at Bell Labs. Luckily both of them are still alive to claim their prize, which just shows the advantage of being brilliant when you’re young!

Despite being superseded by the CMOS sensor in modern DSLR cameras, the CCD (Charge-Coupled Device) remains the sensor of choice in almost all other digital cameras, ranging all the way from the cheapest cell phone to the space-grade sensor in the Hubble Space Telescope.

Alfred Nobel, inventor of dynamite and the founder of the Nobel prize, wrote in his final will that the prizes should go to “those who … shall have conferred the greatest benefit on mankind.” And a worthy and noble cause it is!

A Nobel prize medal

"Photoshop" as verb

And I could replace you with older pictures of you, from back when you looked happy.

I re-posted today’s xkcd cartoon. What do you think?

Computational Cameras @ SIGGRAPH

Refocussing a single light-field photograph

Last time I blogged about the interesting talk given by Will Wright at this year’s SIGGRAPH conference. Will’s talk was a spell-binding overview of current visualization and entertainment trends, and the role that human psychology and aesthetics play within them. However, SIGGRAPH also has a more facts-and-numbers side – there actually is an academic conference hiding behind all the glitz and glamour. (The one with technical papers.)

Now when I say “conference” I immediately conjure up images of badly dressed nerds in exotic locations, getting all excited about mind-numbingly boring topics, and then getting drunk and trying to hit on the only attractive girl among the several hundred conference-goers. But in SIGGRAPH’s case, at least, the topics are not all that mind-numbingly boring. Because in addition to the academic quality enforced by the reviewers, there is also a strong emphasis on originality, and a “what can this be used for, and does it look cool?” criterion.

There were several very interesting sessions, some of which focused on:

  • Realistic simulation of fluid, fire, and computer animation physics
  • Human-computer interaction (Holographic teleconferencing with eye tracking, 3D with focused-ultrasound force-feedback)
  • Smart image editing (Resizing images without visibly cropping or distorting the content, and automatic black-and-white to colour conversion)
  • Making deformable 3D computer meshes, and then doing crazy things to them – like squashing a bunny and ripping a cow in two
  • 3D visualization of scientific data – complete with 3D glasses! (watching the Sun’s erupting surface, or travelling through an Egyptian mummy)

For this blog post, however, I want to talk about the “computational cameras” session. These talks might give you a glimpse into the future of photography…

Will Wright @ SIGGRAPH

Will Wright

I am writing this post from the floor of SIGGRAPH 2009, this year being held in New Orleans, USA.

Yesterday I had the privilege of attending a keynote talk by Will Wright, the designer behind several landmark games such as SimCity, The Sims, and Spore. He presented an entertaining and panoramic talk entitled “Playing with Perception” which covered topics involving human psychology, visual processing, game design and the interrelatedness of multimedia technology. During his hour behind the microphone he swept through a staggering 270 slides, thereby wildly breaking the golden rule of  “one slide per minute”, but coming from him, it worked. Respect.

I am guessing here, but from his talk I deduced that he is also a hobbyist photographer, and he went on to dedicate several of his many slides to this topic. Here are a few of the points which caught my attention:

  • New technology fails to have its intended impact where there is a misunderstanding of, or ignorance of, human psychology.
  • Several decades ago everyone expected 3D imaging and displays to be widespread by the turn of the century. However, 2D photographic effects (like shallow depth of field) are used to great effect in movies and photography, where they focus our attention where the director intended. The absence of this control in 3D makes the experience poorer rather than richer. (For a live concert broadcast, however, 3D works well, since there we want to direct our attention freely in an immersive “I’m there” experience.)
  • Visual art (painting) gradually evolved from crude rock painting to photorealism, before in recent times further developing into subjective and emotion-driven abstraction. The same may hold for animation and photography, at least in some contexts.
  • Case in point: tilt-shift photography takes us further away from reality, and yet has immense emotional appeal. A possible reason is that the toy-like appearance of the photographed reality gives us the feeling of being able to manipulate and play with the scene (à la SimCity) – this pleases our senses.
  • Being behind a camera lens gives you the ability to see the world in new ways, and trains your perception. After a while you see creative angles in everyday objects, even when your camera is not with you.
  • With the addition of digital and CG tools, film makers have the ability to create (almost) any conceivable image on screen – the question is therefore no longer “what is possible to create?”, but purely “what should we create?”

Indeed, what should we create? It’s often amazing to see how extremely simple technologies can be enormously immersive. A clear favourite of mine is this music video, which doesn’t make use of any computer animation. Digital effects can greatly enrich our experience, but the art lies in knowing when, and how much, it is needed.

Looking at his professional success it is clear that Will deeply understands the aesthetics of interactive technology. And this is exactly what I admire in SIGGRAPH – it brings technical innovation and aesthetics together in a single venue – something rare indeed.

Here’s to innovation, beauty and perception!

PS: Since writing the above post, I have seen the 3D CG session featuring mind-expanding scientific visualisation from NASA’s Goddard Space Flight Center: stereoscopic 3D showing the sun’s coronal mass ejections in the UV spectrum (photographed in stereo by the twin STEREO spacecraft), a visualization of deep-ocean currents, seasonal ice cap variations, and much more. This was followed by a biomedical visualization tour of CT scans of 3000-year-old Egyptian mummies. And then there were the animated shorts of Pixar, followed by an excerpt from U2 – live in 3D.

3D is indeed amazing when used correctly – it is just taking a long time to mature. When it comes into its own it will blow your mind – wow!

The next killer must-have camera feature

The Nikon E2s, a 1.3 Megapixel camera selling for $20,000 in 1995. What's changed since then?

My housemate just walked into my room asking whether he should get the new iPhone 3GS. He added “and it has a better camera: 3 megapixels instead of the 2 of the old one”. And this is coming from a very intelligent and technically educated person, which just shows how effectively the marketing guys mess with our minds. Three is better than two if you have a good lens and a sufficiently large sensor – but not here, where probably neither has improved since the older model.

In my post “The Megapixel Race to nowhere” I talked at length about how camera manufacturers deceive us by putting “12 Megapixels” stickers on cameras that take pictures which look crappy even when rescaled to a measly 300×300 pixels. Yes, the photo shown in that link was taken with a 12 megapixel camera. And yes, it looks crappy. Compare that with this photograph, taken with a 3.1 megapixel camera using technology from 7 years earlier. There are several reasons why the photograph from the new 12 megapixel camera looks so bad compared to the older one – I won’t go into them here, but suffice it to say that the older camera’s superiority has nothing to do with the number of megapixels.

Luckily, it seems that now that most cameras have hit the 10 megapixel mark there has been a bit of a sanity check. The tag lines used to convince us to part with our hard-earned money now focus on other features – and some of them are even useful! A killer feature is one that changes the way you use a piece of technology, or substantially improves the results you get from it.

Here’s a rough summary of the features which shaped digital camera evolution since the turn of the century:

The Megapixel Race to nowhere

How many of you have seen adverts for digital cameras announcing a “12 Megapixel sensor for unprecedented image quality”? The number of megapixels is often the only thing people ask about a new camera, and the higher the number, the more impressed they are. That, as well as how much it costs, how it looks, and how many times it can zoom.

While I also think that cost and looks are important, I think the “megapixel” (MP) race has reached ridiculous levels. A few years ago, when 2 MP cameras were the norm, more was indeed better. But that was then. In fact, that shiny new 12 megapixel compact camera you bought will almost certainly take worse pictures than your neighbour’s 6 megapixel digital SLR which he bought 4 years ago. Let me try to shed some light on the situation…

When looking at your photographs (on a computer monitor, high definition plasma TV, or prints), you won’t see the detail-advantage of a 12 MP photograph over a 6 MP photograph because:

  • Your computer monitor can probably display only around 2 MP
    (1920×1200 is pretty high-res, and even that is only 2.3 MP)
  • An A4-size print on photo paper has less than 9 MP of resolution, and a 13×18cm print less than 4 MP (see the quick calculation below)
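
To put some numbers on this, here is a quick back-of-the-envelope check in Python. It is just a sketch of my own: the 300 dpi figure is a common rule of thumb for photo-quality prints, and the function names are purely illustrative.

```python
# Rough check of how many megapixels a display or print can actually show.
# Assumes 300 dpi as a typical photo-quality print resolution (a rule of thumb).

def megapixels(width_px, height_px):
    """Resolution in megapixels for a given pixel width and height."""
    return width_px * height_px / 1e6

def print_megapixels(width_cm, height_cm, dpi=300):
    """Megapixels needed to fill a print of the given size at the given dpi."""
    cm_per_inch = 2.54
    return megapixels(width_cm / cm_per_inch * dpi, height_cm / cm_per_inch * dpi)

print(f"1920x1200 monitor:       {megapixels(1920, 1200):.1f} MP")      # ~2.3 MP
print(f"A4 print (21 x 29.7 cm): {print_megapixels(21, 29.7):.1f} MP")  # ~8.7 MP
print(f"13 x 18 cm print:        {print_megapixels(13, 18):.1f} MP")    # ~3.3 MP
```

Even a large A4 print needs well under 9 MP, so anything beyond that is detail you will simply never see.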

There *are* differences between good and bad photographs, but they are not a function of megapixel count:

  • Your shiny National Geographic or Cosmopolitan magazines have limited resolution on a “full page” pic (probably < 5 MP), and yet those photographs look awesome!
  • Pictures on Facebook are only about 0.27 MP, and yet you can often see the difference between a good camera and a bad one (e.g. a cell phone). This is mostly due to differences in dynamic range and the quality of the lens, not megapixel count!

Having more megapixels is actually often a disadvantage:

  • A camera can only take pixel-level sharp pictures if the lens is able to form a sharp image on the sensor. The higher the pixel density, the sharper the lens needs to be. Very few cameras have lenses which can consistently provide detail to a sensor with 12 or more MP
  • A sensor pixel can only hold a certain amount of charge before “overflowing”. The smaller the pixel, the less charge it can hold. This directly influences dynamic range (see images below)
  • Every pixel needs light. Adding more pixels means each pixel will receive less light. This leads to “noisy” (speckled) images, since the camera needs to amplify the image signal more.
  • The size of the sensor directly influences the price, size and weight of the camera, and therefore small cheap cameras invariably have small sensors. The only way to keep pixel density low is by keeping the number of megapixels reasonable.

Relative sizes of sensors used in most current digital cameras. (click for detail)

DPReview recently started listing the pixel density (MP per square cm) of all the cameras in their database. Because cameras have sensors of different sizes, the megapixel count on its own doesn’t tell you how densely the pixels are packed. Notice the enormous range in the numbers! (A high pixel count combined with a low pixel density is best.) The short sketch after the table shows how these densities are derived.

Camera Model          | Price* | Pixel Count | Pixel Density
Kodak EasyShare C913  | $110   | 9.2 MP      | 37 MP/cm²
Canon PowerShot G9    | $500   | 12.1 MP     | 28 MP/cm²
Panasonic Lumix LX3   | $400   | 10.1 MP     | 24 MP/cm²
Panasonic Lumix G1    | $800   | 12.1 MP     | 5.0 MP/cm²
Canon EOS 50D         | $1400  | 15.0 MP     | 4.5 MP/cm²
Nikon D80             | $800   | 10.2 MP     | 2.7 MP/cm²
Sigma DP1             | $680   | 4.6 MP      | 1.6 MP/cm²
Nikon D3              | $4500  | 12.1 MP     | 1.4 MP/cm²

*prices (including a basic lens for DSLRs) based on Amazon.com quotes for November 2008
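
For the curious, a pixel density figure is simply the megapixel count divided by the sensor area. Here is an illustrative sketch; the sensor dimensions are typical approximate values for each class of camera (my assumptions, not the exact specifications of the models in the table):

```python
# Pixel density = megapixels / sensor area (in cm^2).
# The sensor dimensions below are approximate, typical values, not exact specs.

def pixel_density(megapixels, sensor_width_mm, sensor_height_mm):
    """Pixel density in megapixels per square centimetre."""
    area_cm2 = (sensor_width_mm / 10.0) * (sensor_height_mm / 10.0)
    return megapixels / area_cm2

examples = [
    ('10 MP compact (1/2.5" sensor, ~5.8 x 4.3 mm)', 10.0, 5.8, 4.3),
    ("10 MP APS-C DSLR (~23.6 x 15.8 mm)",           10.0, 23.6, 15.8),
    ("12 MP full-frame DSLR (36 x 24 mm)",           12.0, 36.0, 24.0),
]

for name, mp, width, height in examples:
    print(f"{name}: {pixel_density(mp, width, height):.1f} MP/cm^2")

# Prints roughly 40, 2.7 and 1.4 MP/cm^2: the compact packs its pixels some
# 15 to 30 times more densely, so each pixel receives far less light.
```

It is the tiny sensor, not an outrageous megapixel count, that pushes a compact camera's density into the tens of MP/cm².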

“Dynamic Range” is the ratio between the brightest and the darkest values a camera can record at the same time. Large pixels generally have more dynamic range than small pixels. And that is something you notice even on small pictures on Facebook. Cell phone cameras are terrible in this regard.

An excessive pixel density reduces dynamic range, which can lead to images with “blown highlights”, where bright regions of the photo are overexposed (pure white) while dark regions are underexposed (pure black).

A well-exposed photograph taken with sufficient dynamic range
(click for detail)
Blown highlights and underexposed shadows, due to low dynamic range
(click for detail)

Also, since the signal from each pixel is weaker, it needs to be amplified more. Amplifying the signal also amplifies the noise, which leads to random speckles and reduced detail. The noise can be suppressed with “noise reduction”, but this blurs the image and removes even more detail!

Low-noise image
(click for detail)
Noisy image (no noise reduction)
(click for detail)
Blurring due to “noise reduction”
(click for detail)
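
To make the noise argument a bit more concrete, here is a toy simulation of my own (not a rigorous sensor model): photon shot noise grows roughly as the square root of the number of photons a pixel collects, so a pixel that gathers a quarter of the light starts out with half the signal-to-noise ratio, and amplification cannot win that back.

```python
import random

# Toy model of photon shot noise: a pixel that collects N photons on average
# has noise of about sqrt(N), so its signal-to-noise ratio (SNR) is about sqrt(N).
# Smaller pixels collect fewer photons and therefore start with a worse SNR.

def estimated_snr(mean_photons, samples=100_000):
    """Estimate the SNR of a pixel collecting 'mean_photons' per exposure on average."""
    readings = [random.gauss(mean_photons, mean_photons ** 0.5) for _ in range(samples)]
    mean = sum(readings) / samples
    std = (sum((r - mean) ** 2 for r in readings) / samples) ** 0.5
    return mean / std

print(f"Large pixel, 40,000 photons: SNR ~ {estimated_snr(40_000):.0f}")  # ~200
print(f"Small pixel, 10,000 photons: SNR ~ {estimated_snr(10_000):.0f}")  # ~100
```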

One thing is certain, however: an image file with more pixels will take up more of your hard drive space. So, with your new little camera you might just end up with huge files containing millions and millions of noisy, blurry pixels which can’t cope with the really dark or really bright parts of a scene.

In fact, when comparing the brand new semi-professional Canon 50D (15 MP) against its predecessor the 40D (which had 10 MP), the DPReview team concluded: “in terms of per-pixel sharpness the 50D cannot quite keep up with the better 10 MP cameras on the market … It appears that Canon has reached the limit of what is sensible, in terms of megapixels … Considering the disadvantages that come with higher pixel densities such as diffraction issues, increased sensitivity towards camera shake, reduced dynamic range, reduced high ISO performance and the need to store, move and process larger amounts of data, one could be forgiven for coming to the conclusion that at this point the megapixel race should probably stop.”

Today’s new cameras can take better videos than ever before, have nice large displays, funky colours, are small and light, cost very little and can even detect faces and smiles. And these are all really cool. But we don’t need more megapixels for this!

To conclude: Beyond a certain point more pixels don’t improve picture quality – in fact, often the opposite. Sadly, it’s unlikely that manufacturers will stop the so-called “megapixel race” anytime soon. Why? Because numbers sell.

Moby killed my camera!

At Rock Werchter, on Friday 4 July 2008, a thin beam of intense green energy caused my 1-month-old camera’s CCD sensor to have a stroke. Yes, dear readers, it’s true! Moby killed my camera. The high-powered(?) stage laser went straight into the lens, and – zap!

Here is what happened:

I thought cameras were designed to survive bright lights, but actions speak louder than words, and lasers shine brighter than the brightest star.

Now, every picture I take with the camera looks exactly like this:

My camera now only sees this...

And now with a whacked camera, blasted ears and a stunned mind, I have also realised that this little blog has an existential crisis. In my opinion, successful blogs have a guiding theme or topic of interest – something this one has lacked thus far. That, and content. But now that will change… Over the coming weeks I’ll try to add more of the latter, so that the former might start to emerge. And I even made it a bit prettier, by adding some eye candy to the top bar… But then again, it’s only words, words, words. :)

Update: 15/07/2008

Indeed it seems as if things sometimes do go wrong at professional laser shows, as illuminated by the article in New Scientist entitled Party laser ‘blinds’ Russian ravers.

Ravers at the Aquamarine Open Air Festival in Kirzhach, 80 kilometres northeast of Moscow, began seeking medical help days after the show, complaining of eye and vision problems.

“They all have retinal burns, scarring is visible on them. Loss of vision in individual cases is as high as 80%, and regaining it is already impossible,” Kommersant quoted a treating ophthalmologist as saying.

Not cool.