The fine art of computational photography and iOS

Since light could first be captured accurately on a substrate in the 1800s, photography has nearly always meant one thing and one thing only: a bit of space preserved across a slice of time. Even as analog photography and filmmaking have given way to digital replacements, cameras still capture a piece of light focused through a lens for the length of a given exposure, from thousandths of a second to minutes, hours, or years, and fix it—now in bits rather than silver halide molecules.

That’s changing. You needn’t throw out your cameras, however, even though computational photography will fundamentally alter how you conceive of and take pictures, even (or perhaps especially) snapshots. In fact, you may already be using computational photography without knowing it, because such algorithmic techniques are built into smartphone apps that don’t seem at all out of the ordinary. The ever-better processors and cameras in phones like the iPhone allow real-time processing or relatively fast post-processing of images, and we’re only beginning to see what that makes possible.

Software builds a better picture

The term computational photography encompasses techniques that use sophisticated algorithms to combine multiple exposures across time, space, or both. You may have layered different exposures in an image-editing program, selectively multiplying, darkening, or masking them to produce a modified output. Or you might have taken several shots, then distorted and stitched their edges to make a larger picture. Those are fairly primitive (albeit useful) approaches compared with the image-processing sophistication used in computational photography.

The two most popular early entrants in this field are high-dynamic range (HDR) photography and panoramic images, which demonstrate, respectively, photos combined across time and photos combined across space. Both produce finished pictures that cannot be captured by a single camera lens in a single exposure. (A broader look at the field’s capabilities may be found at Stanford professor Marc Levoy’s webpage. He and colleague Pat Hanrahan helped define the field both mathematically and through practical research and tool creation.)

HDR images are a sort of dynamic-range composite derived from multiple shots of an identically framed scene captured at very close intervals. Each shot is made with a different exposure duration (the brief time the shutter is open), and thus each captures a different dynamic range: the span from light to dark and the subtleties in between. An HDR algorithm takes the multiple shots, analyzes variances in dynamic range, and enhances the weakest areas (where highlights are blown out or shadow details are missing, for instance) to produce an often supernaturally exposed photo. The result can be jarring, but HDR software lets you adjust it to be as natural or hyper-real as you wish. (See The best apps to create High Dynamic Range (HDR) photos.)
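To make the merging concrete, here is a minimal sketch of exposure fusion, one simplified way of combining bracketed shots in the spirit of what HDR tools do. It’s written in Python with numpy and imageio, and the file names and the weighting constant are illustrative assumptions, not any particular app’s algorithm:

```python
# A simplified, Mertens-style exposure fusion: pixels that are well
# exposed in one frame get more weight in the merged output. This is a
# sketch of the general idea, not any shipping app's method.
import numpy as np
import imageio.v3 as iio

# Three identically framed shots at different exposures (hypothetical files).
frames = [iio.imread(name).astype(np.float64) / 255.0
          for name in ("under.jpg", "normal.jpg", "over.jpg")]

merged = np.zeros_like(frames[0])
total_weight = np.zeros(frames[0].shape[:2])

for img in frames:
    gray = img.mean(axis=2)
    # Gaussian weight centered on mid-gray: blown highlights and crushed
    # shadows contribute little; well-exposed regions dominate.
    w = np.exp(-((gray - 0.5) ** 2) / (2 * 0.2 ** 2))
    merged += img * w[..., None]
    total_weight += w

merged /= total_weight[..., None]
iio.imwrite("fused.jpg", (merged * 255).astype(np.uint8))
```

Real HDR software adds tone mapping and per-pixel alignment on top of a merge like this, which is why handheld shots still work.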

HDR might seem at first like a form of bracketing, in which a photographer shoots a scene multiple times at slightly different exposures, apertures, focus settings, or flash levels. But bracketing is used to make sure you get one good shot; HDR relies on several shots that combine into one enhanced photograph. (In fact, on most cameras, exposure bracketing captures exactly the images needed for post-processing in HDR software.) HDR picture-taking appeared on the iPhone in iOS 4.1, although the smartphone can’t always snap pictures rapidly enough to produce good HDR results. (We explained HDR on the iPhone at its introduction in 2010, and provided tips about how to use it best, too.)

Panoramic images, in contrast to HDR, work in the dimension of space rather than time. To make a computed panorama, software captures a set of pictures across an area, so overlap matters more than simultaneity. Some panorama software, whether built into a camera or installed on a desktop or mobile device, takes a literal approach and simply joins pictures with distortion and overlay. The best software, though, analyzes the intersections and corrects skewing at the same time. You take a host of pictures, and out comes a landscape or a 360-degree view. These panoramas provide a richer visual description of the environment you’re in than a single shot, even one taken with an extremely wide-angle lens, ever could.
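For a feel of what that intersection analysis involves, here is a minimal two-image sketch using OpenCV’s feature matching (my choice of tooling; the article doesn’t say which techniques any given app uses). It matches features in the overlap, estimates a perspective transform, and warps one shot into the other’s frame:

```python
# Minimal panorama step: match features across the overlap, fit a
# homography with RANSAC, and warp one image into the other's frame.
# Real stitchers also blend seams and correct lens distortion.
import cv2
import numpy as np

left = cv2.imread("left.jpg")    # hypothetical overlapping shots
right = cv2.imread("right.jpg")

gray_l = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
gray_r = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)

# Find and match distinctive features in both images.
orb = cv2.ORB_create(2000)
kp_l, des_l = orb.detectAndCompute(gray_l, None)
kp_r, des_r = orb.detectAndCompute(gray_r, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_l, des_r)
matches = sorted(matches, key=lambda m: m.distance)[:200]

# Estimate the transform mapping the right image's coordinates onto the
# left image's, rejecting mismatched features as outliers.
src = np.float32([kp_r[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_l[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the right image into the left's frame and overlay the left image.
# A real stitcher blends the seam instead of pasting over it.
h, w = left.shape[:2]
canvas = cv2.warpPerspective(right, H, (w + right.shape[1], h))
canvas[:h, :w] = left
cv2.imwrite("panorama.jpg", canvas)
```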

HDR and panorama stitchers are just two examples of what will become an explosion of such tools as developers put theory into practice and discover what Apple’s (and other makers’) powerful processors can do. Excellent apps are already available.

Putting it to use

Dr. Levoy put his money (or, rather, time) where his mouth was by learning to program iOS apps and releasing SynthCam, which is now free. The app relies on video recording (a frame every 1/30th of a second) rather than still images to create a shallow depth of field, an effect in which only a small part of an image is in focus while objects closer and farther away are blurred; achieving it normally requires a single-lens reflex (SLR) camera. It’s a proof of concept, and thus a little tricky to use, but worth the effort. The app can also produce “tilt-shift”-style images, which make large objects look like miniatures and models, and seemingly miraculous low-light photos.
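A rough sketch of the synthetic-aperture idea at work here (my reading of the technique, not SynthCam’s actual code): shift every video frame so a tracked subject stays registered, then average. The subject reinforces itself and stays sharp, while everything at other depths lands in a different place each frame and blurs out. The tracker that supplies per-frame offsets is assumed, not shown:

```python
# Synthetic aperture from video: register frames on the subject, average,
# and let everything off the subject's depth smear into a blur. Averaging
# many frames also suppresses noise, which is the low-light trick.
import numpy as np

def synthetic_aperture(frames, offsets):
    """frames: list of HxWx3 float arrays from a video clip.
    offsets: list of (dy, dx) pixel shifts that keep the subject aligned."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    for frame, (dy, dx) in zip(frames, offsets):
        # np.roll stands in for proper subpixel registration.
        acc += np.roll(frame, shift=(-dy, -dx), axis=(0, 1))
    return acc / len(frames)
```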

The similarly named Photosynth (free), from Microsoft, is a panorama generator that emerged from a research project. It stitches images together into a three-dimensional view as you take photos, so you can capture a real sense of the space you are in, store it in a browsable 3D form, or create flat panoramic output. Several other panoramic iOS apps offer different controls, although some perform simple overlaps and don’t work very hard to create a seamless, corrected image. Panoramatic 360 ($1) was the best I tested for robust controls and options. It requires that you line up overlaps yourself, which can produce quite good results, even if it’s more work than Photosynth.

There’s also the potential for more interesting camera hardware to assist. The Lytro camera, announced earlier in 2011, is the first designed to capture a light field rather than a static, flat plane of light. The consumer-oriented camera has an odd form factor and takes low-resolution photos, but it can refocus an image after it’s taken. The secret is an array of micro-lenses interposed between the main lens and the sensor that captures light. (No surprise that the company’s founder is a former graduate student of Dr. Levoy’s, and the professor serves on the company’s advisory board.)

Instead of recording only where light falls, as a conventional digital camera does, the Lytro can calculate the direction of light rays and trace them back through space to where they originate or reflect off an object. Thus it can reset the focus (the theoretical focal plane) by recalculating where light rays would fall at different distances. There is no picture as such, only data that can be reconstructed into a picture. The Lytro is expensive ($399 for the base model), requires a deep lens assembly, and uses proprietary technology throughout, including a Flash-based Web-page viewer.
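In code terms, that recalculation can be done by “shift and add”: translate each sub-aperture view in proportion to its position within the aperture, then sum. The sketch below assumes a hypothetical 4D array of sub-aperture images, since Lytro’s actual data format is proprietary:

```python
# Shift-and-add refocusing of a light field: each sub-aperture view is
# shifted in proportion to its (u, v) position in the aperture and
# summed. The parameter alpha chooses the synthetic focal plane, i.e.
# the distance at which the shifted rays re-converge.
import numpy as np

def refocus(light_field, alpha):
    """light_field: hypothetical array of shape (U, V, H, W, 3), one
    sub-aperture image per micro-lens offset. alpha: refocus depth."""
    U, V = light_field.shape[:2]
    out = np.zeros(light_field.shape[2:], dtype=np.float64)
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - U // 2)))
            dx = int(round(alpha * (v - V // 2)))
            out += np.roll(light_field[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)
```

Sweeping alpha from negative to positive moves the synthetic focal plane through the scene, which is exactly the after-the-fact refocusing effect the camera advertises.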

The future

Computational photography is a great fit for the camera in an iPhone, iPod touch, or iPad, because the relatively small lens and image sensor demand extra cleverness to obtain good or interesting photos. One can imagine a future iOS with an array of automatically applied techniques that users never have to set. Today, we flip an HDR switch to On and use separate apps; tomorrow, iOS could simply, “magically,” produce impossible photos that capture the world around us in a manner closer to what our eyes see.

Glenn Fleishman, a Macworld senior contributor, worked at the Kodak Center for Creative Imaging in the early 1990s, where he used the first commercially produced digital camera. His first SLR was a Canon AE-1.

