Computational photography
Computational imaging refers to any image formation method that involves a digital computer. Computational photography refers broadly to computational imaging techniques that enhance or extend the capabilities of digital photography. The output of these techniques is an ordinary photograph, but one that could not have been taken by a traditional camera.
Its current definition, which stems from a 2004 course at Stanford University and a 2005 symposium at MIT (see links below), has evolved to cover a number of subject areas in computer graphics, computer vision, and applied optics. These areas are given below, organized according to a taxonomy proposed by Shree K. Nayar. Within each area is a list of techniques, and for each technique one or two representative papers or books are cited. Deliberately omitted from the taxonomy are image processing (see also digital image processing) techniques applied to traditionally captured images in order to produce better images. Examples of such techniques are image scaling, dynamic range compression (i.e. tone mapping), color management, image completion (a.k.a. inpainting or hole filling), image compression, digital watermarking, and artistic image effects. Also omitted are techniques that produce range data, volume data, 3D models, 4D light fields, 4D, 6D, or 8D BRDFs, or other high-dimensional image-based representations.
Computational illumination
This is the control of photographic illumination in a structured fashion, followed by processing of the captured images to create new images. Applications include image-based relighting, image enhancement, image deblurring, and geometry/material recovery.
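Image-based relighting rests on the linearity of light transport: a photograph taken under several light sources is the sum of the photographs taken under each source alone, so new lighting can be synthesized as a weighted combination of basis images. The following minimal sketch illustrates this with tiny random arrays standing in for real photographs; the array shapes and weights are illustrative assumptions, not part of any cited method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical basis images: one grayscale photo per light source,
# here three 4x4 stand-ins for real captured images.
basis = rng.random((3, 4, 4))

# Desired lighting: dim light 0, keep light 1, quarter-power light 2.
weights = np.array([0.5, 1.0, 0.25])

# By linearity of light transport, the relit image is the
# weighted sum of the basis images.
relit = np.tensordot(weights, basis, axes=1)
```

In practice the basis images are captured one light at a time (or with multiplexed illumination and demultiplexing), and the same weighted-sum principle applies per color channel.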
Computational optics
This is the capture of optically coded images, followed by computational decoding to produce new images. Coded aperture imaging was first applied in astronomy and X-ray imaging to improve image quality: instead of a single pinhole, a pattern of pinholes is used, and deconvolution is performed to recover the image. In coded exposure imaging, the on/off state of the shutter is coded to modify the kernel of motion blur,[1] which turns motion deblurring into a well-conditioned problem. Similarly, in a lens-based coded aperture, the aperture can be modified by inserting a broadband mask,[2] making out-of-focus deblurring a well-conditioned problem. A coded aperture can also improve the quality of light field acquisition using Hadamard transform optics.
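The conditioning argument for coded exposure can be seen in the frequency domain: a conventional open shutter produces a box blur whose spectrum has near-zero values (frequencies that deconvolution cannot recover), whereas a broadband on/off code keeps the whole spectrum away from zero. The sketch below compares the minimum spectral magnitude of the two kernels; the particular binary code is a made-up illustration, not the optimized code of Raskar et al., who search for sequences with the flattest spectrum.

```python
import numpy as np

N = 64   # signal length (assumed, for illustration)
L = 8    # exposure length in time slots

# Conventional shutter: the blur kernel is a box of L "open" slots.
box = np.zeros(N)
box[:L] = 1.0

# Fluttered shutter: the same L slots, but the shutter opens and
# closes following a binary code (hypothetical code shown here).
code = np.zeros(N)
code[[0, 1, 3, 6, 7]] = 1.0

# Deconvolution divides by the kernel's spectrum, so the smallest
# spectral magnitude governs how well-conditioned recovery is.
box_min = np.abs(np.fft.fft(box)).min()      # ~0: box blur kills some frequencies
coded_min = np.abs(np.fft.fft(code)).min()   # bounded away from zero
```

The box kernel's spectrum contains exact zeros (at multiples of N/L), so those frequencies of the scene are irretrievably lost; the coded kernel preserves them, which is why the fluttered shutter makes motion deblurring invertible.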
Computational processing
This is processing of non-optically-coded images to produce new images.
Computational sensors
These are detectors that combine sensing and processing, typically in hardware.
Early work in computer vision
Although computational photography is a currently popular buzzword in computer graphics, many of its techniques first appeared in the computer vision literature, either under other names or within papers aimed at 3D shape analysis.
References
- ^ Raskar, Ramesh; Agrawal, Amit; Tumblin, Jack (2006). "Coded Exposure Photography: Motion Deblurring using Fluttered Shutter". http://web.media.mit.edu/~raskar/deblur/. Retrieved November 29, 2010.
- ^ Veeraraghavan, Ashok; Raskar, Ramesh; Agrawal, Amit; Mohan, Ankit; Tumblin, Jack (2007). "Dappled Photography: Mask Enhanced Cameras for Heterodyned Light Fields and Coded Aperture Refocusing". http://web.media.mit.edu/~raskar/Mask/. Retrieved November 29, 2010.
External links
- Computational Photography (Raskar, R.; Tumblin, J.), A.K. Peters. In press.
- Special issue on Computational Photography, IEEE Computer, August 2006.
- Camera Culture and Computational Journalism: Capturing and Sharing Visual Experiences, IEEE CG&A Special Issue, Feb 2011.