Computational photography or computational imaging refers to digital image capture and processing techniques that use digital computation instead of optical processes. Computational photography can improve the capabilities of a camera, or introduce features that are not possible at all with film-based photography. Examples of computational photography include in-camera computation of digital panoramas, high-dynamic-range images, and light field images. Light field cameras capture additional scene information that enables 3D imaging, enhanced depth of field, and selective de-focusing (or “post focus”). Enhanced depth of field reduces the need for mechanical focusing systems. All of these features use computational imaging techniques.
The definition of computational photography has evolved to cover a number of areas in computer graphics, computer vision, and applied optics. These areas, following a taxonomy by Shree K. Nayar, are given below; each comprises a list of techniques. Deliberately omitted from the taxonomy are image processing (see also digital image processing) techniques applied to traditionally captured images in order to produce better images. Examples of such techniques are image scaling, dynamic range compression (i.e. tone mapping), color management, image completion (a.k.a. inpainting or hole filling), image compression, digital watermarking, and artistic image effects. Also omitted are techniques that produce range data, volume data, 3D models, 4D light fields, 4D, 6D, or 8D BRDFs, or other high-dimensional image-based representations. Epsilon photography is a sub-field of computational photography.
Computational illumination is controlling photographic illumination in a structured fashion, then processing the captured images to create new images. Applications include image-based relighting, image enhancement, image deblurring, geometry/material recovery, and so forth.
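Image-based relighting rests on the linearity of light transport: images captured under individual basis lights can be combined linearly to synthesize the scene under new illumination. A minimal sketch of that idea (the function name and weights are illustrative, not from any specific system):

```python
import numpy as np

def relight(basis_images, light_weights):
    """Synthesize a new image as a linear combination of images captured
    under individual basis lights; light transport is linear, so weighted
    sums of captures equal a capture under the weighted light mix."""
    stack = np.stack(basis_images, axis=0)          # (num_lights, H, W)
    weights = np.asarray(light_weights, dtype=np.float64)
    return np.tensordot(weights, stack, axes=1)     # sum_i w_i * image_i

# Example: dim the first light to half power, double the second.
a = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([[0.0, 2.0], [2.0, 0.0]])
new_image = relight([a, b], [0.5, 2.0])
```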
High-dynamic-range imaging uses differently exposed images of the same scene to extend dynamic range. Other examples include processing and merging differently illuminated images of the same subject matter (“lightspace”).
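A minimal sketch of the merging step, assuming a linear sensor response with pixel values normalized to [0, 1]; the hat-shaped weighting and the function name are illustrative, not taken from a particular library:

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge differently exposed images of one scene into a radiance map.
    Each image contributes radiance pixel/exposure_time, weighted so that
    mid-range pixels count most and clipped pixels count least."""
    num = np.zeros_like(np.asarray(images[0], dtype=np.float64))
    den = np.zeros_like(num)
    for im, t in zip(images, exposure_times):
        im = np.asarray(im, dtype=np.float64)
        w = 1.0 - np.abs(2.0 * im - 1.0)   # hat weight: 0 at 0 and 1, peak at 0.5
        num += w * im / t                  # per-image radiance estimate
        den += w
    return num / np.maximum(den, 1e-8)
```

The short exposure supplies the highlights that the long exposure clips, and vice versa; dividing by exposure time puts both on a common radiance scale before averaging.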
Computational optics is the capture of optically coded images, followed by computational decoding to produce new images. Coded aperture imaging was mainly applied in astronomy and X-ray imaging to boost image quality: instead of a single pinhole, a pinhole pattern is applied in imaging, and deconvolution is performed to recover the image. In coded exposure imaging, the on/off state of the shutter is coded to modify the kernel of motion blur; in this way motion deblurring becomes a well-conditioned problem. Similarly, in a lens-based coded aperture, the aperture can be modified by inserting a broadband mask, so that out-of-focus deblurring becomes a well-conditioned problem. The coded aperture can also improve quality in light field acquisition using Hadamard transform optics.
Coded aperture patterns can also be created using color filters, applying different codes at different wavelengths. This allows more light to reach the camera sensor than binary masks do.
Computational imaging is a set of imaging techniques that combine data acquisition and data processing to create the image of an object through indirect means, yielding enhanced resolution or additional information such as optical phase or a 3D reconstruction. The information is often recorded without a conventional optical microscope configuration or with limited datasets.
Computational imaging makes it possible to go beyond the physical limitations of optical systems, such as numerical aperture, or even eliminates the need for optical elements.
For parts of the optical spectrum where imaging elements such as objectives are difficult to manufacture or image sensors cannot be miniaturized, computational imaging provides useful alternatives, in fields such as X-ray and THz radiation.
Among computational imaging techniques are lensless imaging, computational speckle imaging, ptychography, and Fourier ptychography.
Computational imaging techniques often draw on compressive sensing or phase retrieval, in which the angular spectrum of the object is reconstructed. Other techniques are related to the field of computational imaging, such as digital holography, computer vision, and inverse problems such as tomography.
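As a concrete instance of phase retrieval, the classic Gerchberg–Saxton iteration recovers an unknown phase from magnitude measurements in two planes by alternately enforcing each measured magnitude while keeping the computed phase. A bare-bones sketch (the function name and parameters are illustrative):

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, iters=200):
    """Recover a phase such that a field with magnitude source_amp in the
    image plane has magnitude target_amp in the Fourier plane."""
    rng = np.random.default_rng(1)
    phase = rng.uniform(0.0, 2.0 * np.pi, source_amp.shape)  # random start
    for _ in range(iters):
        far = np.fft.fft2(source_amp * np.exp(1j * phase))
        far = target_amp * np.exp(1j * np.angle(far))   # enforce Fourier magnitude
        near = np.fft.ifft2(far)
        phase = np.angle(near)                          # keep phase, re-impose source magnitude
    return phase
```

Fienup showed that the Fourier-domain error of this iteration is non-increasing; in practice it is often combined with more robust variants such as hybrid input-output.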
Computational processing is the processing of non-optically coded images to produce new images.
Computational sensors are detectors that combine sensing and processing, typically in hardware, such as the oversampled binary image sensor.
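A toy simulation of the oversampled binary sensor idea, assuming each one-bit pixel fires when at least one photon arrives during a short exposure: averaging many binary frames estimates the hit probability, and inverting the Poisson relation p = 1 − exp(−λ) recovers the light intensity (names and parameters are illustrative):

```python
import numpy as np

def estimate_intensity(binary_frames):
    """Estimate per-pixel mean photon count from many one-bit exposures.
    For Poisson arrivals, P(bit = 1) = 1 - exp(-lam), so lam = -ln(1 - p)."""
    p = np.clip(binary_frames.mean(axis=0), 1e-6, 1 - 1e-6)
    return -np.log1p(-p)

rng = np.random.default_rng(0)
lam = np.array([0.2, 0.8, 1.5])            # true mean photons per exposure
photons = rng.poisson(lam, size=(20000, 3))
bits = (photons > 0).astype(np.float64)    # 1-bit readout: any photon vs none
print(estimate_intensity(bits))
```

The nonlinearity of the one-bit response is what the reconstruction undoes; with enough oversampled frames the estimate converges to the true intensity.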
Early work in computer vision
Although computational photography is one of the most popular topics in computer graphics, many of its techniques first appeared in the computer vision literature, sometimes under other names or within work on 3D shape analysis.
- Adaptive optics
- Multispectral imaging
- Simultaneous localization and mapping
- Super-resolution microscopy
- Time-of-flight camera
- Steve Mann. “Compositing Multiple Pictures of the Same Scene”, Proceedings of the 46th Annual Imaging Science & Technology Conference, May 9–14, Cambridge, Massachusetts, 1993
- S. Mann, C. Manders, and J. Fung, “The Lightspace Change Constraint Equation (LCCE) with practical application to estimation of the projectivity+gain transformation between multiple pictures of the same subject matter”, IEEE International Conference on Acoustics, Speech, and Signal Processing, 6–10 April 2003, pp. III-481–484, vol. 3.
- “Joint parameter estimation in both domain and range of functions in the orbit of the projective-Wyckoff group”, IEEE International Conference on Image Processing, Vol. 3, pp. 193–196, September 16–19, 1996
- Frank M. Candocia: Jointly registering images in domain and range by piecewise linear comparametric analysis. IEEE Transactions on Image Processing 12(4): 409–419 (2003)
- Frank M. Candocia: Simultaneous homographic and comparametric alignment of multiple exposure-adjusted pictures of the same scene. IEEE Transactions on Image Processing 12(12): 1485–1494 (2003)
- Steve Mann and R. W. Picard. “Virtual bellows: constructing high-quality images from video”, Proceedings of the IEEE First International Conference on Image Processing, Austin, Texas, November 13–16, 1994
- “On Being ‘Undigital’ with Digital Cameras: Extending Dynamic Range by Combining Differently Exposed Pictures”, IS&T’s (Society for Imaging Science and Technology) 48th Annual Conference, Cambridge, Massachusetts, May 1995, pp. 422–428
- Martinello, Manuel. “Coded Aperture Imaging” (PDF).
- Raskar, Ramesh; Agrawal, Amit; Tumblin, Jack (2006). “Coded Exposure Photography: Motion Deblurring Using Fluttered Shutter”. Retrieved November 29, 2010.
- Veeraraghavan, Ashok; Raskar, Ramesh; Agrawal, Amit; Mohan, Ankit; Tumblin, Jack (2007). “Dappled Photography: Mask Enhanced Cameras for Heterodyned Light Fields and Coded Aperture Refocusing”. Retrieved November 29, 2010.
- Martinello, Manuel; Wajs, Andrew; Quan, Shuxue; Lee, Hank; Lim, Dog; Woo, Taekun; Lee, Wonho; Kim, Sang-Sik; Lee, David (2015). “Dual Aperture Photography: Image and Depth from a Mobile Camera” (PDF). International Conference on Computational Photography.
- Chakrabarti, A.; Zickler, T. (2012). “Depth and deblurring from a spectrally-varying depth-of-field”. European Conference on Computer Vision. 7576: 648–666.
- Ou et al., “High numerical aperture Fourier ptychography: principle, implementation and characterization”, Optics Express 23, 3 (2015)
- Boominathan et al., “Lensless Imaging: A Computational Renaissance” (2016)
- Miyakawa et al., “Coded aperture detector: an image sensor with sub 20-nm pixel resolution”, Optics Express 22, 16 (2014)
- Katz et al., “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations”, Nature Photonics 8, 784–790 (2014)