Applied Optics

APPLICATIONS-CENTERED RESEARCH IN OPTICS

  • Editor: Joseph N. Mait
  • Vol. 52, Iss. 29 — Oct. 10, 2013
  • pp: 7152–7164

Passive depth estimation using chromatic aberration and a depth from defocus approach

Pauline Trouvé, Frédéric Champagnat, Guy Le Besnerais, Jacques Sabater, Thierry Avignon, and Jérôme Idier  »View Author Affiliations


Applied Optics, Vol. 52, Issue 29, pp. 7152-7164 (2013)
http://dx.doi.org/10.1364/AO.52.007152




Abstract

In this paper, we propose a new method for passive depth estimation based on the combination of a camera with longitudinal chromatic aberration and an original depth from defocus (DFD) algorithm. Indeed, a chromatic lens combined with an RGB sensor produces three images with spectrally variable in-focus planes, which eases the task of depth extraction with DFD. We first propose an original DFD algorithm dedicated to color images with spectrally varying defocus blur. We then describe the design of a prototype chromatic camera built to evaluate experimentally the effectiveness of the proposed approach for depth estimation. We provide comparisons with the results of an active ranging sensor, along with reconstructions of real indoor and outdoor scenes.
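The principle behind chromatic DFD can be illustrated with a toy model: each color channel has its own in-focus plane, so the triple of defocus blur radii observed in an image patch indexes depth. The sketch below is not the paper's algorithm; the focus distances, the lumped aperture constant, and the grid-search estimator are all illustrative assumptions, using a simple geometric-optics blur model.

```python
# Toy illustration of depth from chromatic defocus. All names and
# constants are hypothetical, not taken from the paper.

FOCUS_PLANES_M = {"R": 3.0, "G": 2.0, "B": 1.5}  # per-channel in-focus distances (m)
APERTURE_GAIN = 0.5                               # lumps focal length and f-number

def defocus_radius(depth_m, channel):
    """Geometric-optics defocus blur radius for one color channel."""
    p_focus = FOCUS_PLANES_M[channel]
    return APERTURE_GAIN * abs(1.0 / depth_m - 1.0 / p_focus)

def estimate_depth(observed_radii, candidate_depths):
    """Return the candidate depth whose predicted blur triple best
    matches the observed per-channel blur radii (least squares)."""
    def cost(d):
        return sum((defocus_radius(d, c) - observed_radii[c]) ** 2
                   for c in observed_radii)
    return min(candidate_depths, key=cost)

# Simulate a scene point at 2.5 m and recover its depth by grid search.
true_depth = 2.5
observed = {c: defocus_radius(true_depth, c) for c in FOCUS_PLANES_M}
grid = [1.0 + 0.5 * i for i in range(9)]  # candidate depths 1.0 m .. 5.0 m
print(estimate_depth(observed, grid))     # → 2.5
```

Note the role of the three channels: from a single channel, defocus blur alone is ambiguous (two depths on either side of the in-focus plane yield the same blur radius), but three channels with distinct in-focus planes produce a blur triple that resolves this ambiguity.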

© 2013 Optical Society of America

OCIS Codes
(100.0100) Image processing : Image processing
(100.3190) Image processing : Inverse problems
(110.0110) Imaging systems : Imaging systems
(110.1758) Imaging systems : Computational imaging

ToC Category:
Imaging Systems

History
Original Manuscript: June 19, 2013
Revised Manuscript: August 30, 2013
Manuscript Accepted: September 3, 2013
Published: October 9, 2013

Citation
Pauline Trouvé, Frédéric Champagnat, Guy Le Besnerais, Jacques Sabater, Thierry Avignon, and Jérôme Idier, "Passive depth estimation using chromatic aberration and a depth from defocus approach," Appl. Opt. 52, 7152-7164 (2013)
http://www.opticsinfobase.org/ao/abstract.cfm?URI=ao-52-29-7152


