Journal of the Optical Society of America A

OPTICS, IMAGE SCIENCE, AND VISION

  • Editor: Franco Gori
  • Vol. 30, Iss. 9 — Sep. 1, 2013
  • pp: 1787–1795

Rational-operator-based depth-from-defocus approach to scene reconstruction

Ang Li, Richard Staunton, and Tardi Tjahjadi


JOSA A, Vol. 30, Issue 9, pp. 1787-1795 (2013)
http://dx.doi.org/10.1364/JOSAA.30.001787




Abstract

This paper presents a rational-operator-based approach to depth from defocus (DfD) for reconstructing three-dimensional scenes from two-dimensional images, enabling fast DfD computation that is independent of scene texture. Two variants of the approach are considered: one uses Gaussian rational operators (ROs), based on the Gaussian point spread function (PSF), and the other uses ROs based on the generalized Gaussian PSF. A novel DfD correction method is also presented to further improve the performance of the approach. Experimental results on real scenes show that both variants outperform existing RO-based methods.

© 2013 Optical Society of America
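The texture-independent ratio principle that RO-based DfD builds on can be illustrated with a minimal sketch. This is not the paper's rational operators: the function names `defocus_pair` and `normalized_ratio`, the white-noise textures, and the SciPy-based filtering are all illustrative assumptions. The idea shown is only that two captures of the same scene at different focus settings, modeled here with a Gaussian PSF, yield a normalized local-power ratio that tracks the blur difference (and hence depth) rather than the underlying texture.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def defocus_pair(texture, sigma_near, sigma_far):
    """Simulate two captures of one scene at two focus settings.

    Defocus is modeled as convolution with a Gaussian PSF (an assumption;
    the paper also considers a generalized Gaussian PSF).
    """
    return gaussian_filter(texture, sigma_near), gaussian_filter(texture, sigma_far)


def normalized_ratio(i_near, i_far, eps=1e-8):
    """Normalized local-power ratio (P_far - P_near) / (P_far + P_near).

    The ratio is monotonic in the blur difference between the two captures,
    while the normalization suppresses the dependence on texture contrast.
    """
    # Band-pass each image: subtracting a smoothed copy removes the DC term,
    # so the ratio responds to blur, not to mean brightness.
    hp_near = i_near - gaussian_filter(i_near, 4.0)
    hp_far = i_far - gaussian_filter(i_far, 4.0)
    # Local power: smoothed squared band-pass response.
    p_near = gaussian_filter(hp_near**2, 4.0)
    p_far = gaussian_filter(hp_far**2, 4.0)
    return (p_far - p_near) / (p_far + p_near + eps)
```

On two statistically different textures blurred with the same (sigma_near, sigma_far) pair, the mean ratio comes out nearly identical, while increasing sigma_far drives the ratio more negative; that monotone, texture-insensitive behavior is what lets a lookup from ratio to depth work.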

OCIS Codes
(100.3010) Image processing : Image reconstruction techniques
(100.6890) Image processing : Three-dimensional image processing

ToC Category:
Image Processing

History
Original Manuscript: April 26, 2013
Revised Manuscript: July 19, 2013
Manuscript Accepted: July 25, 2013
Published: August 19, 2013

Citation
Ang Li, Richard Staunton, and Tardi Tjahjadi, "Rational-operator-based depth-from-defocus approach to scene reconstruction," J. Opt. Soc. Am. A 30, 1787-1795 (2013)
http://www.opticsinfobase.org/josaa/abstract.cfm?URI=josaa-30-9-1787


