Optics Express

  • Editor: C. Martijn de Sterke
  • Vol. 18, Iss. 13 — Jun. 21, 2010
  • pp: 14212–14224

An occlusion insensitive adaptive focus measurement method

Tarkan Aydin and Yusuf Sinan Akgul

This paper proposes a new focus measurement method for Depth From Focus to recover the depth of scenes. The method employs an all-focused image of the scene to address the focus-measure ambiguity that existing focus measures exhibit in the presence of occlusions. Depth discontinuities are handled effectively by using adaptively shaped and weighted support windows. The size of the support window can be increased conveniently for more robust depth estimation without introducing the window-size-related problems of Depth From Focus. Experiments on real and synthetically refocused images show that the introduced focus measurement method works effectively and efficiently in real-world applications.
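For orientation, the classical Depth From Focus pipeline that the paper builds on can be sketched as follows: compute a per-pixel focus measure for each frame of a focal stack, aggregate it over a support window, and take the per-pixel argmax over focus settings as a coarse depth index. The sketch below uses a fixed square window and a modified-Laplacian measure; this is exactly the baseline the paper improves upon with adaptively shaped and weighted windows guided by an all-focused image. Function names and parameters are illustrative, not the authors' code.

```python
import numpy as np

def focus_measure(img, window=5):
    """Modified-Laplacian focus measure summed over a square support window.

    Fixed-window baseline; the paper replaces this window with adaptively
    shaped and weighted support regions to handle occlusions and depth
    discontinuities.
    """
    # Absolute second differences in x and y (modified Laplacian).
    dxx = np.abs(np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0) - 2 * img)
    dyy = np.abs(np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1) - 2 * img)
    ml = dxx + dyy
    # Aggregate over the (window x window) support by summing shifted copies.
    k = window // 2
    out = np.zeros_like(ml)
    for di in range(-k, k + 1):
        for dj in range(-k, k + 1):
            out += np.roll(np.roll(ml, di, axis=0), dj, axis=1)
    return out

def depth_from_focus(stack, window=5):
    """stack: sequence of images taken at increasing focus settings.

    Returns, per pixel, the index of the frame with the highest focus
    measure -- a coarse depth map indexed by focus setting.
    """
    scores = np.stack([focus_measure(f, window) for f in stack])
    return np.argmax(scores, axis=0)
```

Enlarging `window` stabilizes the estimate on weakly textured regions, but with a fixed window it also smears depth across occlusion boundaries; the adaptive windows proposed in the paper are aimed at removing that trade-off.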

© 2010 Optical Society of America

OCIS Codes
(100.2000) Image processing : Digital image processing
(110.6880) Imaging systems : Three-dimensional image acquisition
(150.5670) Machine vision : Range finding
(150.6910) Machine vision : Three-dimensional sensing

ToC Category:
Imaging Systems

Original Manuscript: April 15, 2010
Revised Manuscript: June 15, 2010
Manuscript Accepted: June 15, 2010
Published: June 17, 2010

Tarkan Aydin and Yusuf S. Akgul, "An occlusion insensitive adaptive focus measurement method," Opt. Express 18, 14212-14224 (2010)
