Journal of the Optical Society of America A

OPTICS, IMAGE SCIENCE, AND VISION

  • Editor: Franco Gori
  • Vol. 27, Iss. 5, May 1, 2010
  • pp: 1203–1213

Joint image and depth completion in shape-from-focus: Taking a cue from parallax

Rajiv R. Sahay and A. N. Rajagopalan


JOSA A, Vol. 27, Issue 5, pp. 1203-1213 (2010)
http://dx.doi.org/10.1364/JOSAA.27.001203



Abstract

Shape-from-focus (SFF) uses a sequence of space-variantly defocused observations captured with relative motion between the camera and the scene. It assumes that there is no motion parallax in the frames, a restrictive assumption that constrains the working environment. Moreover, SFF cannot recover structure information when data are missing in the frames due to CCD sensor damage or unavoidable occlusions. The capability to fill in plausible information in regions devoid of data is of critical importance in many applications. Images of 3D scenes captured by off-the-shelf cameras with relative motion commonly exhibit parallax-induced pixel motion. We demonstrate the interesting possibility of exploiting the motion parallax cue in the images captured in SFF with a practical camera to jointly inpaint the focused image and the depth map.

© 2010 Optical Society of America
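
The abstract builds on the classical shape-from-focus pipeline. As a rough illustration of that baseline only (not the authors' joint image and depth inpainting framework), the Python sketch below computes a modified-Laplacian focus measure over a registered, parallax-free focal stack and assigns to each pixel the index of the frame with maximal focus; the variable names, window size, and choice of focus measure are assumptions made for illustration.

# Minimal sketch of the classical shape-from-focus (SFF) baseline, for
# illustration only; this is NOT the method proposed in the paper. It assumes
# a registered focal stack `stack` of shape (K, H, W) with no motion parallax.
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def modified_laplacian(img):
    # Modified-Laplacian focus measure: |d2I/dx2| + |d2I/dy2|.
    kx = np.array([[0.0, 0.0, 0.0], [-1.0, 2.0, -1.0], [0.0, 0.0, 0.0]])
    ky = kx.T
    return np.abs(convolve(img, kx)) + np.abs(convolve(img, ky))

def shape_from_focus(stack, window=9):
    # Aggregate the focus measure over a local window for robustness, then take,
    # at each pixel, the frame index that maximizes it (a proxy for depth).
    focus = np.stack([uniform_filter(modified_laplacian(f), size=window) for f in stack])
    depth_index = np.argmax(focus, axis=0)                 # (H, W) frame indices
    rows, cols = np.indices(depth_index.shape)
    focused_image = stack[depth_index, rows, cols]         # all-in-focus composite
    return depth_index, focused_image

In classical SFF the frame index at each pixel is converted to a depth value using the known per-frame displacement along the optical axis; the paper's contribution is to relax the no-parallax assumption and to fill in missing regions of both the focused image and the depth map jointly.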

OCIS Codes
(150.5670) Machine vision : Range finding
(150.6910) Machine vision : Three-dimensional sensing

ToC Category:
Machine Vision

History
Original Manuscript: September 14, 2009
Revised Manuscript: February 8, 2010
Manuscript Accepted: February 21, 2010
Published: April 30, 2010

Citation
Rajiv R. Sahay and A. N. Rajagopalan, "Joint image and depth completion in shape-from-focus: Taking a cue from parallax," J. Opt. Soc. Am. A 27, 1203-1213 (2010)
http://www.opticsinfobase.org/josaa/abstract.cfm?URI=josaa-27-5-1203




