Journal of the Optical Society of America A | Optics, Image Science, and Vision

  • Editor: Franco Gori
  • Vol. 30, Iss. 6 — Jun. 1, 2013
  • pp: 1155–1165

Depth inpainting by tensor voting

Mandar Kulkarni and Ambasamudram N. Rajagopalan


JOSA A, Vol. 30, Issue 6, pp. 1155-1165 (2013)
http://dx.doi.org/10.1364/JOSAA.30.001155


Abstract

Depth maps captured by range scanning devices or by using optical cameras often suffer from missing regions due to occlusions, reflectivity, limited scanning area, sensor imperfections, etc. In this paper, we propose a fast and reliable algorithm for depth map inpainting using the tensor voting (TV) framework. For less complex missing regions, local edge and depth information is utilized for synthesizing missing values. The depth variations are modeled by local planes using 3D TV, and missing values are estimated using plane equations. For large and complex missing regions, we collect and evaluate depth estimates from self-similar (training) datasets. We align the depth maps of the training set with the target (defective) depth map and evaluate the goodness of depth estimates among candidate values using 3D TV. We demonstrate the effectiveness of the proposed approaches on real as well as synthetic data.
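
The plane-based completion step for less complex holes can be pictured with a short sketch. The Python snippet below is only an illustration under simplifying assumptions: it fills each missing pixel by a least-squares plane fit z = ax + by + c to the valid depths in a small window, whereas the paper infers the local plane with 3D tensor voting and also uses local edge information. The function name fill_hole_with_plane and the window parameter are hypothetical, not taken from the paper.

import numpy as np

def fill_hole_with_plane(depth, hole_mask, window=7):
    """Fill small missing regions by fitting a local plane z = a*x + b*y + c
    to the valid depth samples around each hole pixel (a least-squares
    stand-in for the paper's 3D tensor-voting plane inference)."""
    filled = depth.copy()
    h, w = depth.shape
    r = window // 2
    for y, x in zip(*np.nonzero(hole_mask)):
        y0, y1 = max(0, y - r), min(h, y + r + 1)
        x0, x1 = max(0, x - r), min(w, x + r + 1)
        yy, xx = np.mgrid[y0:y1, x0:x1]
        valid = ~hole_mask[y0:y1, x0:x1]          # use only originally valid samples
        if valid.sum() < 3:                       # a plane needs at least 3 points
            continue
        A = np.column_stack([xx[valid], yy[valid], np.ones(valid.sum())])
        z = depth[y0:y1, x0:x1][valid]
        (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
        filled[y, x] = a * x + b * y + c          # evaluate the plane at the hole pixel
    return filled

When a window contains too few valid neighbors, this sketch simply leaves the pixel untouched; for such large or complex holes the paper instead gathers candidate depths from aligned self-similar training depth maps and selects among them with 3D tensor voting.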

© 2013 Optical Society of America

OCIS Codes
(100.3010) Image processing : Image reconstruction techniques
(100.3020) Image processing : Image reconstruction-restoration

ToC Category:
Machine Vision

History
Original Manuscript: October 26, 2012
Revised Manuscript: April 8, 2013
Manuscript Accepted: April 8, 2013
Published: May 15, 2013

Citation
Mandar Kulkarni and Ambasamudram N. Rajagopalan, "Depth inpainting by tensor voting," J. Opt. Soc. Am. A 30, 1155-1165 (2013)
http://www.opticsinfobase.org/josaa/abstract.cfm?URI=josaa-30-6-1155


