Applied Optics

APPLICATIONS-CENTERED RESEARCH IN OPTICS

  • Editor: Joseph N. Mait
  • Vol. 52, Iss. 16 — Jun. 1, 2013
  • pp: 3680–3688

Hybrid optical system for three-dimensional shape acquisition

In Yeop Jang, Min Ki Park, and Kwan H. Lee


Applied Optics, Vol. 52, Issue 16, pp. 3680–3688 (2013)
http://dx.doi.org/10.1364/AO.52.003680


Abstract

Hybrid concepts are often used to improve existing methods in many fields. We developed a hybrid optical system, consisting of multiple color cameras and one depth camera, to address the concavity problem of visual hull construction. The heterogeneous data from the color cameras and the depth camera are fused effectively. The experimental results show that the proposed hybrid system can successfully reconstruct concave objects by combining the visual hull with the depth data.

© 2013 Optical Society of America
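The fusion described in the abstract can be pictured as two carving passes over a shared voxel grid: silhouettes from the color cameras first carve out the visual hull, and the depth camera then carves away hull voxels it can see free space through, which is what recovers the concavities that silhouettes alone cannot detect. The minimal sketch below illustrates this idea; it is not the authors' implementation, and the pinhole camera model, the tuple layouts, and the eps tolerance are illustrative assumptions.

import numpy as np

def project(points, K, R, t, shape):
    """Project Nx3 world points through a pinhole camera (K, R, t).
    Returns pixel indices (u, v), camera-frame depth z, and a mask of
    points that lie in front of the camera and inside the image."""
    cam = points @ R.T + t                        # world -> camera frame
    z = cam[:, 2]
    safe_z = np.where(z > 0, z, 1.0)              # guard divide-by-zero
    pix = cam @ K.T
    u = (pix[:, 0] / safe_z).astype(int)
    v = (pix[:, 1] / safe_z).astype(int)
    h, w = shape
    ok = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    return u, v, z, ok

def hybrid_occupancy(voxels, color_views, depth_view, eps=0.01):
    """voxels: Nx3 voxel centers.  color_views: list of (K, R, t, mask)
    tuples, where mask is a boolean foreground silhouette.  depth_view:
    a single (K, R, t, depth_map) tuple in the same metric units as the
    voxel grid.  Returns a boolean occupancy mask over the voxels."""
    occupied = np.ones(len(voxels), dtype=bool)

    # Pass 1: visual hull -- a voxel survives only if it projects into
    # the foreground silhouette of every color camera.
    for K, R, t, silhouette in color_views:
        u, v, z, ok = project(voxels, K, R, t, silhouette.shape)
        inside = np.zeros(len(voxels), dtype=bool)
        inside[ok] = silhouette[v[ok], u[ok]]
        occupied &= inside

    # Pass 2: concavity carving -- remove hull voxels the depth camera
    # sees past, i.e., voxels clearly in front of the measured surface
    # along the viewing ray; silhouettes can never eliminate these.
    K, R, t, depth_map = depth_view
    u, v, z, ok = project(voxels, K, R, t, depth_map.shape)
    measured = np.full(len(voxels), np.inf)
    measured[ok] = depth_map[v[ok], u[ok]]
    free = ok & (measured > 0) & (z < measured - eps)
    occupied &= ~free
    return occupied

Because the visual hull is by construction a superset of the object, concave regions always survive the first pass; the second pass uses the depth camera's line-of-sight evidence to carve exactly those voxels, so the two kinds of sensor compensate for each other's blind spots.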

OCIS Codes
(100.6890) Image processing : Three-dimensional image processing
(110.5200) Imaging systems : Photography
(110.6880) Imaging systems : Three-dimensional image acquisition
(150.6910) Machine vision : Three-dimensional sensing

ToC Category:
Imaging Systems

History
Original Manuscript: March 1, 2013
Revised Manuscript: April 21, 2013
Manuscript Accepted: April 22, 2013
Published: May 22, 2013

Citation
In Yeop Jang, Min Ki Park, and Kwan H. Lee, "Hybrid optical system for three-dimensional shape acquisition," Appl. Opt. 52, 3680-3688 (2013)
http://www.opticsinfobase.org/ao/abstract.cfm?URI=ao-52-16-3680



