OSA's Digital Library

Virtual Journal for Biomedical Optics

EXPLORING THE INTERFACE OF LIGHT AND BIOMEDICINE

  • Editors: Andrew Dunn and Anthony Durkin
  • Vol. 9, Iss. 3 — Mar. 6, 2014

Illuminant direction estimation for a single image based on local region complexity analysis and average gray value

Jizheng Yi, Xia Mao, Lijiang Chen, Yuli Xue, and Angelo Compare


Applied Optics, Vol. 53, Issue 2, pp. 226-236 (2014)
http://dx.doi.org/10.1364/AO.53.000226




Abstract

Illuminant direction estimation is an important research issue in the field of image processing. Because texture information can be obtained from a single image at low cost, it is worthwhile to estimate the illuminant direction from scene texture. This paper proposes a novel computational method for estimating the illuminant direction, evaluated on both color outdoor images and the extended Yale face database B. First, the luminance component is separated from the resized YCbCr image and its edges are detected with the Canny edge detector. Then, the binary edge image is divided into 16 local regions and the edge level percentage is calculated in each of them. Afterward, the edge level percentage is used to analyze the complexity of each local region of the luminance component. Finally, using the error function between the measured and calculated intensities together with the constraint function of an infinite light source model, we compute the illuminant directions of the three local regions that combine lower complexity with larger average gray value, and synthesize them into the final illuminant direction. Unlike previous works, the proposed method requires neither complete image information nor textures drawn from a training set. Experimental results show that the proposed method outperforms existing methods in both correct rate and execution time.
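
The paper's exact error and constraint functions are not reproduced in this abstract, but the following minimal Python sketch (using OpenCV and NumPy) illustrates the pipeline it describes: luminance extraction from the YCbCr image, Canny edge detection, division into 16 local regions, edge-level-percentage complexity analysis, and selection of three low-complexity, high-average-gray regions. The 256x256 resize, the Canny thresholds, the lexicographic region ranking, and the mean-gradient per-region direction estimate are assumptions for illustration, not the authors' actual formulation.

```python
import cv2
import numpy as np

def estimate_illuminant_direction(image_path, grid=4, num_regions=3):
    """Sketch of the abstract's pipeline; see the lead-in for assumptions."""
    bgr = cv2.imread(image_path)
    bgr = cv2.resize(bgr, (256, 256))               # resize step (size assumed)
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    Y8 = ycrcb[:, :, 0]                             # luminance component (uint8)
    edges = cv2.Canny(Y8, 50, 150)                  # binary edge image (thresholds assumed)
    Y = Y8.astype(np.float64)

    # Divide into 16 local regions (4x4 grid) and measure each region's
    # complexity as its edge level percentage, plus its average gray value.
    h, w = Y.shape
    rh, rw = h // grid, w // grid
    stats = []
    for i in range(grid):
        for j in range(grid):
            region = Y[i * rh:(i + 1) * rh, j * rw:(j + 1) * rw]
            edge_region = edges[i * rh:(i + 1) * rh, j * rw:(j + 1) * rw]
            edge_pct = np.count_nonzero(edge_region) / edge_region.size
            stats.append((edge_pct, region.mean(), region))

    # Keep the three regions combining lower complexity with larger average
    # gray value (a simple lexicographic ranking; the paper's criterion may differ).
    stats.sort(key=lambda s: (s[0], -s[1]))
    chosen = stats[:num_regions]

    # Per-region direction from the mean image gradient, a standard
    # local-shading approximation standing in for the paper's error and
    # constraint functions under an infinite (distant) light source model.
    directions = []
    for _, _, region in chosen:
        gy, gx = np.gradient(region)
        directions.append(np.array([gx.mean(), gy.mean()]))

    # Synthesize the per-region estimates into the final illuminant direction.
    d = np.mean(directions, axis=0)
    return d / (np.linalg.norm(d) + 1e-12)
```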

© 2014 Optical Society of America

OCIS Codes
(100.0100) Image processing : Image processing
(200.4740) Optics in computing : Optical processing
(330.0330) Vision, color, and visual optics : Vision, color, and visual optics

ToC Category:
Vision, Color, and Visual Optics

History
Original Manuscript: September 17, 2013
Revised Manuscript: November 8, 2013
Manuscript Accepted: December 3, 2013
Published: January 9, 2014

Virtual Issues
Vol. 9, Iss. 3 Virtual Journal for Biomedical Optics

Citation
Jizheng Yi, Xia Mao, Lijiang Chen, Yuli Xue, and Angelo Compare, "Illuminant direction estimation for a single image based on local region complexity analysis and average gray value," Appl. Opt. 53, 226-236 (2014)
http://www.opticsinfobase.org/vjbo/abstract.cfm?URI=ao-53-2-226

