Optics Express

  • Editor: Andrew M. Weiner
  • Vol. 21, Iss. 22 — Nov. 4, 2013
  • pp: 27127–27141

Haze effect removal from image via haze density estimation in optical model

Chia-Hung Yeh, Li-Wei Kang, Ming-Sui Lee, and Cheng-Yang Lin  »View Author Affiliations


Optics Express, Vol. 21, Issue 22, pp. 27127-27141 (2013)
http://dx.doi.org/10.1364/OE.21.027127


Abstract

Images and videos captured by optical devices are often degraded by turbid media such as haze, smoke, fog, rain, and snow. Haze is the most common problem in outdoor scenes because of atmospheric conditions. This paper proposes a novel single-image dehazing framework to remove haze artifacts from images, built on two novel image priors: the pixel-based dark channel prior and the pixel-based bright channel prior. Based on these two priors and the haze optical model, we estimate the atmospheric light via haze density analysis. We then estimate the transmission map and refine it with a bilateral filter. As a result, high-quality haze-free images can be recovered with lower computational complexity than the state-of-the-art approach based on the patch-based dark channel prior.
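The recovery step rests on the standard haze optical model I(x) = J(x)t(x) + A(1 − t(x)), where I is the observed image, J the scene radiance, A the atmospheric light, and t the transmission. The following is a minimal sketch of that inversion using per-pixel channel extrema (the "pixel-based" priors); it is not the authors' implementation — the paper's haze-density-based estimation of A and the bilateral-filter refinement of t are omitted, and the constants `omega` and `t0` are conventional choices, not values from the paper.

```python
import numpy as np

def pixel_dark_channel(img):
    """Pixel-based dark channel: per-pixel minimum over the color
    channels (no patch minimum, unlike the patch-based prior)."""
    return img.min(axis=2)

def pixel_bright_channel(img):
    """Pixel-based bright channel: per-pixel maximum over the color
    channels."""
    return img.max(axis=2)

def dehaze(img, A, omega=0.95, t0=0.1):
    """Invert the haze optical model I = J*t + A*(1 - t).

    img   : float array of shape (H, W, 3), values in [0, 1]
    A     : atmospheric light (scalar or length-3 array), assumed
            already estimated
    omega : fraction of haze removed (keeps a trace for realism)
    t0    : lower bound on transmission to avoid division blow-up
    """
    dark = pixel_dark_channel(img / A)
    t = 1.0 - omega * dark            # coarse transmission estimate
    t = np.maximum(t, t0)             # clamp away from zero
    return (img - A) / t[..., None] + A
```

In practice the coarse transmission map would be smoothed with an edge-preserving filter (the paper uses a bilateral filter) before the final division, to suppress halo artifacts around depth discontinuities.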

© 2013 Optical Society of America

OCIS Codes
(100.2980) Image processing : Image enhancement
(100.3020) Image processing : Image reconstruction-restoration

ToC Category:
Image Processing

History
Original Manuscript: July 24, 2013
Revised Manuscript: October 19, 2013
Manuscript Accepted: October 23, 2013
Published: November 1, 2013

Virtual Issues
Vol. 9, Iss. 1 Virtual Journal for Biomedical Optics

Citation
Chia-Hung Yeh, Li-Wei Kang, Ming-Sui Lee, and Cheng-Yang Lin, "Haze effect removal from image via haze density estimation in optical model," Opt. Express 21, 27127-27141 (2013)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-21-22-27127


