Image dehazing using polarization effects of objects and airlight

Shuai Fang, XiuShan Xia, Huo Xing, and ChangWen Chen


Optics Express, Vol. 22, Issue 16, pp. 19523-19537 (2014)
http://dx.doi.org/10.1364/OE.22.019523



Abstract

The analysis of polarization-filtered images has proven useful in image dehazing. However, current polarization-based dehazing algorithms rest on the assumption that polarization is associated only with the airlight. This assumption does not hold up well in practice, since both the object radiance and the airlight contribute to the polarization. In this study, a new polarization hazy imaging model is presented, which considers the joint polarization effects of the airlight and the object radiance in the imaging process. In addition, an effective method to synthesize the optimal polarized-difference (PD) image is introduced. Then, a decorrelation-based scheme is proposed to estimate the degree of polarization of the object from the polarized image input. After that, the haze-free image can be recovered based on the new polarization hazy imaging model. Qualitative and quantitative experimental results verify the effectiveness of the new dehazing scheme. As a by-product, the scheme also provides additional polarization properties of the objects in the image, which can be used in extended applications such as scene segmentation and object recognition.

© 2014 Optical Society of America

1. Introduction

The image quality of outdoor scenes can be severely limited by atmospheric aerosols, which scatter and absorb the target signal out of the optical path and scatter unwanted light into the optical path from the surroundings. To eliminate such negative effects, several dehazing methods have been developed in recent years. The existing dehazing methods can be divided into two categories: single image dehazing [1–5] and polarization-based dehazing [6–10]. Because of their ill-posed nature, single image dehazing schemes rely on various assumptions to eliminate ambiguity, including the dark channel prior [1], local consistency [2], and neighboring smoothness with maximized image contrast [3]. Polarization-based dehazing methods use as few as two polarized images taken through a polarizer at different orientations. The representative schemes in polarization-based dehazing were proposed by Schechner et al. [6–8, 10], who assumed that only airlight is polarized (the OAP assumption), as shown in Fig. 1(b).

Fig. 1 The components of a hazy image. (a) components of intensity. (b) Schechner et al.'s view of polarization's components. (c) the proposed view of polarization's components.

In particular, Namer and Schechner [8] pointed out that this assumption works fine in most cases, with the exception of occasional specular dielectric objects in a scene. Furthermore, they performed a correction of the airlight map by detecting incorrect areas and re-estimating the polarization property of the airlight. The correction process sharply increases the computation cost, while only obvious mistakes, such as areas of water bodies and shiny construction materials, can be corrected.

2. Optical models

2.1 Stokes vector and polarization imaging

A common representation of a polarization state is the Stokes vector representation [21]. In this representation, a 4 × 1 column vector is assembled over a scene with x–y spatial coordinates as

S(x,y) = \begin{pmatrix} S_0(x,y) \\ S_1(x,y) \\ S_2(x,y) \\ S_3(x,y) \end{pmatrix} = \begin{pmatrix} I(x,y,0^\circ) + I(x,y,90^\circ) \\ I(x,y,0^\circ) - I(x,y,90^\circ) \\ I(x,y,45^\circ) - I(x,y,-45^\circ) \\ I_L(x,y) - I_R(x,y) \end{pmatrix}.
(1)

where S0 represents the total intensity of the remitted and collected light; S1 represents the difference in intensities between the horizontal and vertical linearly polarized components; S2 represents the difference in intensities between linearly polarized components oriented at 45° and −45° with respect to the x-axis; and S3 represents the difference in intensities between right and left circularly polarized light. In this study, we have chosen to ignore S3 because circular polarization is relatively rare in airlight, and it has not appeared as a major component in natural scenes.

The Stokes vector for a partially polarized beam [21] can be considered as a superposition of a completely polarized Stokes vector and a non-polarized Stokes vector. The polarized portion of the beam represents a net polarization ellipse traced by the electric field vector as a function of time. From the Stokes vector, the degree of linear polarization (DoLP) ρλ and the orientation angle of the polarization (AOP) ellipse αλ are given by:
\rho_\lambda = \frac{\sqrt{(S_1^\lambda)^2 + (S_2^\lambda)^2}}{S_0^\lambda}.
(2)
\alpha_\lambda = \frac{1}{2} \arctan\left( \frac{S_2^\lambda}{S_1^\lambda} \right).
(3)
Here, λ ∈ {λR, λG, λB} indexes the regular RGB wavebands of the imaging system.

Suppose a perfect linear polarizer is placed in front of a camera. The observed image intensity Iλ(x,y,θ) at pixel (x,y) is a function of the angle θ between the polarization analyzer and a reference direction adopted in prior studies [22, 23], and can be described as:

I_\lambda(x,y,\theta) = \frac{1}{2} S_0^\lambda(x,y) + \frac{1}{2} S_1^\lambda(x,y) \cos 2\theta + \frac{1}{2} S_2^\lambda(x,y) \sin 2\theta.
(4)
where S_0^λ, S_1^λ, and S_2^λ are the first three Stokes parameters. These parameters can be obtained by acquiring three input polarized images with the linear polarizer set at different orientations.
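
To make these relations concrete, the following sketch (illustrative code, not the authors' implementation) recovers the first three Stokes parameters from three polarizer orientations via Eq. (4) and then evaluates Eqs. (2) and (3); the choice of 0°, 45°, and 90° is an assumption, as any three distinct angles yield a solvable linear system.

```python
import numpy as np

def stokes_from_three(i0, i45, i90):
    """Solve Eq. (4) for S0, S1, S2 from images taken with the linear
    polarizer at 0, 45, and 90 degrees (an assumed set of angles).
    Eq. (4): I(theta) = S0/2 + (S1/2) cos(2 theta) + (S2/2) sin(2 theta)."""
    s0 = i0 + i90        # cos(0) and cos(180 deg) terms cancel S1
    s1 = i0 - i90
    s2 = 2.0 * i45 - s0  # at theta = 45 deg: I = S0/2 + S2/2
    return s0, s1, s2

def dolp_aop(s0, s1, s2, eps=1e-8):
    """Degree of linear polarization, Eq. (2), and angle of
    polarization, Eq. (3); eps guards against division by zero."""
    rho = np.sqrt(s1**2 + s2**2) / (s0 + eps)
    alpha = 0.5 * np.arctan2(s2, s1)
    return rho, alpha
```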

2.2 Polarization hazy imaging model

The following equation approximately represents the optical energy transferred through a homogeneous atmospheric medium by absorption and scattering processes [24, 25]. The total intensity of an image is proportional to its optical energy; therefore, the image intensity equals the first element of the Stokes vector:
S_0(x,y) = S_0^D(x,y) + S_0^A(x,y).
(5)
S0(x,y) is the intensity of the image captured by the camera at position (x, y). S0D(x,y) is called the direct transmission, which describes how the scene radiance is attenuated due to the atmosphere. S0A(x,y) is called airlight or path radiance. It originates from the environmental illumination, a portion of which is scattered into the line-of-sight by atmospheric particles. The expressions of S0D(x,y) and S0A(x,y) are:
S_0^D(x,y) = J(x,y)\, t(x,y).
(6)
S_0^A(x,y) = A \left( 1 - t(x,y) \right).
(7)
Here, J is the scene radiant intensity, A is the airlight radiant intensity corresponding to an object at an infinite distance (e.g., the horizon), and t is the transmission map, which can be expressed as:
t(x,y) = \exp\left( -\beta\, d(x,y) \right).
(8)
Here, β is the extinction coefficient due to scattering and absorption. Equation (8) shows that the medium transmittance depends mainly on the distance d between the object and the observer. Thus, the transmission map t can also be regarded as a scaled range map.
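
As a worked illustration of Eqs. (5)–(8), a minimal forward model of hazy image formation might look as follows; all function and variable names are ours, chosen for clarity.

```python
import numpy as np

def hazy_image(J, A, beta, d):
    """Synthesize the observed intensity S0 from scene radiance J,
    airlight intensity A, extinction coefficient beta, and depth map d."""
    t = np.exp(-beta * d)      # Eq. (8): transmission map
    direct = J * t             # Eq. (6): attenuated scene radiance
    airlight = A * (1.0 - t)   # Eq. (7): path radiance
    return direct + airlight   # Eq. (5): total observed intensity
```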

As reported in [6, 8, 23], Eq. (2) can be equivalently expressed as Eq. (9):

\rho(x,y) = \frac{I_{\max}(x,y,\theta_{\max}) - I_{\min}(x,y,\theta_{\min})}{I_{\max}(x,y,\theta_{\max}) + I_{\min}(x,y,\theta_{\min})} = \frac{\Delta I(x,y)}{S_0(x,y)}.
(9)

where Imax(x,y,θmax) denotes the maximum intensity at position (x,y) when rotating the polarizer, with the corresponding polarization analyzer angle denoted θmax; Imin(x,y,θmin) correspondingly denotes the minimum intensity, with analyzer angle θmin. We define ΔI as the polarized-difference (PD) image and S0 as the polarized-sum (PS) image [13, 23, 26]. Equation (9) is more suitable for wideband signals.

Similarly, the DoLP of the direct transmission S0D(x,y) and the airlight S0A(x,y) can be defined as:
\rho_D(x,y) = \frac{I_{\max}^D(x,y,\theta_{\max}) - I_{\min}^D(x,y,\theta_{\min})}{I_{\max}^D(x,y,\theta_{\max}) + I_{\min}^D(x,y,\theta_{\min})} = \frac{\Delta D(x,y)}{S_0^D(x,y)}.
(10)
\rho_A(x,y) = \frac{I_{\max}^A(x,y,\theta_{\max}) - I_{\min}^A(x,y,\theta_{\min})}{I_{\max}^A(x,y,\theta_{\max}) + I_{\min}^A(x,y,\theta_{\min})} = \frac{\Delta A(x,y)}{S_0^A(x,y)}.
(11)
Then, a polarization hazy image formation model can be obtained and expressed as:
\rho(x,y)\, S_0(x,y) = \rho_D(x,y)\, S_0^D(x,y) + \rho_A(x,y)\, S_0^A(x,y).
(12)
Note that A, J, S0, S0A, S0D, β, ρ, ρD, and ρA are all functions of the light wavelength λ. Since RGB channels were available in the camera used in this study, the analysis can be performed independently for each channel.

3. The influence of the polarization of object radiance

To demonstrate that the polarization of object radiance cannot be ignored in most cases, we selected 40,000 polarized images from two years of observation data of a fixed scene. The observed scene included sky, a mountain, buildings, and trees, as shown in Fig. 2(a).

Fig. 2 The description of the observed scene. (a) an observed scene. (b) the underlying topography of the light path. (c) the scene segmentation results.

Directly in front of the imaging device, several features were positioned at the following linear distances from the camera: a lawn (0-130 m), a pond (130-280 m), a reservoir (280-1000 m), buildings and trees (1000-6000 m), and a mountain (beyond 6000 m). The underlying topography of the light path can be categorized as a complex surface type, as shown in Fig. 2(b). Owing to the high humidity of the region, haze is quite common throughout the year. Thus, it is easy to capture hazy images.

Each collected image was then segmented into five regions, as shown in Fig. 2(c). The mean DoLP of each region of each image group was computed. The mean DoLP of the sky region is taken as the DoLP of the airlight, ρA, and those of the other regions are taken as the common DoLP, ρ, of both the airlight and the object radiance. The statistical results are shown in Fig. 3, where the points represent the DoLPs of the corresponding regions and the points on the same dotted line represent the DoLPs of one image group.

Fig. 3 The statistical results for the mean DoLP of each region. Each dot represents the DoLP of the corresponding region and each line corresponds to a polarized image group.

Schechner et al. assumed that light emanating from scene objects is not polarized, so that its energy is evenly distributed between the polarization components [6]. This means ΔD = 0 and ρD = 0. According to Eqs. (9) and (11), it follows that the mean DoLP of the sky region should then be greater than or equal to that of the other regions. However, this is not the case in the results shown in Fig. 3, where the mean DoLP of the sky region is often less than, or close to, that of the other regions. We believe this is due to the polarization of the object radiance. Therefore, the assumption that polarization is associated only with the airlight is not always true.

Moreover, we further quantitatively analyzed the relationship between ρA and ρD. From Eqs. (12) and (5) it can be derived that ρD − ρA = (S0/S0D)(ρ − ρA). If ρ = ρA, then ρD = ρA. If ρ is greater than ρA, then, since S0/S0D > 1, ρD − ρA is larger than ρ − ρA, which means that ρD > ρ > ρA. This shows that the polarization of the object radiance in fact contributes greatly to the image polarization and consequently cannot be ignored.
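
For completeness, the relation follows by substituting S0A = S0 − S0D from Eq. (5) into Eq. (12):

```latex
\begin{aligned}
\rho S_0 &= \rho_D S_0^D + \rho_A \left( S_0 - S_0^D \right) \\
(\rho - \rho_A)\, S_0 &= (\rho_D - \rho_A)\, S_0^D \\
\rho_D - \rho_A &= \frac{S_0}{S_0^D}\,(\rho - \rho_A).
\end{aligned}
```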

In fact, neither ρD nor ρA should be ignored in the analysis of polarization images, for two main reasons. First, the polarization of the airlight varies with the meteorological situation. Second, the polarization of the object radiance varies with the incident angle and the properties of the material surface. Because the polarizations of the airlight and the object radiance are dynamic and change considerably with the specific situation, we cannot decide which one is more important. Therefore, neither of them can be ignored.

From the above observation and analysis, a well-founded conclusion can be drawn: both the polarization of the airlight and that of the object radiance typically contribute to the polarization of images.

4. Improved dehazing algorithm

4.1 Image restoration model

Since the polarization of the object radiance cannot be ignored, we use the new polarization hazy imaging model presented in Section 2, which considers the polarization effects of both the airlight and the object radiance, to remove the undesired haze.

Combining Eqs. (5), (7), (9), and (12), we get the expression of the transmission map t as follows:
t(x,y) = 1 - \frac{\Delta I(x,y) - \rho_D(x,y)\, S_0(x,y)}{A \left( \rho_A(x,y) - \rho_D(x,y) \right)}.
(13)
Combining Eqs. (5)–(7) and (13), we can get the expression of the intensity of the scene radiance J (i.e., the dehazed image) as follows:
J(x,y) = \frac{\Delta I(x,y) - \rho_A(x,y)\, S_0(x,y)}{\rho_D(x,y)\left( 1 - S_0(x,y)/A \right) + \Delta I(x,y)/A - \rho_A(x,y)}.
(14)
To get the dehazed image using Eq. (14), we need to estimate the following parameters: the PD image ΔI, the infinity atmospheric intensity A, the DoLP of the airlight ρA, and the DoLP of the object ρD. The PD image ΔI can be obtained from the polarized images Imax and Imin; the method to synthesize it is described in Section 4.2. The infinity atmospheric intensity A and the DoLP of the airlight ρA are global parameters, and their estimation is introduced in Section 4.3. As for the DoLP of the direct transmission, i.e., the DoLP of the object ρD, a decorrelation-based method is proposed to estimate it, as explained in detail in Sections 4.4 and 4.5.
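
To ground Eqs. (13) and (14), a minimal sketch of the recovery step, once all of these parameters are available, is given below; the function name and the small eps guards are our assumptions, not the authors' code.

```python
import numpy as np

def recover_scene(delta_i, s0, A, rho_a, rho_d, eps=1e-8):
    """Transmission map via Eq. (13) and haze-free radiance via Eq. (14).
    Inputs: PD image delta_i, PS image s0, global airlight intensity A
    and DoLP rho_a, and the per-pixel object DoLP rho_d."""
    t = 1.0 - (delta_i - rho_d * s0) / (A * (rho_a - rho_d) + eps)   # Eq. (13)
    J = (delta_i - rho_a * s0) / (
        rho_d * (1.0 - s0 / A) + delta_i / A - rho_a + eps)          # Eq. (14)
    return J, np.clip(t, 0.0, 1.0)
```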

4.2 Synthesis of PD image ΔI

Tyo et al. [13, 26, 27] proposed a measuring method for a PD image by automatically selecting the optimal orthogonal directions of the polarizer. However, since the polarization properties of the object radiance vary across the scene, the best PD image cannot be captured by relying on a single optimal orthogonal direction alone. Here, we propose a synthesis method to obtain the optimal PD image.

According to Eqs. (2) and (3), Eq. (4) can be equivalently described as [22]:

I_\lambda(x,y,\theta) = \frac{1}{2}\left( 1 - \rho_\lambda(x,y) \right) S_0^\lambda(x,y) + \rho_\lambda(x,y)\, S_0^\lambda(x,y)\, \cos^2\left( \alpha_\lambda(x,y) - \theta \right).
(15)
According to Eq. (15), the observed image intensity Iλ(x,y,θ) is maximal at θ = αλ and minimal at θ = αλ ± π/2. Since the polarization properties of the observed objects differ (i.e., α varies across the objects in a scene), we cannot directly capture the images Imax and Imin by rotating the polarizer. However, we can solve for ρ, α, and S0 from three input polarized images taken with the linear polarizer set at different orientations, and then synthesize the images Imax and Imin, which can be expressed as:
I_{\max}(x,y,\theta_{\max}) = \frac{1}{2}\left( 1 + \rho(x,y) \right) S_0(x,y),
I_{\min}(x,y,\theta_{\min}) = \frac{1}{2}\left( 1 - \rho(x,y) \right) S_0(x,y).
(16)
Clearly, the PD image ΔI can then be obtained by subtracting Imin from Imax.
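
A direct transcription of Eq. (16) is sketched below (illustrative names, not the authors' code); note that the synthesized PD image algebraically reduces to ρ·S0.

```python
def synthesize_pd(rho, s0):
    """Eq. (16): synthesize Imax and Imin from the DoLP rho and the
    total intensity S0, and form the PD image as their difference."""
    i_max = 0.5 * (1.0 + rho) * s0
    i_min = 0.5 * (1.0 - rho) * s0
    return i_max, i_min, i_max - i_min   # the PD image equals rho * s0
```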

4.3 Estimation of ρA and A

In this study, the atmosphere was assumed to be homogeneous; therefore, ρA and A are constant for all of the pixels in the image. As in prior studies [6–8], the pixel values of the sky region in the image can be used to estimate the two parameters, since they correspond to the airlight radiance from an object at an infinite distance. The two parameters are estimated as:

A = \frac{1}{|\Omega|} \sum_{(x,y) \in \Omega} \left( I_{\max}(x,y,\theta_{\max}) + I_{\min}(x,y,\theta_{\min}) \right),
\rho_A = \frac{1}{|\Omega|} \sum_{(x,y) \in \Omega} \frac{I_{\max}(x,y,\theta_{\max}) - I_{\min}(x,y,\theta_{\min})}{I_{\max}(x,y,\theta_{\max}) + I_{\min}(x,y,\theta_{\min})}.
(17)
Here, Ω denotes the selected sky region and |Ω| represents the number of pixels in Ω. We automatically detect the sky region using the method of [28].
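
As an illustration of Eq. (17), assuming a boolean sky mask is available (e.g., from the method of [28]), the two global parameters could be computed as follows; the helper name and eps guard are ours.

```python
import numpy as np

def estimate_airlight(i_max, i_min, sky_mask, eps=1e-8):
    """Eq. (17): estimate the airlight intensity A and its DoLP rho_A
    as averages over the pixels of the detected sky region."""
    s = i_max[sky_mask] + i_min[sky_mask]   # per-pixel polarized sum
    d = i_max[sky_mask] - i_min[sky_mask]   # per-pixel polarized difference
    return s.mean(), (d / (s + eps)).mean()
```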

4.4 Estimation of ρD and t

The transmission map t depends on the scene depth and the atmospheric attenuation coefficient β, while the dehazed image J represents the inherent traits of the scene objects. Therefore, it is reasonable to assume that they are statistically uncorrelated [4, 23] over a localized set of pixels sharing the same ρD. This can be formulated as Cov_{ω(x,y)}(t(x,y), J^{-1}(x,y)) = 0 for all (x,y) ∈ ω(x,y). However, this does not always hold in practice due to noise. Therefore, we estimate an optimal value for ρD:

\rho_D(x,y) = \arg\min_{\rho_D(x,y)} \left| \mathrm{Cov}_{\omega(x,y)}\left( t(x,y),\, J^{-1}(x,y) \right) \right|.
(18)
where Cov_{ω(x,y)} denotes the covariance computed over the local neighborhood ω(x,y). If we let E(ρD) = Cov_{ω(x,y)}(t(x,y), J^{-1}(x,y)) and substitute Eqs. (13) and (14), we obtain:

E(\rho_D) = \frac{\mathrm{Cov}\left\{ \rho_D S_0(x,y) - \Delta I(x,y),\; \frac{\rho_D S_0(x,y) - \Delta I(x,y) - A(\rho_D - \rho_A)}{\Delta I(x,y) - \rho_A S_0(x,y)} \right\}}{(\rho_D - \rho_A)\, A^2}.
(19)
If the pixels in the same patch belong to different objects, ρD varies across ω(x,y). To deal with this problem, we assign a weight to each pixel in ω(x,y).

The optimal solution of the above equation can be obtained by solving dE²(ρD)/dρD = 0. The method described here is similar to the Independent Component Analysis (ICA) method reported in prior work [4].
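
The paper solves dE²(ρD)/dρD = 0 analytically; the sketch below instead substitutes a coarse grid search over ρD for a single patch, which is easier to show compactly. The candidate grid and the singularity guard are our assumptions.

```python
import numpy as np

def estimate_rho_d_patch(delta_i, s0, A, rho_a, candidates=None):
    """Approximate Eq. (18) on one patch: choose the rho_D that
    minimizes |Cov(t, 1/J)|, with t from Eq. (13) and J from Eq. (14)."""
    if candidates is None:
        candidates = np.linspace(0.0, 0.95, 96)
    best_rho, best_cost = 0.0, np.inf
    for rho_d in candidates:
        if abs(rho_d - rho_a) < 1e-3:
            continue  # Eq. (13) is singular when rho_D equals rho_A
        t = 1.0 - (delta_i - rho_d * s0) / (A * (rho_a - rho_d))
        # 1/J = t / S0^D, with S0^D = (delta_i - rho_a*s0)/(rho_d - rho_a)
        j_inv = t * (rho_d - rho_a) / (delta_i - rho_a * s0 + 1e-8)
        cost = abs(np.cov(t.ravel(), j_inv.ravel())[0, 1])
        if cost < best_cost:
            best_rho, best_cost = rho_d, cost
    return best_rho
```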

4.5 Refining ρD(x,y) and t(x,y)

It is inevitable that some artifacts, such as block effects and estimation errors, are introduced into the results obtained with the proposed scheme, as shown in Figs. 6(d) and 6(e). To overcome these problems, we apply guided image filtering [29] to post-process the estimated transmission map t. This filter is an excellent edge-preserving smoothing operator, so it can be used not only to filter out errors by smoothing over neighboring pixels but also to preserve edges and structures. In this section, for simplicity, we use the one-dimensional index i to denote a pixel, and write the original and refined transmission maps in vector form as t and t̂, respectively. The filtering output at a pixel i is expressed as:
\hat{t}_i = \sum_j W_{ij}(I)\, t_j.
(20)
The filter kernel Wij can be expressed by:
W_{ij}(I) = \frac{1}{|\omega_k|^2} \sum_{k:(i,j) \in \omega_k} \left( 1 + \frac{(I_i - \mu_k)(I_j - \mu_k)}{\sigma_k^2 + \varepsilon} \right).
(21)
where μk and σk² are the mean and variance of the guidance image I in the window ωk, Ii and Ij are the values of I at pixels i and j, respectively, ε is a regularization parameter, and |ωk| is the number of pixels in ωk. More details about this filter can be found in [29].

Moving the filter kernel through all positions, we obtain the refined transmission map t̂. Then, we substitute t̂ into Eq. (13) to obtain the refined DoLP map ρ̂D.

Note that, to reduce the computational cost, only the transmission map t of the R channel is refined in our experiments. According to Eq. (8), the refined t̂g and t̂b can then be obtained by a simple calculation: t̂g = exp((βg/βr)·ln t̂r) and t̂b = exp((βb/βr)·ln t̂r), where βg/βr = (1/N)·Σ(x,y) [ln tg(x,y)/ln tr(x,y)], βb/βr = (1/N)·Σ(x,y) [ln tb(x,y)/ln tr(x,y)], and N is the number of pixels in the image.
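
A compact sketch of this cross-channel step follows; the clipping that guards against log(0) and log(1) is our addition, and the names are illustrative.

```python
import numpy as np

def propagate_transmission(t_r_hat, t_r, t_g, t_b, eps=1e-6):
    """Derive refined G/B transmission maps from the refined R-channel
    map, using the channel ratios of beta implied by Eq. (8)."""
    log_tr = np.log(np.clip(t_r, eps, 1.0 - eps))
    ratio_g = np.mean(np.log(np.clip(t_g, eps, 1.0 - eps)) / log_tr)  # beta_g / beta_r
    ratio_b = np.mean(np.log(np.clip(t_b, eps, 1.0 - eps)) / log_tr)  # beta_b / beta_r
    log_hat = np.log(np.clip(t_r_hat, eps, 1.0 - eps))
    return np.exp(ratio_g * log_hat), np.exp(ratio_b * log_hat)
```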

5. Experimental results

5.1 Overview

Figure 4 shows the flowchart of the proposed scheme, which consists of three steps.

Fig. 4 Flowchart of our proposed method.

The first step synthesizes the polarized images Imax and Imin. In the second step, we estimate the key model parameter ρD with the decorrelation-based method; the other two parameters, ρA and t, are also estimated in this step. Finally, in the third step, in order to correct the estimates and eliminate block effects, we apply guided image filtering to the transmission map t. The improved t is then used to refine ρD. After that, the recovered image J is obtained according to Eq. (14).

5.2 Experimental data

5.3 Dehazing experiments

In this section, the performance of the proposed scheme is tested on polarized hazy images. Taking Scene 1 as an example [as shown in Fig. 5(a)], the flow of the process and the interim results are shown in Fig. 6.

Fig. 6 Key parameter estimation and dehazing results for Scene 1. (a) synthesized image Imax. (b) synthesized image Imin. (c) polarized-difference (PD) image. (d) estimated rough DoLP map ρD. (e) estimated rough transmission map t of the R channel. (f) refined transmission map t̂ of the R channel. (g) final DoLP map ρ̂D. (h) sky region detection. (i) dehazing result with the rough ρD. (j) dehazing result with the refined ρ̂D.

Since objects have different polarization properties, the PD map outlines the contours of the objects, as shown in Fig. 6(c). Accordingly, the PD map can be of great use in image processing applications, such as recognition and scene segmentation. There are some isolated bright spots in the sky and building regions in Fig. 6(e), which are obviously errors. These errors are corrected in the refined transmission map, as shown in Fig. 6(f). Furthermore, the refined transmission map succeeds in capturing the sharp edge discontinuities. Comparing the results shown in Figs. 6(d) and 6(g), the refined DoLP map of the object is much smoother. Figure 6(i) shows the dehazing result with the rough ρD, which has apparent errors in the regions outlined by the red rectangle and more noise in the sky region than the final dehazing result in Fig. 6(j). The contrast of Fig. 6(j) is 0.2501, whereas the contrast of the hazy image is 0.0970. It can be seen that the proposed scheme can effectively improve image contrast and recover details. Here, the contrast is calculated according to [30].
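
A plausible way to compute Haralick-style contrast [30] with scikit-image (version 0.19 or later) is sketched below; the distance/angle settings and the input quantization are our assumptions, so the absolute values need not match the figures quoted above.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def haralick_contrast(img):
    """Contrast from a gray-level co-occurrence matrix in the sense of
    Haralick et al. [30]; img is assumed to be a float image in [0, 1]."""
    g = (np.clip(img, 0.0, 1.0) * 255).astype(np.uint8)
    glcm = graycomatrix(g, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return float(graycoprops(glcm, 'contrast')[0, 0])
```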

To evaluate the performance of the proposed scheme on hazy images caused by marine aerosols, we also conducted experiments on Scenes 5 and 6.

Fig. 7 Experimental results for Scenes 5 and 6. (a) dehazing result of Scene 5. (b) dehazing result of Scene 6.

As can be seen in Fig. 7, the proposed approach can unveil details and recover vivid color information even in some very dense haze regions. These results show that the proposed scheme also works well in hazy weather caused by marine aerosols.

5.4 Dehazing comparison with and without considering ρD

Different from the existing polarization-based dehazing algorithms, the proposed algorithm considers the polarization of both airlight and object radiance. As mentioned in prior studies [6–8], if only the polarization of airlight is considered, the real radiance of a scene, J, is expressed as follows:

J(x,y) = \frac{\Delta I(x,y) - \rho_A(x,y)\, S_0(x,y)}{\Delta I(x,y)/A - \rho_A(x,y)}.
(22)

Taking Scene 2 as an example, we performed several experiments to compare the proposed scheme with those that consider only the polarization of airlight [6–8]. Comparing Fig. 8(a) (contrast: 0.1170) with Fig. 8(b) (contrast: 0.1932), it can be seen that there is a significant improvement in image contrast, especially in the region outlined by the red rectangle in the dehazing results. The improvement is even more apparent in the magnified regions shown in Figs. 8(c) and 8(d).

Fig. 8 Comparison experiment with and without ρD for Scene 2. (a) dehazed image considering only the polarization of airlight. (b) our dehazed image. (c) magnified region of the red rectangle in (a). (d) magnified region of the red rectangle in (b).

To further evaluate the effectiveness of the proposed model, another two sets of experiments were carried out; the results are shown in Fig. 9, and the corresponding input images are shown in Figs. 5(c) and 5(d).

Fig. 9 Dehazing results for Scenes 3 and 4. (a) and (c) are the dehazed images without considering the DoLP of the object. (b) and (d) are our dehazed images.

The inputs are polarized images of the same scene at different atmospheric visibilities. In Fig. 5(c), the DoLP of the sky region is similar to that of the object region; in Fig. 5(d), the DoLP of the sky region is less than that of the object region. Figures 9(a) and 9(c) (contrasts: 0.1170 and 0.0344) are the dehazed images obtained without considering the DoLP of the object; they have lower contrast than the results shown in Figs. 9(b) and 9(d) (contrasts: 0.1932 and 0.0572).

We calculated the noise levels of the dehazed images according to [31]; the results are shown in Table 1.

Table 1. The noise level comparison of dehazed images with and without considering ρD

From these experimental results, we can see that the noise level is higher in the dehazed images that do not consider ρD. The value of ΔI/A in Eq. (22) is usually small. When ρA is also very small, image noise can easily be amplified in the process of image recovery. As for the model characterized by Eq. (14), the denominator is larger than that of Eq. (22) due to the contribution of the term ρD(1 − S0/A). Therefore, the proposed scheme can recover the dehazed image with a reduced noise level.

5.5 Dehazing comparison with the method of Schechner et al.

Comparing Figs. 10(b) and 10(c), it can be seen that the proposed scheme achieves better performance, especially in the region of specular reflection (outlined by the red circle in the dehazing results).

Fig. 10 Comparison with Schechner's dehazing results. (a) the two groups of input polarized images; each group has a worst-polarized image and a best-polarized image. (b) Schechner's results. (c) our dehazed images.

This is because the scheme in [6] assumes that the light reflected from scene objects has no significant polarization. This assumption fails to characterize regions of specular reflection. To deal with this situation, additional steps have to be applied to process the specular surface areas individually, as in [8]. In contrast, the proposed scheme simultaneously considers the polarization of object radiance and airlight, and consequently achieves better results without additional post-processing. The quantitative comparison between Figs. 10(b) and 10(c) is also shown in Table 2.

Table 2. Image quality of the dehazed results for the six scenes

5.6 The quantitative evaluation of dehazed images

Due to the lack of ground truth images, we adopt the no-reference natural image quality evaluator (NIQE) [32] to quantitatively assess and compare the dehazing results. The NIQE evaluates overall image quality, including image structure, noise level, degree of blur, edge sharpness, and so on. The NIQE model constructs a collection of quality-aware features and fits them to a multivariate Gaussian (MVG) model. The quality-aware features are derived from a spatial natural scene statistic (NSS) model based on locally normalized luminance coefficients. Mittal et al. [33] showed that the normalized luminance coefficients of natural images closely follow a Gaussian-like distribution, whereas those of degraded images deviate from it. Thus, the quality of a given test image is expressed as the distance between an MVG fit of the NSS features extracted from the test image and an MVG model learned from a corpus of natural images. Mathematically, this is defined as:

D(\nu_1, \nu_2, \Sigma_1, \Sigma_2) = \sqrt{ (\nu_1 - \nu_2)^T \left( \frac{\Sigma_1 + \Sigma_2}{2} \right)^{-1} (\nu_1 - \nu_2) }.
(23)
where ν1, ν2 and Σ1, Σ2 are the mean vectors and covariance matrices of the natural MVG model and the distorted image's MVG model, respectively.
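
Eq. (23) is straightforward to evaluate once the two MVG fits are available; a minimal sketch follows, in which the pseudo-inverse is our robustness choice for a possibly ill-conditioned pooled covariance.

```python
import numpy as np

def mvg_distance(nu1, nu2, sigma1, sigma2):
    """Eq. (23): distance between two multivariate Gaussian fits, given
    mean vectors nu1, nu2 and covariance matrices sigma1, sigma2."""
    diff = nu1 - nu2
    pooled = 0.5 * (sigma1 + sigma2)
    return float(np.sqrt(diff @ np.linalg.pinv(pooled) @ diff))
```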

We ran the NIQE code (http://live.ece.utexas.edu/research/Quality/index.htm) to evaluate the dehazed images. The results are shown in Table 2. The values in Table 2 represent the distance between the model statistics of clear natural images and those of the dehazed image; the smaller the value, the closer the dehazed image is to clear images. The quantitative evaluation shows that all of the dehazed images produced by the proposed scheme are improved over the images processed by the existing schemes that consider only the polarization of airlight.

6. Conclusion

In this paper, a new approach is presented for the recovery of haze-free images from polarized hazy images. The main difference between this scheme and the existing polarization dehazing algorithms is that this scheme considers the polarization of the airlight and the object radiance jointly. Three novel aspects are studied: (1) a new polarization hazy imaging model including ρA and ρD; (2) a decorrelation-based algorithm to obtain ρD from polarized hazy images; and (3) an effective method to synthesize the optimal PD image. After obtaining these key parameters, the haze-free image can be recovered according to the polarization hazy imaging model. The qualitative and quantitative experimental results show that our dehazing scheme can considerably improve image quality in terms of contrast, detail, and signal-to-noise ratio.

This scheme also generates two valuable by-products: the PD image and the DoLP of the object. Unlike the traditional approach, which measures along two selected optimal orthogonal directions of the polarizer [26], the PD image here is obtained by synthesizing the polarized images Imax and Imin. Since the polarization properties of the observed scene vary, the optimal PD image cannot actually be measured directly in practice. The proposed scheme, however, can obtain the optimal PD image, which can be applied to object perception, especially in foggy weather or underwater surroundings. In addition, this scheme can separate the DoLP of an object from the DoLP of an image, and thus obtains a much more stable and accurate DoLP of an object than the traditional approach, which uses the DoLP of an image as a substitute. Since the DoLP of an object represents an essential attribute of the object, it will be useful in object recognition and scene segmentation.

Note that the proposed scheme requires the input images to contain some sky area in order to estimate the parameters of the airlight. However, the sky sometimes cannot be seen within the field of view. Tarel [3] proposed the notion of an atmospheric veil and an algorithm for inferring it. In our future work, we will apply an atmospheric veil of polarized images to obtain the polarization of the airlight.

Acknowledgments

The authors thank Prof. Rao Ruizhong and Dr. Wu Pengfei for fruitful discussions. This work was supported by the National Natural Science Foundation of China (No. 61175033) and the Fundamental Research Funds for the Central Universities (No. 2010HGXJ0018, No. 2012HGCX0001).

References and links

1. K. M. He, J. Sun, and X. O. Tang, “Single image haze removal using dark channel prior,” IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2010). [PubMed]
2. C. H. Yeh, L.-W. Kang, M.-S. Lee, and C.-Y. Lin, “Haze effect removal from image via haze density estimation in optical model,” Opt. Express 21(22), 27127–27141 (2013). [CrossRef] [PubMed]
3. J. P. Tarel and N. Hautiere, “Fast visibility restoration from a single color or gray level image,” in Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition (IEEE, 2009), pp. 2201–2208. [CrossRef]
4. R. Fattal, “Single image dehazing,” ACM Trans. Graph. 27(3), 988–992 (2008). [CrossRef]
5. R. T. Tan, “Visibility in bad weather from a single image,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2008), pp. 1–8. [CrossRef]
6. Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Instant dehazing of images using polarization,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2001), pp. 325–332. [CrossRef]
7. Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Polarization-based vision through haze,” Appl. Opt. 42(3), 511–525 (2003). [CrossRef] [PubMed]
8. E. Namer and Y. Y. Schechner, “Advanced visibility improvement based on polarization filtered images,” Proc. SPIE 5888, 36–45 (2005).
9. E. Namer, S. Shwartz, and Y. Y. Schechner, “Skyless polarimetric calibration and visibility enhancement,” Opt. Express 17(2), 472–493 (2009). [CrossRef] [PubMed]
10. S. Shwartz, E. Namer, and Y. Y. Schechner, “Blind haze separation,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2006), pp. 1984–1991. [CrossRef]
11. M. Saito, Y. Sato, K. Ikeuchi, and H. Kashiwagi, “Measurement of surface orientations of transparent objects using polarization in highlight,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 1999), pp. 381–386. [CrossRef]
12. H. Chen and L. B. Wolff, “Polarization phase-based method for material classification and object recognition in computer vision,” Proc. SPIE 2599, 54–63 (1996). [CrossRef]
13. J. S. Tyo, M. P. Rowe, E. N. Pugh Jr, and N. Engheta, “Target detection in optically scattering media by polarization-difference imaging,” Appl. Opt. 35(11), 1855–1870 (1996). [CrossRef] [PubMed]
14. K. Yemelyanov, M. Lo, E. Pugh Jr, and N. Engheta, “Display of polarization information by coherently moving dots,” Opt. Express 11(13), 1577–1584 (2003). [CrossRef] [PubMed]
15. L. B. Wolff, “Using polarization to separate reflection components,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 1989), pp. 363–369. [CrossRef]
16. M. Ben-Ezra, “Segmentation with invisible keying signal,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2000), pp. 32–37. [CrossRef]
17. K. M. Yemelyanov, S. S. Lin, E. N. Pugh Jr, and N. Engheta, “Polarization-based segmentation for enhancement of target detection in adaptive polarization-difference imaging,” in Frontiers in Optics, OSA Technical Digest Series (Optical Society of America, 2005), paper JWA51.
18. S. S. Lin, K. M. Yemelyanov, E. N. Pugh Jr, and N. Engheta, “Separation and contrast enhancement of overlapping cast shadow components using polarization,” Opt. Express 14(16), 7099–7108 (2006). [CrossRef] [PubMed]
19. L. B. Wolff, “Polarization-based material classification from specular reflection,” IEEE Trans. Pattern Anal. Mach. Intell. 12(11), 1059–1071 (1990). [CrossRef]
20. S. Tominaga and A. Kimachi, “Polarization imaging for material classification,” Opt. Eng. 47(12), 123201 (2008). [CrossRef]
21. M. Bass, Devices, Measurements, and Properties, Vol. 2 of Handbook of Optics (McGraw-Hill, 1995), Chap. 22.
22. M. W. Hyde, S. C. Cain, J. D. Schmidt, and M. J. Havrilla, “Material classification of an unknown object using turbulence-degraded polarimetric imagery,” in Proceedings of IEEE Transactions on Geoscience and Remote Sensing (IEEE, 2010), pp. 264–276. [CrossRef]
23. K. M. Yemelyanov, S. S. Lin, E. N. Pugh Jr, and N. Engheta, “Adaptive algorithms for 2-channel polarization sensing under various polarization statistics with nonuniform distributions,” Appl. Opt. 45(22), 5504–5520 (2006). [CrossRef] [PubMed]
24. R. C. Henry, S. Mahadev, S. Urquijo, and D. Chitwood, “Color perception through atmospheric haze,” J. Opt. Soc. Am. A 17(5), 831–835 (2000). [CrossRef] [PubMed]
25. S. G. Narasimhan and S. K. Nayar, “Vision and the atmosphere,” Int. J. Comput. Vis. 48(3), 233–254 (2002). [CrossRef]
26. J. S. Tyo, “Design of optimal polarimeters: maximization of signal-to-noise ratio and minimization of systematic error,” Appl. Opt. 41, 619–630 (2002). [CrossRef] [PubMed]
27. J. S. Tyo, “Optimum linear combination strategy for an N-channel polarization sensitive vision or imaging system,” J. Opt. Soc. Am. A 15(2), 359–366 (1998). [CrossRef]
28. D. Hoiem, A. A. Efros, and M. Hebert, “Automatic photo pop-up,” ACM Trans. Graph. 24(3), 577–584 (2005). [CrossRef]
29. K. M. He, J. Sun, and X. Tang, “Guided image filtering,” in Proceedings of European Conference on Computer Vision, K. Daniilidis, P. Maragos, N. Paragios, eds. (Berlin, 2010), pp. 1–14.
30. R. M. Haralick, K. Shanmugam, and I. H. Dinstein, “Textural features for image classification,” IEEE Trans. Syst. Man Cybern. 3(6), 610–621 (1973). [CrossRef]
31. S. Pyatykh, J. Hesser, and L. Zheng, “Image noise level estimation by principal component analysis,” IEEE Trans. Image Process. 22(2), 687–699 (2013). [CrossRef] [PubMed]
32. A. Mittal, R. Soundararajan, and A. C. Bovik, “Making a completely blind image quality analyzer,” in Proceedings of IEEE Conference on Signal Processing Letters (IEEE, 2013), pp. 209–212.
33. A. Mittal, A. K. Moorthy, and A. C. Bovik, “No-reference image quality assessment in the spatial domain,” IEEE Trans. Image Process. 21(12), 4695–4708 (2012). [CrossRef] [PubMed]

OCIS Codes
(100.2980) Image processing : Image enhancement
(100.3020) Image processing : Image reconstruction-restoration

ToC Category:
Image Processing

History
Original Manuscript: May 13, 2014
Revised Manuscript: July 3, 2014
Manuscript Accepted: July 9, 2014
Published: August 5, 2014

Citation
Shuai Fang, XiuShan Xia, Huo Xing, and ChangWen Chen, "Image dehazing using polarization effects of objects and airlight," Opt. Express 22, 19523-19537 (2014)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-22-16-19523


