Optics Express, Vol. 22, Iss. 7 — Apr. 7, 2014, pp. 8114–8119
Editor: Andrew M. Weiner
Single image recovery in scattering medium by propagating deconvolution

Rui Wang and Guoyu Wang


Optics Express, Vol. 22, Issue 7, pp. 8114-8119 (2014)
http://dx.doi.org/10.1364/OE.22.008114


Abstract

This paper proposes an effective method, termed propagating deconvolution, to recover a single image degraded by a scattering medium. The propagating deconvolution operates in the manner of calculus: it is based on a multi-layered decomposition of the scattering volume and a modeled blurring function for a single-layer scattering volume. The parameters of the deconvolution algorithm are estimated from an in situ measurement of the pure scattered background taken from the single image itself.

© 2014 Optical Society of America

1. Introduction

Images in a scattering medium (such as fog, water, and smoke) are inevitably degraded by absorption and scattering effects, resulting in blurred details and color distortion as well as poor image contrast. Image recovery in a scattering medium is highly desirable for many applications, and researchers have made a large number of contributions. Narasimhan [1] investigated image recovery in bad weather, based on a light-transmission model of a scattering atmosphere, using multiple images of the same scene under different weather conditions. Schechner [2] used a polarization-based method to remove haze from images taken under different polarization states. Recently, attention has focused on single-image recovery because of the needs of real-time imaging tasks [3–5]. Tan [3] proposed removing haze by maximizing the local contrast of the restored single image. Fattal [4] estimated the scene irradiance and the medium transmission under the assumption that the transmission and the surface shading are locally uncorrelated. He [5] proposed the dark channel prior, based on statistics of outdoor haze-free images, for haze removal. Haze removal in underwater images has also been investigated [2, 6, 7].

Currently, most research in this area has focused on the improvement of image contrast. Image recovery based on a linear degradation model is thought to yield more powerful solutions. Image recovery attempts to recover the true image from a blurred image through a linear process of the following form:

g(x, y) = f(x, y) * h(x, y) + n(x, y)
(1)

where g(x, y) is the blurred image, f(x, y) is the true image, h(x, y) is the point spread function (PSF), n(x, y) is the additive noise, and * denotes the convolution operation.

Under the assumption of the linear process of (1), recovery of an image degraded by transmission through a scattering medium can be formulated as a deconvolution problem. In applications, the PSF, representing the blurring kernel of the forward scattering, depends on certain properties of the scattering medium, the so-called inherent optical properties (IOPs).
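The linear degradation model of (1) can be sketched numerically. The snippet below is an illustrative simulation only (the function name, image, and PSF are assumptions, not the authors' code); it implements g = f * h + n via circular convolution in the frequency domain.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def degrade(f, h, noise_sigma=0.01, rng=None):
    """Simulate g = f * h + n (Eq. 1) via circular convolution in the
    frequency domain; h is a PSF the same size as f, anchored at (0, 0)."""
    rng = np.random.default_rng() if rng is None else rng
    g = np.real(ifft2(fft2(f) * fft2(h)))
    return g + noise_sigma * rng.standard_normal(f.shape)

# Sanity check: blurring a constant scene with a normalized PSF
# leaves the scene unchanged (the PSF preserves total energy).
f = np.ones((64, 64))
h = np.zeros((64, 64)); h[:3, :3] = 1.0 / 9.0   # normalized 3x3 PSF
g = degrade(f, h, noise_sigma=0.0)
print(np.allclose(g, 1.0))
```

Normalizing the PSF to unit sum is what distinguishes pure blurring from attenuation; the attenuated case appears later as the exp(−βz) factor in (5).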

2. Image formation by layered decomposition model

When an object is illuminated, forward scattering occurs when the reflected light is scattered during transmission to the sensor. The amount by which the forward scattered image is blurred depends on the range as well as the IOP of the scattering medium. We can formulate the blurring function as the output of sub-systems through layer-based decomposition, which is illustrated in Fig. 1.
Fig. 1 Layered decomposition of the scattering medium between the target and the sensor.

The whole scattering medium is decomposed into m layers along the optical axis, denoted z. The model widely used to describe the formation of a hazy image is [2]

E = J exp(−βz) + A(1 − exp(−βz))
(2)

where E is the observed intensity, J is the scene radiance, A is the airlight, β is the attenuation coefficient of the scattering medium, and z is the scene depth.

During the forward transmission, each layer contributes its own blurring function, denoted as the thin single-layered PSF hΔ. The whole blurring function is the output of all the successive sub-systems, h = hΔ(1) * hΔ(2) * … * hΔ(m), with hΔ(1) = hΔ(2) = … = hΔ(m) = hΔ.

Assuming the thickness of each layer is Δ (with Δ sufficiently small), the single-layered blurring function is modeled as a Gaussian PSF [8]:

hΔ(x, y) = (1/(πθΔ)) exp{−(x² + y²)/θΔ}    (θΔ → 0 and hΔ(x, y) → δ(x, y) as Δ → 0)
(3)

whose Fourier transform is

HΔ(u, v) = exp[−θΔπ²(u² + v²)]
(4)
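The cascade of m identical layers is easy to verify numerically in the frequency domain, where the convolution chain becomes a product and m copies of (4) compose into a single Gaussian with parameter mθΔ. The grid size and the value of θΔ below are illustrative assumptions.

```python
import numpy as np

# m identical layer transfer functions H_Δ(u, v) = exp(-θ_Δ π² (u² + v²))
# compose into exp(-m θ_Δ π² (u² + v²)): Gaussian blurs add in the exponent.
theta, m = 0.002, 10
u = np.fft.fftfreq(128)
uu, vv = np.meshgrid(u, u)
H_layer = np.exp(-theta * np.pi**2 * (uu**2 + vv**2))
H_total = H_layer**m                       # cascade of m layers
H_direct = np.exp(-m * theta * np.pi**2 * (uu**2 + vv**2))
print(np.allclose(H_total, H_direct))
```

This additivity is what lets the total transfer function in (5) depend on the product zρ = mθΔ rather than on each layer individually.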

The length z is determined by the range between the target and the sensor, and z = mΔ. Considering the attenuation in light transmission, the transfer function associated with a scattering volume of length z is

Hz(u, v) = exp(−βz) exp{−zρπ²(u² + v²)}    (ρ = θΔ/Δ)
(5)

Defining k(x, y) as the unit-length scattering coefficient, the back-scattered light intensity received from a layer Δ = dz at distance z is

dnz(x, y) = {k(x, y) I dz} * hz(x, y)
(6)

where hz(x, y) is the spatial-domain PSF associated with Hz(u, v), and I is the illumination intensity at the layer at distance z. In the atmosphere, environmental illumination projects equal intensity onto each layer; thus the illumination intensity at each layer is I = A exp(−βz) + A(1 − exp(−βz)) = A. In the frequency domain, we can write (6) as

dNz(u, v) = K(u, v) A dz Hz(u, v)
(7)

When we integrate (7) over all z,

Nz(u, v) = A K(u, v) ∫0^z exp{−[β + ρπ²(u² + v²)]z′} dz′ = [A K(u, v)/(β + ρπ²(u² + v²))] (1 − exp{−[β + ρπ²(u² + v²)]z})
(8)

Let k(x, y) = k0 + γ, where k0 denotes the mean component and γ the random component. γ is taken to be white Gaussian noise with variance σ², because the scattering generated by particles within a sufficiently thin layer is assumed to be uncorrelated. The mean component of the total back-scattering intensity, associated with k0, is

Sd = Nz(0, 0) = (k0A/β){1 − exp(−βz)}
(9)

The power spectrum of the random component associated with γ can be represented as

Pn(u, v) = {A²σ²/[β + ρπ²(u² + v²)]²}(1 − exp{−[βz + zρπ²(u² + v²)]})²
(10)

The essential statistics of the back-scattering noise of haze are thus given by (9) and (10).
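For concreteness, (9) and (10) can be evaluated directly. The parameter values below are illustrative assumptions, chosen only to show that the mean back-scatter Sd grows monotonically with range and saturates toward k0A/β, the "pure scattering background" used later for parameter estimation.

```python
import numpy as np

# Illustrative IOP values (assumptions, not measured): mean scattering
# coefficient k0, airlight A, attenuation beta, blur rate rho, noise sigma.
k0, A, beta, rho, sigma = 0.05, 1.0, 0.4, 0.002, 0.1

def S_d(z):
    # Eq. (9): mean back-scatter from a volume of length z.
    return (k0 * A / beta) * (1.0 - np.exp(-beta * z))

def P_n(u, v, z):
    # Eq. (10): power spectrum of the random back-scatter component.
    q = beta + rho * np.pi**2 * (u**2 + v**2)
    return (A * sigma / q)**2 * (1.0 - np.exp(-q * z))**2

print(S_d(1.0) < S_d(10.0) < k0 * A / beta)   # Sd increases with range
print(np.isclose(S_d(1e3), k0 * A / beta))    # and saturates at k0*A/beta
```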

3. Algorithm of propagating deconvolution

3.1 Principle of propagating deconvolution

The principle above implies that restoration based on the layered image-formation model can be implemented as a successive iterative process. Supposing the original image f passes through m layers of total length z, the received degraded image g can be formulated as

g = f(m−1) * hm + sd(m)
f(m−1) = f(m−2) * hm−1 + sd(m−1)
…
f(1) = f * h1 + sd(1)
(11)

The expressions above suggest that the restoration of the original image f can be obtained through successive layer-based deconvolutions, each of which is a knowledge-based solution once the single-layer PSF has been predetermined. Without knowing the entire scattering volume, we can approach the final solution progressively, i.e., by propagating the deconvolution along the range dimension.

Obviously, the single-layered PSF depends on the thickness of the layer as well as on the IOP parameters, as represented in (3). In the following, we show that, instead of quantifying the range metric, the PSF can be configured in the parameter space.

Suppose the scattering volume is equally divided into a set of layers. For the mth layer, of thickness Δ, the transfer function of this layer follows from (5) as

Hα(u, v) = exp{−α(1 + (ρ/β)π²(u² + v²))}    (α = βΔ)
(12)

The scattered light intensity arising within this layer is derived by integrating expressions (9) and (10) from z to z + Δ, giving the mean level sd(m) and the random-component power spectrum Pn(m) of this layer, respectively:

sd(m) = k0A ∫z^(z+Δ) exp{−βz′} dz′ = (k0A/β) exp{−βz}(1 − exp{−βΔ})
(13)
Pn(m)(u, v) = A²σ² (∫z^(z+Δ) exp{−βz′ − z′ρπ²(u² + v²)} dz′)² = [A²σ²/(β + ρπ²(u² + v²))²][exp{−βz − ρzπ²(u² + v²)}(1 − exp{−βΔ − ρΔπ²(u² + v²)})]²
(14)

Further defining the auxiliary variables η1 = ρ/β, η2 = Aσ/β and η3 = k0A/β, and recalling z = mΔ, we have

Hα(u, v) = exp{−α(1 + η1π²(u² + v²))}
(15)
sd(m) = η3 exp{−mα}(1 − exp{−α})
(16)
Pn(m)(u, v) = [η2²/(1 + η1π²(u² + v²))²][exp{−mα(1 + η1π²(u² + v²))}(1 − exp{−α(1 + η1π²(u² + v²))})]²
(17)

It is noted that in an underwater medium, active illumination has to substitute for the environmental illumination of the atmosphere. The above representations are therefore modified to account for the attenuation of the illumination intensity during transmission. Supposing the illumination source is co-located with the sensor underwater, the light intensity projected onto the layer of the scattering medium at distance z is I = A exp(−βz). Without loss of generality, (7) is then reformulated as (18).

dNz′(u, v) = K(u, v) A exp(−βz) dz Hz(u, v)
(18)

The corresponding underwater formulations take 2β in place of β in (9), (10), (13) and (14).

When we capture a patch image of the sky or water, which means z → ∞ and m → ∞, only the pure scattering background remains. Fitting the observed scattering background to Sd and Pn yields an in situ estimate of η1, η2 and η3.
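The fit can be sketched as follows. As z → ∞, (9) tends to η3 and (10) tends to η2²/(1 + η1π²(u² + v²))², so the background patch mean gives η3 directly, while a least-squares fit of the radially averaged background power spectrum gives η1 and η2. The synthetic "measured" spectrum and all numeric values below are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.optimize import curve_fit

def background_spectrum(r2, eta1, eta2):
    # z → ∞ limit of Eq. (10): P_n → η2² / (1 + η1 π² r²)², with r² = u² + v².
    return eta2**2 / (1.0 + eta1 * np.pi**2 * r2)**2

# Synthesize a "measured" radially averaged spectrum from known parameters.
true_eta1, true_eta2, true_eta3 = 0.02, 0.5, 0.8
r2 = np.linspace(0.0, 50.0, 200)
measured = background_spectrum(r2, true_eta1, true_eta2)

# Recover η1, η2 by nonlinear least squares; η3 comes from the patch mean.
(eta1, eta2), _ = curve_fit(background_spectrum, r2, measured, p0=(0.01, 1.0))
eta3 = true_eta3  # stands in for the measured background mean, S_d(z→∞) = η3
print(abs(eta1 - true_eta1) < 1e-3, abs(abs(eta2) - true_eta2) < 1e-3)
```

Only the sign-free quantity η2² enters (10), so the fit may return ±η2; the absolute value is used.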

3.2 Control of the propagating process

Similar to most iterative deconvolution techniques, the stopping of the iterations must be controlled by a defined metric. The proposed algorithm allows the convergence to be controlled through an evaluation of the restoration, i.e., a blur measure. The maximum-entropy approach has been applied successfully to image restoration [9]. In our experiments, the image-entropy blur measure agreed well with perceptual evaluation when applying the proposed method, and it can therefore be used to control the iterations. The entropy of an image f is defined as [9]:

S = −Σ(i=1..L) pi ln pi
(19)

where pi is the frequency of occurrence of pixels at the ith grey level, i = 1, 2, …, L. The iteration stops when the updated image attains the maximum of S. It should be noted that the value of S at each iteration depends on the chosen step length, i.e., the preset α.
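The stopping criterion of (19) amounts to a histogram entropy. A minimal sketch (the bin count and test images are illustrative assumptions):

```python
import numpy as np

def image_entropy(img, levels=256):
    """Shannon entropy of the grey-level histogram, Eq. (19): S = -Σ p_i ln p_i."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                      # empty bins contribute 0·ln 0 := 0
    return -np.sum(p * np.log(p))

# A spread-out histogram has high entropy; a constant image has zero entropy,
# so S increases as deconvolution restores detail, until over-sharpening.
rng = np.random.default_rng(0)
flat = rng.integers(0, 256, size=(64, 64))
const = np.full((64, 64), 128)
print(image_entropy(flat) > image_entropy(const))
```

In the propagating loop, S is computed after each step and the iteration halts when S stops increasing.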

4. Implementation and experimental results

According to (15)–(17), once the variables η1, η2 and η3 have been estimated and α preset, the observed degraded image is taken as f(0)(x, y), and well-developed deconvolution techniques can be applied in an iterative routine. The Wiener filter is widely used in image restoration and is applied here. The computational steps, terminated by the control criterion, are:

f(m+1)(x, y) = f(m)(x, y) − sd(m),    m = 0, 1, 2, …
(20)
F(m+1)(u, v) = [Hα*(u, v)/(|Hα(u, v)|² + Pn(m)(u, v)/Pf(m)(u, v))] F(m)(u, v)
(21)

We conducted experiments on images degraded by different scattering media. The propagating deconvolution routine was applied to the R, G and B channels of the color image separately, and the three results were recomposed into the output image. Recovery of a single foggy image is shown in Fig. 2.
Fig. 2 (a) is an original fog image, (b)-(d) are the successive recovered images of (a), choosing α = 0.3, for m = 1, 2, 3, respectively. The propagation stops at (c) associated to the maximum S.
Our experiments show that the propagating process can be properly controlled by evaluating the blur measure S. However, uncertainties remain in identifying the best recovery, caused either by disagreement between the objective metric and visual inspection, or by inappropriate quantification of the parameter α. A practical remedy is to "slow down" the propagating steps when approaching the best evaluation by the defined blur measure, refining the recovered output to agree with visual inspection as much as possible. Thus we reduce the parameter α at the step just before the maximum of S is reached to obtain the final output image.

Figure 3 shows the results using the methods of Fattal [4], He [5] and our proposed method, respectively.
Fig. 3 column (a) are the original images; column (b) are the results by Fattal [4]; column (c) are the results by He [5]; column (d) are refined results by our method.
The outputs of each method were evaluated by the blur measure S. For the samples in the first and third rows of Fig. 3, our method performed best by the criterion S, while the method of He [5] performed best for the sample in the second row. Note that the propagating deconvolution algorithm deblurs the scene at successive depths associated with the propagating step, i.e., it performs depth-based local recovery corresponding to the traversed volume of the scattering medium. Since the blur measure S is in fact computed globally, for a scene spanning a wide range of depths, such as the second row of Fig. 3, the globally computed S, though associated with a particular depth, is inevitably a biased evaluation.

In general, while the method of He [5] performed well in terms of image contrast by inspection, our method showed advantages in noise suppression and color fidelity. Because the proposed propagating deconvolution models both the forward- and back-scattering effects within a linear degradation framework, the deconvolution operation can properly filter the noise while deblurring the target. Moreover, since the optical properties (transmission and scattering) usually differ across color channels in a scattering medium, especially underwater, color compensation is an important requirement of image recovery. In the proposed algorithm, the three color channels are processed independently, each with color-specific parameters estimated from its own channel. We consider this a reasonable approach to color compensation, and the experiments verified its good performance compared with the other methods.

A comparison of local-region recovery at long range is further given in Fig. 4.
Fig. 4 (a) is a selected patch from column (a), second row of Fig. 3; (b)-(d) are the recovered results by the methods of Fattal [4], He [5], and our method, respectively.
This example implies that, even at poor signal-to-noise ratio, the proposed method performs best in terms of noise suppression and color fidelity for depth-dependent local recovery. A global descattering result can further be expected by fusing the successively recovered images.

5. Conclusion

This paper presents a propagating deconvolution algorithm for single-image recovery, based on a linear degradation model that represents image formation in a scattering medium as the output of successive layers. The algorithm is designed for real-time image recovery in unknown environments. It should be noted that the proposed PSF of the layer-based elementary scattering volume is only an approximate model, and the solution is thus sub-optimal. Suitability for real-time applications and computational robustness, particularly for instantaneous imaging tasks in a scattering medium, have been stressed in this work. Nevertheless, the approximate solution can be further refined by well-developed blind deconvolution techniques, now that a sufficiently good initial estimate is available. Finally, the physical interpretation behind the method may stimulate alternative representations for space-varying image recovery and range estimation in a scattering medium.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (60772058) and partially supported by National Education Ministry Engineer Research Center of Ocean Information Technology, Ocean University of China.

References and links

1. S. G. Narasimhan and S. K. Nayar, “Chromatic framework for vision in bad weather,” in Proceedings of IEEE Conference on CVPR (IEEE, 2000), 598–605.
2. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Instant dehazing of images using polarization,” in Proceedings of IEEE Conference on CVPR (IEEE, 2001), 1, 325–332.
3. R. Tan, “Visibility in bad weather from a single image,” in Proceedings of IEEE Conference on CVPR (IEEE, 2008), 1–8.
4. R. Fattal, “Single image dehazing,” J. ACM Siggraph 27(3), 1–9 (2008).
5. K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” in Proceedings of IEEE Conference on CVPR (IEEE, 2009), 1956–1963.
6. W. Hou, D. J. Gray, A. D. Weidemann, and R. A. Arnone, “Comparison and validation of point spread models for imaging in natural waters,” Opt. Express 16(13), 9958–9965 (2008).
7. K. Iqbal, R. Abdul Salam, A. Osman, and A. Zawawi Talib, “Underwater image enhancement using an integrated color model,” J. Comput. Sci. 34, 2–12 (2007).
8. G. Wang, B. Zheng, and F. F. Sun, “Estimation-based approach for underwater image restoration,” Opt. Lett. 36(13), 2384–2386 (2011).
9. M. Pinchas and B. Z. Bobrovsky, “A maximum entropy approach for blind deconvolution,” Signal Process. 86(10), 2913–2931 (2006).

OCIS Codes
(100.0100) Image processing : Image processing
(100.1830) Image processing : Deconvolution

ToC Category:
Scattering

History
Original Manuscript: January 9, 2014
Revised Manuscript: February 27, 2014
Manuscript Accepted: March 20, 2014
Published: March 31, 2014

Citation
Rui Wang and Guoyu Wang, "Single image recovery in scattering medium by propagating deconvolution," Opt. Express 22, 8114-8119 (2014)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-22-7-8114

