## Two-kernel image deconvolution

Optics Express, Vol. 21, Issue 22, pp. 27269-27276 (2013)

http://dx.doi.org/10.1364/OE.21.027269


### Abstract

A method for deconvolution of the true image of an object from two recorded images is proposed. These two images have to be made by an imaging system with two different but interconnected kernels. The method is formulated as a system of Fredholm equations of the first kind reduced to a single functional equation in Fourier space. The kernels of the system and the true image of an object are found from the same recorded images.

© 2013 Optical Society of America

## 1. Introduction

The problem of image restoration can be stated as a Fredholm integral equation of the first kind

*I*(*x*_{0}, *y*_{0}) = ∫∫ *K*[*x* − *x*_{0}, *y* − *y*_{0}] *F*(*x*, *y*) d*x* d*y*, (1)

where *F*(*x*, *y*) is the true image of an object, *I*(*x*_{0}, *y*_{0}) is a recorded image of the object, and *K*[*x* − *x*_{0}, *y* − *y*_{0}] is the kernel of this equation. There are numerous approaches to solving this problem, including blind deconvolution [1, 2].


## 2. 1-D algorithm

In one dimension, a recorded image *I*(*x*_{0}) can be expressed as

*I*(*x*_{0}) = ∫ *A*(*x* − *x*_{0}) *S*(*x*) d*x*,

where *A*(*x* − *x*_{0}) is the kernel of an imaging system and *S*(*x*) is the unknown true image of an object. Suppose that two images of the same object are recorded: the first with the kernel *A*_{1}(*x* − *x*_{0}), and the second with the kernel *A*_{2}(*x* − *x*_{0}) = *A*_{1}[(*x* − *x*_{0})/*m*], where *m* > 1 (*m* is not necessarily an integer number). Figure 1 illustrates the relationship between the two kernels when *m* = 2.
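This kernel relationship can be verified numerically. The sketch below (the Gaussian kernel shape and grid parameters are illustrative assumptions, not taken from the paper) checks in Fourier space that stretching a kernel, *A*_{2}(*x*) = *A*_{1}(*x*/*m*), scales its transform as *a*_{2}(*w*) = *m* *a*_{1}(*mw*):

```python
import numpy as np

# Minimal numerical check that stretching a kernel, A2(x) = A1(x/m),
# scales its Fourier transform as a2(w) = m * a1(m*w).
n, dx, m = 4096, 0.01, 2.0
x = (np.arange(n) - n // 2) * dx          # grid centered at x = 0

A1 = np.exp(-x**2)                        # example kernel A1(x)
A2 = np.exp(-(x / m)**2)                  # stretched kernel A2(x) = A1(x/m)

def ft(g):
    # Continuous Fourier transform approximated by the FFT (real, even kernels).
    return np.fft.fftshift(np.fft.fft(np.fft.ifftshift(g))).real * dx

w = np.fft.fftshift(2 * np.pi * np.fft.fftfreq(n, d=dx))
a1, a2 = ft(A1), ft(A2)

# Compare a2(w) with m * a1(m*w) on the low-frequency part of the grid.
mask = np.abs(w) < 5
err = np.max(np.abs(a2[mask] - m * np.interp(m * w[mask], w, a1)))
print(err)  # small compared with max(a2), which is close to 2*sqrt(pi)
```

The residual `err` comes only from the discretization and the linear interpolation of the sampled spectrum.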

The two recordings form a system of two integral equations with two unknown functions *S*(*x*) and *A*(*x* − *x*_{0}). This system can be rewritten in Fourier space as a system of two functional equations with two unknown functions *s*(*w*) and *a*(*w*):

*i*_{1}(*w*) = *a*(*w*) *s*(*w*), (5)

*i*_{2}(*w*) = *m* *a*(*mw*) *s*(*w*), (6)

where *w* is an angular frequency and *i*_{1}(*w*), *i*_{2}(*w*), *a*(*w*), and *s*(*w*) are the Fourier transforms of *I*_{1}(*x*_{0}), *I*_{2}(*x*_{0}), *A*(*x* − *x*_{0}), and *S*(*x*) respectively. Rewriting Eq. (6) in the form *i*_{2}(*w*/*m*) = *m* *a*(*w*) *s*(*w*/*m*) and dividing Eq. (5) by the rewritten Eq. (6) gives a single functional equation with one unknown function *s*(*w*):

*s*(*w*) = *m* *s*(*w*/*m*) *i*_{1}(*w*)/*i*_{2}(*w*/*m*). (7)

Equation (7) holds for any *w*, including those *w* at which Eq. (6) turns into zero.

The solution *s*(*w*) of Eq. (7) is not unique: *s*(*w*) multiplied by a constant is also a solution. At the same time, *s*(*w*) is the unique solution among all functions that are equal to 1 when *w* = 0. To prove this fact, suppose that there is a different solution *r*(*w*) of Eq. (7):

*r*(*w*) = *m* *r*(*w*/*m*) *i*_{1}(*w*)/*i*_{2}(*w*/*m*). (8)

Division of Eq. (8) by Eq. (7) gives

*r*(*w*)/*s*(*w*) = *r*(*w*/*m*)/*s*(*w*/*m*) = … = *r*(*w*/*m*^{N})/*s*(*w*/*m*^{N}).

Both *r*(0) = 1 and *s*(0) = 1, so *r*(*w*/*m*^{N}) → 1 and *s*(*w*/*m*^{N}) → 1 while *N* → ∞; hence, *r*(*w*) = *s*(*w*). So *s*(*w*) is unique up to a constant multiplier.

Applying Eq. (7) to itself *N* times and letting *N* → ∞ (with the normalization *s*(0) = 1) express *s*(*w*) through two infinite products,

*Y*(*w*) = Π_{k = 0…∞} *i*_{1}(*w*/*m*^{k}), (11)

*H*(*w*) = Π_{k = 1…∞} *i*_{2}(*w*/*m*^{k})/*m*, (12)

in the form

*Y*(*w*) = *H*(*w*) *s*(*w*). (13)

In the limit *N* → ∞ the recursion turns into the identical equation, which shows that Eq. (13) holds for any *w*. It means that Eq. (13) has the same solution *s*(*w*) as the original system of Eqs. (5) and (6). Equation (13) remains an ill-posed problem: if *H*(*w*) has a zero value, the exact recovery of *S*(*x*) is impossible. However, now Eq. (13) has *H*(*w*) expressed in terms of the experimentally known function *i*_{2}(*w*) and *Y*(*w*) expressed in terms of the experimentally known function *i*_{1}(*w*). There is no need to measure the kernel *H*(*w*) or to make a theoretical assumption regarding its shape.
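The recovery can be demonstrated numerically. The sketch below assumes Gaussian forms for the unknown kernel and spectrum (illustrative choices, not the paper's data) and recovers *s*(*w*) from *i*_{1} and *i*_{2} alone by unrolling the recursion *s*(*w*) = *m* *s*(*w*/*m*) *i*_{1}(*w*)/*i*_{2}(*w*/*m*) obtained from Eqs. (5) and (6):

```python
import numpy as np

# Recover s(w) from i1 and i2 alone via the recursion
# s(w) = m * s(w/m) * i1(w) / i2(w/m), unrolled N times with s(w/m^N) -> s(0) = 1.
m = 2.0
a  = lambda w: np.exp(-w**2)        # unknown kernel (used only to build i1, i2)
s  = lambda w: np.exp(-w**2 / 2)    # unknown true spectrum, normalized: s(0) = 1
i1 = lambda w: a(w) * s(w)          # "recorded" image 1 in Fourier space, Eq. (5)
i2 = lambda w: m * a(m * w) * s(w)  # "recorded" image 2, kernel stretched by m, Eq. (6)

def recover_s(w, N=20):
    out = np.ones_like(w)
    for k in range(N):
        out *= m * i1(w / m**k) / i2(w / m**(k + 1))
    return out

w = np.linspace(-3, 3, 101)
print(np.max(np.abs(recover_s(w) - s(w))))  # near machine precision:
                                            # s recovered without knowing a
```

Note that the kernel `a` enters only through the synthetic data `i1` and `i2`; the recovery loop itself never uses it.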

Consider the example *m* = 2 and *i*_{1}(*w*) = exp(−3*w*^{2}). Product (11) converges provided that *i*_{1}(*w*) has a first derivative, that this derivative is bounded at *w* = 0, and that *i*_{1}(*w*) is normalized so that *i*_{1}(*w*) → 1 while *w* → 0. The last condition means that, starting from some number *k*, all terms of the product are close to 1.

To prove the convergence of product (11), consider *L*(*w*) = log[*i*_{1}(*w*)]. The choice of *m* = 2 does not affect the logic of the proof (the logic is correct for any *m* > 1). It is clear that *L*(0) = 0 because *i*_{1}(*w*) → 1 while *w* → 0; besides, the derivative d*L*(*w*)/d*w* = [d*i*_{1}(*w*)/d*w*]/*i*_{1}(*w*) is bounded at *w* = 0 because d*i*_{1}(*w*)/d*w* is bounded at *w* = 0. In other words, there is a number *w*_{0} such that for all 0 < *w* < *w*_{0} it is true that |*L*(*w*)| < *wD*, where *D* is some finite number. It means that for a large enough number *K*

Σ_{k = K…∞} |*L*(*w*/2^{k})| < *wD* Σ_{k = K…∞} 2^{−k} < ∞.

Hence, the product (11) converges.
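The convergence can also be checked numerically for this example. Assuming the product runs over *k* = 0, 1, 2, …, the partial products for *i*_{1}(*w*) = exp(−3*w*^{2}) and *m* = 2 approach the closed form Π_{k} exp(−3(*w*/2^{k})^{2}) = exp(−4*w*^{2}):

```python
import numpy as np

# Numerical check of the convergence of product (11) for i1(w) = exp(-3 w^2),
# m = 2.  The geometric sum 3 * (1 + 1/4 + 1/16 + ...) = 4 gives the closed
# form exp(-4 w^2) for the full product.
i1 = lambda w: np.exp(-3 * w**2)

def Y(w, N):
    out = np.ones_like(w)
    for k in range(N):
        out *= i1(w / 2**k)
    return out

w = np.linspace(-2, 2, 81)
for N in (2, 4, 8, 16):
    print(N, np.max(np.abs(Y(w, N) - np.exp(-4 * w**2))))
# the truncation error shrinks roughly by a factor of 4 per extra term
```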

Similarly, product (12) converges provided that *i*_{2}(*w*) has a first derivative, that this derivative is bounded at *w* = 0, and that *i*_{2}(*w*) is normalized so that *i*_{2}(*w*) → *m* while *w* → 0.

In practice, the recovery by Eq. (13) is regularized: *Y*(*w*) is defined by Eq. (11), *H*(*w*) is defined by Eq. (12), and a regularization parameter *h* can be estimated as it is described in [16].

## 3. 2-D algorithm for a LIDAR imaging system

Consider a LIDAR imaging system [14]. The first image *I*_{1}(*x*_{0}, *y*_{0}) is recorded with a beam of photons having a wavelength λ_{1}; the second image *I*_{2}(*x*_{0}, *y*_{0}) – with a beam of photons having a wavelength λ_{2}. The image *I*_{1}(*x*_{0}, *y*_{0}) represents the average travel time of photons with the wavelength λ_{1} from their source to the target point (*x*_{0}, *y*_{0}) and back; the image *I*_{2}(*x*_{0}, *y*_{0}) represents the average travel time of photons with the wavelength λ_{2}. Because these wavelengths are different, the diameters of the beams are also different; one beam illuminates an area greater than the area illuminated by the other beam, and hence the two averages are taken over areas with different topographies.

The contribution of a point (*x*, *y*) of the object to this average is proportional to the intensity of illumination of this point. This intensity is proportional to the radiation pattern *K*[*x* − *x*_{0}, *y* − *y*_{0}]; so *K*[*x* − *x*_{0}, *y* − *y*_{0}] serves as a weight in calculating the average return time of the photons. The return time of photons from a point (*x*, *y*) located at the distance *d* = *F*(*x*, *y*) from the source of photons depends on *d* linearly: it is equal to 2*d*/*c*, where *c* is the speed of light. So Eq. (1) represents the average return time measured at the moment when the LIDAR beam is aimed at the point (*x*_{0}, *y*_{0}); the coefficient 2/*c* is omitted as unimportant.
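This weighted-average reading of Eq. (1) can be illustrated with a toy calculation; the Gaussian radiation pattern and the tilted-plane range map below are assumptions for illustration, not the paper's data:

```python
import numpy as np

# Toy illustration of Eq. (1) as a weighted average: the LIDAR reading at
# (x0, y0) is the range map F(x, y) averaged with the beam's intensity
# pattern K as the weight (the factor 2/c is omitted, as in the text).
x = y = np.linspace(-5, 5, 201)
X, Y = np.meshgrid(x, y, indexing="ij")

F = 10.0 + 0.5 * X                                       # range map: tilted plane
K = lambda X0, Y0: np.exp(-((X - X0)**2 + (Y - Y0)**2))  # assumed beam pattern

def reading(x0, y0):
    wgt = K(x0, y0)
    return np.sum(wgt * F) / np.sum(wgt)  # weighted-average return "time"

print(reading(0.0, 0.0))  # ~10.0: a symmetric beam on a plane returns F(x0, y0)
```

For a symmetric beam and a locally planar target the reading equals the range at the beam center; a wider beam would average the topography over a larger footprint, which is exactly the kernel-stretching effect the method exploits.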

Suppose that the kernel of the system is separable, *K*[*x* − *x*_{0}, *y* − *y*_{0}] = *A*[*x* − *x*_{0}] *B*[*y* − *y*_{0}], and that the second image is recorded with this kernel stretched in both directions:

*I*_{1}(*x*_{0}, *y*_{0}) = ∫∫ *A*[*x* − *x*_{0}] *B*[*y* − *y*_{0}] *F*(*x*, *y*) d*x* d*y*, (20)

*I*_{2}(*x*_{0}, *y*_{0}) = ∫∫ *A*[(*x* − *x*_{0})/*m*] *B*[(*y* − *y*_{0})/*n*] *F*(*x*, *y*) d*x* d*y*, (21)

where *m* is a stretching factor of the kernel in the direction *x*, and *n* – in the direction *y*. (If the stretching is achieved by the same LIDAR but with different wavelengths of radiation, then *m* = *n*.) Eq. (20) and Eq. (21) can be rewritten in Fourier space as

*i*_{1}(*w*, *v*) = *a*(*w*) *b*(*v*) *f*(*w*, *v*),

*i*_{2}(*w*, *v*) = *mn* *a*(*mw*) *b*(*nv*) *f*(*w*, *v*),

where *a*(*w*), *b*(*v*), and *f*(*w*, *v*) are the Fourier transforms of *A*[*x* − *x*_{0}], *B*[*y* − *y*_{0}], and *F*(*x*, *y*).

Dividing these two equations as in the 1-D case gives a single functional equation with one unknown function *f*(*w*, *v*). The solution of this equation exists and is unique up to a constant multiplier; the proof of that is as for Eq. (7) but repeated twice – first for *v* and then for *w*.

## 4. Numerical simulation of image deconvolution

The true image *F*(*x*, *y*), the recorded images *I*_{1} and *I*_{2}, and the recovered image *F*_{recovered} are presented as bitmaps 256 by 256 pixels. *I*_{1} and *I*_{2} are calculated as the convolutions of *F*(*x*, *y*) with the kernels *A* and *B*, where *m* = *n* = 1 for *I*_{1} and *m* = *n* = 2 for *I*_{2}. Noise is added to these images (the standard deviation of the noise is equal to 0.03 of the maximum intensity of each image). The recovered image is an inverse Fourier transform of *f*(*w*, *v*) calculated by formula (26). The regularization parameter φ^{2} is equal to 0.00003. To remove the high-frequency components from the recovered image, only the first 20 harmonics are used, and the intensities of the recovered image less than 50% of its maximum intensity are set to zero.
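The pipeline of this simulation can be sketched in a 1-D analogue. The object, the Gaussian kernel shapes, the normalization of the spectra, and the Wiener-type regularized division below (in the spirit of [16]) are illustrative assumptions; the paper itself works with 256 × 256 bitmaps and formula (26):

```python
import numpy as np

# 1-D analogue of the simulation: blur a two-box object with a kernel and its
# m-stretched copy, add 3% noise, recover the spectrum by the truncated
# product with regularized division, then apply the harmonic and intensity cuts.
rng = np.random.default_rng(0)
n, dx, m = 1024, 0.05, 2.0
x = (np.arange(n) - n // 2) * dx

S = ((np.abs(x - 3) < 1) | (np.abs(x + 3) < 1)).astype(float)  # true object

A1 = np.exp(-x**2)            # kernel with m = 1
A2 = np.exp(-(x / m)**2)      # the same kernel stretched by m = 2

def record(A):
    img = np.real(np.fft.ifft(np.fft.fft(S) * np.fft.fft(np.fft.ifftshift(A))))
    return img + rng.normal(0.0, 0.03 * img.max(), n)  # 3% noise, as in the text

I1, I2 = record(A1), record(A2)

w = 2 * np.pi * np.fft.fftfreq(n, dx)
order = np.argsort(w)
i1, i2 = np.fft.fft(I1), np.fft.fft(I2)
i1 = i1 / i1[0]               # normalize so that i1(0) = 1 ...
i2 = i2 * (m / i2[0])         # ... and i2(0) = m

def sample(spec, wq):
    # evaluate a sampled spectrum at the scaled frequencies w/m^k
    return (np.interp(wq, w[order], spec.real[order])
            + 1j * np.interp(wq, w[order], spec.imag[order]))

phi2, N = 3e-5, 8             # regularization parameter and number of terms
s = np.ones(n, dtype=complex)
for k in range(N):
    num = sample(i1, w / m**k)
    den = sample(i2, w / m**(k + 1))
    s *= m * num * np.conj(den) / (np.abs(den)**2 + phi2)  # regularized division

dw = 2 * np.pi / (n * dx)
s[np.abs(w) > 20 * dw] = 0.0      # keep only the first 20 harmonics
rec = np.real(np.fft.ifft(s))
rec[rec < 0.5 * rec.max()] = 0.0  # zero intensities below 50% of the maximum
```

With only the low harmonics retained, the reconstruction is a smooth two-lobe profile near the true boxes; the final threshold then isolates the lobes, mirroring the post-processing described above.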

The accuracy of the recovered kernels depends on the number of terms *N* used in Eqs. (27) and (28). For *N* = 8, the error in the calculation of the kernels’ values at the half of their heights is equal to 0.003%; for *N* = 2, it is 3%. At the same time, the visual presentation of *F*_{recovered} when *N* = 8 is practically the same as when *N* = 2.

## 5. Discussion

The proposed method requires that the derivatives d*i*_{1}(*w*)/d*w* and d*i*_{2}(*w*)/d*w* be bounded at *w* = 0. In other words, d[*a*(*w*)*s*(*w*)]/d*w* has to be bounded at *w* = 0. For some kernels *a*(*w*), this requirement cannot be satisfied. At the same time, most of the practically used kernels – for example, sombrero kernels – have derivatives that are bounded and equal to zero at *w* = 0.

## 6. Conclusions

## References and links

1. M. Demenikov and A. R. Harvey, “Parametric blind-deconvolution algorithm to remove image artifacts in hybrid imaging systems,” Opt. Express **18**(17), 18035–18040 (2010). [PubMed]
2. W. Dong, H. Feng, Z. Xu, and Q. Li, “A piecewise local regularized Richardson–Lucy algorithm for remote sensing image deconvolution,” Opt. Laser Technol. **43**(5), 926–933 (2011). [CrossRef]
3. N. Wiener, *The Extrapolation, Interpolation and Smoothing of Stationary Time Series* (John Wiley & Sons, 1949).
4. W. H. Richardson, “Bayesian-based iterative method of image restoration,” J. Opt. Soc. Am. **62**(1), 55–59 (1972). [CrossRef]
5. L. B. Lucy, “An iterative technique for the rectification of observed distributions,” Astron. J. **79**, 745–754 (1974). [CrossRef]
6. A. N. Tikhonov and V. Y. Arsenin, *Methods for Solution of Incorrect Problems* (Nauka, 1986).
7. V. A. Gorelik, “Method of recovering the fine structure of a spectrum without measuring the instrument function of the spectrometer,” Tech. Phys. **39**, 444–446 (1994).
8. V. A. Gorelik and A. V. Yakovenko, “Fine structure extraction without analyser function measurement,” J. Elec. Spec. Rel. Phenom. **73**(1), R1–R3 (1995). [CrossRef]
9. R. Raskar, A. Agrawal, and J. Tumblin, “Coded exposure photography: motion deblurring using fluttered shutter,” ACM Trans. Graph. **25**(3), 795–804 (2006). [CrossRef]
10. A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, “Dappled photography: Mask enhanced cameras for heterodyned light fields and coded aperture refocusing,” ACM Trans. Graph. **26**(3), 69 (2007). [CrossRef]
11. Y. Y. Schechner and N. Kiryati, “Depth from Defocus vs. Stereo: How Different Really Are They?” Int. J. Comput. Vis. **39**(2), 141–162 (2000). [CrossRef]
12. D. Rajan and S. Chaudhuri, “Simultaneous Estimation of Super-Resolved Scene and Depth Map from Low Resolution Defocused Observations,” IEEE Trans. Pattern Anal. Mach. Intell. **25**(9), 1102–1117 (2003). [CrossRef]
13. P. Favaro, S. Soatto, M. Burger, and S. J. Osher, “Shape from defocus via Diffusion,” IEEE Trans. Pattern Anal. Mach. Intell. **30**(3), 518–531 (2008). [CrossRef] [PubMed]
14. J. K. Doylend, M. J. R. Heck, J. T. Bovington, J. D. Peters, L. A. Coldren, and J. E. Bowers, “Two-dimensional free-space beam steering with an optical phased array on silicon-on-insulator,” Opt. Express **19**(22), 21595–21604 (2011). [CrossRef] [PubMed]
15. A. R. Vasishtha, *Complex Analysis*, 11th ed. (Krishna Prakashan Media, 2010).
16. W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, *Numerical Recipes in C: The Art of Scientific Computing*, 2nd ed. (Cambridge University Press, 1992), Chap. 13.

**OCIS Codes**

(100.0100) Image processing : Image processing

**ToC Category:**

Image Processing

**History**

Original Manuscript: May 20, 2013

Manuscript Accepted: October 23, 2013

Published: November 1, 2013

**Citation**

V. Gorelik, "Two-kernel image deconvolution," Opt. Express **21**, 27269-27276 (2013)

http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-21-22-27269

