Optics Express
Editor: C. Martijn de Sterke
Vol. 19, Iss. 23, Nov. 7, 2011, pp. 23227–23239

Marginal blind deconvolution of adaptive optics retinal images

L. Blanco and L. M. Mugnier


Optics Express, Vol. 19, Issue 23, pp. 23227-23239 (2011)
http://dx.doi.org/10.1364/OE.19.023227


Abstract

Adaptive optics (AO) corrected flood imaging of the retina has been in use for more than a decade and is now a well-developed technique. Nevertheless, raw AO flood images are usually of poor contrast because of the three-dimensional nature of the imaging: the image contains information coming from both the in-focus plane and the out-of-focus planes of the object, which also leads to a loss in resolution. Interpretation of such images is therefore difficult without appropriate post-processing, which typically includes image deconvolution. The deconvolution of retinal images is difficult because the point spread function (PSF) is not well known, a problem known as blind deconvolution. We present an image model for the problem of imaging a 3D object with a 2D conventional imager, in which the recorded 2D image is the convolution of a 2D object, assumed shift-invariant along the optical axis, with a linear combination of 2D PSFs. The blind deconvolution problem then boils down to estimating the coefficients of this linear combination. We show that the conventional method of joint estimation fails even for a small number of coefficients. We derive a marginal estimation of the unknown parameters (PSF coefficients, object power spectral density and noise level), followed by a MAP estimation of the object. We show that the marginal estimation has good statistical convergence properties, and we present results on simulated and experimental data.

© 2011 OSA

1. Introduction

However, AO flood imaging suffers from an intrinsic limitation that decreases image quality and makes both automatic post-processing (photoreceptor counting, blood vessel diameter measurements...) and visual interpretation difficult: the three-dimensional nature of the object and of the imaging process. Indeed, information from both the in-focus plane and the out-of-focus planes in front of and behind it contributes to the final 2D image, which creates a strong background that reduces image contrast and leads to a loss in resolution.

A hardware solution to this problem is Adaptive Optics Scanning Laser Ophthalmoscopy (AO-SLO) [4]: by using a confocal pinhole, one selects only the photons coming from a specific layer of the tissue under examination.

An alternative software solution for mitigating these effects without any setup modification is image deconvolution. Retinal image deconvolution is difficult for two reasons:
  • imaging is fundamentally 3D whereas we only record 2D images. This aspect should be taken into account in the image model in order to enable a high-quality deconvolution;
  • the point spread function (PSF) is not well known, therefore we must estimate the PSF together with the object, a technique known as blind deconvolution.

In this paper, we focus on the imaging of the photoreceptor layer of the retina. In order to deal with the lack of information associated with recording only 2D images of a 3D object, we propose an imaging model in which the photoreceptor layer is assumed to be approximately shift-invariant along the optical axis of the imaging system (i.e., the photoreceptor size does not vary significantly over the depth of focus of the instrument and the photoreceptors are roughly parallel to the optical axis). We show that this hypothesis, although a simplifying one, is very effective on experimental AO retinal images, with a visible and measurable effect on the lateral resolution of the images. Section 2 presents the imaging model and the PSF parameterization we will use. In Section 3, we describe the joint estimation of the object and the PSF before showing, both in simulation and theoretically, that it is not suited to our problem. In Section 4, we derive a marginal estimator and show its performance in simulation. In Section 5, we show results of blind marginal deconvolution of experimental in vivo retinal images. Section 6 summarizes the results.

2. Imaging model and PSF parameterization

The object and the imaging process are both three-dimensional (3D). If we record a stack i3D of 2D images focused at different depths in the object, a reasonable image formation model, after background subtraction, can be written as a 3D convolution:
i3D = h3D *3D o3D + n,
(1)
where i3D is the 3D image, o3D is the 3D object, *3D denotes the 3D convolution operator, h3D is the 3D PSF and n is the noise.

We assume that our object is shift invariant along the optical axis:
o3D(x, y, z) = o2D(x, y) α(z),
(2)
where α(z) is the normalized flux emitted by the plane at depth z (∫ α(z)dz = 1).

Strictly speaking, this assumption means that our object must be shift-invariant in z over an infinite range. However, in practice this invariance need only hold over the depth of focus of the instrument (≈ 50 μm for an AO flood imager, ≈ 10–15 μm for a confocal imager). Indeed, planes farther from the image plane than the depth of focus contribute to the image with a PSF that has a very narrow spectrum; their contribution is therefore almost a constant background.

In our case, we assume that the lateral size of the photoreceptors does not vary significantly and that the photoreceptors are almost parallel to the optical axis. Additionally, the depth of focus is about the length of a cone photoreceptor. Hence, the structures in front of and behind the photoreceptor layer (pigment epithelium or inner retinal layers) are far out of focus and only contribute as a background.

Current flood imaging systems only record data in one plane of interest. Using Eq. (1) and Eq. (2), it is easy to show that, in plane z = 0 for instance:
i(x, y) ≜ i3D(x, y, z)|z=0 = ∫∫∫ α(z) h3D(x′, y′, z) o2D(x − x′, y − y′) dx′ dy′ dz + n(x, y) = (h2D *2D o2D)(x, y) + n(x, y),
(3)
with h2D an effective 2D PSF which depends on the longitudinal brightness distribution of the object α(z) and on the 3D PSF:
h2D(x, y) = ∫ α(z) h3D(x, y, z) dz.
(4)

The 2D image i(x, y) at the focal plane of the instrument is thus the 2D convolution of a 2D object with a global PSF h2D, which is the linear combination of the individual 2D PSFs (each one conjugated to a different plane of the object) weighted by the back-scattered flux of each plane.
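
As a numerical illustration of this parameterization, the global PSF can be built by discretizing α(z) into a few planes and summing the corresponding defocused 2D PSFs. The sketch below is not the authors' code: the pupil sampling, grid size and the use of a pure Zernike defocus for each out-of-focus plane are illustrative assumptions.

```python
import numpy as np

def defocused_psf(n=64, pupil_radius=16, defocus_rms=0.0):
    """2D incoherent PSF of a circular pupil with pure defocus
    (Zernike Z4), via Fourier optics: psf = |FFT(P * exp(i*phi))|^2."""
    y, x = np.indices((n, n)) - n // 2
    r = np.hypot(x, y) / pupil_radius
    pupil = (r <= 1.0).astype(float)
    # Zernike defocus sqrt(3)*(2r^2 - 1), scaled to the requested RMS (rad)
    phi = defocus_rms * np.sqrt(3) * (2 * r**2 - 1) * pupil
    field = np.fft.fft2(np.fft.ifftshift(pupil * np.exp(1j * phi)))
    psf = np.abs(np.fft.fftshift(field))**2
    return psf / psf.sum()          # unit energy

def global_psf(alphas, defocus_values, n=64):
    """h2D = sum_k alpha_k * h_k, each h_k conjugated to one object plane."""
    assert np.isclose(sum(alphas), 1.0)
    return sum(a * defocused_psf(n, defocus_rms=d)
               for a, d in zip(alphas, defocus_values))

h2d = global_psf([0.3, 0.7], [0.0, np.pi])   # 30% in focus, 70% defocused
```

Each individual PSF has unit energy, so the global PSF is normalized as soon as the coefficients sum to one; this is what makes h̃(0) = 1 for any choice of the coefficients.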

3. Joint estimation

There is a large body of work on blind deconvolution, originating in good part from astronomy (see, e.g., Blanc-Féraud [5]). The conventional blind deconvolution approach is to estimate the object and the PSF jointly (see, e.g., Ayers [6] for pioneering work and Mugnier [7] for more recent results on astronomical data).

3.1. Method

The joint estimation can be cast in a Bayesian framework as the computation of the joint maximum a posteriori (jmap) estimator:
(ô, α̂) = argmax_{o,α} p(i, o, α; θ)
(5)
= argmax_{o,α} p(i|o, α; θ) × p(o; θ) × p(α; θ),
(6)
where p(i, o, α; θ) is the joint probability density of the data (i), of the 2D object (o), and of the PSF decomposition coefficients (α). It may depend on a set of regularization parameters, or hyperparameters (θ). p(i|o, α; θ) is the likelihood of the data i, p(o; θ) is the a priori probability density function of the object o and p(α; θ) is the a priori probability density function of the coefficients α. In the following, we will not use any regularization on the set of coefficients α because we do not have any probability law for the PSF coefficients. However, since we only need to estimate a small number of these coefficients, this is not a problem.

The noise in the images is mainly photon noise, which has a Poisson distribution. However, AO retinal images are dominated by a strong and quite homogeneous background. In the following, we will therefore assume that the noise is stationary white Gaussian with a variance σ². For the object, we choose a stationary Gaussian prior probability distribution with a mean value om and a covariance matrix Ro. The set of hyperparameters is therefore θ = (σ², om, Ro). Under these assumptions, we have:
p(i, o, α; θ) = [1/((2π)^{N²/2} σ^{N²})] exp(−(1/(2σ²)) (i − Ho)ᵗ (i − Ho)) × [1/((2π)^{N²/2} det(Ro)^{1/2})] exp(−(1/2) (o − om)ᵗ Ro⁻¹ (o − om)),
where H is the operator performing the convolution by the PSF h, det(x) is the determinant of matrix x and N2 is the number of pixels in the image. ô and α̂ can therefore be defined as the estimated object and coefficients that minimize a criterion J(o, α) defined as follows:
Jjmap(o,α)=Ji(o,α)+Jo(o,α),
(7)
where Ji(o, α) = −ln p(i|o, α;θ) (data-fidelity) and Jo = − ln p(o;θ) (regularization term). The criterion to be minimized reads:
Jjmap(o, α) = −ln p(i|o, α; θ) − ln p(o; θ)
(8)
Jjmap(o, α) = (N²/2) ln σ² + (1/(2σ²)) (i − Ho)ᵗ (i − Ho) + (1/2) ln det(Ro) + (1/2) (o − om)ᵗ Ro⁻¹ (o − om) + C,
(9)
where C is a constant. By setting the derivative of Jjmap(o, α) with respect to the object to zero, we obtain an analytical expression for the object ô(α, θ) that minimizes the criterion for a given (α, θ):
ô(α, θ) = (HᵗH + σ² Ro⁻¹)⁻¹ (Hᵗ i + σ² Ro⁻¹ om)
(10)
Since the matrices H (convolution operator) and Ro (covariance matrix of an object with a stationary probability density) are Toeplitz-block-Toeplitz, we can write the joint criterion Jjmap and the analytical expression of the object ô(α, θ) in the Fourier domain with a circulant approximation:
Jjmap(o, α) = (N²/2) ln Sn + (1/2) Σ_ν |ĩ(ν) − h̃(ν) õ(ν)|² / Sn + (1/2) Σ_ν ln So(ν) + (1/2) Σ_ν |õ(ν) − õm(ν)|² / So(ν)
(11)
and õ̂(α) = [h̃*(ν) ĩ(ν) + (Sn/So(ν)) õm(ν)] / [|h̃(ν)|² + Sn/So(ν)],
(12)
where Sn is the noise power spectral density (PSD), So is the object PSD (the new set of hyperparameters in the Fourier domain is {Sn, So}), ν is the spatial frequency and x̃ denotes the two-dimensional Fast Fourier Transform of x.

õ̂(α) is the object estimated by classical Wiener filtering of the image i, and is easily computed.
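
In code, the Wiener estimate of Eq. (12) is a per-frequency division. A minimal sketch (an illustration, not the authors' implementation), assuming a zero prior mean (õm = 0) and known PSDs So and Sn:

```python
import numpy as np

def wiener_restore(i_img, psf, S_o, S_n):
    """Wiener/MAP object estimate of Eq. (12) with a zero-mean prior
    (o_m = 0): o_hat = conj(h) i / (|h|^2 + S_n/S_o), frequency by
    frequency. S_o and S_n may be scalars or arrays of PSD values."""
    h_tf = np.fft.fft2(np.fft.ifftshift(psf))   # PSF centered on the grid
    i_tf = np.fft.fft2(i_img)
    o_tf = np.conj(h_tf) * i_tf / (np.abs(h_tf)**2 + S_n / S_o)
    return np.real(np.fft.ifft2(o_tf))
```

The Sn/So(ν) term regularizes the frequencies where the MTF is weak; when Sn/So → 0 and the MTF never vanishes, the estimate reduces to inverse filtering.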

If we substitute Eq. (12) into Eq. (11), we obtain a new expression of Jjmap that does not depend explicitly on the object:
Jjmap(α) = (N²/2) ln Sn + (1/2) Σ_ν ln So(ν) + (1/2) Σ_ν (1/So(ν)) |ĩ(ν) − h̃(ν) õm(ν)|² / (|h̃(ν)|² + Sn/So(ν)).
(13)
The joint MAP solution is thus the pair (ô(α̂), α̂), where α̂ is the value of α that minimizes Eq. (13).

3.2. Simulation results

The following simulation was performed to evaluate the performance of the joint estimator in our problem.

A simulated image is built in the following manner:
i = (α hfoc + (1 − α) hdefoc) * o + n,
(14)
where
  • the global PSF is the sum of only two weighted PSFs, the first one hfoc being in focus and the second one hdefoc defocused. We assume that the focused PSF has no aberration (AO correction is perfect). The defocus is equal to π rad RMS;
  • The object used is a 128×128 pixel portion of an experimental AO image obtained with the XV-XX retinal imager developed by the Observatoire de Paris [2];
  • Noise n is stationary white Gaussian with a standard deviation σ = 0.01 × max(o), corresponding roughly to photon noise for an average of 10,000 photons/pixel;
  • α = 0.3.
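
A simulated image of this kind can be generated along the following lines. This sketch is not the authors' code: the object is a random array standing in for the experimental AO sub-image, and two Gaussian kernels stand in for the diffractive focused and defocused PSFs.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_psf(n, sigma):
    """Unit-energy Gaussian kernel (stand-in for a diffractive PSF)."""
    y, x = np.indices((n, n)) - n // 2
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return g / g.sum()

def fft_convolve(img, psf):
    """Circular 2D convolution via the FFT (PSF centered on the grid)."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) *
                                np.fft.fft2(np.fft.ifftshift(psf))))

n, alpha = 128, 0.3
o = rng.random((n, n))                       # stand-in for the AO sub-image
h_foc, h_defoc = gaussian_psf(n, 1.0), gaussian_psf(n, 6.0)
h_global = alpha * h_foc + (1 - alpha) * h_defoc        # PSF of Eq. (14)
noise = 0.01 * o.max() * rng.standard_normal((n, n))    # sigma = 1% of max(o)
i_sim = fft_convolve(o, h_global) + noise
```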

We assume for the sake of this simulation that the object PSD So and the noise PSD Sn are known, although this is not the case in practice. Therefore, we perform a so-called “supervised” estimation of α: we compute the joint criterion Jjmap(α; So, Sn) (see Eq. (13)) for values of α ranging from 0 to 1 and look for the value of α that minimizes it. Figure 3 shows the result of such a computation.

Fig. 1 Simulated object
Fig. 2 Simulated image
Fig. 3 Joint criterion for 0 ≤ α ≤ 1

We see that the joint criterion is minimum for α = 1, whereas the true value of α is 0.3. The joint estimation fails to retrieve the actual value even in this very simple case (two point spread functions, known hyperparameters). Figure 4 shows the restored object for the value of α that minimizes Jjmap(α; So, Sn). The image is poorly deconvolved, since the estimated PSF is perfectly focused whereas the actual global PSF is only 30% focused.

Fig. 4 Jointly estimated object

The joint estimator does not work well for the myopic deconvolution of retinal images, even in the simplest case of only two PSFs with known Sn and So. Actually, a close look at Eq. (13) helps us understand why this joint estimator is degenerate in this case: if, for instance, the mean object used to compute the joint criterion is constant (om = β), and since the PSFs and the set of coefficients are both normalized, then the numerator of Eq. (13) does not depend on the coefficients α. Minimizing Jjmap is then equivalent to maximizing the denominator |h̃(ν)|² + Sn/So(ν), i.e., to choosing the PSF with the highest MTF |h̃|, which is the most focused PSF.
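
This degeneracy is easy to reproduce numerically. In the self-contained sketch below (not the authors' code: Gaussian kernels stand in for the two PSFs, the prior mean object is taken to be zero and the PSDs are scalars), the grid search over α is driven to α = 1, the most focused PSF, instead of the true value 0.3:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_psf(n, sigma):
    """Unit-energy Gaussian kernel (stand-in for a focused/defocused PSF)."""
    y, x = np.indices((n, n)) - n // 2
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return g / g.sum()

def fft_convolve(img, psf):
    """Circular 2D convolution via the FFT (PSF centered on the grid)."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) *
                                np.fft.fft2(np.fft.ifftshift(psf))))

def j_jmap(i_img, psf, S_o, S_n):
    """Joint criterion of Eq. (13): zero prior mean object, scalar PSDs.
    The data term (1/S_o)|i|^2/(|h|^2 + S_n/S_o) is written equivalently
    as |i|^2/(S_o|h|^2 + S_n)."""
    n2 = i_img.size
    h_tf = np.fft.fft2(np.fft.ifftshift(psf))
    i_tf = np.fft.fft2(i_img)
    fit = np.sum(np.abs(i_tf)**2 / (S_o * np.abs(h_tf)**2 + S_n))
    return 0.5 * (n2 * np.log(S_n) + n2 * np.log(S_o) + fit)

# Simulate i = (0.3 h_foc + 0.7 h_defoc) * o + n, then grid-search alpha.
n = 64
o = rng.random((n, n))
h_foc, h_defoc = gaussian_psf(n, 1.0), gaussian_psf(n, 6.0)
i_img = fft_convolve(o, 0.3 * h_foc + 0.7 * h_defoc)
i_img = i_img + 0.01 * o.max() * rng.standard_normal((n, n))

alphas = np.linspace(0.0, 1.0, 21)
scores = [j_jmap(i_img, a * h_foc + (1 - a) * h_defoc, 1.0, 1e-4)
          for a in alphas]
alpha_hat = alphas[int(np.argmin(scores))]  # driven to 1.0, not the true 0.3
```

Only the data-fidelity term depends on α here, and it decreases monotonically as the global MTF grows, which is exactly the degeneracy described above.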

One might wonder why joint estimation, although known to give poor results for blind deconvolution [8], is nonetheless widely used in other contexts such as astronomical imaging. There, it works fairly well thanks to constraints such as PSF support or positivity (which effectively acts as an object support constraint) that help remove ambiguities between the object and the PSF. In our case, since we cannot use such constraints (the photoreceptor signal is superimposed on a strong background and the object extends far beyond the recorded field of view of the system), the joint estimator is not well suited to retinal image deconvolution.

Multi-frame joint deconvolution [9] can help, since it increases the amount of data for the same object, but it is only effective if the PSFs are different enough [10], as in the case of phase diversity [11]. Therefore, another estimator with better statistical properties would be preferable, ideally capable of restoring the PSF from a single frame.

4. Marginal estimation

The poor results of the joint estimation led us to propose another estimator for our imaging problem. The proposed estimator is the marginal estimator, which has better statistical properties [12] and has never been used previously in retinal image deconvolution, although it has already been proposed in the literature in other contexts, including the estimation of aberrations by phase diversity [13]. The principle of marginal estimation is to integrate the object o out of the problem (i.e., to marginalize the posterior likelihood [5]): we integrate the joint probability of the object o and the PSF parameters α over all the possible values of the object o.
α̂ = argmax_α ∫ p(i, o, α; θ) do.
(15)

Marginalization reduces the number of unknowns to be retrieved (from the total number of pixels of the image plus the PSF parameters in the joint estimation case, down to just a few PSF parameters) and gives us a true maximum likelihood or maximum a posteriori estimator (depending on the prior on the estimated parameters) of the parameters of interest, namely the PSF parameters. After estimation of the PSF parameters α, the object is restored by Wiener filtering of the image with the estimated global PSF and hyperparameters.

4.1. Marginal criterion

The marginal estimator of the PSF parameters reads:
α̂ML = argmax_α p(i, α; θ) = argmax_α p(i|α; θ) p(α; θ).
(16)

We keep the assumptions made for the joint estimation: a stationary white Gaussian noise with variance σ2, stationary Gaussian prior probability distribution with a mean value om and covariance matrix Ro for the object. Since i is a linear combination of a Gaussian object and a Gaussian noise, it is also Gaussian. Its associated probability density reads:
p(i|α; θ) = A det(Ri)^{−1/2} exp(−(1/2) (i − im)ᵗ Ri⁻¹ (i − im)),
(17)
where A is a constant, Ri is the image covariance matrix and im = Hom. Since we only need to estimate a small number of parameters, there is no need to regularize the solution over α. We therefore use a Maximum Likelihood (ML) estimator rather than a Maximum A Posteriori (MAP) estimator.

Maximizing p(i|α;θ) is equivalent to minimizing the opposite of its logarithm:
JML(α) = (1/2) ln det(Ri) + (1/2) (i − im)ᵗ Ri⁻¹ (i − im) + B,
(18)
where B is a constant and Ri = HRoHt + σ2Id (Id is the identity matrix). The marginal criterion can be written in the Fourier domain as follows:
JML(α) = (1/2) Σ_ν ln So(ν) + (1/2) Σ_ν ln(|h̃(ν)|² + Sn/So(ν)) + (1/2) Σ_ν (1/So(ν)) |ĩ(ν) − h̃(ν) õm(ν)|² / (|h̃(ν)|² + Sn/So(ν)) + B.
(19)
Using Eqs. (13) and (19), we obtain the following relationship between the marginal estimator and the joint estimator:
JML(α) = Jjmap(α) + (1/2) Σ_ν ln(|h̃(ν)|² + Sn/So(ν)) − (N²/2) ln Sn.
(20)
The marginal estimator is therefore very similar to the joint estimator in its mathematical expression (as shown by Goussard [14] when the hyperparameters are known and by Blanc [13] for unknown hyperparameters). Its properties, as we will show in the following, are nevertheless significantly different from those of the joint estimator.

4.2. Marginal estimation results

We now present the results of the marginal estimation on simulated data. The same simulation as for the joint estimation was performed (i = (α hfoc + (1 − α) hdefoc) * o + n, with α = 0.3). The marginal criterion of Eq. (19) was computed for 0 ≤ α ≤ 1 with known hyperparameters, and the object was restored by Wiener filtering. The criterion is shown in Figure 5 and the restored object in Figure 6.

Fig. 5 Marginal criterion for 0 ≤ α ≤ 1
Fig. 6 Marginally estimated object

Figure 5 shows that the marginal criterion is minimum for α = 0.3, which is the true value of α used in the simulation. The marginal estimator therefore accurately estimates the parameter of interest. As a result, the restored object, shown in Figure 6, is much sharper than the simulated image and much closer to the actual object than the object restored with the joint estimation.
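
The same kind of experiment can be sketched for the marginal criterion of Eq. (19). As for the joint case, this is an illustration rather than the authors' code: Gaussian kernels stand in for the two PSFs, the prior mean object is zero, and the object and noise PSDs are the known scalar PSDs of the white stand-in object and noise. Unlike the joint criterion, the marginal criterion contains the ln det Ri term of Eq. (20), which penalizes over-optimistic MTFs:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_psf(n, sigma):
    """Unit-energy Gaussian kernel (stand-in for a focused/defocused PSF)."""
    y, x = np.indices((n, n)) - n // 2
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return g / g.sum()

def fft_convolve(img, psf):
    """Circular 2D convolution via the FFT (PSF centered on the grid)."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) *
                                np.fft.fft2(np.fft.ifftshift(psf))))

def j_ml(i_img, psf, S_o, S_n):
    """Marginal criterion of Eq. (19): zero prior mean, scalar PSDs.
    The extra log term is the ln det R_i contribution of Eq. (20)."""
    h_tf = np.fft.fft2(np.fft.ifftshift(psf))
    i_tf = np.fft.fft2(i_img)
    mtf2 = np.abs(h_tf)**2
    return 0.5 * (i_img.size * np.log(S_o)
                  + np.sum(np.log(mtf2 + S_n / S_o))
                  + np.sum(np.abs(i_tf)**2 / (S_o * mtf2 + S_n)))

# Supervised marginal estimation on a two-PSF simulation, alpha_true = 0.3.
n, alpha_true = 64, 0.3
o = rng.random((n, n))
h_foc, h_defoc = gaussian_psf(n, 1.0), gaussian_psf(n, 6.0)
sigma_n = 0.01 * o.max()
i_img = (fft_convolve(o, alpha_true * h_foc + (1 - alpha_true) * h_defoc)
         + sigma_n * rng.standard_normal((n, n)))

S_o, S_n = n * n * o.var(), n * n * sigma_n**2   # known white-object PSDs
alphas = np.linspace(0.0, 1.0, 101)
scores = [j_ml(i_img, a * h_foc + (1 - a) * h_defoc, S_o, S_n)
          for a in alphas]
alpha_hat = alphas[int(np.argmin(scores))]   # estimate (true value: 0.3)
```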

4.3. Hyperparameter estimation

The marginal estimator allows us to estimate the set of hyperparameters θ (namely the object PSD So and the noise PSD Sn) together with the PSF coefficients, in an automatic manner. This is called unsupervised estimation:
(α̂, Ŝn, Ŝo) = argmax_{α,Sn,So} p(i, α; Sn, So).
(21)
In order to reduce the number of hyperparameters to be estimated, we model the object PSD So in the following way [15]:
So(ν) = k / (1 + (ν/ν0)^p).
(22)
Such a PSD parameterization has been successfully used in various imaging applications, such as astronomical imaging or Earth observation from satellites. Since the noise is assumed to be Gaussian and homogeneous, Sn is a constant. The criterion JML(α) thus becomes JML(α, Sn, k, ν0, p) and must now be minimized with respect to the PSF coefficients α and the hyperparameters Sn, k, ν0 and p.
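
The parametric PSD of Eq. (22) is straightforward to evaluate on the FFT frequency grid; a sketch (the function name and sampling conventions are ours, not the authors'):

```python
import numpy as np

def object_psd(n, k, nu0, p):
    """Parametric object PSD of Eq. (22), So(nu) = k / (1 + (nu/nu0)^p),
    evaluated on an n x n grid of FFT spatial frequencies."""
    fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
    nu = np.hypot(fx, fy)                 # radial spatial frequency
    return k / (1.0 + (nu / nu0)**p)

S_o = object_psd(64, k=1.0, nu0=0.05, p=3.0)
```

At ν = 0 the model returns k, and it decays as ν^(−p) beyond the cutoff frequency ν0.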

With the change of variable μ = Sn/k, setting the derivative of the criterion with respect to k to zero yields an analytical expression for the value of k that minimizes the criterion for given values of the other parameters, so that, apart from α, only three hyperparameters remain to be estimated numerically: μ, ν0 and p [13].

PSF coefficients and hyperparameters are estimated in an alternate way. We initialize the algorithm with a perfectly focused global PSF.

There is no analytical expression for the minimum of the criterion, so the minimization has to be done numerically. In our case, it is performed with the Variable Metric with Limited Memory, Bounded (VMLM-B) method developed by E. Thiébaut [16].
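
VMLM-B is a bound-constrained limited-memory quasi-Newton method. As an illustration of this minimization step only (not the authors' code), SciPy's L-BFGS-B, a closely related method, can serve as a stand-in, here applied to a toy quadratic replacing the real criterion JML(α, μ, ν0, p):

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the real criterion J_ML(alpha, mu, nu0, p): a quadratic
# whose minimum (alpha = 0.3) lies inside the admissible box [0, 1].
def criterion(theta):
    return float((theta[0] - 0.3)**2)

res = minimize(criterion, x0=np.array([0.9]),
               method="L-BFGS-B", bounds=[(0.0, 1.0)])
alpha_hat = float(res.x[0])   # converges to 0.3 up to solver tolerance
```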

4.4. Asymptotic properties

The maximum likelihood estimator is known to be consistent, i.e., it tends toward the actual values of the estimated parameters as the noise tends toward zero or as the data size tends toward infinity. It is also known to be asymptotically normal [12], so that the neg-log-likelihood is asymptotically quadratic and thus convex.

Extensive simulations were performed to validate the statistical behavior of our unsupervised marginal estimation. The simulation conditions are the same as previously:
  • i = (α hfoc + (1 − α) hdefoc) * o + n, with α = 0.3;
  • Noise RMS varies from 1% of the maximum value of the image to 20% of the maximum value of the image;
  • 50 noise realizations were computed for each noise RMS value;
  • The simulation was performed on three subimages of different sizes: a 32×32 pixel central region of image i, a 64×64 pixel central region of image i, and the whole 128×128 pixel image i.

Figure 7 shows the RMS error on the estimation of the PSF coefficient for the different noise levels and data sizes, both in the supervised and unsupervised cases. For a given data size, and both in the supervised and unsupervised estimation, the marginal estimator RMS error tends towards zero (i.e., the estimated parameter α tends towards the exact value) as the noise decreases. Even more interestingly, for a given noise level, the error tends towards zero as the data size increases. In particular, for a 128 × 128 pixel image and a noise of σ = 5% of the maximum value of the image, the RMS error on the estimation of the PSF coefficient α is less than 3%. For a noise RMS value of 1% of the image maximum, the unsupervised estimator shows basically the same performance as the supervised estimation.

Fig. 7 RMS error on the estimation of the PSF coefficients as a function of noise level in percent (noise standard deviation over image maximum). The black, red and blue lines correspond, respectively, to 32×32, 64×64 and 128×128 pixel images. The supervised case is shown in dashed lines, the unsupervised case in solid lines.

This simulation shows that the unsupervised marginal estimator exhibits, in practice, its appealing theoretical properties, which opens the way to its use on experimental images.

5. Preliminary experimental results

We now show experimental results of the marginal blind deconvolution on in vivo retinal images:
  • The experimental image (Figure 8) is a 256 × 256 pixel image recorded on a healthy subject with the AO eye-fundus imager of the Center for Clinical Investigation of the Quinze-Vingts Hospital in Paris, developed by the Observatoire de Paris-Meudon [2];
  • We model the global PSF as a linear combination of 3 PSFs, the first one being in focus, the second one defocused by ϕ1 = π/2 rad RMS and the third one defocused by ϕ2 = π rad RMS.
  • We assume that the adaptive optics has perfectly corrected the wavefront, so that the focused PSF is an Airy pattern.

We must estimate α = {α1,α2,α3}.

Fig. 8 Experimental image

The unsupervised marginal estimation gives α = {0.24, 0.22, 0.54}; the resulting estimated PSF is shown in Figure 9. For this image, the main contribution (more than half of the energy) comes from the most out-of-focus plane, and only a little less than 25% of the energy comes from the in-focus plane.

Fig. 9 Estimated PSF

The object is computed using Eq. (12). Figure 10 shows the restored object after unsupervised marginal estimation. The restored object is clearly much sharper than the original image: the photoreceptors have much better contrast and can be seen clearly throughout the image. The restored object is also much less noisy than the original image.

Fig. 10 Restored object

Figure 11 shows a comparison of the power spectral densities of the experimental image and of the restored object: we can clearly see an improvement at the medium-to-high spatial frequencies, with a gain of a factor of 5 around 200–250 cycles/mm, i.e., the spatial frequencies corresponding to the cone photoreceptor size and separation. This frequency enhancement is clearly visible in Figure 12, which shows, in solid line, the estimated Optical Transfer Function (OTF) of our instrument (AO flood imager + eye) as well as the deconvolution transfer function (dotted line) and the global (instrument + deconvolution) transfer function (dashed line). The deconvolution restores the spatial frequencies damped by the instrument transfer function up to 300 cycles/mm, a frequency that is beyond the spatial frequency of the cone photoreceptors in our image. These preliminary results show that our image model and the marginal estimator are well adapted to the deconvolution of adaptive-optics-corrected photoreceptor images. Motion artifacts due to eye movement during image acquisition, which result in blurred images, could possibly be addressed by changing the PSF basis to include motion-induced PSFs and not only purely diffractive PSFs.

Fig. 11 PSD comparison between experimental image (dotted line), restored object (solid line) and object prior PSD (dashed line) with the estimated hyperparameters
Fig. 12 Radial average of the estimated instrument optical transfer function, deconvolution transfer function and global (instrument+deconvolution) transfer function.

6. Conclusion

Blind deconvolution is a much-needed tool for the interpretation and further processing of AO-corrected retinal images. We have proposed a reasonable imaging model to deal with the problem of having only 2D images of a 3D object, which results in a useful PSF parameterization. We have shown, both analytically and through simulations, that the classical blind joint estimation of object and PSF is not suited to this problem. We have derived a marginal estimator of the PSF and extended it to also estimate the hyperparameters (object and noise PSDs), i.e., to perform an unsupervised estimation. We have shown in simulations that this estimator is capable of restoring the PSF accurately even in the unsupervised case. The good statistical properties of the unsupervised marginal estimation have also been demonstrated.

Finally, we have shown preliminary results on experimental data, showing the efficiency of the marginal estimator for myopic deconvolution of adaptive optics retinal images, with a measurable improvement of the contrast at the spatial frequencies corresponding to the cone photoreceptors. Although developed in the context of AO flood illumination retinal imaging, this marginal blind deconvolution method could also be applicable to other kinds of data such as confocal retinal imaging or more general microscopy data, or even astronomical data, mainly by changing the PSF basis used.

Acknowledgments

The authors are grateful to Marie Glanc for her support and her guidance on the use of the LESIA AO bench she designed and built.

References and links

1. J. Liang, D. R. Williams, and D. T. Miller, “Supernormal vision and high-resolution retinal imaging through adaptive optics,” J. Opt. Soc. Am. A 14, 2884–2892 (1997). [CrossRef]
2. M. Glanc, E. Gendron, F. Lacombe, D. Lafaille, J.-F. Le Gargasson, and P. Léna, “Towards wide-field retinal imaging with adaptive optics,” Opt. Commun. 230, 225–238 (2004). [CrossRef]
3. J. Rha, R. S. Jonnal, K. E. Thorn, J. Qu, Y. Zhang, and D. T. Miller, “Adaptive optics flood-illumination camera for high speed retinal imaging,” Opt. Express 14, 4552–4569 (2006). [CrossRef] [PubMed]
4. A. Roorda, F. Romero-Borja, I. William Donnelly, H. Queener, T. Hebert, and M. Campbell, “Adaptive optics scanning laser ophthalmoscopy,” Opt. Express 10, 405–412 (2002). [PubMed]
5. L. Blanc-Féraud, L. Mugnier, and A. Jalobeanu, “Blind image deconvolution,” in Inverse Problems in Vision and 3D Tomography, A. Mohammad-Djafari, ed. (ISTE / John Wiley, London, 2010), chap. 3, pp. 97–121.
6. G. R. Ayers and J. C. Dainty, “Iterative blind deconvolution and its applications,” Opt. Lett. 13, 547–549 (1988). [CrossRef] [PubMed]
7. L. M. Mugnier, T. Fusco, and J.-M. Conan, “Mistral: a myopic edge-preserving image restoration method, with application to astronomical adaptive-optics-corrected long-exposure images,” J. Opt. Soc. Am. A 21, 1841–1854 (2004). [CrossRef]
8. R. J. A. Little and D. B. Rubin, “On jointly estimating parameters and missing data by maximizing the complete-data likelihood,” The American Statistician 37, 218–220 (1983). [CrossRef]
9. J. C. Christou, A. Roorda, and D. R. Williams, “Deconvolution of adaptive optics retinal images,” J. Opt. Soc. Am. A 21, 1393–1401 (2004). [CrossRef]
10. G. Harikumar and Y. Bresler, “Perfect blind restoration of images blurred by multiple filters: theory and efficient algorithms,” IEEE Trans. Image Process. 8, 202–219 (1999). [CrossRef]
11. J. Idier, L. Mugnier, and A. Blanc, “Statistical behavior of joint least square estimation in the phase diversity context,” IEEE Trans. Image Process. 14, 2107–2116 (2005). [CrossRef]
12. E. Lehmann, Theory of Point Estimation (John Wiley, New York, NY, 1983).
13. A. Blanc, L. M. Mugnier, and J. Idier, “Marginal estimation of aberrations and image restoration by use of phase diversity,” J. Opt. Soc. Am. A 20, 1035–1045 (2003). [CrossRef]
14. Y. Goussard, G. Demoment, and J. Idier, “A new algorithm for iterative deconvolution of sparse spike,” in ICASSP (1990), pp. 1547–1550.
15. J.-M. Conan, L. M. Mugnier, T. Fusco, V. Michau, and G. Rousset, “Myopic deconvolution of adaptive optics images by use of object and point-spread function power spectra,” Appl. Opt. 37, 4614–4622 (1998). [CrossRef]
16. É. Thiébaut, “Optimization issues in blind deconvolution algorithms,” in Astronomical Data Analysis II, J.-L. Starck and F. D. Murtagh, eds., Proc. Soc. Photo-Opt. Instrum. Eng. 4847, 174–183 (2002).

OCIS Codes
(010.1080) Atmospheric and oceanic optics : Active or adaptive optics
(100.3190) Image processing : Inverse problems
(100.6890) Image processing : Three-dimensional image processing
(170.4470) Medical optics and biotechnology : Ophthalmology
(100.1455) Image processing : Blind deconvolution

ToC Category:
Adaptive Optics

History
Original Manuscript: July 12, 2011
Revised Manuscript: September 16, 2011
Manuscript Accepted: September 17, 2011
Published: November 1, 2011

Virtual Issues
Vol. 7, Iss. 1 Virtual Journal for Biomedical Optics

Citation
L. Blanco and L. M. Mugnier, "Marginal blind deconvolution of adaptive optics retinal images," Opt. Express 19, 23227-23239 (2011)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-19-23-23227



References

  1. J. Liang, D. R. Williams, and D. T. Miller, “Supernormal vision and high-resolution retinal imaging through adaptive optics,” J. Opt. Soc. Am. A 14, 2884–2892 (1997). [CrossRef]
  2. M. Glanc, E. Gendron, F. Lacombe, D. Lafaille, J.-F. Le Gargasson, and P. Léna, “Towards wide-field retinal imaging with adaptive optics,” Opt. Commun. 230, 225–238 (2004). [CrossRef]
  3. J. Rha, R. S. Jonnal, K. E. Thorn, J. Qu, Y. Zhang, and D. T. Miller, “Adaptive optics flood-illumination camera for high speed retinal imaging,” Opt. Express 14, 4552–4569 (2006). [CrossRef] [PubMed]
  4. A. Roorda, F. Romero-Borja, I. William Donnelly, H. Queener, T. Hebert, and M. Campbell, “Adaptive optics scanning laser ophthalmoscopy,” Opt. Express 10, 405–412 (2002). [PubMed]
  5. L. Blanc-Féraud, L. Mugnier, and A. Jalobeanu, “Blind image deconvolution,” in Inverse Problems in Vision and 3D Tomography, A. Mohammad-Djafari, ed. (ISTE / John Wiley, London, 2010), chap. 3, pp. 97–121.
  6. G. R. Ayers and J. C. Dainty, “Iterative blind deconvolution and its applications,” Opt. Lett. 13, 547–549 (1988). [CrossRef] [PubMed]
  7. L. M. Mugnier, T. Fusco, and J.-M. Conan, “MISTRAL: a myopic edge-preserving image restoration method, with application to astronomical adaptive-optics-corrected long-exposure images,” J. Opt. Soc. Am. A 21, 1841–1854 (2004). [CrossRef]
  8. R. J. A. Little and D. B. Rubin, “On jointly estimating parameters and missing data by maximizing the complete-data likelihood,” The American Statistician 37, 218–220 (1983). [CrossRef]
  9. J. C. Christou, A. Roorda, and D. R. Williams, “Deconvolution of adaptive optics retinal images,” J. Opt. Soc. Am. A 21, 1393–1401 (2004). [CrossRef]
  10. G. Harikumar and Y. Bresler, “Perfect blind restoration of images blurred by multiple filters: theory and efficient algorithms,” IEEE Trans. Image Processing 8, 202–219 (1999). [CrossRef]
  11. J. Idier, L. Mugnier, and A. Blanc, “Statistical behavior of joint least square estimation in the phase diversity context,” IEEE Trans. Image Processing 14, 2107–2116 (2005). [CrossRef]
  12. E. Lehmann, Theory of Point Estimation (John Wiley, New York, NY, 1983).
  13. A. Blanc, L. M. Mugnier, and J. Idier, “Marginal estimation of aberrations and image restoration by use of phase diversity,” J. Opt. Soc. Am. A 20, 1035–1045 (2003). [CrossRef]
  14. Y. Goussard, G. Demoment, and J. Idier, “A new algorithm for iterative deconvolution of sparse spike,” in ICASSP (1990), pp. 1547–1550.
  15. J.-M. Conan, L. M. Mugnier, T. Fusco, V. Michau, and G. Rousset, “Myopic deconvolution of adaptive optics images by use of object and point-spread function power spectra,” Appl. Opt. 37, 4614–4622 (1998). [CrossRef]
  16. É. Thiébaut, “Optimization issues in blind deconvolution algorithms,” in Astronomical Data Analysis II, J.-L. Starck and F. D. Murtagh, eds. (Proc. Soc. Photo-Opt. Instrum. Eng., 2002), vol. 4847, pp. 174–183.
