Optics Express

  • Editor: C. Martijn de Sterke
  • Vol. 19, Iss. 19 — Sep. 12, 2011
  • pp: 18135–18148

Automatic estimation of point-spread-function for deconvoluting out-of-focus optical coherence tomographic images using information entropy-based approach

Guozhong Liu, Siavash Yousefi, Zhongwei Zhi, and Ruikang K. Wang


http://dx.doi.org/10.1364/OE.19.018135






Abstract

This paper proposes an automatic point spread function (PSF) estimation method to de-blur out-of-focus optical coherence tomography (OCT) images. The method uses the Richardson-Lucy deconvolution algorithm to deconvolve noisy defocused images with a family of Gaussian PSFs of different beam spot sizes. The best beam spot size is then automatically estimated from the discontinuity of the information entropy of the recovered images. Therefore, no prior knowledge of the parameters or PSF of the OCT system is required for deconvolving the image. The model does not account for the diffraction and the coherent scattering of light by the sample. A series of experiments is performed on digital phantoms, a custom-built phantom doped with microspheres, fresh onion, as well as the human fingertip in vivo to demonstrate the performance of the proposed method. The method may also be combined with other deconvolution algorithms for PSF estimation and image recovery.

© 2011 OSA

1. Introduction

Optical coherence tomography (OCT) is a non-invasive, micrometer-resolution, three-dimensional optical imaging technique capable of capturing backscattered photons from within optically scattering media such as biological tissues [1,2]. One of the advantages of OCT is that the axial and lateral resolutions are decoupled from each other. The axial resolution of the OCT system is determined by the spectral bandwidth of the light source and is defined as half of the coherence length of the light source, which can be expressed by
$$R_a = \frac{l_c}{2} = \frac{0.44\,\lambda_0^2}{\Delta\lambda} \qquad (1)$$
where $l_c$ is the coherence length, $\lambda_0$ is the center wavelength and $\Delta\lambda$ is the spectral full-width at half-maximum (FWHM) of the light source.

The lateral resolution of the OCT system is determined by the objective lens that is used to deliver/focus the light onto the sample and is expressed by
$$R_l = \frac{1.22\,\lambda_0}{2\,NA_{obj}} \qquad (2)$$
where $NA_{obj}$ is the numerical aperture (NA) of the objective lens; the lateral resolution is inversely proportional to the NA. On the other hand, the depth-of-focus (DOF) is given by
$$Z_{DOF} = \frac{2\,\lambda_0\, n}{NA_{obj}^2} \qquad (3)$$
where n is the average refractive index of the sample. In general, the lateral resolution can be enhanced by increasing the numerical aperture. Unfortunately, increasing the numerical aperture reduces the DOF. The relationship between the lateral resolution and the DOF is schematically illustrated in Fig. 1
Fig. 1 Schematic illustration on the effect of numerical aperture on the desired lateral resolution of OCT images, in the case of (a) low NA, and (b) high NA.
for the cases of low (left figure) and high NAs (right figure). From the illustrations, we can appreciate that only the portion of the 3D image within DOF exhibits the desired lateral resolution, while that in the de-focal plane is blurred. There exist several dynamic focusing methods in the literature to mitigate the compromise between lateral resolution and DOF. In general, these methods can be divided into hardware-based and software-based (digital) approaches.
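As a quick numerical check of Eqs. (1)-(3), the resolution/DOF trade-off can be computed directly. The sketch below (with hypothetical source and objective parameters; all lengths in micrometers) illustrates that increasing the NA shrinks the lateral spot linearly but shrinks the depth of focus quadratically:

```python
def oct_resolutions(lambda0, d_lambda, na_obj, n=1.38):
    """Axial resolution, lateral resolution and depth-of-focus from Eqs. (1)-(3)."""
    r_axial = 0.44 * lambda0**2 / d_lambda       # Eq. (1): half the coherence length
    r_lateral = 1.22 * lambda0 / (2.0 * na_obj)  # Eq. (2)
    z_dof = 2.0 * lambda0 * n / na_obj**2        # Eq. (3), n = average refractive index
    return r_axial, r_lateral, z_dof

# Hypothetical example: 1310 nm source, 50 nm FWHM bandwidth, NA = 0.05 objective
ra, rl, dof = oct_resolutions(1.31, 0.05, 0.05)
```

Doubling the NA in this sketch halves `r_lateral` but cuts `z_dof` by a factor of four, which is the compromise the rest of the paper addresses.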

1.1 Hardware-based approaches

Lexer et al. [3] investigated an optical setup to shift the focus of the beam illuminating the object through the object depth without changing the path length in the corresponding interferometer arm, and achieved 10 μm lateral resolution over an object depth of 430 μm in the human cornea. Qi et al. [4] designed a high-speed dynamic focus control system based on a micro-electro-mechanical mirror, in which a silicon nitride deformable mirror shifted the focus of the sample beam of the OCT interferometer in synchrony with the coherence-gate scan in the reference arm. As a result, the coherence gate remained at the beam focus during the whole imaging process. Divetia et al. [5] used a 1 mm liquid-filled polymer lens for endoscopic OCT applications that dynamically scanned the depth of focus by changing the hydraulic pressure within the lens, which enabled dynamic control of the focal depth without the need for articulated parts. This configuration was shown to have a resolving power of 5 μm over an object depth of 2.5 mm. Xie et al. [6] developed a gradient-index (GRIN) lens rod based probe for endoscopic spectral-domain OCT with dynamic focus tracking. The focus of their endoscopic OCT probe could be dynamically adjusted at 500 mm/s without changing the reference arm length to obtain high-quality OCT images for contact or non-contact tissue applications. The dynamic focusing range of the probe was from 0 to 7.5 mm without moving the probe beam itself, while the imaging depth was 2.8 mm. To overcome the trade-off between lateral resolution and focusing depth, Ding et al. [7] incorporated an axicon lens with a top angle of 160° into the sample arm of an interferometer and maintained 10 μm lateral resolution over a focusing depth of 6 mm. Holmes et al. [8] presented a multi-beam OCT system that overcomes the problem of limited lateral resolution inherent in single-beam Fourier-domain OCT and reduces speckle noise. However, the hardware-based methods require a special hardware configuration in the system design, which can limit the scanning speed and raise the cost of the OCT system.

1.2 Digital approaches

Yasuno et al. [11] applied a numerical deconvolution method to cancel lateral defocus in Fourier-domain OCT using a depth-dependent lateral PSF, and applied the method to OCT images of the in vivo human anterior eye segment. Kulkarni et al. [12] proposed a linear shift-invariant system model to describe coherent light-specimen interactions in OCT images and applied an iterative deconvolution algorithm to enhance the sharpness of the images of biological structures. A constrained iterative restoration algorithm was utilized to estimate the impulse response of the system model using prior knowledge of the properties of that response [13]. The algorithm was applied to a defocused OCT image of a fresh onion sample, and the deconvolved OCT image revealed the polygonal cellular structure of the specimen. Ralston et al. [14] proposed a method for reducing transverse blurring and improving the transverse resolution in OCT using Gaussian beam deconvolution. First, the direct inverse problem was investigated using two types of regularization: truncated singular value decomposition and Tikhonov. Then, an iterative expectation-maximization algorithm, the Richardson-Lucy algorithm, with a beam-width-dependent iteration scheme was developed. The dynamically iterative Richardson-Lucy algorithm reduced the transverse blurring by improving the transverse PSF for sparse scattering samples in regions up to two times larger than the confocal region of the lens. Also, Wiener and Richardson-Lucy OCT image restoration algorithms were implemented and compared with each other; the images deconvolved using Richardson-Lucy had better contrast and quality [15]. Besides, Schmitt [16] derived a CLEAN deconvolution kernel to provide an effective way of improving resolution and contrast in optical coherence tomography. Rolland et al. [17] developed a Gabor-based fusion technique based on the concept of the inverse local Fourier transform and Gabor's signal expansion to produce a high lateral resolution image throughout the depth of imaging.

In this paper, using the forward OCT model previously developed by Ralston et al. [14], we propose an automatic PSF estimation method based on the discontinuity of information entropy and the Richardson-Lucy deconvolution algorithm to improve image contrast and quality. The advantage of this method is that there is no need to implement new optical hardware or to measure the optical parameters of the OCT system; importantly, it is relatively insensitive to the phase instability of the measurements. Since the Richardson-Lucy algorithm tends to concentrate energy near boundaries, it provides a good approximation to cellular boundaries and subcellular features and is more robust to noise. Therefore, combining the PSF estimation method with the Richardson-Lucy algorithm can estimate the PSF of OCT at different defocus planes automatically and objectively while decreasing the transverse blurring.

2. Image recovering principle and algorithms

In the Gaussian optics model, the lateral PSF of OCT is approximated by [14]:
$$h(x,y) = \exp\left(-2\,\frac{x^2 + y^2}{W_z^2}\right) \qquad (4)$$
where $W_z$ is the depth-dependent beam spot size given by
$$W_z = W_0\left[1 + \left(\frac{z}{z_R}\right)^2\right]^{1/2} \qquad (5)$$
$$z_R = \frac{\pi W_0^2\, n}{\lambda_0} \qquad (6)$$
where $W_0$ is the minimum value of $W_z$, which occurs at z = 0, z is the axial coordinate, and $z = z_R$ at the boundary of the confocal region (Rayleigh range). The assumption in (5) is that the Gaussian beam propagates in a direction normal to the lens and that the refractive index n is constant along the beam. In practice, however, the beam profile deviates from this ideal model and the transverse resolution is not constant, because the index of refraction is not constant within the sample. Also, the beam profile depends on the depth and the focal region of the lens.
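Equations (5) and (6) can be evaluated numerically to see how quickly the spot size grows away from the focal plane. The following sketch assumes hypothetical values for the beam waist and wavelength (both in micrometers):

```python
import numpy as np

def beam_spot_size(z, w0, lambda0, n=1.0):
    """Depth-dependent beam spot size W_z from Eqs. (5)-(6).

    z: axial distance from the focus, w0: beam waist, lambda0: center wavelength.
    """
    z_r = np.pi * w0**2 * n / lambda0      # Rayleigh range, Eq. (6)
    return w0 * np.sqrt(1.0 + (z / z_r)**2)

# Hypothetical example: 8 um waist, 1310 nm source; spot size sampled over +/-2 mm
z = np.linspace(-2000.0, 2000.0, 5)
wz = beam_spot_size(z, 8.0, 1.31)
```

At $z = z_R$ the spot size is $\sqrt{2}\,W_0$, i.e. the beam area has doubled, which marks the boundary of the confocal region used throughout the paper.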

In an OCT system, the final image at each depth location, I(x,y), is the convolution of the lateral PSF h(x,y) with the sample profile O(x,y), which can be formulated as
$$I(x,y) = O(x,y) \otimes h(x,y) + n = \iint O(x - x_0,\, y - y_0)\, h(x_0, y_0)\, dx_0\, dy_0 + n \qquad (7)$$
where n is additive noise and tissue heterogeneity noise [23]. By estimating the best beam spot size $W_z$ at each depth location, the lateral resolution at out-of-focus depth locations can be improved by solving an inverse problem.

2.1 Image recovering flow diagram

The flow diagram of the proposed method for recovering defocused FD-OCT images is given in Fig. 2
Fig. 2 Flow diagram for recovering defocused OCT images.
.

Step 1: The non-linear spectral interferogram captured in an A-scan is rescaled into linear k-space, followed by a Fast Fourier Transform (FFT), to yield the A-line complex information (phase and amplitude) along the z axis, whose amplitude represents the structural information. The 3D structure of a sample is acquired by two-dimensional scanning of the probe beam with a paired galvanometer mirror.

Steps 2 and 3: The 3D structural image is resampled along the depth (in the z direction) to obtain a sequence of en-face images I(x,y) at different depth locations, followed by a Gaussian low-pass filter (LPF). Here, the Gaussian LPF acts to reduce the noise levels in the image, including the system noise and the spurious effects of speckle, so that the recovered image becomes smooth. It should be noted that applying a Gaussian LPF modifies the lateral PSF of the OCT system as follows:
$$I' = f \otimes I = f \otimes (h \otimes O + n) = (f \otimes h) \otimes O + f \otimes n \approx (f \otimes h) \otimes O = h' \otimes O \qquad (8)$$
where f is the impulse response of the Gaussian LPF, I is the raw image, h is the lateral PSF, O is the object being imaged (the sample), n is additive noise, I' is the filtered image and h' is the lateral PSF after low-pass filtering. In the experiments, the low-pass filter was empirically selected such that its bandwidth is approximately 5 times broader than that of the system PSF.

Steps 4 and 5: The Richardson-Lucy deconvolution method requires prior knowledge of the lateral PSF, h(x,y), which is a function of the actual beam spot size $W_{za}$. In order to estimate the best choice of $W_z$ at each depth location, the filtered en-face images I'(x,y) are deconvolved using the Richardson-Lucy algorithm with a family of Gaussian PSFs whose beam spot size is stepped over $W_z \in [W_{z1}, W_{z2}]$, and the evaluation function defined in subsection 2.3 is analyzed. The evaluation function tends to have a discontinuity around $W_z = W_{za}$, so the PSF (the best choice of $W_z$) is acquired by searching for a peak in the Laplacian of the evaluation function.

Richardson-Lucy is an iterative method based on maximum-likelihood estimation for Poisson statistical data, which is updated by
$$O_{n+1}(x,y) = O_n(x,y)\left[h(x,y) \otimes \frac{I'(x,y)}{h(x,y) \otimes O_n(x,y)}\right] \qquad (9)$$
where I'(x,y) is the degraded image after the Gaussian LPF, $O_n(x,y)$ is the estimate of the original image after n iterations, h(x,y) is the modified lateral PSF and $h(x,y) \otimes O_n(x,y)$ is referred to as the reblurred image. The iterations stop when the difference between $O_{n+1}$ and $O_n$ falls below a threshold.
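The update of Eq. (9) is straightforward to prototype. The sketch below implements the Gaussian lateral PSF of Eq. (4) and a plain Richardson-Lucy loop with FFT-based convolution; since the Gaussian PSF is symmetric, the correlation and convolution steps coincide. The grid size and iteration count are illustrative, not the authors' settings:

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(wz, size=31):
    """Lateral PSF of Eq. (4) sampled on a square pixel grid, normalized to unit sum."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    h = np.exp(-2.0 * (xx**2 + yy**2) / wz**2)
    return h / h.sum()

def richardson_lucy(image, psf, n_iter=10, eps=1e-12):
    """Richardson-Lucy update of Eq. (9), iterated a fixed number of times."""
    estimate = np.full_like(image, image.mean())
    psf_mirror = psf[::-1, ::-1]  # correlation step; equals psf for a symmetric Gaussian
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (reblurred + eps)  # eps guards against division by zero
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

A fixed iteration count is used here for simplicity; the paper instead stops when the change between successive estimates falls below a threshold.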

Step 6: Each original en-face image is recovered using the Richardson-Lucy algorithm and the estimated lateral PSF ($W_z$).

2.2 Discontinuity of the recovered image around $W_z = W_{za}$

The output of the OCT imaging system is described mathematically by convolving its PSF with the input image. Let us assume that a sufficiently small object can be modeled as a Dirac delta function. Then, the output of the imaging system is its lateral PSF:
$$I(x,y) = O(x,y) \otimes h(x,y) = \delta(x,y) \otimes h(x,y) = h(x,y) = \exp\left(-2\,\frac{x^2 + y^2}{W_{za}^2}\right) \qquad (10)$$
where I is the received image, O is the object profile, h is the lateral PSF and $W_{za}$ is the actual beam spot size. When the received image I is deconvolved using a different beam spot size $W_z$, it can be described as follows:
$$I(x,y) = O'(x,y) \otimes \exp\left(-2\,\frac{x^2 + y^2}{W_z^2}\right) \qquad (11)$$
where O'(x,y) is the recovered image when the beam spot size is $W_z$. Equation (12) is obtained from Eqs. (10) and (11), and O'(x,y) can be solved using the two-dimensional convolution theorem, as in Eqs. (12) and (13).

$$\exp\left(-2\,\frac{x^2 + y^2}{W_{za}^2}\right) = O'(x,y) \otimes \exp\left(-2\,\frac{x^2 + y^2}{W_z^2}\right) \qquad (12)$$
$$O'(x,y) = \frac{2\,W_{za}^2}{\pi\, W_z^2 \left(W_{za}^2 - W_z^2\right)}\, \exp\left(-2\,\frac{x^2 + y^2}{W_{za}^2 - W_z^2}\right) \qquad (13)$$

It can be observed that the recovered image O'(x,y) will have a discontinuity at $W_z = W_{za}$.

According to the sifting property of the delta function, any input image structure can be modeled as a weighted superposition of delta functions:
$$O(x,y) = \sum_{n=1}^{N} \sum_{m=1}^{M} A_{nm}\, \delta(x - x_n,\, y - y_m) \qquad (14)$$
where $A_{nm}$ is the weight (intensity) of the image at location (n, m). By linearity, the result obtained for a single delta function generalizes to this case. Therefore, the recovered image O'(x,y) is also discontinuous at the point $W_z = W_{za}$. This is exactly the theoretical basis for estimating the actual $W_{za}$. In order to find the discontinuity of the recovered image around $W_z = W_{za}$, an evaluation function based on information entropy is defined, and the discontinuity of this evaluation function is used to estimate the actual $W_{za}$.

2.3 Evaluation function for PSF estimation

The information entropy of the recovered image O'(x,y) and of the blurry out-of-focus raw image I'(x,y) can be expressed as
$$E(O') = -\sum_a p(O'(a))\, \log p(O'(a)) \qquad (15)$$
$$E(I') = -\sum_b p(I'(b))\, \log p(I'(b)) \qquad (16)$$
where p(O'(a)) is the marginal probability of the image O', i.e. the probability that the intensity value of the image is equal to a. The joint entropy of the O' and I' images, which is a measure of the uncertainty associated with them, can be expressed as
$$E(O', I')_{joint} = -\sum_{a,b} p_{I'O'}(a,b)\, \log p_{I'O'}(a,b) \qquad (17)$$
where $p_{I'O'}(a,b)$ is the joint probability distribution of the O' and I' images, i.e. the probability that the intensity value of image O' is equal to a and the intensity value of image I' is equal to b. Also, the mutual information entropy of the O' and I' images can be expressed as

$$E(O', I')_{mutual} = E(O') + E(I') - E(O', I')_{joint} \qquad (18)$$

The evaluation function is defined by
$$F(W_z) = \frac{E(O') - E(O', I')_{joint}}{E(O')} \qquad (19)$$
which is a function of $W_z \in [W_{z1}, W_{z2}]$. Because of the discontinuity of the recovered image O', the evaluation function tends to have a discontinuity at $W_z = W_{za}$. This discontinuity corresponds to a peak in the Laplacian (second derivative) of the function, which is used to locate it. Typical curves of the evaluation function and of the Laplacian of the evaluation function are displayed in Fig. 3.
Fig. 3 The peak of the Laplacian of the evaluation function (b), corresponding to the discontinuous point of the evaluation function (a), is used to locate the actual beam spot size Wza.
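Equations (15)-(19) reduce to histogram operations on the image pair. Below is a minimal sketch, with the histogram bin count chosen arbitrarily (the paper does not state one) and the normalization of Eq. (19) as reconstructed above:

```python
import numpy as np

def entropy(img, bins=64):
    """Shannon entropy of an image's intensity histogram, Eqs. (15)-(16)."""
    counts, _ = np.histogram(img, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]  # 0*log(0) terms contribute nothing
    return -np.sum(p * np.log(p))

def joint_entropy(img_a, img_b, bins=64):
    """Joint entropy of two images from their 2-D intensity histogram, Eq. (17)."""
    counts, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def evaluation_function(recovered, raw, bins=64):
    """Eq. (19): F = (E(O') - E(O', I')_joint) / E(O')."""
    e_o = entropy(recovered, bins)
    return (e_o - joint_entropy(recovered, raw, bins)) / e_o
```

Sweeping `evaluation_function` over the family of trial $W_z$ values and taking the peak of its discrete Laplacian gives the estimated $W_{za}$, mirroring Steps 4 and 5.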

2.4 Image recovery

If the object en-face image O(x,y) is located in the focal plane ($W_z = W_0$), the received image is given by

$$I_0(x,y) = O(x,y) \otimes h_0(x,y) = O(x,y) \otimes \exp\left(-2\,\frac{x^2 + y^2}{W_0^2}\right) \qquad (20)$$

Assuming instead that the object en-face image O(x,y) is located in an out-of-focus plane ($W_z = W_{za}$), the received image is given by

$$I(x,y) = O(x,y) \otimes h(x,y) = O(x,y) \otimes \exp\left(-2\,\frac{x^2 + y^2}{W_{za}^2}\right) \qquad (21)$$

The focused image $I_0(x,y)$ can be recovered from the out-of-focus image I(x,y) by filtering the out-of-focus image with a compensation function g(x,y) as follows
$$I_0(x,y) = g(x,y) \otimes I(x,y) = g(x,y) \otimes O(x,y) \otimes h(x,y) = O(x,y) \otimes h_0(x,y) \qquad (22)$$
which yields a compensation function for image recovery

$$g(x,y) = \frac{2\,W_{za}^2}{\pi\, W_0^2 \left(W_{za}^2 - W_0^2\right)}\, \exp\left(-2\,\frac{x^2 + y^2}{W_{za}^2 - W_0^2}\right) \qquad (23)$$

3. Simulation studies

In order to show the performance of the proposed method, we performed a series of simulations on digital phantoms. First, a Dirac delta function digital phantom was generated and manually defocused at different Wza. For each Wza scenario, Richardson-Lucy algorithm was performed to recover the image on a family of Wz values and the discontinuity (peak of the Laplacian) of the evaluation function was measured to estimate the beam spot size. Figure 4(a)
Fig. 4 When digital phantom was defocused at different Wza, (a) the discontinuous point of evaluation function approximately equals Wza for all Wzas, and (b) the peak of Laplacian of the evaluation function approximately equals Wza for all Wzas.
shows a 3D plot of the evaluation function for different values of Wza, where the x axis is Wz, the y axis is Wza and the z axis (intensity) is the evaluation function. It can be observed that the discontinuity of the evaluation function occurs at Wz = Wza. Also, the Laplacian of the evaluation function is shown in Fig. 4(b), where the x axis is Wz, the y axis is Wza and the z axis is the Laplacian of the evaluation function. The peak values are observed around Wz = Wza, as expected.

Figure 5(a)
Fig. 5 The performance of the proposed method on a digital phantom. (a) Original image. (b) Out of focus image. Out of focus image was then degraded with (c) speckle noise, (d) Poisson noise and (e) both speckle and Poisson noise. Recovered images: (f) without the presence of noise. With the presence of different noise conditions, (g) speckle noise, (h) Poisson noise and (i) both speckle and Poisson noise.
shows another digital phantom that simulates the cellular structure of onion. The image was manually defocused at Wza = 3 (Fig. 5(b)) and the proposed method was applied to recover the image; the recovered image is shown in Fig. 5(f). In order to study the effectiveness of the proposed method in suppressing noise, the image was degraded by Poisson and speckle noise. Figures 5(c)-5(e) show the defocused images degraded by speckle noise, Poisson noise, and both speckle and Poisson noise, respectively. Figures 5(g)-5(i) show the recovered images under the different noise conditions. The multiplicative speckle noise was zero-mean uniformly distributed random noise with variance 0.05.
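The two noise models used in this simulation can be reproduced as follows. The Poisson scaling factor below is an assumption, since the text specifies only the speckle statistics (zero-mean uniform, variance 0.05):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_speckle(img, var=0.05, rng=rng):
    """Multiplicative speckle: I * (1 + u), u zero-mean uniform with variance `var`."""
    half_width = np.sqrt(3.0 * var)  # Var of U(-a, a) is a**2 / 3
    u = rng.uniform(-half_width, half_width, img.shape)
    return img * (1.0 + u)

def add_poisson(img, scale=255.0, rng=rng):
    """Poisson noise on intensities rescaled to photon-count-like units (scale is
    a hypothetical choice; the paper does not state one)."""
    return rng.poisson(img * scale) / scale
```

Applying `add_poisson(add_speckle(img))` reproduces the combined-noise condition of Fig. 5(e).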

The experiment was repeated for different defocused depth locations (Wza), and the difference between the actual and estimated beam spot sizes (estimation error) was measured under different noise conditions. Figure 6
Fig. 6 Estimation errors when the digital phantom was manually defocused at different Wza, for different noise conditions. The maximal absolute error is about 0.2 within the Wza range 1 to 12.
shows the error in estimating the actual Wza under different noise conditions. It can be observed that the algorithm effectively suppresses the noise, and the results are comparable with those of the noise-free experiment. The proposed method was particularly effective when the image was only slightly defocused (Wza ∈ [4, 9]). In general, the estimation error was acceptable and the proposed method could effectively estimate the beam spot size in the presence of different noise conditions.

4. Experimental results

In order to demonstrate the performance of the proposed method on real OCT images, three sets of experiments were performed on: 1) a phantom doped with microspheres that approximate the PSF of the system, 2) a fresh onion, to show the efficiency of the proposed method on a static sample, and 3) an in vivo tissue sample, to demonstrate whether the proposed method has the potential for in vivo imaging applications. For the results presented below, the processing time for one frame (512 × 512 pixels) was 1.23 s on an HP dv6 laptop computer (processor: Intel i3, 2.4 GHz; RAM: 6 GB) with 10 iterations. Also note that during the image reconstruction process, no threshold was applied to the images. Only when the OCT images were displayed was a fixed threshold (equal to the noise level in the raw OCT image) applied, to better show the final images.

First, a phantom was made from gelatin doped with microspheres having a predominant diameter of 300 nm. Since the microspheres are small relative to the system resolutions, they can be approximated by delta functions, and the phantom can be modeled as a superposition of delta functions. A 3D OCT volume image was acquired from the phantom, with the focus of the system set at the center of the phantom.

Figure 8(a)
Fig. 8 (a) Original image where the focus is in the center and the upper and lower part of the image is out of focus. (b) De-blurred image. (c) Recovered image knowing that each point is a delta function.
shows one B-scan of the phantom, where the center of the image is in focus but the upper and lower portions of the image have degraded lateral resolution and the microspheres look blurry. Then, the proposed automatic PSF estimation and image recovery method (see Eqs. (22) and (23)) was applied to the 3D volume data at different depth locations, and the recovered image of the same B-scan is shown in Fig. 8(b). Note that the blurred out-of-focus area is now in focus after performing the proposed algorithm on the experimental data. In addition, we can also recover the object profile (Fig. 8(c)) from the raw B-scan image (Fig. 8(a)), which consists of delta functions when the microspheres are sufficiently small, by deconvolution (see Eq. (7)) using the estimated beam spot size Wza at different depth locations. Note that some connected microspheres in the raw B-scan image are separated and the image quality is further improved.

Then, an experiment was performed on a fresh onion and 3D OCT images with 256 A-scans and 256 B scans were acquired from the focus area and 1.5 mm away from the focal point. Figure 9(a)
Fig. 9 (a) The en face image of the onion layer at the focal depth; (c) the same en face image as (a), defocused after the sample was shifted downwards by 1.5 mm; (b) and (d) are the recovered noise-suppressed images from (a) and (c), respectively. The axial and lateral resolutions are 12 µm and 16 µm, respectively.
shows one en face image of the onion at the focal depth, and the proposed method was applied to improve the contrast and resolution of the image in the presence of noise. Figure 9(b) shows the recovered image from Fig. 9(a). Then, while the probe beam was kept unchanged, the sample was shifted downwards by 1.5 mm. Figure 9(c) is the resulting image, corresponding to the same depth location of the sample as shown in Fig. 9(a). The proposed method was applied to recover that image (Fig. 9(c)), and the result is shown in Fig. 9(d). Note that the similarity between Fig. 9(d) and Fig. 9(b) shows the onion structures at the same depth location.

The recovered images of onion layers at different depths are shown in Fig. 10
Fig. 10 (a-d) Out-of-focus noisy images at different depth locations: 1500 µm, 1604 µm, 1725 µm and 1823 µm, respectively. (e-h) Recovered images at the corresponding depth locations. The axial and lateral resolutions are 12 µm and 16 µm, respectively.
where Figs. 10(a-d) show the defocused onion layers at 1500 µm, 1604 µm, 1725 µm and 1823 µm, respectively. Figures 10(e-h) show the recovered images at their corresponding depth locations in the onion.

To show the discontinuity during the recovering process, Fig. 11
Fig. 11 PSF estimation curve for recovering the image from Fig. 10(a) to Fig. 10(e): (a) the change of the evaluation function with the beam spot size Wz and its discontinuity (pointed by arrow) and (b) the corresponding Laplacian of the evaluation function with the beam spot size Wz and its peak value.
gives the 2D plots of the PSF estimation curves used to recover Fig. 10(e) from Fig. 10(a), displaying the evaluation function and the Laplacian of the evaluation function versus the beam spot size Wz. It can be observed that the discontinuity of the evaluation function and the peak of the Laplacian of the evaluation function occur at Wz = 4.0, meaning that the estimated beam spot size was 4.0 for this en face image.

The theoretical beam spot size Wza is compared with the beam spot size estimated in the onion at different depths in Fig. 12
Fig. 12 The theoretical beam spot size at different depths (dashed line) in the fresh onion sample is compared with estimated beam spot size (solid line). The maximum estimation error is 4.2 µm and the maximum relative error is 12.8% within a depth from 1.5 mm to 2.3 mm.
where the x axis is the depth in µm and the y axis is the beam spot size in µm. It can be observed that the estimated beam spot sizes agree with the expected theoretical values. The maximum estimation error is 4.2 µm and the maximum relative error is 12.8% for depth locations between 1.5 mm and 2.3 mm.

Lastly, preliminary experiments were performed on OCT images acquired from the human fingertip, using the system described in Fig. 7, to demonstrate whether the proposed method is applicable in vivo. For this set of experiments, the system was running at 120 frames per second. The 3D OCT images consisted of 256 A-scans and 200 B-scans, covering an area of 1.5 × 1.2 mm² on the skin surface. The data were first obtained from the focus region. Then, with the rest of the system setup fixed, the finger was moved 500 µm away from the focal point and another 3D image was acquired. Because the system spatial resolution was about 16 μm, the image quality enhancement for the regions within the dermis was not easily appreciated. For this reason, we decided to show here the ability of the proposed method to provide enhanced imaging quality for the de-focused sweat glands. Figure 13(a)
Fig. 13 The en-face images obtained from fingertip of a human volunteer, showing a plane cut through the sweat glands at ~90°. (a) Image in focal area. (b) Defocused image after moving ~0.5 mm away from the focal point. (c) Recovered image from (b).
shows one typical en-face image located in the focus region, dissected from the 3D volume data set. This en-face image represents a plane cut through the upward-oriented sweat glands at approximately 90 degrees, in which the sweat glands (the bright spots) and the skin ridges are clearly seen. Figure 13(b) shows one defocused en-face image at approximately the same location as in Fig. 13(a). The proposed method was then used to de-blur the defocused image. The result is given in Fig. 13(c), where it can be seen that the recovered image has qualitatively the same quality as the focused image. Quantitatively, the estimated entropies for the three images were 6.9699 (Fig. 13(a)), 7.0128 (Fig. 13(b)), and 6.9356 (Fig. 13(c)), respectively. Thus, these preliminary results demonstrate that the proposed approach is efficient in de-blurring in vivo OCT images. Further refinement of the current model for in vivo imaging applications is still in progress.

5. Conclusion

We proposed an automatic PSF estimation method for recovering defocused OCT images based on the discontinuity of information entropy. A family of beam spot sizes were used to recover the original image using Richardson-Lucy algorithm and the maxima in the Laplacian of the introduced evaluation function was utilized to find the best PSF. The performance of the proposed method was first tested on digital phantoms and then on a custom-built phantom, onion cells as well as on human finger in vivo. The quantitative performance of the method was quantized by knowing the parameters of the system and comparing the estimated parameters with their real values. It was observed that the proposed method is effective to recover defocused images in the presence of noise and extend depth of imaging range to 2.8mm. This method can be used in recovering defocused OCT images where the PSF is not known, especially when the index of refraction at different depths of the sample is unknown and not fixed. The proposed method used an approximated OCT forward model when NA of OCT system is relatively small. In future, the combination of the proposed PSF estimation method with accurate OCT forward model, such as that described in interferometric synthetic aperture microscopy [9

9. T. S. Ralston, D. L. Marks, P. S. Carney, and S. A. Boppart, “Interferometric synthetic aperture microscopy,” Nat. Phys. 3(2), 129–134 (2007). [CrossRef]

], will be explored to account for the diffraction and the coherent scattering of light by the sample, so that a better representation of the sample’s microstructure outside the focus region can be obtained. The proposed image recovery approach assumes that the PSF is linearly shift-invariant throughout an en face plane, which was achieved by confining the beam scanning to the central region of the objective lens.
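As a concrete illustration of the recovery step summarized above, the sketch below runs a plain Richardson-Lucy iteration against a family of candidate Gaussian PSFs and scores each result by image entropy, locating the entropy discontinuity with a second difference (a 1-D Laplacian) over the candidate grid. The grid of spot sizes, the iteration count, and the synthetic point-scatterer scene are assumptions made for the example, not the parameters used in the paper.

```python
import numpy as np
from numpy.fft import fft2, ifft2, ifftshift

def gaussian_psf(shape, sigma):
    """Centered, unit-sum 2-D Gaussian point-spread function."""
    y = np.arange(shape[0]) - shape[0] // 2
    x = np.arange(shape[1]) - shape[1] // 2
    g = np.exp(-(y[:, None] ** 2 + x[None, :] ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def conv(img, psf):
    """Circular convolution; ifftshift moves the PSF peak to the origin."""
    return np.real(ifft2(fft2(img) * fft2(ifftshift(psf))))

def richardson_lucy(blurred, psf, n_iter=30):
    """Plain RL iteration; the mirrored PSF equals the PSF for a symmetric Gaussian."""
    est = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        est = est * conv(blurred / (conv(est, psf) + 1e-12), psf)
    return est

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of the normalized gray-level histogram."""
    lo, hi = img.min(), img.max()
    hist, _ = np.histogram(img, bins=bins, range=(lo, hi + 1e-12))
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

# Synthetic scene: two point scatterers blurred by a sigma = 2.0 px spot.
scene = np.zeros((64, 64))
scene[20, 20] = scene[40, 44] = 1.0
blurred = conv(scene, gaussian_psf(scene.shape, 2.0))

# Deconvolve with each candidate spot size, record the entropy of the result,
# and flag the best candidate at the peak of the second difference.
sigmas = np.linspace(0.5, 4.0, 8)
entropies = [image_entropy(richardson_lucy(blurred, gaussian_psf(scene.shape, s)))
             for s in sigmas]
laplacian = np.diff(entropies, 2)
best = sigmas[np.argmax(np.abs(laplacian)) + 1]

# Recovery with the true spot size sharpens the scatterers while conserving flux.
recovered = richardson_lucy(blurred, gaussian_psf(scene.shape, 2.0))
```

Circular FFT convolution keeps the example compact; a practical implementation would pad the image to suppress wrap-around at the boundaries.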

The proposed PSF estimation method can be combined with other deconvolution algorithms, such as Wiener filtering, for PSF estimation and image recovery. Furthermore, the proposed PSF estimation and image deconvolution method is not limited to OCT and may be useful for de-blurring defocused images in other confocal optical systems.
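A one-shot Wiener filter is a natural non-iterative partner for the PSF estimate. The sketch below is a minimal illustration; the noise-to-signal ratio, spot size, and test scene are assumed values for the example, not results from the paper.

```python
import numpy as np
from numpy.fft import fft2, ifft2, ifftshift

def gaussian_psf(shape, sigma):
    """Centered, unit-sum 2-D Gaussian PSF."""
    y = np.arange(shape[0]) - shape[0] // 2
    x = np.arange(shape[1]) - shape[1] // 2
    g = np.exp(-(y[:, None] ** 2 + x[None, :] ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    """Frequency-domain Wiener filter: X = Y * conj(H) / (|H|^2 + NSR)."""
    H = fft2(ifftshift(psf))  # PSF peak moved to the origin
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(ifft2(fft2(blurred) * W))

# A point scatterer blurred by a sigma = 2.0 px spot, then deconvolved.
scene = np.zeros((64, 64))
scene[32, 32] = 1.0
blurred = np.real(ifft2(fft2(scene) * fft2(ifftshift(gaussian_psf(scene.shape, 2.0)))))
recovered = wiener_deconvolve(blurred, gaussian_psf(scene.shape, 2.0))
```

The NSR constant regularizes frequencies that the Gaussian PSF has suppressed below the noise floor; larger values trade resolution for noise robustness, so in practice it would be tuned to the measured noise level.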

Acknowledgments

This work was supported in part by research grants from the National Institutes of Health (NIH) (R01HL093140 and R01HL093140S).

References and links

1. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991). [CrossRef] [PubMed]
2. P. H. Tomlins and R. K. Wang, “Theory, developments and applications of optical coherence tomography,” J. Phys. D Appl. Phys. 38(15), 2519–2535 (2005). [CrossRef]
3. F. Lexer, C. K. Hitzenberger, W. Drexler, S. Molebny, H. Sattmann, M. Sticker, and A. F. Fercher, “Dynamic coherent focus of OCT with depth-independent transversal resolution,” J. Mod. Opt. 46, 541–553 (1999).
4. B. Qi, A. O. Himmer, L. M. Gordon, X. D. Yang, L. D. Dickensheets, and I. A. Vitkin, “Dynamic focus control in high-speed optical coherence tomography based on a micro-electromechanical mirror,” Opt. Commun. 232(1-6), 123–128 (2004). [CrossRef]
5. A. Divetia, T.-H. Hsieh, J. Zhang, Z. Chen, M. Bachman, and G.-P. Li, “Dynamically focused optical coherence tomography for endoscopic applications,” Appl. Phys. Lett. 86(10), 103902 (2005). [CrossRef]
6. T. Xie, S. Guo, Z. Chen, D. Mukai, and M. Brenner, “GRIN lens rod based probe for endoscopic spectral domain optical coherence tomography with fast dynamic focus tracking,” Opt. Express 14(8), 3238–3246 (2006). [CrossRef] [PubMed]
7. Z. H. Ding, H. W. Ren, Y. H. Zhao, J. S. Nelson, and Z. P. Chen, “High-resolution optical coherence tomography over a large depth range with an axicon lens,” Opt. Lett. 27(4), 243–245 (2002). [CrossRef] [PubMed]
8. J. Holmes and S. Hattersley, “Image blending and speckle noise reduction in multi-beam OCT,” in Optical Coherence Tomography and Coherence Domain Optical Methods in Biomedicine XIII, Proc. SPIE 7168, 71681N (2009).
9. T. S. Ralston, D. L. Marks, P. S. Carney, and S. A. Boppart, “Interferometric synthetic aperture microscopy,” Nat. Phys. 3(2), 129–134 (2007). [CrossRef]
10. L. Yu, B. Rao, J. Zhang, J. Su, Q. Wang, S. Guo, and Z. Chen, “Improved lateral resolution in optical coherence tomography by digital focusing using two-dimensional numerical diffraction method,” Opt. Express 15(12), 7634–7641 (2007). [CrossRef] [PubMed]
11. Y. Yasuno, J. I. Sugisaka, Y. Sando, Y. Nakamura, S. Makita, M. Itoh, and T. Yatagai, “Non-iterative numerical method for laterally superresolving Fourier domain optical coherence tomography,” Opt. Express 14(3), 1006–1020 (2006). [CrossRef] [PubMed]
12. M. D. Kulkarni, C. W. Thomas, and J. A. Izatt, “Image enhancement in optical coherence tomography using deconvolution,” Electron. Lett. 33(16), 1365–1367 (1997). [CrossRef]
13. R. K. Wang, “Resolution improved optical coherence-gating tomography for imaging biological tissue,” J. Mod. Opt. 46, 1905–1913 (1999).
14. T. S. Ralston, D. L. Marks, F. Kamalabadi, and S. A. Boppart, “Deconvolution methods for mitigation of transverse blurring in optical coherence tomography,” IEEE Trans. Image Process. 14(9), 1254–1264 (2005). [CrossRef] [PubMed]
15. Y. Liu, Y. Liang, G. Mu, and X. Zhu, “Deconvolution methods for image deblurring in optical coherence tomography,” J. Opt. Soc. Am. A 26(1), 72–77 (2009). [CrossRef] [PubMed]
16. J. M. Schmitt, “Restoration of optical coherence images of living tissue using the CLEAN algorithm,” J. Biomed. Opt. 3(1), 66–75 (1998). [CrossRef]
17. J. P. Rolland, P. Meemon, S. Murali, K. P. Thompson, and K. S. Lee, “Gabor-based fusion technique for optical coherence microscopy,” Opt. Express 18(4), 3632–3642 (2010). [CrossRef] [PubMed]
18. P. H. Tomlins, P. Woolliams, M. Tedaldi, A. Beaumont, and C. Hart, “Measurement of the three-dimensional point-spread function in an optical coherence tomography imaging system,” Proc. SPIE 6847, 68472Q (2008). [CrossRef]
19. P. H. Tomlins, R. A. Ferguson, C. Hart, and P. D. Woolliams, “Point-spread function phantoms for optical coherence tomography,” NPL Report OP 2 (National Physical Laboratory, 2009).
20. P. D. Woolliams, R. A. Ferguson, C. Hart, A. Grimwood, and P. H. Tomlins, “Spatially deconvolved optical coherence tomography,” Appl. Opt. 49(11), 2014–2021 (2010). [CrossRef] [PubMed]
21. P. H. Tomlins, G. N. Smith, P. D. Woolliams, J. Rasakanthan, and K. Sugden, “Femtosecond laser micro-inscription of optical coherence tomography resolution test artifacts,” Biomed. Opt. Express 2(5), 1319–1327 (2011). [CrossRef] [PubMed]
22. A. Agrawal, T. J. Pfefer, N. Gilani, and R. Drezek, “Three-dimensional characterization of optical coherence tomography point spread functions with a nanoparticle-embedded phantom,” Opt. Lett. 35(13), 2269–2271 (2010). [CrossRef] [PubMed]
23. R. K. Wang and Z. Ma, “Real-time flow imaging by removing texture pattern artifacts in spectral-domain optical Doppler tomography,” Opt. Lett. 31(20), 3001–3003 (2006). [CrossRef] [PubMed]
24. Z. Zhi, Y. Jung, Y. Jia, L. An, and R. K. Wang, “Highly sensitive imaging of renal microcirculation in vivo using ultrahigh sensitive optical microangiography,” Biomed. Opt. Express 2(5), 1059–1068 (2011). [CrossRef] [PubMed]

OCIS Codes
(100.1830) Image processing : Deconvolution
(100.3190) Image processing : Inverse problems
(100.6950) Image processing : Tomographic image processing
(110.3000) Imaging systems : Image quality assessment
(110.4850) Imaging systems : Optical transfer functions
(170.4500) Medical optics and biotechnology : Optical coherence tomography

ToC Category:
Image Processing

History
Original Manuscript: July 8, 2011
Revised Manuscript: August 14, 2011
Manuscript Accepted: August 17, 2011
Published: August 31, 2011

Virtual Issues
Vol. 6, Iss. 10 Virtual Journal for Biomedical Optics

Citation
Guozhong Liu, Siavash Yousefi, Zhongwei Zhi, and Ruikang K. Wang, "Automatic estimation of point-spread-function for deconvoluting out-of-focus optical coherence tomographic images using information entropy-based approach," Opt. Express 19, 18135-18148 (2011)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-19-19-18135

