
Multi-penalty conditional random field approach to super-resolved reconstruction of optical coherence tomography images

Ameneh Boroomand, Alexander Wong, Edward Li, Daniel S. Cho, Betty Ni, and Kostandinka Bizheva


Biomedical Optics Express, Vol. 4, Issue 10, pp. 2032-2050 (2013)
http://dx.doi.org/10.1364/BOE.4.002032



Abstract

Improving the spatial resolution of Optical Coherence Tomography (OCT) images is important for the visualization and analysis of small morphological features in biological tissue such as blood vessels, membranes, and cellular layers. In this paper, we propose a novel reconstruction approach for obtaining super-resolved OCT tomograms from multiple lower resolution images. The proposed Multi-Penalty Conditional Random Field (MPCRF) method combines four different penalty factors (spatial proximity, first and second order intensity variations, as well as a spline-based smoothness of fit) into the prior model within a Maximum A Posteriori (MAP) estimation framework. Tests carried out on retinal OCT images illustrate the effectiveness of the proposed MPCRF reconstruction approach in terms of spatial resolution enhancement, as compared to previously published super-resolved image reconstruction methods. Visual assessment of the MPCRF results demonstrates the potential of this method for better preservation of fine details and structures of the imaged sample, as well as for retaining the sharpness of biological tissue boundaries while reducing the effects of speckle noise inherent to OCT. Quantitative evaluation using imaging metrics such as the Signal-to-Noise Ratio (SNR), Contrast-to-Noise Ratio (CNR), Equivalent Number of Looks (ENL), and Edge Preservation Parameter shows significant image quality improvement with the MPCRF approach. Therefore, the proposed MPCRF reconstruction approach is an effective tool for enhancing the spatial resolution of OCT images without the need for significant imaging hardware modifications.

© 2013 OSA

1. Introduction

Over the past two decades, significant efforts have been directed toward optimizing the design of OCT systems in order to improve the spatial resolution of OCT technology [10]. The development of novel light sources with large spectral bandwidths, as well as different methods for spectral shaping, have improved the OCT axial resolution to the range from sub-micron to a few microns [1, 11]. Furthermore, the increased spectral bandwidth of OCT light sources has led to a reduction of speckle noise in OCT tomograms [1]. Significant improvement of the OCT lateral resolution in biological tissue was achieved by combining OCT with adaptive optics technology [12]. The development of high-speed OCT technology based on fast tunable lasers or CCD/CMOS cameras has significantly reduced motion artifacts, which cause blurring and ultimately degradation of the imaging resolution [13]. Although the hardware approach to improving the OCT spatial resolution has the benefit of instantaneous results, it also carries the drawback of greater cost and complexity of the novel technology. Due to such fundamental hardware limitations, a number of different image enhancement techniques have been developed for improving the spatial resolution and quality of OCT images [14]. The majority of these methods focus on image enhancement based on a single data acquisition, and include: (i) de-convolution methods [15, 16] that aim to reduce the effect of the OCT PSF on the spatial resolution of the acquired OCT image; (ii) different sampling methods in k-space for reconstructing OCT images with higher spatial resolution when the number of CCD camera pixels is not sufficient to generate high enough spatial resolution [17]; (iii) despeckling methods to compensate for the effect of speckle noise [5–7, 18, 19]; and (iv) motion reduction techniques to overcome the effect of sample motion on the spatial resolution of the acquired OCT images [8, 9, 20].

Different approaches have been proposed for the purpose of general SR reconstruction, and they can be categorized as follows: interpolation-based approaches [33], frequency-domain approaches [34], regularization-based approaches [35], sparsity-based super-resolution methods [36], wavelet-based super-resolution methods [37], and learning-based super-resolution approaches [38].

To the best of our knowledge, the only reported study on SR OCT imagery has been a Kernel Based Interpolation (KBI) approach to reconstruct a SR OCT image from a set of aligned LR OCT images [39]. In the KBI method, a multi-frame motion estimation algorithm [40] was first used to estimate the sub-pixel shifts between the summed voxel projections, and then a fast zero-order classic kernel regression-based approach [41] was applied to reconstruct the final SR OCT image. To evaluate the effectiveness of this SR image reconstruction method in enhancing the resolution of OCT images, the reconstructed SR OCT image was compared to a gold-standard OCT image acquired with a denser sampling rate [39]. In this paper, we introduce a Multi-Penalty Conditional Random Field (MPCRF) reconstruction approach for the purpose of SR OCT imaging. The MPCRF approach combines four different penalty factors (spatial proximity, first and second order intensity variations, as well as a spline-based smoothness of fit) into the prior model within a Maximum A Posteriori (MAP) estimation framework. Note that the combination of the spatial proximity and first order intensity variation penalty terms acts similarly to a bilateral filter [42]. It has been shown that the combination of a bilateral filter and the Total Variation (TV) method with the L1 norm can serve as a robust regularizer that improves the efficacy of SR image reconstruction methods [35]. The proposed MPCRF reconstruction approach provides strong performance compared to existing approaches in terms of spatial resolution improvement, and allows for the reconstruction of SR OCT images that preserve the fine structures of the imaged object while suppressing the speckle noise inherent to OCT images. We believe that the proposed MPCRF method is useful for spatial resolution enhancement of OCT images from all OCT modalities, such as Time Domain OCT (TD-OCT), Spectral Domain OCT (SD-OCT), and Swept Source OCT (SS-OCT).

2. Methodology

2.1. Problem formulation

Generating a SR image from a set of LR OCT data acquisitions can be viewed as a general inverse problem, where the solution to the problem is an image with higher spatial resolution [27]. Let S be a set of sites on a discrete lattice 𝔏 defining an OCT image, with s ∈ S denoting a site in 𝔏. Let Z = {Z_s | s ∈ S}, U = {U_s | s ∈ S}, and 𝔑 = {𝔑_s | s ∈ S} be random fields on S representing the acquired LR OCT image (observed variable), the SR OCT image (latent variable), and the noise, respectively. Let z^k = {z_s^k | s ∈ S} and η^k = {η_s^k | s ∈ S} be the kth corresponding realizations of Z and 𝔑, respectively (where 1 ≤ k ≤ n), and let u = {u_s | s ∈ S} be a corresponding realization of U. For the case of SR OCT reconstruction, where we wish to account for the multiplicative nature of speckle noise, we formulate the forward model for the kth data acquisition as
log(z^k) = log(H_k(u)) + log(η^k),
(1)
where H_k(·) denotes the observation function for the kth data acquisition, which accounts for imaging factors such as decimation (the effect of the limited number of CCD camera pixels), blurring (the effect of the PSF of the OCT system), and motion (due to the motion of the sample or the scanning beam during the OCT acquisition process).

For a set of n data acquisitions, we arrive at the following system of equations:
log(z^1) = log(H_1(u)) + log(η^1)
log(z^2) = log(H_2(u)) + log(η^2)
⋮
log(z^n) = log(H_n(u)) + log(η^n)
(2)
Therefore, given Eq. (2), the goal of SR OCT reconstruction is to estimate the SR OCT image u given a set of n LR acquisitions z^1, z^2, ..., z^n, which results in a data fusion problem.
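As an illustration only, the following Python sketch shows one plausible form of the observation function H_k and of the resulting stack of LR acquisitions. The Gaussian PSF stand-in, the cubic-interpolation shift, and the decimation factor are assumptions of this sketch, not the paper's exact operators.

```python
# Sketch of one possible observation function H_k (Eq. (1)): motion, PSF blurring,
# and decimation applied to a high-resolution image. Kernel and factors are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter, shift


def observation_model(u, dy, dx, psf_sigma=1.0, decimation=4):
    """Apply motion (sub-pixel shift), blur (Gaussian PSF stand-in), and decimation."""
    moved = shift(u, (dy, dx), order=3, mode="nearest")   # sample/beam motion
    blurred = gaussian_filter(moved, sigma=psf_sigma)     # PSF blurring
    return blurred[::decimation, ::decimation]            # limited CCD pixels


# Example: simulate n LR acquisitions from a known HR image, as in Eq. (2).
rng = np.random.default_rng(0)
u = rng.random((512, 512))
lr_frames = [observation_model(u, dy, dx)
             for dy, dx in rng.uniform(-4, 4, size=(5, 2))]
```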

2.2. Proposed MPCRF reconstruction method

For estimating the final SR OCT image u, we solve the inverse problem as a Maximum A Posteriori (MAP) optimization problem of the following form,
û = argmax_u P(u | z^1, z^2, ..., z^n),
(3)
where P(u | z^1, z^2, ..., z^n) denotes the posterior distribution. From Bayes' rule, one can reformulate the problem as
û = argmax_u P(z^1, z^2, ..., z^n | u) P(u),
(4)
where P(z^1, z^2, ..., z^n | u) is the likelihood and P(u) is the prior. Given the set of acquisitions (Eq. (2)) and assuming observational independence to reduce model complexity, one can model the likelihood P(z^1, z^2, ..., z^n | u) as
P(z^1, z^2, ..., z^n | u) = Π_k Π_i P(z_i^k | u_i) = Π_k Π_i (1 / (σ√(2π))) exp(−α (log(z_i^k) − log(H_k(u_i)))^2 / (2σ^2)),
(5)
where α is a coefficient that controls the contribution of the likelihood term, and σ denotes the standard deviation. To model the prior P(u), we employ a higher-order Conditional Random Field (CRF) modeling strategy, which can be defined as
P(u) = (1/f) Π_k Π_{i, j∈ℵ} exp(−β |u_i − u_j| w(z_i^k, z_j^k)),
(6)
where f is the partition function that normalizes P(u), β is the coefficient that controls the contribution of the prior term, ℵ denotes the neighborhood, and w(z_i^k, z_j^k) denotes a penalty function dependent on z^k. Given the prior model in Eq. (6), the proposed MPCRF method aims to solve the MAP problem in Eq. (4) by considering and combining different penalty factors.
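For readers who prefer code, the following hedged Python sketch writes out the two energy terms implied by Eqs. (4)–(6), i.e. the negative log-likelihood and the negative log-prior (up to constants). The restriction to horizontal/vertical neighbor pairs and the generic weight_fn argument are simplifications of this sketch, not the authors' implementation.

```python
# Sketch of the data-fidelity and CRF prior energies whose sum is minimized
# in the MAP problem. Neighborhood handling and the weight function are stand-ins.
import numpy as np


def data_energy(u, lr_frames, forward_models, alpha=0.85, sigma=1.0, eps=1e-6):
    """Negative log-likelihood (up to constants): sum of squared log-residuals (Eq. (5))."""
    e = 0.0
    for z_k, H_k in zip(lr_frames, forward_models):
        r = np.log(z_k + eps) - np.log(H_k(u) + eps)
        e += alpha * np.sum(r ** 2) / (2.0 * sigma ** 2)
    return e


def prior_energy(u, z, weight_fn, beta=0.95, offsets=((0, 1), (1, 0))):
    """Negative log-prior (up to the partition function): weighted |u_i - u_j|
    over neighboring pixel pairs; only horizontal/vertical neighbors here."""
    e = 0.0
    for dy, dx in offsets:
        u_j = np.roll(u, shift=(dy, dx), axis=(0, 1))
        z_j = np.roll(z, shift=(dy, dx), axis=(0, 1))
        e += beta * np.sum(np.abs(u - u_j) * weight_fn(z, z_j, dy, dx))
    return e
```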

The motivation behind the proposed MPCRF reconstruction approach is to design a set of prior penalties that enhances the effectiveness of the CRF framework in constructing a SR OCT image using a set of acquired LR OCT images from the imaged sample. Four different penalty terms are designed and enforced in the formulation of our proposed MPCRF method that penalize over: (i) the spatial proximity of neighboring pixels, (ii) first order variations of intensity in neighboring pixels, (iii) second order variations of intensity in neighboring pixels, as well as (iv) the smoothness of fit of the final estimate using a cubic spline model. In the following, we will describe and explain the effect of these penalty terms on the final estimate of SR OCT image.

2.2.1. Spatial proximity

We define the spatial proximity penalty term, wSP, as,
w_SP(i, j) = exp(−d_E(i, j) / (2σ_SP^2)),
(7)
where the spatial proximity of i and j (quantified here by the Euclidean distance d_E(·) between their locations) is penalized. We enforce this penalty term because it can be reasonably assumed that spatially distant pixels are less likely to belong to the same segment of an image. In Eq. (7), σ_SP is the control factor of w_SP; increasing its value enforces stronger spatial closeness. Using this penalty term helps maintain the homogeneity of neighboring pixels in the final estimate of the SR OCT image.

2.2.2. First order and second order variations of pixel intensity

The second penalty term penalizes over the First Order Variation (FOV) of intensity value between zi and zj. The FOV penalty term wFOV can be defined as,
w_FOV(z_i, z_j) = exp(−|z_i − z_j| / (2σ_FOV^2)),
(8)
where σFOV specifies the control factor of this penalty term. This penalty term helps preserve boundaries in the final estimate of SR OCT image.

The third penalty term penalizes over the Second Order Variation (SOV) of zi and zj. The SOV penalty term wSOV can be defined as
w_SOV(z_i, z_j) = exp(−|D(z_i) − D(z_j)| / (2σ_SOV^2)),
(9)
where D(.) represents the first-order derivative and σSOV defines the control factor of wSOV. This penalty term, in conjunction with the FOV term, progressively enhances the edges and boundaries of the final estimate of the SR OCT image.
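A minimal Python sketch of the three pairwise weights follows, assuming the exponential forms of Eqs. (7)–(9) as reconstructed above and approximating the first-order derivative D(·) with np.gradient; the neighbor-offset handling is an assumption of this sketch.

```python
# Sketch of the pairwise penalty weights of Eqs. (7)-(9) for one neighbor offset
# (dy, dx). Default control factors follow the values reported later in the paper.
import numpy as np


def pairwise_weights(z, dy, dx, sigma_sp=5.0, sigma_fov=0.03, sigma_sov=0.03):
    """Return w_SP * w_FOV * w_SOV between each pixel and its (dy, dx) neighbor."""
    d_e = np.hypot(dy, dx)                                     # Euclidean distance
    w_sp = np.exp(-d_e / (2.0 * sigma_sp ** 2))                # spatial proximity
    z_j = np.roll(z, shift=(dy, dx), axis=(0, 1))
    w_fov = np.exp(-np.abs(z - z_j) / (2.0 * sigma_fov ** 2))  # 1st-order variation
    gy, gx = np.gradient(z)                                    # first-order derivative D(.)
    gyj = np.roll(gy, shift=(dy, dx), axis=(0, 1))
    gxj = np.roll(gx, shift=(dy, dx), axis=(0, 1))
    grad_diff = np.hypot(gy - gyj, gx - gxj)
    w_sov = np.exp(-grad_diff / (2.0 * sigma_sov ** 2))        # 2nd-order variation
    return w_sp * w_fov * w_sov
```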

2.2.3. Spline-based Smoothness of Fit

The fourth penalty term enforces a smoothness of fit based on a Cubic Smoothing Spline (CSS) model, with the goal of penalizing the global smoothness of fit of the final estimate of the SR OCT image. We formulate this penalty term (denoted as C) as
C(u) = exp(−γ ‖D^2 S(u)‖),
(10)
where γ is the coefficient controlling the contribution of this penalty term, and the second-order derivative D^2(·) of the spline function S(u) is penalized. The usefulness of this penalty term lies in its capability to reduce the effect of block artifacts while still allowing for detail preservation, given the flexibility of the CSS model.
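The following sketch illustrates one way such a CSS penalty could be evaluated, assuming the spline is fit per A-scan (column) with SciPy's UnivariateSpline and that the scalar penalty is the sum of squared second derivatives; both assumptions are ours, not the paper's.

```python
# Sketch of a cubic-smoothing-spline penalty in the spirit of Eq. (10),
# applied per A-scan (column); the smoothing factor s is an assumption.
import numpy as np
from scipy.interpolate import UnivariateSpline


def spline_smoothness_energy(u, gamma=0.004, s=1.0):
    depth = np.arange(u.shape[0])
    total = 0.0
    for col in range(u.shape[1]):
        spline = UnivariateSpline(depth, u[:, col], k=3, s=s)  # cubic smoothing spline S(u)
        d2 = spline.derivative(2)(depth)                       # D^2 S(u) along the A-scan
        total += np.sum(d2 ** 2)
    return gamma * total                                       # contributes -log C(u)
```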

Incorporating all of the aforementioned penalties into the prior formulation of Eq. (6), we can now formulate the prior model used in the proposed MPCRF approach for reconstructing a SR OCT image as
P(u) = exp(−γ ‖D^2 S(u)‖) · (1/f) Π_k Π_{i, j∈ℵ} exp(−β |u_i − u_j| w_SP(i, j) w_FOV(z_i^k, z_j^k) w_SOV(z_i^k, z_j^k)).
(11)
Given the likelihood P(z^1, z^2, ..., z^n | u) (Eq. (5)) and the proposed multi-penalty prior P(u) (Eq. (11)), one can now solve the MAP problem in Eq. (4). To solve this problem effectively and find the estimate of the SR OCT image û, we employ an efficient steepest descent algorithm. Details regarding the implementation of this steepest descent algorithm can be found in [43, 44].
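A minimal steepest-descent loop of the kind referred to above might look as follows; the fixed step size, stopping rule, and the user-supplied gradient function are placeholders rather than the implementation of [43, 44].

```python
# Minimal steepest-descent sketch for the MAP problem of Eq. (4): the estimate is
# refined by moving against the gradient of the total energy (data term plus prior).
import numpy as np


def steepest_descent(u0, grad_fn, step=1e-3, iters=200, tol=1e-6):
    """u_{t+1} = u_t - step * grad_E(u_t), stopping when the update becomes tiny."""
    u = u0.copy()
    for _ in range(iters):
        g = grad_fn(u)                       # gradient of data + prior energies
        u_next = u - step * g
        if np.linalg.norm(u_next - u) < tol * (np.linalg.norm(u) + 1e-12):
            break
        u = u_next
    return u
```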

3. Experimental setup

To evaluate the effectiveness of the proposed MPCRF reconstruction method in constructing a SR OCT image from a set of LR OCT acquisitions, the method was applied to sets of OCT images acquired in-vivo from the human and the rat retina. The images were acquired with a high-speed, ultrahigh-resolution spectral domain OCT system operating in the 1060 nm spectral range that provided 3 μm axial and 5 μm lateral resolution in the rat retina and 5 μm axial and 15 μm lateral resolution in the human retina [45]. The imaging procedures were approved by the research ethics board of the University of Waterloo and were carried out in accordance with the Declaration of Helsinki and the ARVO statement for the use of animals in ophthalmic and vision research.

3.1. Simulated low axial resolution images from the original high resolution OCT data

Volumetric images of the human retina of size 512 × 512 pixels were acquired in-vivo from a region near the optic nerve head of healthy volunteers. Since the OCT imaging system provided high axial resolution images, for testing the performance of the proposed MPCRF algorithm we used 5 of these high resolution OCT images to generate a set of 16 LR OCT images of size 128 × 128 pixels. Each LR OCT image was generated from the original high resolution OCT data of size 512 × 512 pixels by applying a sub-pixel translational shift of up to 4 pixels in both the horizontal and vertical directions, to simulate sub-pixel motion of the sample or scanning beam during the acquisition process, followed by a resolution reduction by a factor of 4 with respect to the original high resolution OCT images. The horizontal and vertical sub-pixel shifts for each simulated LR OCT frame, as well as the decimation factor of 4, were taken into account by the observation function H. Figure 1(a) shows a representative high resolution cross-sectional image of the human retina (original data) that is used as the gold-standard high resolution image, while Fig. 1(b) shows the corresponding simulated LR OCT image. Out of the 16 simulated LR OCT images, 5 were randomly selected and used to test the performance of the proposed MPCRF algorithm, as well as of previously published approaches included for comparison purposes: (i) the conventional Bicubic Interpolation (BI) approach [46], which reconstructs a SR OCT image from a single LR OCT image; (ii) the recently proposed KBI approach [39], which reconstructs a SR OCT image from a set of aligned LR OCT images; and (iii) the conventional TV reconstruction approach [47]. Another set of high resolution human retinal OCT images of size 512 × 512 pixels was acquired with the same imaging protocol, and LR images were generated from this gold-standard data following the procedure described above. In this case, sub-pixel translational shifts of 2 pixels were randomly applied in both the horizontal and vertical directions, followed by a resolution reduction by a factor of 2 with respect to the original high resolution OCT images. The high resolution image and one sample generated LR image are shown in Fig. 2(a) and Fig. 2(b).

Fig. 1: (a) High resolution human retina OCT image of size (512 × 512) pixels, (b) One sample LR OCT image of size (128 × 128) pixels generated from the reference OCT image in part (a).
Fig. 2: (a) High resolution human retina OCT image of size (512 × 512) pixels, (b) One sample LR OCT image of size (226 × 226) pixels generated from the reference OCT image in part (a).

3.2. Real high axial resolution OCT data

Figure 3 shows a representative cross-sectional OCT image acquired in-vivo from the rat retina. To evaluate the performance of the different SR reconstruction methods, 3 OCT B-scans were acquired from the same location in the rat retina and SR reconstruction was carried out with a resolution enhancement factor of 2 in each dimension. A sub-pixel motion estimation method based on cross-correlation [48] was used to estimate the sub-pixel motion among the pixels of the different OCT B-scans. The observation matrix H was designed using the obtained sub-pixel motion among the pixels of these 3 OCT B-scans and by considering a decimation factor of 2 in each dimension.
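The cross-correlation-based sub-pixel registration step can be sketched with scikit-image's implementation of the algorithm of [48]; the upsampling factor (sub-pixel precision) and the placeholder B-scan data are assumptions of this sketch.

```python
# Sketch of cross-correlation-based sub-pixel motion estimation between co-located
# B-scans; upsample_factor=100 gives 1/100-pixel precision (an assumed setting).
import numpy as np
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(0)
b_scans = [rng.random((324, 1102)) for _ in range(3)]   # placeholder rat-retina B-scans

reference = b_scans[0]
estimated_shifts = []
for frame in b_scans[1:]:
    shift_yx, error, diffphase = phase_cross_correlation(
        reference, frame, upsample_factor=100)
    estimated_shifts.append(shift_yx)   # (dy, dx), used to build the observation matrix H
```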

Fig. 3: One sample rat retinal OCT image of size (324 × 1102) pixels.

4. Results and discussion

To assess the performance of the proposed MPCRF method in constructing a SR OCT image against other existing methods, the proposed MPCRF method was compared to (i) the conventional Bicubic Interpolation (BI) approach [46] as a baseline, which utilizes a single LR acquisition to construct a SR OCT image; (ii) the recently proposed KBI approach for the reconstruction of a SR OCT image using multiple LR OCT acquisitions [39]; and (iii) the conventional TV reconstruction approach for constructing a SR image using multiple LR images [47]. In the KBI method, a multi-frame motion estimation algorithm [40] was first used to estimate the sub-pixel shifts between the summed voxel projections, and then a fast zero-order classic kernel regression-based approach [41] was applied to reconstruct the final SR OCT image. In the conventional TV method [47], the SR image was estimated by solving a regularized inverse problem penalized on the first-order total variation of intensity between interacting neighboring pixels. Our proposed MPCRF method is an enhanced version of the TV approach that accounts for four different regularization terms (spatial proximity, first and second order intensity variations, as well as a spline-based smoothness of fit) in its formulation to compensate for the weaknesses of traditional TV. We believe that this improves the efficiency of the proposed MPCRF method compared to the conventional TV and KBI approaches in terms of spatial resolution enhancement.

All SR methods were applied to both the simulated LR and the real OCT data sets described above. All SR algorithms were implemented in MATLAB and were tested on an AMD Athlon II X3 3.10 GHz machine with 12 GB of RAM. For the implementation, the values of the control parameters of the BI, KBI and TV methods were set according to the values reported in the corresponding publications [39, 46, 47], chosen to give the best results. In the case of the MPCRF method, the algorithm was run several times to choose the control parameter values that ensure good results. This empirical testing led to a choice of σ_SP = 5 for controlling the spatial proximity, as well as σ_FOV = 0.03 and σ_SOV = 0.03 for the first order and second order penalty terms. The neighborhood size for all penalty terms was chosen as ℵ(i) = 11 × 11 pixels. Furthermore, empirical testing led to a choice of α = 0.85, β = 0.95, and γ = 0.004 for both the simulated and the real OCT data. To ensure a consistent comparison between results obtained with the different SR methods, the same number of LR OCT images was utilized for the KBI, the conventional TV and the proposed MPCRF methods (5 for the simulated OCT data and 3 for the real OCT data). Figures 4(c–g) and Fig. 5(b–f) respectively show the OCT image obtained by averaging 5 registered LR OCT images (simulated data) and 3 registered OCT images (real data), as well as the SR OCT images reconstructed using the BI, KBI, TV and proposed MPCRF methods.
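For convenience, the empirically chosen control parameters listed above are collected below in a single configuration dictionary; the dictionary structure and key names are ours, while the values are those reported in the text.

```python
# MPCRF control parameters as reported in the text, gathered for reproducibility.
mpcrf_params = {
    "sigma_sp": 5.0,             # spatial proximity control factor
    "sigma_fov": 0.03,           # first-order intensity variation control factor
    "sigma_sov": 0.03,           # second-order intensity variation control factor
    "neighborhood": (11, 11),    # pixels, for all pairwise penalty terms
    "alpha": 0.85,               # likelihood contribution
    "beta": 0.95,                # prior contribution
    "gamma": 0.004,              # spline smoothness contribution
    "n_lr_frames": {"simulated": 5, "real": 3},
}
```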

Fig. 4: (a) High resolution in-vivo human retinal OCT image, (b) One of the simulated LR OCT images, (c) Average of 5 registered LR simulated human retinal OCT images, (d) Reconstructed SR OCT image using the BI method from a single LR OCT image, (e–g) Samples of reconstructed SR OCT images using the KBI, TV and MPCRF methods from a set of 5 LR OCT images, randomly chosen from a set of 16 LR human retinal OCT images. Blue, red and pink boxes in (b) specify different areas around intra-retinal layers, retinal tissue and a blood vessel that are magnified for better visualization in Fig. 6(a–l).
Fig. 5: (a) High resolution in-vivo rat retinal OCT image, (b) Average of 3 registered rat retinal OCT images, (c) Reconstructed SR OCT image using the BI method from the original high resolution rat retinal OCT image, (d–f) Samples of reconstructed SR OCT images using the KBI, TV and MPCRF methods from a set of 3 LR OCT images, randomly chosen from a set of 512 high resolution rat retinal OCT images. Blue, red and pink boxes in (a) specify different areas around intra-retinal layers, retinal tissue and a blood vessel that are magnified for better visualization in Fig. 7(a–l).

4.1. Visual assessment of results

Different morphological features such as individual retinal layers and blood vessels (dark circular regions in the image) are hardly distinguishable in the simulated LR OCT image of the human retina shown in Fig. 1(b). Although the major retinal layers are better resolved in the rat retina image (Fig. 3) due to the higher axial resolution, small morphological features such as the reflections from individual retinal blood cells in capillaries in the inner rat retina appear faint and blurred. To evaluate the performance of the existing and the proposed MPCRF algorithms, three different Regions of Interest (ROIs) containing specific morphological details such as thin retinal layers and blood vessels were chosen in the LR OCT images of the human retina (Fig. 4(b)) and the rat retina (Fig. 5(a)), marked with blue, pink and red boxes. Magnified views of the selected ROIs from the human and rat retinas are presented in Fig. 6(a–r) and Fig. 7(a–r), respectively. Direct visual comparison of the results obtained with the different SR algorithms demonstrates the superior performance of the proposed MPCRF method in terms of better visualization of the different retinal layers and retinal blood vessels in both the human and rat retina tomograms. The proposed MPCRF method results in sharpening of the boundary between the blood vessels and the surrounding tissue (compare images 6(m) and 6(r), and images 7(m) and 7(r)), as well as of the boundaries between neighboring retinal layers (compare images 6(a) and 6(f), and images 7(a) and 7(f)). Finally, the proposed MPCRF method enhances the visualization of fine morphological details in retinal tissue while suppressing speckle noise, as shown in Fig. 6(l) and Fig. 7(l). Visual assessment and comparison of the results obtained with the TV and the MPCRF methods (Fig. 6(e, k, q) and Fig. 6(f, l, r), or Fig. 7(e, k, q) and Fig. 7(f, l, r)) confirm that enforcing the designed adaptive multi-penalty terms in the proposed MPCRF method is effective in retaining the fine details and structures of the original imaged sample, improving edges and boundaries, and smoothing homogeneous regions such as the tissue surrounding the blood vessels.

Fig. 6: Magnified regions of the LR OCT image of Fig. 4(b) and of the SR OCT images of Fig. 4(c–g): blue marks part of the intra-retinal layers, red marks some morphological details of the retina, and pink marks a region surrounding a blood vessel in Fig. 4(b). The first, second and third rows respectively show the regions marked with the blue, red and pink boxes.
Fig. 7: Magnified regions of the high resolution OCT image of Fig. 5(a) and of the SR OCT images of Fig. 5(b–f): blue marks part of the intra-retinal layers, red marks some morphological details of the retina, and pink marks a region surrounding a blood vessel in Fig. 5(a). The first, second and third rows respectively show the regions marked with the blue, red and pink boxes.

4.2. Quantitative assessment of results

In the notation for SNR, μ_r and σ_r^2 represent the mean and the variance of the rth local homogeneous ROI, shown by the red boxes in Fig. 8(a–b), and R denotes the total number of ROIs used to obtain the averaged SNR. In the formulation of CNR, μ_r1 represents the mean value of red box number 1, used as the reference region, and μ_r2 represents the mean values of the other red boxes (numbers 2–6) that contain other regions of the OCT image, as shown in Fig. 8(a–b); σ_r1 and σ_r2 denote the corresponding variances of those ROIs. The ENL metric measures the smoothness of an image in specified homogeneous areas (red boxes in Fig. 8(a–b)), where a greater value indicates a smoother area. In the formulation of ENL, μ_h and σ_h^2 denote the mean and the variance of the hth homogeneous ROI. Finally, the quantitative metric η is an edge preservation measure that assesses the edge deterioration in the final reconstructed SR OCT image. In Eq. (15), ∇^2V and ∇^2Ĝ represent the Laplacian operator applied to the original OCT image of Fig. 4(b) or Fig. 5(a) and to the SR reconstructed OCT images of Fig. 4(d–g) or Fig. 5(c–f), respectively, while the corresponding overbarred quantities denote the mean values over a 3 × 3 neighborhood around each pixel location of ∇^2V and ∇^2Ĝ. All quantitative metrics were calculated for each of the ROIs marked by the red boxes in Fig. 8(a–b), and the results were averaged to obtain the final metric values. Tables 1 and 2 report the averaged values of these quantitative metrics for all tested methods, for the simulated and the real OCT data respectively. Experimental results with both simulated and real OCT data show that the proposed MPCRF approach achieves state-of-the-art performance compared to the other tested methods in terms of SNR, CNR, the edge preservation measure, and the equivalent number of looks (ENL). Applying the proposed MPCRF method to both the simulated and the real OCT images significantly improves the SNR of the reconstructed SR OCT image, by over 13 dB for the simulated OCT data and by about 6 dB for the real OCT data set. The SNR values in Table 1 also show that all tested multi-image SR reconstruction methods (KBI, TV and MPCRF) generally outperform SR reconstruction from a single image (BI) in suppressing speckle noise. This quantitative result is in agreement with the visual results of Fig. 6(a–r) and Fig. 7(a–r), which show smoother images for the KBI, TV and MPCRF methods compared to the BI method. Applying the MPCRF method improved the CNR value by over 4 dB for the simulated OCT data and by more than 1 dB for the real OCT data, resulting in better visualization of the different retinal layers in the reconstructed SR OCT images of Fig. 6(f) and Fig. 7(f), as well as better discrimination of the blood vessels from the surrounding tissue, as visible in Fig. 6(r) and Fig. 7(r). Furthermore, the proposed MPCRF method considerably improved the ENL value compared to the other tested methods, which resulted in smoother images of homogeneous areas of the sample, such as the nerve fiber layer tissue surrounding the blood vessels in the human retina tomogram. For the edge preservation metric, all tested multi-image SR reconstruction methods (KBI, TV and MPCRF) were compared to the image reconstructed with BI as a reference.
The obtained values demonstrate the ability of the proposed MPCRF method to better preserve the edges and boundaries between different tissue types, such as the ones that constitute the individual retinal layers in Fig. 6(f, r) and Fig. 7(f, r).
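Since the paper's Eqs. (12)–(15) are not reproduced in this excerpt, the following sketch uses common definitions of SNR, CNR, ENL and the edge-preservation measure η as stand-ins; they should not be read as the authors' exact formulas.

```python
# Common forms of the reported quality metrics, evaluated over lists of ROI arrays.
import numpy as np
from scipy.ndimage import laplace, uniform_filter


def snr_db(rois):
    """Average 10*log10(mu^2 / sigma^2) over homogeneous ROIs."""
    return np.mean([10 * np.log10(r.mean() ** 2 / r.var()) for r in rois])


def cnr_db(reference_roi, rois):
    """Contrast of each ROI against the reference region (red box 1), averaged."""
    m1, v1 = reference_roi.mean(), reference_roi.var()
    vals = [10 * np.log10(abs(m1 - r.mean()) / np.sqrt(v1 + r.var())) for r in rois]
    return np.mean(vals)


def enl(rois):
    """Equivalent number of looks: mu^2 / sigma^2, averaged over homogeneous ROIs."""
    return np.mean([r.mean() ** 2 / r.var() for r in rois])


def edge_preservation(original, reconstructed):
    """Correlation of Laplacian responses (one common form of the eta metric)."""
    lv, lg = laplace(original), laplace(reconstructed)
    lv_c = lv - uniform_filter(lv, size=3)          # subtract 3x3 local means
    lg_c = lg - uniform_filter(lg, size=3)
    return np.sum(lv_c * lg_c) / np.sqrt(np.sum(lv_c ** 2) * np.sum(lg_c ** 2))
```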

Fig. 8: (a) Simulated LR human retina OCT image, (b) High resolution rat retina OCT image. Red boxes mark homogeneous regions of the image used for obtaining the SNR, ENL and CNR metrics. The CNR in different areas is calculated by comparing the region inside red box number 1, used as the reference area, with the regions marked by red boxes 2–6.

Table 1: Image quality metrics evaluated for the SR image generated from a set of 5 LR simulated human retinal OCT acquisitions with the different methods: BI, KBI, TV, MPCRF. In the case of the BI method, one LR simulated human retinal OCT image was used to generate the SR image. Values are relative to the original OCT image.


Table 2: Image quality metrics evaluated for the SR image generated from a set of 3 LR rat retinal OCT acquisitions at the same location with the different methods: BI, KBI, TV, MPCRF. In the case of the BI method, one LR rat retinal image was used to generate the SR image. Values are relative to the original OCT image.


4.3. Constructing SR images from LR sequence with unknown motion

Fig. 9: (a) High resolution human retina OCT image, (b) A sample LR OCT image, (c) SR OCT image generated using the BI method from a single LR OCT image, (d–f) SR OCT images generated using the KBI, TV and MPCRF methods from a set of 4 LR OCT images with undetermined spatial shifts among them.

4.4. Spatial resolution improvement

A key aspect of increasing the spatial imaging resolution is the ability to improve the discrimination of adjacent structures in the imaged object. In order to demonstrate the spatial resolution improvement in the reconstructed SR OCT images for all tested methods (BI, KBI, TV, MPCRF), the axial and lateral Auto-Correlation Functions (ACF) R were calculated as,
R_XX = E[X_{n+m} X_n],
(16)
where E denotes the expected value. The axial or lateral ACF, R, was analyzed for several specified axial and lateral lines of OCT data, shown in red in Fig. 10(a–b). The location of each line is chosen such that it encompasses a fine observable detail at its center (marked by a red circle) and is surrounded by homogeneous regions of OCT data. This assessment was performed to investigate the ability of each tested method to enhance either the axial or the lateral resolution of the reconstructed SR OCT image. The rationale is that a higher spatial resolution, in either the axial or the lateral direction, should result in an autocorrelation function with a higher peak value and a narrower full width at half maximum (FWHM), approaching a delta function in the ideal case. In Eq. (16), X represents a chosen axial or lateral line of OCT data of length l, with −l/2 < n < l/2, and E(·) denotes the expected value operator. Figure 11 and Fig. 12 show the axial and lateral ACF curves and their normalized versions (obtained by dividing the values of each curve by its peak value) for the SR OCT images of Fig. 4(d–g). The same graphs are plotted in Fig. 13 and Fig. 14 for the real rat retina OCT images (Fig. 5(c–f)). As these plots illustrate, the proposed MPCRF method provides the autocorrelation function with the highest central peak value and the narrowest FWHM among all tested methods. This demonstrates the ability of the MPCRF method to increase the spatial resolution of the reconstructed OCT image.
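A short sketch of this analysis, assuming a biased sample estimate of the ACF along one image line and a linearly interpolated FWHM estimator (both our choices), is given below.

```python
# Sketch of the axial/lateral autocorrelation analysis of Eq. (16): sample ACF of a
# 1-D line of OCT data and its full width at half maximum, in samples.
import numpy as np


def autocorrelation(line):
    """R_XX(m) = E[X_{n+m} X_n], estimated over the available lags of one line."""
    x = np.asarray(line, dtype=float)
    r = np.correlate(x, x, mode="full") / x.size
    return r[r.size // 2:]            # keep non-negative lags only


def fwhm(acf):
    """FWHM of the one-sided ACF, via linear interpolation around half maximum."""
    half = acf[0] / 2.0
    below = np.where(acf < half)[0]
    if below.size == 0:
        return float(len(acf))
    i = below[0]
    frac = (acf[i - 1] - half) / (acf[i - 1] - acf[i])
    return 2.0 * (i - 1 + frac)       # two-sided width in samples
```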

Fig. 10: (a) SR OCT image generated using the MPCRF method from a set of 5 LR simulated human retina OCT images, (b) SR OCT image generated using the MPCRF method from a set of 3 LR rat retina OCT images. The axial and lateral red lines encompass fine details of the represented OCT images (red circles) and are used to calculate the axial and lateral autocorrelation functions.
Fig. 11: Axial ACF for SR OCT images of Fig. 4(d–g) using tested methods (BI, KBI, TV, MPCRF). Left column: ACF of two specified axial lines shown in Fig. 10(a). Right column: Normalized axial ACF.
Fig. 12: Lateral ACF for the SR OCT images of Fig. 4(d–g) using tested methods (BI, KBI, TV, MPCRF). Left column: lateral ACF of two specified lateral lines shown in Fig. 10(a). Right column: Normalized lateral ACF.
Fig. 13: Axial ACF for the SR OCT images of Fig. 5(c–f) using tested methods (BI, KBI, TV, MPCRF). Left column: axial ACF of two specified axial lines shown in Fig. 10(b). Right column: Normalized axial ACF.
Fig. 14: Lateral ACF for the SR OCT images of Fig. 5(c–f) using tested methods (BI, KBI, TV, MPCRF). Left column: lateral ACF of two specified lateral lines shown in Fig. 10(b). Right column: Normalized lateral ACF.

5. Conclusion

In this study, we introduced a novel multi-penalty conditional random field method for reconstructing super-resolved OCT imagery from a set of acquired lower resolution OCT images of biological tissue. The method utilizes the different types of information contained in each individual LR OCT image to construct an image with higher spatial resolution. The proposed MPCRF method formulates the SR OCT reconstruction problem as a MAP estimation problem, and enforces multiple penalty terms (spatial proximity, first and second order intensity variations, as well as a spline-based smoothness of fit) in the formulation of the prior model to better retain the finer morphological details and structures of an imaged sample. Through quantitative analysis, it was shown that the proposed MPCRF method achieves the highest SNR, CNR, ENL and η values compared to the other tested methods, for both simulated and real OCT data. Furthermore, visual assessment of the results for all tested methods clearly demonstrates the ability of the MPCRF method to preserve and enhance morphological structures in the imaged object, such as the boundaries of different retinal layers, as well as the boundaries between blood vessels and the surrounding tissue in the test OCT images of the human and rat retina. The proposed MPCRF method does not require significant hardware modifications, and can be used as an intermediate image processing step to improve the performance of subsequent post-processing tasks such as blood vessel segmentation and lesion detection.

Acknowledgments

This research was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC), the Ontario Research Fund (ORF) and the Ontario Ministry of Economic Development and Innovation.

References and links

1. W. Drexler and J. G. Fujimoto, Optical Coherence Tomography: Technology and Applications (Springer, 2008).
2. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, and C. A. Puliafito, "Optical coherence tomography," Science 254, 1178–1181 (1991).
3. A. C. Akcay, J. P. Rolland, and J. M. Eichenholz, "Spectral shaping to improve the point spread function in optical coherence tomography," Opt. Lett. 28, 1921–1923 (2003).
4. J. Schmitt, S. Xiang, and K. Yung, "Speckle in optical coherence tomography," J. Biomed. Opt. 4, 95–105 (1999).
5. M. Szkulmowski, I. Gorczynska, D. Szlag, M. Sylwestrzak, A. Kowalczyk, and M. Wojtkowski, "Efficient reduction of speckle noise in optical coherence tomography," Opt. Express 20, 1337–1359 (2012).
6. D. Yin, Y. Gu, and P. Xue, "Speckle-constrained variational methods for image restoration in optical coherence tomography," J. Opt. Soc. Am. A 30, 878–885 (2013).
7. L. Fang, S. Li, Q. Nie, J. A. Izatt, C. A. Toth, and S. Farsiu, "Sparsity based denoising of spectral domain optical coherence tomography images," Biomed. Opt. Express 3, 927–942 (2012).
8. R. Zawadzki, A. Fuller, S. Choi, D. Wiley, B. Hamann, and J. Werner, "Correction of motion artifacts and scanning beam distortions in 3D ophthalmic optical coherence tomography imaging," Proc. SPIE 6426, 642607 (2007).
9. D. Sacchet, M. Brzezinski, J. Moreau, P. Georges, and A. Dubois, "Motion artifact suppression in full-field optical coherence tomography," Appl. Opt. 49, 1480–1488 (2010).
10. B. Bouma and G. Tearney, Handbook of Optical Coherence Tomography (Marcel Dekker, New York, 2002).
11. J. Gong, B. Liu, Y. Kim, Y. Liu, X. Li, and V. Backman, "Optimal spectral reshaping for resolution improvement in optical coherence tomography," Opt. Express 14, 5909–5915 (2006).
12. D. Merino, C. Dainty, A. Bradu, and A. Podoleanu, "Adaptive optics enhanced simultaneous en-face optical coherence tomography and scanning laser ophthalmoscopy," Opt. Express 14, 3345–3353 (2006).
13. M. Brezinski, Optical Coherence Tomography: Principles and Applications (Academic Press, 2006).
14. A. Fercher, W. Drexler, C. Hitzenberger, and T. Lasser, "Optical coherence tomography - principles and applications," Rep. Prog. Phys. 66, 239 (2003).
15. Y. Liu, Y. Liang, G. Mu, and X. Zhu, "Deconvolution methods for image deblurring in optical coherence tomography," J. Opt. Soc. Am. A 26, 72–77 (2009).
16. P. D. Woolliams, R. A. Ferguson, C. Hart, A. Grimwood, and P. H. Tomlins, "Spatially deconvolved optical coherence tomography," Appl. Opt. 49, 2014–2021 (2010).
17. C. Liu, A. Wong, K. Bizheva, P. Fieguth, and H. Bie, "Homotopic, non-local sparse reconstruction of optical coherence tomography imagery," Opt. Express 20, 10200–10211 (2012).
18. M. Bashkansky and J. Reintjes, "Statistics and reduction of speckle in optical coherence tomography," Opt. Lett. 25, 545–547 (2000).
19. F. Luan and Y. Wu, "Application of RPCA in optical coherence tomography for speckle noise reduction," Laser Phys. Lett. 10, 035603 (2013).
20. M. F. Kraus, B. Potsaid, M. A. Mayer, R. Bock, B. Baumann, J. J. Liu, J. Hornegger, and J. G. Fujimoto, "Motion correction in optical coherence tomography volumes on a per A-scan basis using orthogonal scan patterns," Biomed. Opt. Express 3, 1182–1199 (2012).
21. B. Sander, M. Larsen, L. Thrane, J. Hougaard, and T. Jørgensen, "Enhanced optical coherence tomography imaging by multiple scan averaging," Br. J. Ophthalmol. 89, 207–212 (2005).
22. M. Szkulmowski and M. Wojtkowski, "Averaging techniques for OCT imaging," Opt. Express 21, 9757–9773 (2013).
23. A. Giani, M. Pellegrini, A. Invernizzi, M. Cigada, and G. Staurenghi, "Aligning scan locations from consecutive spectral-domain optical coherence tomography examinations: a comparison among different strategies," IOVS 53, 7637–7643 (2012).
24. J. J. Liu, I. Grulkowski, M. F. Kraus, B. Potsaid, C. D. Lu, B. Baumann, J. S. Duker, J. Hornegger, and J. G. Fujimoto, "In vivo imaging of the rodent eye with swept source/Fourier domain OCT," Biomed. Opt. Express 4, 351–363 (2013).
25. T. Jørgensen, J. Thomadsen, U. Christensen, W. Soliman, and B. Sander, "Enhancing the signal-to-noise ratio in ophthalmic optical coherence tomography by image registration - method and clinical examples," J. Biomed. Opt. 12, 041208 (2007).
26. S. Borman and R. L. Stevenson, "Super-resolution from image sequences - a review," in Proceedings of the 1998 Midwest Symposium on Circuits and Systems (IEEE, 1998), pp. 374–378.
27. S. Park, M. Park, and M. Kang, "Super-resolution image reconstruction: a technical overview," IEEE Signal Process. Mag. 20, 21–36 (2003).
28. S. Farsiu, D. Robinson, M. Elad, and P. Milanfar, "Advances and challenges in super-resolution," Int. J. Imaging Syst. Technol. 14, 47–57 (2004).
29. P. Milanfar, ed., Super-Resolution Imaging (CRC Press, 2010).
30. Z. Hu, X. Wu, Y. Ouyang, Y. Ouyang, and S. R. Sadda, "Semiautomated segmentation of the choroid in spectral-domain optical coherence tomography volume scans," IOVS 54, 1722–1729 (2013).
31. D. Williams, Y. Zheng, F. Bao, and A. Elsheikh, "Automatic segmentation of anterior segment optical coherence tomography images," J. Biomed. Opt. 18, 056003 (2013).
32. A. Yazdanpanah, G. Hamarneh, B. Smith, and M. Sarunic, "Intra-retinal layer segmentation in optical coherence tomography using an active contour approach," MICCAI 5762, 649–656 (2009).
33. D. Rajan and S. Chaudhuri, "Generalized interpolation and its application in super-resolution imaging," Image Vis. Comput. 19, 957–969 (2001).
34. R. Tsai and T. S. Huang, "Multiframe image restoration and registration," Adv. Comput. Vis. Image Process. 1, 317–339 (1984).
35. S. Farsiu, M. D. Robinson, M. Elad, and P. Milanfar, "Fast and robust multiframe super resolution," IEEE Trans. Image Process. 13, 1327–1344 (2004).
36. J. Yang, J. Wright, T. S. Huang, and Y. Ma, "Image super-resolution via sparse representation," IEEE Trans. Image Process. 19, 2861–2873 (2010).
37. M. D. Robinson, C. A. Toth, J. Y. Lo, and S. Farsiu, "Efficient Fourier-wavelet super-resolution," IEEE Trans. Image Process. 19, 2669–2681 (2010).
38. W. T. Freeman, T. R. Jones, and E. C. Pasztor, "Example-based super-resolution," IEEE Comput. Graph. Appl. 22, 56–65 (2002).
39. M. Robinson, S. Chiu, J. Lo, C. Toth, J. Izatt, and S. Farsiu, "New applications of super-resolution in medical imaging," in Super-Resolution Imaging, P. Milanfar, ed. (CRC Press, 2010), chap. 13, pp. 384–412.
40. S. Farsiu, M. Elad, and P. Milanfar, "Constrained, globally optimal, multi-frame motion estimation," in Proceedings of the 2005 IEEE/SP 13th Workshop on Statistical Signal Processing (IEEE, 2005), pp. 1396–1401.
41. H. Takeda, S. Farsiu, and P. Milanfar, "Kernel regression for image processing and reconstruction," IEEE Trans. Image Process. 16, 349–366 (2007).
42. C. Tomasi and R. Manduchi, "Bilateral filtering for gray and color images," in Proceedings of the Sixth International Conference on Computer Vision (IEEE, 1998), pp. 839–846.
43. P. Rosenbloom, "The method of steepest descent," in Proc. of Symposia in Applied Mathematics, vol. 6 (1956), pp. 127–176.
44. X. Wang, "Method of steepest descent and its applications," IEEE Microw. Wireless Compon. Lett. 12, 24–26 (2008).
45. P. Puvanathasan, P. Forbes, Z. Ren, D. Malchow, S. Boyd, and K. Bizheva, "High-speed, high-resolution Fourier-domain optical coherence tomography system for retinal imaging in the 1060 nm wavelength region," Opt. Lett. 33, 2479–2481 (2008).
46. R. Keys, "Cubic convolution interpolation for digital image processing," IEEE Trans. Acoust. Speech Signal Process. 29, 1153–1160 (1981).
47. S. Osher, M. Burger, D. Goldfarb, J. Xu, and W. Yin, "An iterative regularization method for total variation-based image restoration," Multiscale Model. Simul. 4, 460–489 (2005).
48. M. Guizar-Sicairos, S. T. Thurman, and J. R. Fienup, "Efficient subpixel image registration algorithms," Opt. Lett. 33, 156–158 (2008).
49. P. Puvanathasan and K. Bizheva, "Interval type-II fuzzy anisotropic diffusion algorithm for speckle noise reduction in optical coherence tomography images," Opt. Express 17, 733–746 (2009).
50. A. Wong, A. Mishra, K. Bizheva, and D. A. Clausi, "General Bayesian estimation for speckle noise reduction in optical coherence tomography retinal imagery," Opt. Express 18, 8338–8352 (2010).
51. K. H. Cheng, E. Y. Lam, B. A. Standish, and V. X. Yang, "Speckle reduction of endovascular optical coherence tomography using a generalized divergence measure," Opt. Lett. 37, 2871–2873 (2012).

OCIS Codes
(100.0100) Image processing : Image processing
(110.4500) Imaging systems : Optical coherence tomography
(330.6130) Vision, color, and visual optics : Spatial resolution

ToC Category:
Image Processing

History
Original Manuscript: June 3, 2013
Revised Manuscript: August 6, 2013
Manuscript Accepted: August 7, 2013
Published: September 6, 2013

Citation
Ameneh Boroomand, Alexander Wong, Edward Li, Daniel S. Cho, Betty Ni, and Kostandinka Bizheva, "Multi-penalty conditional random field approach to super-resolved reconstruction of optical coherence tomography images," Biomed. Opt. Express 4, 2032-2050 (2013)
http://www.opticsinfobase.org/boe/abstract.cfm?URI=boe-4-10-2032


