
Denoising and 4D visualization of OCT images

Madhusudhana Gargesha, Michael W. Jenkins, Andrew M. Rollins, and David L. Wilson


Optics Express, Vol. 16, Issue 16, pp. 12313-12333 (2008)
http://dx.doi.org/10.1364/OE.16.012313


Abstract

We are using Optical Coherence Tomography (OCT) to image structure and function of the developing embryonic heart in avian models. Fast OCT imaging produces very large 3D (2D + time) and 4D (3D volumes + time) data sets, which greatly challenge one's ability to visualize results. Noise in OCT images poses additional challenges. We created an algorithm with a quick, data set specific optimization for reduction of both shot and speckle noise and applied it to 3D visualization and image segmentation in OCT. When compared to baseline algorithms (median, Wiener, orthogonal wavelet, basic non-orthogonal wavelet), a panel of experts judged the new algorithm to give much improved volume renderings with respect to both noise reduction and 3D visualization. Specifically, the algorithm provided a better visualization of the myocardial and endocardial surfaces, and of the interaction of the embryonic heart tube with surrounding tissue. Quantitative evaluation using an image quality figure of merit also indicated superiority of the new algorithm. Noise reduction aided semi-automatic 2D image segmentation, as quantitatively evaluated using a contour distance measure with respect to an expert segmented contour. In conclusion, the noise reduction algorithm should be quite useful for visualization and quantitative measurements (e.g., heart volume, stroke volume, contraction velocity) in OCT embryo images. With its semi-automatic, data set specific optimization, we believe that the algorithm can be applied to OCT images from other applications.

© 2008 Optical Society of America

1. Introduction

We are using Optical Coherence Tomography (OCT) to image structure and function of the developing embryonic heart in avian models. OCT allows one to non-invasively image living hearts with microscopic resolution, and to visually and quantitatively analyze development. Due to the diminutive size and rapid movements of the early embryonic heart, OCT imaging provides a unique ability to study anatomy and function. We believe that OCT has the requisite spatial and temporal resolution and is hence an important tool to facilitate understanding of the underlying mechanisms responsible for normal/abnormal heart development [1]. However, noise present in OCT imaging systems [2-11] limits our ability to interpret, visualize and analyze image data, which is crucial to the understanding of early cardiac development. The purpose of our study is to address this limitation by creating an algorithm for noise reduction in OCT images and evaluating its performance both visually and quantitatively through volumetric visualization and image segmentation. The novelty of our noise reduction technique lies in its ability to optimally reduce noise based on the characteristics of a particular image data set.

Due to its deleterious effects on coherent imaging systems such as ultrasound and OCT, there has been significant effort to characterize and reduce noise [2-21]. The two most common noise sources are shot noise, which is additive in nature and can be adequately described by the Additive White Gaussian Noise (AWGN) process, and speckle noise, which is multiplicative in nature and harder to eliminate due to its signal dependency. In fact, speckle carries useful information about the underlying tissue structure [11]. OCT is very similar to ultrasound, and a brief review of shot and speckle noise reduction in ultrasound is in order. Shot noise reduction is applied both during acquisition [17] and post-acquisition using simple image processing techniques [22-24] such as averaging filters, median filters and Gaussian low-pass filters. However, many of these filtering techniques tend to remove useful features from images. One of the most effective techniques for shot noise removal is the phase-preserving non-orthogonal wavelet (NW) filtering technique proposed by Kovesi [16]. As for speckle noise removal from ultrasound images, spatial domain techniques have been employed, including the one proposed by Xie et al. [19], who applied a salient boundary enhancement technique with a speckle suppression term, and Dutt and Greenleaf [13], who employed a local statistical model to quantify the extent of speckle formation and subsequently used an unsharp masking filter to suppress speckle. As for transform domain techniques, wavelet-based speckle suppression has been reported [12,14,20]. More recently, Fan et al. [25] combined pyramid decomposition of images with anisotropic diffusion filtering to reduce speckle in ultrasound images of phantoms and liver. For OCT images, shot noise has been reduced using post-acquisition image processing techniques [22-24] such as averaging filters, median filters and Gaussian filters. There are also reports on speckle reduction techniques in OCT, including physical techniques [2,4,5,7,8], those applied prior to image formation [6], and post-acquisition image processing techniques such as the hybrid median filter (HMF), Wiener filter, ELEE filter, symmetric nearest neighbor (SNN) filter, Kuwahara filter, adaptive Wiener filter, rotating kernel transformation (RKT), anisotropic diffusion filtering, and orthogonal and non-orthogonal wavelet filters [3,7,10]. Ozcan et al. [7] have compared the relative performances of the ELEE filter, two wavelet transform based filters, the HMF, SNN, a Kuwahara filter, and the adaptive Wiener filter, and have argued that post-acquisition digital image processing is advantageous because it does not require the additional acquisition of compounding angles required by the physical technique for speckle reduction. Puvanathasan and Bizheva [9] have used a fuzzy thresholding algorithm in the wavelet domain for speckle reduction in OCT images of a human finger tip, and have compared their technique with the Wiener and Lee filters.

In this paper, we create an algorithm to reduce both shot and speckle noise through digital image processing. The Kovesi NW filtering technique, originally applied to video surveillance data, can greatly reduce shot noise. However, manual optimization of its parameters can be a daunting and unsatisfying task. Hence, we will investigate methods for automatically optimizing the wavelet filter bank for OCT. We call our technique Optimized Non-orthogonal Wavelet (ONW) denoising. To reduce speckle, we use an enhanced version of the Laplacian Pyramid Nonlinear Diffusion (LPND) technique used by Fan et al. on ultrasound images of the liver and carotid artery [25]. Since speckle size depends on imaging parameters such as the characteristics of the light source, the spot size, and the sampling rate, we have investigated adaptive optimization of LPND parameters, and call the method Adaptive LPND (ALPND).

We have identified three approaches for evaluation of noise reduction. First, there are quantitative measures on individual images such as edge preservation (β) [15], the structural similarity measure (SSIM) [26], and the contrast-to-noise ratio (CNR) [27], as reported in a recent work by Fan et al. [25]. We will create a weighted sum of these measures and use this scalar image quality criterion to optimize ALPND. This measure will also be used to evaluate other noise reduction algorithms. Second, as described by Frangakis and Hegerl [28], one can evaluate the effect of noise reduction on 3D image visualization. We will investigate how noise reduction affects both isosurface (surface) rendering and direct volume rendering [29]. Gradients provide enhanced volume visualization of internal surfaces and tissue boundaries [30-32], and we are particularly interested in the role of noise reduction in improving visualization through accurate estimation of gradients in the data. Third, one can determine the effect of noise reduction on segmentation [33]. We used a simple tolerance-based seeded region growing algorithm available within the visualization package Amira [34] and a more sophisticated semi-automatic contour segmentation tool called LiveWire [35,36] to both qualitatively and quantitatively evaluate the effect of noise reduction.

The rest of the paper is organized as follows. In Section 2, we briefly describe baseline denoising algorithms such as median filtering, Wiener filtering, and orthogonal wavelet (OW) filtering, followed by our proposed ONW-ALPND denoising algorithm. In Section 3, we discuss methods for evaluating the performance of image denoising algorithms. Section 4 presents results of our proposed denoising algorithm along with quantitative/qualitative comparisons to baseline algorithms. This is followed by a discussion in Section 5.

2. Image denoising algorithms

2.1 Baseline methods: median, Wiener and orthogonal wavelet (OW) filtering

Fig. 1. Kovesi NW filtering for shot noise reduction. The image is convolved with a filter bank after Fourier transformation. The result is transformed back to the spatial domain by an inverse Fourier transformation. A noise threshold is identified in each scale and the filtered image is produced by reconstruction.

Following some ad hoc optimization, we determined a fixed set of algorithm parameters that were used in our baseline denoising schemes. For median2, we used a square 13 × 13 filter kernel. In wiener2, we again set the filter size to 13 × 13. In the case of the OW filter, implemented with the wdencmp function of MATLAB, we used the symlet-7 (sym7) wavelet with decomposition performed up to level 3. To make a fair comparison, these values were chosen to match the optimal settings for the proposed ONW and ALPND denoising schemes.
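For readers who wish to reproduce the baseline comparison, the following sketch applies the three baseline filters with the settings quoted above. It is written in Python rather than MATLAB, so scipy's median_filter and wiener and a PyWavelets sym7 decomposition stand in for median2, wiener2 and wdencmp; the universal-threshold rule used on the wavelet coefficients is an illustrative assumption, not the exact setting used in our experiments.

```python
import numpy as np
import pywt
from scipy.ndimage import median_filter
from scipy.signal import wiener

def baseline_denoise(img, kernel=13, wavelet="sym7", level=3):
    """Apply the three baseline filters with the paper's settings (13 x 13, sym7, level 3)."""
    img = np.asarray(img, dtype=float)
    med = median_filter(img, size=kernel)        # 13 x 13 median filter
    wie = wiener(img, mysize=kernel)             # 13 x 13 adaptive Wiener filter
    # Orthogonal-wavelet denoising: sym7, 3 levels, hard threshold on detail coefficients.
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # noise estimate (assumption)
    thr = sigma * np.sqrt(2 * np.log(img.size))          # universal threshold (assumption)
    den = [coeffs[0]] + [tuple(pywt.threshold(d, thr, mode="hard") for d in lvl)
                         for lvl in coeffs[1:]]
    ow = pywt.waverec2(den, wavelet)[:img.shape[0], :img.shape[1]]
    return med, wie, ow
```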

2.2 Optimized non-orthogonal wavelet (ONW) denoising

To reduce shot noise, we created the ONW algorithm. It builds upon the basic Kovesi NW filter [16], which was originally applied to synthetic and video surveillance images. Kovesi argued that denoising should not corrupt the phase information in the image, and used a non-orthogonal wavelet filter bank followed by a thresholding of the magnitudes of the wavelet coefficients, leaving the phase unchanged. In our modification, we include an image data set specific method for designing the optimal wavelet filter bank for OCT images. A parameter optimization scheme is applied once to a given data set, a set of parameters for designing an optimal filter bank is derived, and all the images in the data set are processed using this optimal filter bank.

The basic Kovesi NW filter is illustrated in Fig. 1. It uses a non-orthogonal wavelet filter bank with a non-zero correlation between any two filters in the bank [16]. Filters are created to detect features at different frequency subbands (called scales) and different orientations. Assuming the original noise to be an Additive White Gaussian Noise (AWGN) process, the amplitude of the transformed noise follows a Rayleigh distribution [16], whose probability density function (PDF) is characterized by a single parameter. Due to this property, a noise threshold can be determined for the lowest scale from the transformed coefficient values, and a simple scaling operation can be applied to derive noise thresholds for higher scales. The noise threshold at each scale is then subtracted from the corresponding filter response, and the resulting constituent images are used to reconstruct the denoised image.
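The following is a minimal sketch of the thresholding step just described, assuming the smallest-scale complex filter responses are already available. The Rayleigh statistics (median, mean and standard deviation in terms of a single parameter) follow Kovesi's formulation; the filter bank construction and the scaling of the threshold to higher scales are omitted.

```python
import numpy as np

def rayleigh_threshold(smallest_scale_response, k):
    """Estimate the noise threshold from the smallest-scale complex filter response.

    For a Rayleigh-distributed amplitude with parameter sigma_r:
    median = sigma_r*sqrt(ln 4), mean = sigma_r*sqrt(pi/2), std = sigma_r*sqrt((4-pi)/2).
    The threshold is mean + k*std; thresholds for larger scales are obtained by scaling
    this value, as described in the text.
    """
    amp = np.abs(smallest_scale_response)
    sigma_r = np.median(amp) / np.sqrt(np.log(4.0))
    return sigma_r * np.sqrt(np.pi / 2.0) + k * sigma_r * np.sqrt((4.0 - np.pi) / 2.0)

def shrink_preserving_phase(response, threshold):
    """Subtract the noise threshold from the coefficient magnitude, leaving phase untouched."""
    amp = np.abs(response)
    shrunk = np.maximum(amp - threshold, 0.0)
    scale = np.zeros_like(amp)
    np.divide(shrunk, amp, out=scale, where=amp > 0)   # avoid division by zero
    return response * scale
```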

Fig. 2. Proposed optimization scheme for shot noise reduction based on the Kovesi NW filter. The resulting filter is called the Optimized Non-orthogonal Wavelet (ONW) filter. In ONW, a user-defined ROI is matched against a co-located ROI in the smallest scale reconstruction to compute an image dissimilarity measure. The parameters that generate the wavelet filter bank are varied through empirically determined ranges to minimize this dissimilarity measure.

In our modified filter, we propose an optimization scheme for the parameters that generate the wavelet filter bank, as illustrated in the schematic flow diagram in Fig. 2. The parameters controlling the filters in the filter bank are (i) the number of scales (s), (ii) the number of orientations (o), (iii) the number of standard deviations around the noise threshold to reject as noise (k), (iv) the multiplying factor between scales (p), (v) the wavelength of the smallest scale filter (λ), (vi) the ratio of the standard deviation of the Gaussian describing the filter transfer function in the frequency domain to the filter center frequency (Rg), and (vii) the ratio of the angular interval between filter orientations to the standard deviation of the angular Gaussian function used to design the filters (Ra). We have seen by experimentation that, although parameters (i) through (iv) can be set independently of the image data, parameters (v) through (vii), namely λ, Rg and Ra, are image dependent. However, Kovesi's technique [16] does not adapt these parameters based on the noise characteristics of the images. We propose a parameter optimization step where empirically determined minimum and maximum values are used for λ, Rg and Ra, and each parameter is varied in this range [see Eq. (1) below]. For each setting, the wavelet filter response of the smallest scale filter across all orientations, denoted by hss, is computed. We note that the smallest scale filter mostly responds to noise, thereby making it a useful approximation to the "noise pattern" in the image. Next, a user-selected noisy region-of-interest (ROI) from the original image I (denoted by Ω) is matched to the co-located ROI in hss using an image distance measure function D based on local histograms [40]. Finally, the optimal parameter settings (λ′, Rg′, Ra′) are determined as those values that result in the minimum image distance between the two ROIs. Mathematically, this can be represented as:

(λ′, Rg′, Ra′) = arg min D[ hss(x, y; λ, Rg, Ra), I(x, y) ],  (x, y) ∈ Ω,
subject to λmin ≤ λ ≤ λmax,  Rg,min ≤ Rg ≤ Rg,max,  Ra,min ≤ Ra ≤ Ra,max
(1)

where λmin, λmax, Rg,min, Rg,max, Ra,min and Ra,max are the empirically determined minimum and maximum values of λ, Rg and Ra. To compute the image distance D, the two ROIs are divided into tiled rectangular blocks. A local image histogram with a predefined number of bins is computed for each tile in both ROIs. A sum-of-absolute-differences (SAD) distance metric is then computed between the histograms of the two ROIs [40]. We note that ONW is not only suited for shot noise reduction, but could also be used for speckle reduction. However, we have found that in the presence of speckle, shot noise reduction alone is not sufficient, and more sophisticated speckle reduction techniques need to be employed.
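The optimization of Eq. (1) amounts to an exhaustive grid search. In the sketch below, smallest_scale_response(img, lam, rg, ra) is a hypothetical helper that returns hss for a given parameter triple (it stands in for the wavelet filter bank, which is not shown); the tiled local-histogram SAD distance D is implemented directly.

```python
import itertools
import numpy as np

def local_hist_sad(a, b, tile=16, bins=32):
    """Sum-of-absolute-differences between local (tiled) histograms of two equal-size ROIs."""
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    dist = 0.0
    for y in range(0, a.shape[0] - tile + 1, tile):
        for x in range(0, a.shape[1] - tile + 1, tile):
            ha, _ = np.histogram(a[y:y+tile, x:x+tile], bins=bins, range=(lo, hi))
            hb, _ = np.histogram(b[y:y+tile, x:x+tile], bins=bins, range=(lo, hi))
            dist += np.abs(ha - hb).sum()
    return dist

def optimize_onw(img, roi, lams, rgs, ras, smallest_scale_response):
    """Grid search of Eq. (1): pick (lam, rg, ra) minimizing the ROI histogram distance.

    `roi` is a (row_slice, col_slice) pair selecting the user-chosen noisy region Omega;
    `smallest_scale_response` is a hypothetical helper returning h_ss for given parameters.
    """
    best, best_d = None, np.inf
    for lam, rg, ra in itertools.product(lams, rgs, ras):
        h_ss = smallest_scale_response(img, lam, rg, ra)
        d = local_hist_sad(np.abs(h_ss[roi]), img[roi])
        if d < best_d:
            best, best_d = (lam, rg, ra), d
    return best
```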

Fig. 3. Laplacian Pyramid Nonlinear Diffusion (LPND) technique for speckle reduction of Fan et al. The image is decomposed into constituent images spanning different frequency bands (referred to as layers). A nonlinear diffusion step is applied in each layer to reduce speckle and the output image is reconstructed from the speckle-reduced images in each layer.

2.3 Adaptive Laplacian pyramid nonlinear diffusion (ALPND) denoising

We developed the Adaptive Laplacian Pyramid Nonlinear Diffusion (ALPND) technique building upon the basic LPND filter of Fan et al. [25], which was originally used on ultrasound images of the liver and the carotid artery. Specifically, we use an optimization scheme for determining parameters of the nonlinear diffusion step that are specific to the speckle characteristics of a given OCT image data set. In other words, the optimization is done once for a given data set by (visually) choosing a representative image that best describes the speckle characteristics. A detailed discussion of pyramid decomposition of images can be found in the medical imaging textbook by Suetens [41]. As illustrated in the flow diagram of Fig. 3, the basic LPND technique proposed by Fan et al. [25] for speckle reduction in ultrasound images consists of a nonlinear (anisotropic) diffusion technique [42] applied to the frequency subbands of the image obtained by Laplacian pyramid decomposition. As a result of this decomposition, the high frequency speckle noise occupies the lower pyramid layers; its effect in higher layers is negligible. Parameters controlling the amount of smoothing due to anisotropic diffusion are computed separately for each decomposition layer. The smoothing itself is directional in nature and depends on the gradient: a high gradient means less smoothing, while a low gradient implies heavier smoothing. As a result, speckle is reduced without affecting image features and edges.
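As a concrete illustration of the LPND structure (not the authors' implementation), the sketch below builds a simple Laplacian pyramid by Gaussian smoothing and decimation, runs a few Perona-Malik diffusion iterations on each band-pass layer, and collapses the pyramid. The diffusion threshold td, the number of levels, and the iteration count are placeholders for the ALPND-optimized values discussed next.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def perona_malik(layer, td, n_iter=10, dt=0.2):
    """A few iterations of Perona-Malik anisotropic diffusion on one pyramid layer."""
    u = layer.astype(float).copy()
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u          # forward differences in four directions
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        g = lambda d: np.exp(-(d / td) ** 2)     # conductance: small at edges (large gradients)
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

def lpnd(img, levels=3, td=0.005):
    """Laplacian-pyramid nonlinear diffusion: diffuse each band-pass layer, then reconstruct.

    `img` is assumed to be scaled to [0, 1]; td is expressed on that intensity scale."""
    pyramid, cur = [], np.asarray(img, dtype=float)
    for _ in range(levels):
        low = gaussian_filter(cur, sigma=1.0)
        down = low[::2, ::2]                                         # decimate
        up = zoom(down, (cur.shape[0] / down.shape[0],
                         cur.shape[1] / down.shape[1]), order=1)
        pyramid.append(cur - up)                                     # band-pass (Laplacian) layer
        cur = down
    pyramid.append(cur)                                              # coarsest low-pass residual
    # Diffuse the band-pass layers; speckle lives mainly in the finest ones.
    denoised = [perona_malik(l, td) for l in pyramid[:-1]] + [pyramid[-1]]
    out = denoised[-1]
    for lap in reversed(denoised[:-1]):                              # collapse coarse to fine
        out = zoom(out, (lap.shape[0] / out.shape[0],
                         lap.shape[1] / out.shape[1]), order=1) + lap
    return out
```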

We developed a technique for optimally determining the diffusion threshold td and the filter kernel size N used for the nonlinear diffusion process, as illustrated in the flow diagram of Fig. 4. This optimization step is necessary to ensure that the speckle filtering is adaptive to the characteristics of the data set and to the imaging set up. Our technique computes a combined figure of merit μ for evaluating the visual quality of the denoised images produced by LPND. Specifically, the following image quality metrics were computed: the structural similarity (SSIM) measure, the edge preservation parameter (β), and the contrast-to-noise ratio (CNR) [25]. These three measures are robust estimators of signal preservation in an image, which makes them applicable to our filter evaluation. SSIM is a measure of overall processing quality, which compares the original and processed images based on statistics of co-located sub-regions. β is an edge preservation metric that involves computation of high-pass filtered (edge-enhanced) versions of the two images being compared using a Laplacian operator. CNR is the relationship of signal intensity differences between two regions, scaled to image noise; improving CNR increases the perception of distinct differences between two clinical areas of interest.
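One way to compute the three metrics for a filtered image against the original is sketched below, using scikit-image for SSIM. The exact normalizations of β and CNR used in [25] are not reproduced here, so the Laplacian-correlation form of β and the standard foreground/background CNR should be read as reasonable stand-ins.

```python
import numpy as np
from scipy.ndimage import laplace
from skimage.metrics import structural_similarity

def edge_preservation_beta(original, filtered):
    """Beta: correlation between Laplacian (edge-enhanced) versions of the two images."""
    do = laplace(original.astype(float)); do -= do.mean()
    df = laplace(filtered.astype(float)); df -= df.mean()
    return float((do * df).sum() / np.sqrt((do * do).sum() * (df * df).sum()))

def cnr(img, fg, bg):
    """Contrast-to-noise ratio between foreground and background ROIs.

    `fg` and `bg` are (row_slice, col_slice) pairs selected by the user."""
    f, b = img[fg].astype(float), img[bg].astype(float)
    return float(abs(f.mean() - b.mean()) / np.sqrt(f.var() + b.var()))

def figure_of_merit(original, filtered, fg, bg, w=(1/3, 1/3, 1/3)):
    """Combined figure of merit mu = w1*CNR + w2*beta + w3*SSIM [Eq. (2)]."""
    ssim = structural_similarity(original.astype(float), filtered.astype(float),
                                 data_range=float(original.max() - original.min()))
    return (w[0] * cnr(filtered, fg, bg)
            + w[1] * edge_preservation_beta(original, filtered)
            + w[2] * ssim)
```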

A combined figure of merit μ was derived from the above three measures (CNR, SSIM and β) using a weighted linear function, as shown in Eq. (2) below, where we set the weights ω1 = ω2 = ω3 = 1/3. Since each individual measure does not completely capture all aspects of image quality accurately (e.g., CNR produces a higher value with more smoothing and may erroneously rate a highly blurry image above a less blurry one), we found it advantageous to combine these parameters linearly using user-selected weights (which in our case were set equal). This let us tune out extreme effects caused by any one individual parameter.

μ = ω1·CNR + ω2·β + ω3·SSIM
(2)

A region-of-interest (foreground) ΩFG and a background region ΩBG are needed to evaluate μ and were manually selected by a user. μ is therefore a function of td, N, ΩFG, ΩBG, ω1, ω2, and ω3. However, for a fixed user choice of foreground and background regions and the weights ωi, it is sufficient to denote μ as μ(td, N). The parameter values td and N were iterated through a set of values from a predetermined range (determined empirically), and the optimal parameter settings (td’, N’) were derived by maximizing μ. Mathematically, this is written as:

(td′, N′) = arg max μ(td, N),  subject to td,min ≤ td ≤ td,max,  Nmin ≤ N ≤ Nmax
(3)
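Combining the figure of merit above with the pyramid-diffusion filter, the optimization of Eq. (3) is a small exhaustive search. In the sketch below, alpnd_filter(img, td, N) is a hypothetical wrapper around the LPND step with a gradient kernel of size N, and figure_of_merit is the function sketched above; the candidate ranges are supplied by the user (the ranges used in our experiments are not listed here, so any values passed in should be treated as illustrative).

```python
import itertools
import numpy as np

def optimize_alpnd(img, fg, bg, td_values, n_values, alpnd_filter, figure_of_merit):
    """Grid search of Eq. (3): choose (td, N) that maximizes mu(td, N) for the chosen ROIs."""
    best, best_mu = None, -np.inf
    for td, n in itertools.product(td_values, n_values):
        filtered = alpnd_filter(img, td=td, N=n)
        mu = figure_of_merit(img, filtered, fg, bg)
        if mu > best_mu:
            best, best_mu = (td, n), mu
    return best, best_mu
```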

2.4 Combined filter for shot and speckle noise reduction

We created a noise reduction method using both the ONW and ALPND algorithms in a serial fashion to reduce shot and speckle noise, respectively. We call this combined technique the ONW-ALPND denoising algorithm. To apply this method, a single representative 2D image was chosen from a data set, and ONW and ALPND parameters were optimized. Following optimization, the ONW-ALPND denoising algorithm was run on the entire (2D+time) or (3D+time) data set.
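In outline, applying the combined filter to a (2D + time) or (3D + time) data set then reduces to optimizing once on a representative image and reusing the resulting settings for every 2D slice. A minimal sketch, where onw_filter and alpnd_filter are hypothetical wrappers around the two filters described above:

```python
import numpy as np

def denoise_dataset(volumes, onw_params, alpnd_params, onw_filter, alpnd_filter):
    """Apply ONW (shot noise) then ALPND (speckle) to every 2D slice of every volume.

    onw_params = (lam, rg, ra) and alpnd_params = (td, N) are the settings obtained once
    from a representative image with the optimization routines sketched earlier.
    """
    lam, rg, ra = onw_params
    td, n = alpnd_params
    return [np.stack([alpnd_filter(onw_filter(sl, lam, rg, ra), td=td, N=n) for sl in vol])
            for vol in volumes]
```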

Fig. 4. The parameter optimization scheme used in adaptive LPND (ALPND) technique. The filter kernel size N used to compute gradients for nonlinear diffusion and the diffusion threshold td are iterated through a set of values in an empirically determined range and a quantitative figure of merit μ is evaluated at each step. The optimal values for td and N are determined by maximizing μ.

3. Methods for evaluating image quality

3.1 Evaluating 2D image quality

A quantitative method [3,25] was used for evaluating the image quality of the resultant images produced through filtering. A human expert was shown (2D + time) OCT images of the quail embryonic heart. She first picked a region on the myocardial wall as the foreground and a nearby surrounding region consisting of cardiac jelly as the background. Following this, the combined figure of merit μ introduced in Section 2.3 [Eq. (2)] was evaluated for the 2D images.

3.2 Volumetric visualization

3.3 Semi-automatic segmentation

4. Results

We performed experiments using the three data sets discussed at the beginning of Section 3. We determined optimal parameter settings for both the ONW and the ALPND algorithms using a representative image from our (2D + time) data set. We ran the optimization process using 2D images from different phases of the cardiac cycle and found little variation in the optimal values determined in each case. The optimization process was more dependent on the imaging set up and less on which image from a data set was chosen. For our experiments, we chose one image close to the beginning of diastole for both ONW and ALPND optimization. This image is shown as the input image in Fig. 4. In the case of the ONW algorithm, the optimal parameter settings were determined to be λ′ = 3, Rg′ = 0.35, and Ra′ = 0.25, as opposed to the default (suboptimal) settings for the basic Kovesi NW filter of λ = 2, Rg = 0.55, and Ra = 1.0. In the case of the ALPND algorithm, the optimal value for the square filter kernel size N was 13, and that for the diffusion threshold td was determined to be 0.0001, compared with the default (suboptimal) settings of N = 7 and td = 0.005 suggested previously by Fan et al. [25].

Fig. 5. ONW technique applied to OCT images of quail embryo (a-c) from (2D + time) data set and colon crypt pattern (d-f). (a) Original image of day 2 quail embryo, (b) Denoised image using the basic Kovesi NW filter, (c) Denoised image using ONW, (d) Original image of colon crypt pattern, (e) Denoised image using the basic Kovesi NW filter, and (f) Denoised image using ONW.

4.1 ONW denoising

In Fig. 5, we compare the basic Kovesi NW filter with the ONW filter using a sample 2D image from the (2D + time) data set [Fig. 5(a)]. Although the Kovesi NW filter efficiently reduced noise, we see that in the resulting image [Fig. 5(b)] there is a loss of information with depth. For instance, the image information close to the bottom part of the inflow tract (see the bottom right portion of the tubular heart) appears to be lost in Fig. 5(b). The proposed ONW technique reduced the shot noise but preserved image information with depth [Fig. 5(c)]. Figures 5(d)-5(f) show the results of applying ONW to OCT images of the human colon crypt data set. As before, a depth-dependent loss of information due to a suboptimal choice of filter bank parameters is very apparent [Fig. 5(e)]. For instance, in Fig. 5(e), the muscularis mucosae (thin layer of smooth muscle) at the bottom of the mucosa and the lower parts of the crypts of Lieberkühn are clearly lost. The ONW technique again preserved information with depth [Fig. 5(f)].

4.2 ALPND denoising

We applied the LPND method to the image already processed using ONW [Fig. 6(a), 6(b)]. Suboptimal parameter settings produced a heavily blurred image [Fig. 6(b)]. The ALPND method produced a speckle-reduced image with minimal blurring [Fig. 6(c)].

Fig. 6. ALPND technique applied to quail embryo OCT images (a) ONW filtered image from Fig. 5(c), (b) Result of basic LPND filtering of Fan et al. applied to the image in (a), (c) Result of proposed ALPND technique applied to the image in (a).

4.3 Visual and quantitative comparison with other filtering techniques

In Figs. 7(a)-7(f), we visually compared the ONW-ALPND denoising algorithm with the baseline denoising algorithms and the basic Kovesi NW filter [16] using a panel of experts in OCT technology and embryonic development. More specifically, our expert panel consisted of (i) an expert in cardiac developmental biology (Dr. Michiko Watanabe, Associate Professor of Pediatrics, Case Western Reserve University), (ii) two students from the Case School of Medicine majoring in Anatomy, and (iii) two experts in OCT imaging (Michael Jenkins and Andrew Rollins, authors). Our experts were shown images from different cardiac cycles for training and evaluation. Using the same input image as in Fig. 5(a) [repeated as Fig. 7(a)], we applied the baseline algorithms, the basic Kovesi NW filter, and finally our ONW-ALPND denoising algorithm. Experts indicated that the denoised image obtained using the proposed method [Fig. 7(f)] was visually better than those from the other methods [Figs. 7(b)-7(e)]. Figure 8 shows a movie comparing image frames from the noisy and the denoised data sets after applying the proposed algorithm. Again, experts indicated that with denoising, blood flow can be more clearly visualized. Also, structures such as the cardiac jelly and the endocardial wall are seen better after denoising. We also performed a quantitative comparison with the baseline algorithms. We chose 50 consecutive (in time) images from our (2D + time) data set. In order to make a fair comparison, the optimal filter kernel size determined by the ALPND algorithm was used to set the kernel size for the baseline algorithms. We computed the combined figure of merit (μ) in Eq. (2) for the filtered images [Fig. 7(g)] produced by each of the above-mentioned techniques, from which it is evident that our proposed denoising algorithm performs better than the baseline algorithms.

Fig. 7. Visual and quantitative comparison of filtered images obtained by applying various noise reduction techniques to the image in (a). (b) Median filter, (c) Wiener filter, (d) the OW filter using the wdencmp function from the MATLAB® Wavelet Toolbox™, (e) the basic Kovesi NW filter, and (f) the proposed ONW-ALPND technique. (g) Quantitative comparison of the figure of merit of the ONW-ALPND filter with the median, Wiener, OW and basic Kovesi NW filters.

4.4 Surface and volume visualization

Surface and volume renderings from the original and denoised data were evaluated by experts in image processing and embryonic development. First, the experts were shown a volume rendering from one representative volume in the (3D + time) data set produced by each of the baseline denoising algorithms and the ONW-ALPND denoising algorithm. Anecdotally, the experts indicated that the ONW-ALPND denoising enabled the best visualization of internal structures without loss of useful details. Next, we asked them to compare the volume rendering produced by the ONW-ALPND denoising algorithm with that obtained from the original data [Fig. 9(a)]. They concluded that denoising enabled better visualization of tissue boundaries and the tubular structure of the heart. Following this, experts were shown surface renderings from the original [Fig. 9(c)] and ONW-ALPND denoised data [Fig. 9(d)]. They concluded that the heart surface was smoother and that the adjoining surfaces were more clearly visible after denoising. Finally, they were shown a movie made from a time series of volume renderings consisting of six phases of the cardiac cycle, corresponding to both the original and the denoised data [Fig. 9(e)]. They concluded that the dynamics of the beating heart in 3D, e.g., the interaction between the heart and adjoining structures, could be more clearly visualized.

Fig. 8. (3.19 MB) A (2D + time) movie of original (noisy) and ONW-ALPND denoised data from the (2D + time) data set of the stage 13 quail embryo. [Media 1]

4.5. Semi-automatic segmentation

4.5.1 Tolerance based seeded region growing

Figures 9(f) and (h) show a visual comparison of noisy versus ONW-ALPND denoised data respectively using an en face 2D image from the single volume 3D data set. It is clear that anatomically important features such as outpocketings (sometimes called tethers) from the endocardial wall into the surrounding cardiac jelly are more distinctly visible in the denoised image in Fig. 9(h) (red arrows), suggesting that it should be easier to segment after denoising. We have verified that the tolerance based seeded region growing algorithm (section 3.3) performed better on denoised data because it clearly segmented out the lower portion of the cardiac jelly, as shown by the red region in Fig. 9(i). This can be observed by comparing it with the corresponding red region in Fig. 9(g), which shows the same algorithm applied on noisy data.

4.5.2 LiveWire segmentation

We compared LiveWire 2D segmentation results obtained by applying the proposed denoising technique with those obtained from noisy data and the OW filter denoising technique [Fig. 10(a)]. The data set consisted of 90 2D images from the (2D + time) data set corresponding to one complete cardiac cycle of the quail embryo. We computed the contour distance measure (section 3.3) between human expert traced contours and those obtained using LiveWire on (i) noisy data [denoted by D(I1,I2) in Fig. 10], (ii) denoised data using proposed technique [denoted by D(I1,I3)], and (iii) denoised data using OW filter [denoted by D(I1,I4)]. A scatter plot of D(I1,I2) versus D(I1,I3) is shown in Fig. 10 (b). It can be easily seen that a larger number of points (64%) lie below the dotted line [corresponding to D(I1,I2) = D(I1,I3)] than above it, as indicated by solid red squares. This indicated that LiveWire segmentation of the myocardial wall after noise reduction more closely matched an expert traced contour than LiveWire segmentation from noisy data.
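The contour distance measure itself is defined in Section 3.3. Purely for illustration, the sketch below computes a symmetric mean point-to-contour distance between two sampled contours, which is one common choice for this type of comparison and is not necessarily the exact definition used in our evaluation.

```python
import numpy as np
from scipy.spatial import cKDTree

def contour_distance(c1, c2):
    """Symmetric mean distance between two contours given as (N, 2) arrays of (x, y) points."""
    d12 = cKDTree(c2).query(c1)[0]   # distance from each point of c1 to nearest point of c2
    d21 = cKDTree(c1).query(c2)[0]   # and vice versa
    return 0.5 * (d12.mean() + d21.mean())
```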

Computing accurate gradients from the data is crucial for the success of 2D image segmentation algorithms. Some existing approaches for noise reduction are not tuned to the specific noise characteristics of an image data set; the resulting images may therefore be insufficiently or excessively filtered, posing challenges to the gradient estimation process and thereby affecting segmentation performance. Using the same (2D + time) data set, we compared the performance of the OW filter when used as a preprocessing (denoising) step for segmentation with our proposed ONW-ALPND technique [Fig. 10(c)], using human expert tracings as the baseline. We observed that, in 60% of the cases (solid red squares), LiveWire contours produced by the proposed method were closer to human expert tracings than the LiveWire contours produced by OW filtering, suggesting the superiority of the ONW-ALPND method.

The results from these data sets indicate a moderate percentage of conformity of contours obtained from the proposed algorithm to human tracings (in the range of 60-65%), when compared with those obtained from noisy data and the OW filtering method. This was probably due to (i) inaccuracies in human tracings owing to the changing shape and position of the myocardial wall contour across the 2D slices, and (ii) robustness of the LiveWire segmentation algorithm to noise.

4.5.3 Shot versus speckle noise reduction for LiveWire segmentation

We performed an experiment to evaluate the effect of shot noise reduction on segmentation. First, we applied ONW-ALPND to the noisy images from both the (2D + time) and the single volume 3D data sets. We then applied ALPND alone (i.e., without the ONW shot noise reduction step) to the same images. LiveWire segmentation was performed in both cases and compared (as usual) with human expert tracings (Fig. 12). We observed that the segmentation performance improvement due to the addition of the shot noise reduction step was only marginal, as demonstrated by Fig. 12(a) for the (2D + time) data set, where only 48 of the 90 images (53%) indicated that adding the shot noise reduction step helped segmentation, and by Fig. 12(b) for the single volume 3D data set, where only 62 of the 131 images (47%) indicated an improved LiveWire segmentation as a result of the shot noise reduction step. In other words, there was strong evidence that speckle noise is the more dominant noise source in "segmentable" regions.

5. Discussion

The combined ONW-ALPND filter aids data interpretation by reducing noise, facilitates biologically useful volume visualizations, and improves semi-automatic segmentation. The results shown in Figs. 5 through 12 strongly support our claim with regard to the usefulness of these filters in investigating early cardiac development in small animal models. We explored the use of each individual filter (ONW, ALPND) in isolation but concluded that the combined filter when applied serially in a specific order (ONW followed by ALPND) produced the best volume visualizations as evaluated subjectively by human experts. These experts were also provided the results from median, Wiener and OW filtering methods to evaluate and compare with the proposed filter. As an example, one expert evaluated the median filter as being inadequate in terms of noise reduction when shown the corresponding volume renderings.

Fig. 9. Comparison of volume renderings of one phase of the cardiac cycle from the (3D + time) data set before (a) and after (b) ONW-ALPND denoising. Volume renderings were produced using the Drishti visualization software. An isosurface for a gray level value of 60 from both noisy (c) and denoised (d) data. In (e), (301 KB) a movie is shown of the time series of original (left) and denoised (right) volumes corresponding to a complete heartbeat [Media 2]. Figures (f) – (i) show an en face 2D image slice from a different data set - the single volume 3D data set of quail embryo. The noisy image appears in (f) and the ONW-ALPND denoised image is shown in (h). It is clear that outpocketings from the endocardium (red arrows) are more clearly visible after ONW-ALPND denoising. Figures (g) and (i) show the result of the Amira tolerance based seeded region growing tool applied to (f) and (h) respectively for segmenting the cardiac jelly (red region).
Fig. 10. Quantitative comparison of LiveWire segmentation with human tracings using ONW-ALPND and OW filters on 90 images from the (2D + time) data set. (a) Method used for contour comparison. Scatter plots of contour distance measure (section 3.3) are shown in (b) comparing noisy data with ONW-ALPND denoised data where 64% of the images showed closer conformity of proposed technique to human tracings, and in (c) comparing OW filter denoised data with ONW-ALPND denoised data where 60% of the images showed closer conformity to human tracings.
Fig. 11. Quantitative comparison of LiveWire segmentation on the single volume 3D data set consisting of 131 2D image slices corresponding to different spatial positions within the volume. Scatter plots of contour distance measure (section 3.3) have been plotted. (a) Comparison of contours obtained from noisy data with those obtained from ONW-ALPND where 60% of images showed closer conformity of proposed technique to human tracings, (b) Comparison of ONW-ALPND with OW filter where 65% of images showed closer conformity of proposed technique to human tracings.

There are some limitations to the current implementation. First, both the ONW and ALPND techniques involve an optimization step, which is currently applied to one representative image to determine the optimal settings for the filtering process. Once the optimal settings are obtained, denoising is performed using these predetermined settings for the entire data set. Depending on the choice of the "representative" image, ONW may cause a signal drop-off, especially deeper in the tissue, and ALPND may result in image blurring. One solution would be to use different optimized parameters on different blocks of images. Second, with regard to computational time, we first employed unoptimized MATLAB code which took about 25 seconds to process a single 512 × 512 image from the (2D + time) data set on a 2.16 GHz Intel Core Duo laptop with 2 GB of RAM. Since this computation time can become prohibitively large for our extreme data sets, we performed some code optimization. We have currently reduced the computation time to 6 seconds per image on the same computer configuration, but believe that further optimizations are possible. Third, we perform denoising on 2D image data only; i.e., a 3D data set is processed in a serial fashion by applying the denoising to each 2D image in the stack. We plan to extend denoising to 3D as part of our future work. Also, we employ a serial, slice-by-slice, semi-automatic 2D segmentation technique to perform 3D segmentation. We plan to extend this to a fully automatic 3D implementation. Supervised (i.e., training-based) 3D shape modeling techniques such as Active Shape Models (ASM) and Active Appearance Models (AAM) [46-49] will be useful in this regard but will pose several new challenges. For instance, the complex morphological changes that organs undergo during development would make training a subjective and difficult task. Fourth, there are probably opportunities for improving the opacity and color mapping functions for volume visualization. A 2D opacity transfer function (OTF) leads to enhanced volume visualization, as shown by the renderings obtained using the Drishti software [43]. However, it is possible to obtain even better renderings by designing more complex OTFs that use a higher number of data-dependent variables (and hence a higher number of dimensions). For instance, in addition to the original data value and its gradient, a Laplacian (second derivative) of the data could be employed to derive OTFs. When combined with suitable scalar weighting values along with the original data value and its gradient, a Laplacian can help determine the exact location of boundaries, as suggested by Kniss et al. [32]. We are planning to build extensions to our volume visualization software so that it can support higher dimensional OTFs during volume rendering.

Fig. 12. Quantitative comparison of LiveWire segmentation to human tracings with and without shot noise reduction (ONW). Scatter plots of contour distance measure (section 3.3) have been plotted. (a) Results from the (2D + time) data set consisting of 90 images from one cardiac cycle of quail heart (where shot noise reduction helped in only 53% of the total images). (b) Results from single volume 3D data set consisting of 131 2D images from one time point of the cardiac cycle (where shot noise reduction helped in only 47% of the total images).

OCT technology coupled with the image processing steps discussed in this paper would constitute a useful tool for investigating the early embryonic heart. Developmental cardiac researchers currently lack an effective imaging tool to investigate the morphological dynamics of the early avian/murine embryonic heart. Recently, we demonstrated the ability of OCT to morphologically phenotype embryonic murine hearts [50]. The set of image denoising algorithms, volume and surface visualization techniques, and semi-automatic segmentation techniques presented in this report could be easily adapted to images of murine hearts and would represent a first step towards a fully automated, non-destructive, high-throughput system to assess the phenotype of embryonic murine hearts. This would allow researchers to pinpoint critical time periods at a much faster rate. Also, we have recently demonstrated the ability to image the 3D avian embryonic heart while beating [1,51,52]. Again, an evolved image processing pipeline would assist us in understanding mechanisms that drive normal versus abnormal heart development in early stages.

Fig. 13. Multi-resolution volume interaction on a single volume of quail embryonic heart from the (3D + time) data set. From a low resolution volume rendering of the heart, a region of interest can be selected (shown by bounding box) for higher resolution viewing.

6. Conclusion

Acknowledgments

This research is supported in part by National Institutes of Health RO1HL083048, the Ohio Wright Center of Innovation and Biomedical Research and Technology Transfer award "The Biomedical Structure, Functional and Molecular Imaging Enterprise," and the Interdisciplinary Biomedical Imaging Training Program NIH T32EB007509. This investigation was conducted in a facility constructed with support from Research Facilities Improvement Program Grant Number C06 RR12463-01 from the National Center for Research Resources, National Institutes of Health. The authors also acknowledge James G. Fujimoto, Desmond Adler, and Robert Huber for their technical contributions.

References and links

1.

M. W. Jenkins, D. C. Adler, M. Gargesha, R. Huber, F. Rothenberg, J. Belding, M. Watanabe, D. L. Wilson, J. G. Fujimoto, and A. M. Rollins, “Ultrahigh-speed optical coherence tomography imaging and visualization of the embryonic avian heart using a buffered Fourier Domain Mode Locked laser,” Opt. Express 15, 6251–6267 (2007). [CrossRef] [PubMed]

2. M. Bashkansky and J. Reintjes, “Statistics and reduction of speckle in optical coherence tomography,” Opt. Lett. 25, 545–547 (2000). [CrossRef]
3. M. E. Brezinski, Optical Coherence Tomography: Principles and Applications (Elsevier, 2006).
4. A. E. Desjardins, B. J. Vakoc, G. J. Tearney, and B. E. Bouma, “Speckle reduction in OCT using massively-parallel detection and frequency-domain ranging,” Opt. Express 14, 4736–4745 (2006). [CrossRef] [PubMed]
5. A. I. Kholodnykh, I. Y. Petrova, K. V. Larin, M. Motamedi, and R. O. Esenaliev, “Precision of measurement of tissue optical properties with optical coherence tomography,” Appl. Opt. 42, 3027–3037 (2003). [CrossRef] [PubMed]
6. D. L. Marks, T. S. Ralston, and S. A. Boppart, “Speckle reduction by I-divergence regularization in optical coherence tomography,” J. Opt. Soc. Am. A 22, 2366–2371 (2005). [CrossRef]
7. A. Ozcan, A. Bilenca, A. E. Desjardins, B. E. Bouma, and G. J. Tearney, “Speckle reduction in optical coherence tomography images using digital filtering,” J. Opt. Soc. Am. A 24, 1901–1910 (2007). [CrossRef]
8. M. Pircher, E. Gotzinger, R. Leitgeb, A. F. Fercher, and C. K. Hitzenberger, “Speckle reduction in optical coherence tomography by frequency compounding,” J. Biomed. Opt. 8, 565–569 (2003). [CrossRef] [PubMed]
9. P. Puvanathasan and K. Bizheva, “Speckle noise reduction algorithm for optical coherence tomography based on interval type II fuzzy set,” Opt. Express 15, 15747–15758 (2007). [CrossRef] [PubMed]
10. J. Rogowska and M. E. Brezinski, “Image processing techniques for noise removal, enhancement and segmentation of cartilage OCT images,” Phys. Med. Biol. 47, 641–655 (2002). [CrossRef] [PubMed]
11. J. M. Schmitt, S. H. Xiang, and K. M. Yung, “Speckle in optical coherence tomography,” J. Biomed. Opt. 4, 95–105 (1999). [CrossRef]
12. A. Achim, A. Bezerianos, and P. Tsakalides, “Novel Bayesian multiscale method for speckle removal in medical ultrasound images,” IEEE Trans. Med. Imaging 20, 772–783 (2001). [CrossRef] [PubMed]
13. V. Dutt and J. F. Greenleaf, “Adaptive speckle reduction filter for log-compressed B-scan images,” IEEE Trans. Med. Imaging 15, 802–813 (1996). [CrossRef] [PubMed]
14. S. Gupta, R. C. Chauhan, and S. C. Saxena, “Wavelet-based statistical approach for speckle reduction in medical ultrasound images,” Med. Biol. Eng. Comput. 42, 189–192 (2004). [CrossRef] [PubMed]
15. X. H. Hao, S. K. Gao, and X. R. Gao, “A novel multiscale nonlinear thresholding method for ultrasonic speckle suppressing,” IEEE Trans. Med. Imaging 18, 787–794 (1999). [CrossRef] [PubMed]
16. P. Kovesi, “Phase preserving denoising of images,” in Proceedings of the Australian Pattern Recognition Society Conference: DICTA’99 (Perth, WA, 1999), pp. 212–217.
17. M. A. Kutay, A. P. Petropulu, and C. W. Piccoli, “On modeling biomedical ultrasound RF echoes using a power-law shot-noise model,” IEEE Trans. Ultrason. Ferroelectr. Freq. Control 48, 953–968 (2001). [CrossRef] [PubMed]
18. T. Loupas, W. N. McDicken, and P. L. Allan, “An Adaptive Weighted Median Filter for Speckle Suppression in Medical Ultrasonic Images,” IEEE Trans. Circuits Syst. 36, 129–135 (1989). [CrossRef]
19. J. Xie, Y. F. Jiang, H. T. Tsui, and P. A. Heng, “Boundary enhancement and speckle reduction for ultrasound images via salient structure extraction,” IEEE Trans. Biomed. Eng. 53, 2300–2309 (2006). [CrossRef] [PubMed]
20. Y. Yue, M. M. Croitoru, J. B. Zwischenberger, and J. W. Clark, “Nonlinear multiscale wavelet diffusion for speckle suppression and edge enhancement in ultrasound images,” IEEE Trans. Med. Imaging 25, 297–311 (2006). [CrossRef] [PubMed]
21. D. F. Zha and T. S. Qiu, “A new algorithm for shot noise removal in medical ultrasound images based on alpha-stable model,” Int. J. Adapt. Control Signal Process. 20, 251–263 (2006). [CrossRef]
22. R. C. Gonzalez and R. E. Woods, Digital Image Processing (Prentice Hall, 2002).
23. W. K. Pratt, Digital Image Processing (John Wiley and Sons, Inc., 2001). [CrossRef]
24. M. Sonka, V. Hlavac, and R. Boyle, Image Processing: Analysis and Machine Vision (Brooks and Cole Publishing, 1998).

25. F. Zhang, Y. M. Yoo, L. M. Koh, and Y. Kim, “Nonlinear Diffusion in Laplacian Pyramid Domain for Ultrasonic Speckle Reduction,” IEEE Trans. Med. Imaging 26, 200–211 (2007). [CrossRef]
26. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004). [CrossRef] [PubMed]
27. J. T. M. Verhoeven, J. M. Thijssen, and A. G. M. Theeuwes, “Improvement of Lesion Detection by Echographic Image Processing: Signal-to-Noise-Ratio Imaging,” Ultrason. Imaging 13, 238–251 (1991). [CrossRef] [PubMed]
28. A. S. Frangakis and R. Hegerl, “Noise reduction in electron tomographic reconstructions using nonlinear anisotropic diffusion,” J. Struct. Biol. 135, 239–250 (2001). [CrossRef] [PubMed]
29. C. D. Hansen and C. R. Johnson, eds., The Visualization Handbook (Elsevier Academic Press, 2005).
30. B. Csebfalvi, L. Mroz, H. Hauser, A. Konig, and E. Groller, “Fast visualization of object contours by non-photorealistic volume rendering,” Computer Graphics Forum 20, C452 (2001). [CrossRef]
31. D. S. Ebert, C. J. Morris, P. Rheingans, and T. S. Yoo, “Designing effective transfer functions for volume rendering from photographic volumes,” IEEE Trans. Vis. Comput. Graphics 8, 183–197 (2002). [CrossRef]
32. J. Kniss, G. Kindlmann, and C. Hansen, “Multidimensional transfer functions for interactive volume rendering,” IEEE Trans. Vis. Comput. Graphics 8, 270–285 (2002). [CrossRef]
33. Y. Sato, S. Nakajima, H. Atsumi, T. Koller, G. Gerig, S. Yoshida, and R. Kikinis, “3D multi-scale line filter for segmentation and visualization of curvilinear structures in medical images,” in CVRMed-MRCAS ’97, Lecture Notes in Computer Science 1205, 213–222 (1997). [CrossRef]
34. D. Stalling, H.-C. Hege, and M. Zöckler, Amira: An Advanced 3D Visualization and Modeling System, http://amira.zib.de (2007).
35. H. Ghassan and H. Judith, “DTMRI Segmentation using DT-Snakes and DT-Livewire,” in IEEE International Symposium on Signal Processing and Information Technology (2006), pp. 513–518.

36. E. N. Mortensen and W. A. Barrett, “Interactive segmentation with intelligent scissors,” Graphical Models and Image Processing 60, 349–384 (1998). [CrossRef]
37. M. Demirci, “Matlab Image-Processing Toolbox,” Computer 27, 106–107 (1994).
38. J. S. Lim, Two-Dimensional Signal and Image Processing (Prentice Hall, Englewood Cliffs, NJ, 1990).
39. A. Chambolle, R. A. Devore, N. Y. Lee, and B. J. Lucier, “Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage,” IEEE Trans. Image Process. 7, 319–335 (1998). [CrossRef]
40. C. C. Chang, C. S. Chan, and J. Y. Hsiao, “A color image retrieval method based on local histogram,” in Advances in Multimedia Information Processing – PCM 2001, Proceedings 2195, 831–836 (2001). [CrossRef]
41. P. Suetens, Fundamentals of Medical Imaging (Cambridge University Press, 2002).
42. P. Perona and J. Malik, “Scale-Space and Edge-Detection Using Anisotropic Diffusion,” IEEE Trans. Pattern Anal. Mach. Intell. 12, 629–639 (1990). [CrossRef]
43. A. Limaye, Drishti: Volume Exploration and Presentation Tool, version 0.1.7 (2007), computer program.
44. E. W. Dijkstra, “A note on two problems in connection with graphs,” Numerische Mathematik 1, 269–271 (1959). [CrossRef]
45. V. Perlibakas, “Automatical detection of face features and exact face contour,” Pattern Recogn. Lett. 24, 2977–2985 (2003). [CrossRef]
46. T. F. Cootes, A. Hill, C. J. Taylor, and J. Haslam, “Use of Active Shape Models for Locating Structure in Medical Images,” Image and Vision Computing 12, 355–365 (1994). [CrossRef]
47. T. F. Cootes, C. J. Taylor, D. H. Cooper, and J. Graham, “Active Shape Models – Their Training and Application,” Computer Vision and Image Understanding 61, 38–59 (1995). [CrossRef]
48. T. F. Cootes, C. Beeston, G. J. Edwards, and C. J. Taylor, “A unified framework for atlas matching using Active Appearance Models,” in Information Processing in Medical Imaging, Proceedings 1613, 322–333 (1999). [CrossRef]
49. T. F. Cootes, G. J. Edwards, and C. J. Taylor, “Active appearance models,” IEEE Trans. Pattern Anal. Mach. Intell. 23, 681–685 (2001). [CrossRef]
50. M. W. Jenkins, P. Patel, H. Deng, M. M. Montano, M. Watanabe, and A. M. Rollins, “Phenotyping transgenic embryonic murine hearts using optical coherence tomography,” Appl. Opt. 46, 1776–1781 (2007). [CrossRef] [PubMed]
51. M. W. Jenkins, F. Rothenberg, D. Roy, V. P. Nikolski, Z. Hu, M. Watanabe, D. L. Wilson, I. R. Efimov, and A. M. Rollins, “4D embryonic cardiography using gated optical coherence tomography,” Opt. Express 14, 736–748 (2006). [CrossRef] [PubMed]
52. M. W. Jenkins, O. Q. Chughtai, A. N. Basavanhally, M. Watanabe, and A. M. Rollins, “In vivo gated 4D imaging of the embryonic heart using optical coherence tomography,” J. Biomed. Opt. 12, 030505 (2007). [CrossRef] [PubMed]

OCIS Codes
(030.4280) Coherence and statistical optics : Noise in imaging systems
(100.2960) Image processing : Image analysis
(100.6890) Image processing : Three-dimensional image processing
(170.3880) Medical optics and biotechnology : Medical and biological imaging
(170.4500) Medical optics and biotechnology : Optical coherence tomography

ToC Category:
Image Processing

History
Original Manuscript: April 28, 2008
Revised Manuscript: June 28, 2008
Manuscript Accepted: June 30, 2008
Published: August 1, 2008

Virtual Issues
Vol. 3, Iss. 9 Virtual Journal for Biomedical Optics

Citation
Madhusudhana Gargesha, Michael W. Jenkins, Andrew M. Rollins, and David L. Wilson, "Denoising and 4D visualization of OCT images," Opt. Express 16, 12313-12333 (2008)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-16-16-12313



Supplementary Material


Media 1: AVI (3198 KB)
Media 2: AVI (301 KB)
