Automated segmentation of the macula by optical coherence tomography

Tapio Fabritius, Shuichi Makita, Masahiro Miura, Risto Myllylä, and Yoshiaki Yasuno


Optics Express, Vol. 17, Issue 18, pp. 15659-15669 (2009)
http://dx.doi.org/10.1364/OE.17.015659



Abstract

This paper presents segmentation algorithms for retinal layer identification based on intensity variations in the optical coherence tomography (OCT) signal. The main aim is to reduce the calculation time required by layer identification algorithms. Two algorithms, one for identification of the internal limiting membrane (ILM) and one for identification of the retinal pigment epithelium (RPE), are implemented to evaluate structural features of the retina. Using an 830 nm spectral domain OCT device, the segmentation method is demonstrated on healthy and diseased eyes.

© 2009 Optical Society of America

1. Introduction

An established method in ophthalmic imaging, optical coherence tomography (OCT) has the great advantage of providing high-resolution three-dimensional (3D) images of the human eye noninvasively. As technical development continues, its resolution and imaging speed will be further improved. One problem is that high-resolution 3D imaging produces large quantities of data, and processing the measured raw data requires substantial computation. This is time-consuming and degrades the clinical usability of OCT. Improving the practicability of OCT technology in ophthalmology therefore depends on effective data processing.

Several different methods are available for identifying the internal layers of the posterior human eye [1-7]. Most of these are based on intensity variations in the backscattered signal [1-6]. ILM segmentation offers a straightforward approach, because the contrast between the vitreous and the retina is typically very good, but RPE segmentation has been found challenging, especially in pathologic cases. One novel approach identifies the RPE on the basis of the polarization scrambling property of RPE tissue [7].

However, regardless of whether tissue identification is based on intensity variations in the OCT signal or on polarization sensitive OCT data, the calculation time required for 3D data processing with currently available methods is very long, which limits their practicality. To substantiate this statement, we reviewed the published segmentation methods. The articles by Koozekanani et al. [1], Ishikawa et al. [2] and Baroni et al. [6] do not report a calculation time. Mujat et al. published a method for determining retinal nerve fiber layer thickness; processing a single image (with 1000 A-scans) took 62 seconds [3]. Segmenting a data volume of 138 images would therefore take more than 2 hours. The corresponding processing time for the semi-automatic segmentation method published by Szkulmowski et al. [4] was about 5 minutes for a data volume containing 200 images with 600 A-scans/image, including the additional processing time caused by the necessary manual intervention. The computation time of the segmentation method published by Fernandez et al. was 24 seconds for a 1024×512 image, corresponding to a total processing time of 55 minutes for a volume of 138 images [5]. A simpler version of the RPE segmentation method published by Götzinger et al. took 8.3 minutes for a volume of 60 images (1000 A-scans/image) [7]. Although segmentation based on the polarization scrambling property of RPE tissue seems more specific for RPE identification than intensity-based algorithms, it requires polarization sensitive OCT (PS-OCT), which is not yet commercially available and is not yet common in clinics. Consequently, we have decided to concentrate on intensity-based RPE segmentation.

This work demonstrates an alternative method for identifying the ILM and the RPE based on intensity variations in the OCT signal. The main aim is to decrease the necessary calculation time while still obtaining reliable segmentation results.

2. Methods

2.1. Retinal pigment epithelium identification

Fig. 1. Steps in RPE identification. (i) The positions of maximum intensity pixels are determined and a 2D RPE position matrix is obtained; (ii) automated binarization is performed to obtain a mask that identifies erroneous pixels in the RPE position matrix; (iii) erroneous pixels are removed and replaced with new values based on information from neighbouring pixels; (iv) 30 pixels around the RPE estimate are extracted from the original volume data; (v) the position of the RPE is redetermined on the basis of maximum intensity determination; (vi) the RPE position map thus obtained can be further improved by repeating steps (iv–v) with a reduced number of pixels around the estimated RPE.

Actual layer identification can be started after normal SD-OCT pre-processing, including depth motion compensation. RPE and ILM identification can be performed independently. Moreover, the presented RPE segmentation algorithm does not require any denoising, which avoids unnecessary computational complexity. The principle of the algorithm is based on the fact that the intensity of backscattered light is largest at the RPE complex. Fig. 1 depicts the sequence of the RPE identification algorithm.
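As a concrete illustration of steps (i)–(v), the following is a minimal sketch in Python/NumPy; the authors' implementation was in Matlab and is not reproduced here. The array layout, the search-band width, the number of refinement iterations, and the outlier check used in place of the binarization-based mask of step (ii) are illustrative assumptions rather than the exact published procedure.

```python
import numpy as np
from scipy.ndimage import median_filter

def estimate_rpe(volume, band=30, n_iter=2):
    """Sketch of intensity-based RPE estimation.

    volume : array of shape (n_frames, n_ascans, n_depth), linear OCT intensity.
    Returns a 2D depth-position map (one depth index per A-scan).
    """
    # (i) first estimate: depth of the maximum-intensity pixel in each A-scan
    rpe = np.argmax(volume, axis=2).astype(float)

    # (ii)-(iii) stand-in for the binarization mask: flag pixels that deviate
    # strongly from a median-smoothed map and replace them with the smoothed value
    smooth = median_filter(median_filter(rpe, size=(1, 25)), size=(25, 1))
    bad = np.abs(rpe - smooth) > band / 2
    rpe[bad] = smooth[bad]

    # (iv)-(v) refine: repeat the maximum search only inside a band of `band`
    # pixels around the current estimate, shrinking the band between iterations
    n_depth = volume.shape[2]
    for _ in range(n_iter):
        refined = np.empty_like(rpe)
        for y in range(volume.shape[0]):
            for x in range(volume.shape[1]):
                z0 = max(int(rpe[y, x]) - band // 2, 0)
                z1 = min(int(rpe[y, x]) + band // 2 + 1, n_depth)
                refined[y, x] = z0 + np.argmax(volume[y, x, z0:z1])
        rpe = median_filter(refined, size=(3, 3))
        band //= 2
    return rpe
```

As the paper notes for its own filters, the window and band sizes would have to be tuned to the scanning protocol.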

2.2. Internal limiting membrane identification

Another important layer that needs to be identified, particularly when estimating retinal thickness, is the ILM, which can be considered the limiting membrane between the retina and the vitreous. Owing to the very different optical properties of the retinal nerve fiber layer (RNFL) and the vitreous, the contrast between the ILM and the vitreous is typically very good in OCT images, because there are no highly scattering or absorbing tissues in front of the ILM. Consequently, ILM identification can be performed efficiently using automatic intensity-based binarization.

Fig. 2. Steps in ILM identification. (i) A threshold value is calculated automatically and each slice of the volume data is binarized; (ii) the depth position of the first zero value of each binarized A-scan is determined to obtain a first estimate of the ILM position; (iii) to remove erroneous ILM position pixels, 45 pixels around the estimated ILM are extracted and reprocessed; (iv) intensity-based binarization is performed again with the same threshold value as the first time; (v) the depth position of the ILM is re-estimated by determining the position of the first zero-value pixel of each A-scan; (vi) to improve the reliability of the obtained ILM position matrix, steps (iii–v) are repeated with a smaller number of pixels around the estimated ILM.

First, a suitable intensity threshold value is determined for each B-scan in the data cube, and the first 5 pixels in depth of each B-scan {Noise_y(x, z), y ∈ [1, N], x ∈ [1, M], z ∈ [1, 5]} are extracted. Assuming that these pixels contain only a noise signal, the threshold determination of each B-scan is based on evaluating 5×M pixels. The threshold is selected such that 0.5% of the pixels are set to zero after binarization {Σ_{x=1}^{M} Σ_{z=1}^{5} Bnoise_y(x, z) ≤ 0.005×5×M}, where Bnoise_y(x, z) refers to the binarized form of Noise_y(x, z). Because of the noise variations between processed B-scans, each of them is binarized with its own threshold (see step (i) in Fig. 2), and the estimate of the ILM position is obtained by determining the depth position of the first zero value of each A-scan (see step (ii) in Fig. 2). Due to the noise signal mentioned above, the ILM position matrix contains erroneous pixels. To remove them, the 2D ILM position matrix is smoothed by moving window median filters (sizes 1(x)×25(y) and 25(x)×1(y)). In addition, 45 pixels (~194 µm) around it {S_y(x, z), y ∈ [1, N], x ∈ [1, M], z ∈ [Z_ilm(x, y)−15, Z_ilm(x, y)+30]} are extracted from the original data (see step (iii) in Fig. 2). In this expression, Z_ilm refers to the first depth position estimate of the ILM. To get more reliable segmentation results, steps (iii–v) can be repeated with a smaller number of pixels around the ILM (see step (vi) in Fig. 2). In this work, only three iterations were performed. During the last iteration, 30 pixels (~129 µm) around the estimated ILM {z ∈ [Z_ilm2(x, y)−3, Z_ilm2(x, y)+27]} of each B-scan were reprocessed. Here, Z_ilm2 refers to the second depth position estimate of the ILM. Finally, the obtained position matrix is smoothed by a moving window median filter with a size of 10(x)×1(y) pixels. Because the number of reprocessed pixels is reduced dramatically between adjacent iterations, the calculation time does not increase significantly. As in the RPE segmentation method, the filter sizes must be adjusted if the scanning protocol is changed.
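A minimal sketch of steps (i)–(ii), the per-B-scan noise threshold and the detection of the first tissue pixel, is shown below in Python/NumPy rather than the authors' Matlab. The percentile-based threshold and the "tissue = above threshold" convention are simplifying assumptions standing in for the binarization described above, and the iterative band refinement of steps (iii)–(vi) is omitted; it would follow the same pattern as the RPE sketch.

```python
import numpy as np
from scipy.ndimage import median_filter

def estimate_ilm(volume, noise_rows=5, fraction=0.005):
    """Sketch of intensity-based ILM detection.

    volume : array of shape (n_frames, n_ascans, n_depth), linear OCT intensity;
    the first `noise_rows` depth pixels of every A-scan are assumed to be noise.
    Returns a 2D depth-position map of the ILM.
    """
    n_frames, n_ascans, _ = volume.shape
    ilm = np.zeros((n_frames, n_ascans), dtype=int)

    for y in range(n_frames):
        bscan = volume[y]                        # shape (n_ascans, n_depth)
        noise = bscan[:, :noise_rows].ravel()    # the 5 x M noise pixels of this B-scan
        # (i) per-B-scan threshold: allow only ~0.5% of the noise pixels to be
        # classified as tissue (99.5th percentile of the noise distribution)
        thr = np.percentile(noise, 100.0 * (1.0 - fraction))
        tissue = bscan > thr
        # (ii) first above-threshold pixel along depth in each A-scan = ILM estimate
        ilm[y] = np.argmax(tissue, axis=1)

    # remove erroneous pixels with the moving-window median filters (1x25 and 25x1)
    ilm = median_filter(ilm, size=(1, 25))
    ilm = median_filter(ilm, size=(25, 1))
    return ilm
```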

3. Results

We employed a spectral domain OCT (SD-OCT) system to obtain three-dimensional OCT images. As light source, the system used a superluminescent diode (SLD) with a centre wavelength of 840 nm and an FWHM spectral bandwidth of 50 nm. The measured optical power of the beam on the cornea was 700 µW (below the ANSI limit). A transmission-type diffraction grating of 1200 lines/mm was used. The scanning rate of the camera (Basler, L103k-2k) was 18.7 kHz and the exposure time of each A-line was 53.3 µs. The system had a measured maximum sensitivity of 99.3 dB and a measured axial resolution of 8.8 µm in air. A more detailed description of the measurement setup can be found in Ref. [10]. All segmentation algorithms were implemented in Matlab, and all data processing was performed on a standard personal computer (2.4 GHz CPU, 2.93 GB RAM).

3.1. Layer segmentation and investigation of a healthy macula

The first experimental measurements of the macula were performed on a healthy volunteer and involved 1024 depth scans (with 320 pixels) per frame, with the entire data set containing 138 frames. The imaging area was about 5×5 mm2 and the measurement time was about 7.6 s. Using the segmentation methods described above, the positions of the RPE and the ILM were identified with calculation times of 21 s and 16 s, respectively. To evaluate the quality of the segmentation process, Fig. 3 shows an en-face projection image and 10 cross-section images with the RPE and ILM superimposed on them. Only very small ILM and RPE segmentation errors can be seen.

Fig. 3. RPE and ILM layer segmentation results in a healthy volunteer's macula. En-face projection image and ten cross-section images with the identified RPE (green line) and ILM (red line). The position of each cross-section image is indicated by a number and a line in the en-face image. The projection image covers an area of 5×5 mm2 and the vertical dimension of each cross-section image is 1.37 mm (in air).

We analysed the accuracy of the obtained RPE and ILM segmentation results by evaluating all B-scans manually. We assumed that if the segmentation error is less than 10% of the average thickness of the normal retina (255±16 µm), the error can be considered small [1]. All unclear results were regarded as erroneous. In our evaluation, the error limit was set to 5 pixels (~22 µm). In the case of RPE segmentation, 99.7% of the depth scans had an error smaller than 5 pixels, while the corresponding value for ILM segmentation was 99.2%.
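For orientation, the 5-pixel limit can be related to the stated scan geometry, assuming the 320 depth pixels span the 1.37 mm (in air) vertical dimension given for Fig. 3:

\[
\Delta z \approx \frac{1.37~\mathrm{mm}}{320} \approx 4.3~\mu\mathrm{m},
\qquad
5\,\Delta z \approx 22~\mu\mathrm{m} \;<\; 0.1 \times 255~\mu\mathrm{m} \approx 26~\mu\mathrm{m},
\]

so the 5-pixel criterion is slightly stricter than the 10% guideline.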

Because layer segmentation is performed over the whole measured area, 2D position maps of the RPE and ILM can be obtained. Calculating the distance between the two boundaries gives the thickness of the retina. Figure 4 presents the obtained RPE and ILM position maps and a retinal thickness map. The positions of the RPE and ILM are given by the depth from the top of the cross-section image, while the scale bars indicate the optical distance.

Fig. 4. Healthy volunteer's macula. (a) Position map of the ILM; (b) position map of the RPE; (c) retinal thickness map. The positions of the ILM and RPE are given by the depth from the top of the cross-section image. The effect of the refractive index of the tissue is not taken into account.
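To make the step from the two position maps to a physical thickness map concrete, here is a minimal sketch; the axial pixel size (~4.3 µm in air, from the scan geometry above) and the refractive-index correction discussed below are passed in as parameters, and the function itself is illustrative rather than taken from the paper.

```python
import numpy as np

def retinal_thickness_map(ilm_px, rpe_px, pixel_um=4.3, n_tissue=1.38):
    """Thickness map from ILM and RPE depth-position maps (both in pixels).

    pixel_um : assumed axial pixel size in air (~1.37 mm / 320 pixels here);
    n_tissue : assumed average refractive index of retinal tissue.
    Returns the physical thickness in micrometres.
    """
    optical_um = (rpe_px - ilm_px) * pixel_um   # optical distance measured in air
    return optical_um / n_tissue                # convert to physical thickness
```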

One potential clinical application of our method is the quantitative determination of retinal thickness. Assuming that the average refractive index of retinal tissue is n = 1.38, the physical thickness of the measured retina varies between 156 µm and 305 µm. The central retinal thickness (area with a 1 mm diameter) was measured to be 202 µm, which is comparable with previously published studies on the normal eye [11, 12].
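The central value quoted above could be extracted from such a thickness map roughly as follows; the 5×5 mm2 scan extent comes from the text, while centring the disc on the middle of the map (rather than on an explicitly located fovea) and the function itself are illustrative assumptions.

```python
import numpy as np

def central_thickness(thickness_um, extent_mm=(5.0, 5.0), diameter_mm=1.0,
                      center_px=None):
    """Mean thickness inside a central disc of the given diameter.

    thickness_um : 2D thickness map (frames x A-scans); extent_mm : scanned area.
    center_px defaults to the middle of the map; in practice the fovea would be
    located explicitly rather than assumed to be centred.
    """
    ny, nx = thickness_um.shape
    dy, dx = extent_mm[0] / ny, extent_mm[1] / nx            # mm per pixel
    cy, cx = center_px if center_px is not None else ((ny - 1) / 2, (nx - 1) / 2)
    yy, xx = np.mgrid[0:ny, 0:nx]
    r_mm = np.hypot((yy - cy) * dy, (xx - cx) * dx)          # radial distance (mm)
    return float(thickness_um[r_mm <= diameter_mm / 2].mean())
```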

3.2. Layer segmentation and investigation of a macula with disease

Macular imaging of a patient with age-related macular degeneration (ARMD) was performed to evaluate the applicability of the presented segmentation method to an abnormal eye. The measured data set contained 140 frames with 1022 depth scans (with 380 pixels) per frame, covering an imaging area of about 5×5 mm2. The required RPE and ILM segmentation times were 21 s and 16 s, respectively. Results of the segmentation are shown in Fig. 5. Only a few small segmentation errors can be found, demonstrating that ILM segmentation was successful. Evaluating the quality of RPE segmentation, on the other hand, is less straightforward, owing to the distortion of the RPE. Nonetheless, assuming that the RPE is not totally destroyed in the area of the elevation, the RPE segmentation also seems very reliable. Manually performed accuracy analysis showed that ILM segmentation was performed without errors larger than 5 pixels. In the case of RPE segmentation, at least 96.7% of the depth scans were segmented without significant error, and uncertain segmentation results accounted for 3.0% of the analysed depth scans.

Figure 6 shows the obtained RPE and ILM position maps together with a retinal thickness map. The positions of the RPE and ILM are given by the depth from the top of the cross-section image, while the scale bars indicate optical distances. Figure 6(c) shows that retinal thickness increases significantly around the elevation.

Fig. 5. Macula of the left eye with ARMD. An en-face projection image and ten cross-section images showing the identified RPE (green line) and ILM (red line). The position of each cross-section image is indicated by a number and a line in the en-face image. The projection image covers an area of 5×5 mm2 and the vertical dimension of each cross-section image is 1.72 mm.
Fig. 6. Macula of a patient with ARMD. (a) Position map of the ILM; (b) position map of the RPE; (c) retinal thickness map. The positions of the ILM and RPE are given by the depth from the top of the cross-section image. The effect of the refractive index is ignored.

Macular imaging of a patient with polypoidal choroidal vasculopathy (PCV) was also performed during the experiments. Here, too, the measured data set contained 140 frames with 1022 depth scans (with 450 pixels) each, and the imaging area was about 5×5 mm2. The required RPE and ILM segmentation times were 21 s and 17 s, respectively. The results, displayed in Fig. 7, demonstrate that ILM segmentation again worked very reliably, with only minor segmentation errors present, even though the measured OCT signal from the ILM was strongly attenuated at the fringes of the measured area. Evaluating the RPE segmentation quality, however, is more problematic, because the RPE is severely distorted. Cross-section image 4 in Fig. 7 shows a possible RPE segmentation error (indicated by a white arrow): a part of the RPE complex seems to have broken off, producing a strongly backscattering layer which is detected by the segmentation algorithm. Cross-section image 7 shows a clearly erroneous segmentation result; the method is incapable of detecting a deep gap between two adjacent elevations. Manual evaluation showed that ILM segmentation was performed with an error smaller than 5 pixels for 98.6% of the depth scans, and the corresponding value for RPE segmentation was 97.0%.

The obtained RPE and ILM position maps and a retinal thickness map are shown in Fig. 8, with the positions of the RPE and ILM indicated by the depth from the top of the cross-section image, while the scale bars denote optical distance. Figure 8(c) shows that retinal thickness increases significantly around the elevation of the RPE.

Fig. 7. Macula of the right eye with PCV. An en-face projection image and ten cross-section images showing the identified RPE (green line) and ILM (red line). The position of each cross-section image is indicated by a number and a line in the en-face image. The projection image covers an area of 5×5 mm2, and the vertical dimension of each cross-section image is 1.64 mm. Images 4 and 7 are magnified to show the segmentation results in more detail. White arrows indicate the locations of errors.
Fig. 8. Macula of a patient with PCV. (a) Position map of the ILM; (b) position map of the RPE; (c) retinal thickness map. The positions of the ILM and RPE are given by the depth from the top of the cross-section image. The effect of the refractive index is ignored.

4. Discussion

As the presented ILM and RPE identification results suggest, the proposed method can be successfully applied to the study of the macular area. Even though the focus here was on macular segmentation, it should be possible to use the presented methods to identify the ILM and the RPE in the optic nerve head (ONH) area as well. The advantages of these methods stem from at least three facts. Firstly, the ILM and RPE layers can be segmented directly from the measured OCT data without massive denoising. Secondly, 3D information on the pixels belonging to the ILM or the RPE is used for identification. And thirdly, rather simple tools are used iteratively. This makes the segmentation process very effective and reduces the required calculation time to about 17 s and 21 s for the ILM and the RPE, respectively. The corresponding calculation times for other published methods are in the range of several minutes, so the presented method is tens of times faster. Moreover, ILM segmentation seems even more reliable and efficient than RPE segmentation.

Fig. 9. The effect of iterations on the segmentation result. (a) RPE estimate based on the maximum intensity search (red dots); the green line shows the final segmentation result. (b) RPE position estimate after masking the erroneous pixels (yellow dots). (c) Representative cross-section image (healthy eye) with the red line showing the result of the first iteration, while the yellow and green lines show the second and third iterations, respectively. (d) Magnified image of the region of interest, with white arrows showing the positions where the second iteration fails. (e) Representative cross-section image (ARMD eye) with the red line showing the result of the first iteration, while the yellow, blue and green lines show the second, third and fourth iterations, respectively. (f) Magnified image of the region of interest, with white arrows showing the positions where the first two iterations fail.

It is a well known fact that intensity-variation-based OCT segmentation methods have a tendency to give erroneous results, especially in pathologic cases; this is also true for our approach. The algorithm assumes that the ILM and RPE layers are continuous. However, that is not true in all cases, as the layers can also be strongly distorted or even destroyed by disease. Misalignment of frames in the OCT volume might also be problematic, because information from neighbouring pixels is used for estimating the positions of the ILM and the RPE. As a result, the presented method may give erroneous segmentation results. Although the presented segmentation method seems to work quite reliably, the number of evaluated cases is very limited, and further investigation of the segmentation accuracy is needed.

5. Conclusion

An alternative intensity variation-based ILM and RPE segmentation method is presented. Since its algorithms, which can be utilized independently, do not require massive pre-processing, it is very effective in terms of calculation time. In successful tests with a normal and a diseased macula, the entire data processing took less than 40 seconds for both layers, demonstrating that the method offers a highly promising tool for ophthalmological studies and enhances the usability of OCT technology in clinical applications.

References and links

1. D. Koozekanani, K. Boyer, and C. Roberts, "Retinal thickness measurements from optical coherence tomography using a Markov boundary model," IEEE Trans. Med. Imaging 20, 900–916 (2001). [PubMed]

2. H. Ishikawa, D. M. Stein, G. Wollstein, S. Beaton, J. G. Fujimoto, and J. S. Schuman, "Macular segmentation with optical coherence tomography," Investigative Ophthalmol. Visual Sci. 46, 2012–2017 (2005).

3. M. Mujat, R. C. Chan, B. Cense, B. H. Park, C. Joo, T. Akkin, T. C. Chen, and J. F. de Boer, "Retinal nerve fiber layer thickness map determined from optical coherence tomography images," Opt. Express 13, 9480–9491 (2005). [PubMed]

4. M. Szkulmowski, M. Wojtkowski, B. Sikorski, T. Bajraszewski, V. J. Srinivasan, A. Szkulmowska, J. J. Kaluzny, J. G. Fujimoto, and A. Kowalczyk, "Analysis of posterior retinal layers in spectral optical coherence tomography images of the normal retina and retinal pathologies," J. Biomed. Opt. 12, (2007). [PubMed]

5. D. C. Fernandez, H. M. Salinas, and C. A. Puliafito, "Automated detection of retinal layer structures on optical coherence tomography images," Opt. Express 13, 200–216 (2005).

6. M. Baroni, P. Fortunato, and A. L. Torre, "Towards quantitative analysis of retinal features in optical coherence tomography," Med. Eng. Phys. 29, 432–441 (2007).

7. E. Götzinger, M. Pircher, W. Geitzenauer, C. Ahlers, B. Baumann, S. Michels, U. Schmidt-Erfurth, and C. K. Hitzenberger, "Retinal pigment epithelium segmentation by polarization sensitive optical coherence tomography," Opt. Express 16, 16410–16422 (2008). [PubMed]

8. M. Zeng, J. Li, and P. Zhang, "The design of Top-Hat morphological filter and application to infrared target detection," Infr. Phys. Technol. 48, 67–76 (2006).

9. N. Otsu, "A threshold selection method from gray-level histograms," IEEE Trans. Syst. Man Cybern. 9, 62–66 (1979).

10. S. Makita, Y. J. Hong, M. Yamanari, T. Yatagai, and Y. Yasuno, "Optical coherence angiography," Opt. Express 17, 7821–7840 (2006).

11. A. Misota, T. Sakuma, O. Miyauchi, M. Honda, and M. Tanaka, "Measurement of retinal thickness from the three-dimensional images obtained from C scan images from the optical coherence tomography ophthalmoscope," Clinical and Experimental Ophthalmology 35, 220–224 (2007).

12. S. H. M. Liew, C. E. Gilbert, T. D. Spector, J. Mellerio, F. J. Van Kuijk, S. Beatty, F. Fitzke, J. Marshall, and C. J. Hammond, "Central retinal thickness is positively correlated with macular pigment optical density," Experimental Eye Research 82, 915–920 (2006).

OCIS Codes
(100.0100) Image processing : Image processing
(100.5010) Image processing : Pattern recognition
(170.4470) Medical optics and biotechnology : Ophthalmology
(170.4500) Medical optics and biotechnology : Optical coherence tomography
(170.4580) Medical optics and biotechnology : Optical diagnostics for medicine

ToC Category:
Medical Optics and Biotechnology

History
Original Manuscript: June 4, 2009
Revised Manuscript: July 24, 2009
Manuscript Accepted: August 11, 2009
Published: August 20, 2009

Virtual Issues
Vol. 4, Iss. 10 Virtual Journal for Biomedical Optics

Citation
Tapio Fabritius, Shuichi Makita, Masahiro Miura, Risto Myllylä, and Yoshiaki Yasuno, "Automated segmentation of the macula by optical coherence tomography," Opt. Express 17, 15659-15669 (2009)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-17-18-15659

