## Robust segmentation of intraretinal layers in the normal human fovea using a novel statistical model based on texture and shape analysis

Optics Express, Vol. 18, Issue 14, pp. 14730-14744 (2010)

http://dx.doi.org/10.1364/OE.18.014730


### Abstract

A novel statistical model based on texture and shape is developed for fully automatic intraretinal layer segmentation of normal retinal tomograms obtained with a commercial 800 nm optical coherence tomography (OCT) system. While existing algorithms often fail dramatically in the presence of strong speckle noise, non-optimal imaging conditions, shadows and other artefacts, the accuracy of the novel algorithm deteriorates only slowly as the difficulty of the segmentation task progressively increases. Evaluation against a large set of manual segmentations shows unprecedented robustness, even in the presence of additional strong speckle noise with dynamic range tested down to 12 dB, enabling segmentation of almost all intraretinal layers in cases previously inaccessible to existing algorithms. For the first time, an error measure is computed from a large, representative, manually segmented data set (466 B-scans from 17 eyes, each segmented twice by different operators) and compared to the automatic segmentation, which differs from the inter-observer variability by only 2.6%.

© 2010 OSA

## 1. Introduction


## 2. Materials and methods

### 2.1 The algorithm overview

### 2.2 Pre-processing


### 2.3 Model building


Given *m* training images and *n* layers, for each layer we obtain one vector of offsets **v** per image of width *w*; stacked together for all the layers, these define **x**. All of the manual segmentations then comprise the matrix **X**, Eq. (1). The shape features used are sparsely sampled distances of the boundaries from the top boundary (ILM). The texture features currently used are simple, although additional features could easily be included to further increase performance in the presence of vessels, large shadows and pathological tissue; they are the mean of all the pixels in each layer of the original image, the standard deviation and mean of all the pixels in each layer of the median-filtered image, and multiple-scale edges (from a pyramid of Gaussian-filtered versions of the image) sampled along the boundaries. In practice, for an image of width 512, we sampled each boundary at 26 positions. This gives 26 spatial features and 4 texture features per layer; for eight layers, we obtain 208 spatial and 32 texture features.
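To make the feature layout concrete, the following is a minimal Python sketch of how such a per-scan feature vector might be assembled. The function names and layout are illustrative assumptions, a simple box filter stands in for the median filter, and the multi-scale edge features are omitted for brevity; this is not the authors' exact implementation.

```python
import numpy as np

def box_blur(img, k=3):
    """Crude k x k box filter, standing in for the paper's median filter."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for di in range(k):
        for dj in range(k):
            out += padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (k * k)

def extract_features(image, boundaries, n_samples=26):
    """Assemble shape and texture features for one B-scan.

    image      : 2-D array (rows x columns), the B-scan
    boundaries : list of 1-D arrays (ILM first), each giving the row
                 position of one boundary at every column
    """
    width = image.shape[1]
    cols = np.linspace(0, width - 1, n_samples).astype(int)
    ilm = boundaries[0]
    smoothed = box_blur(image)

    shape_feats, texture_feats = [], []
    for top, bottom in zip(boundaries[:-1], boundaries[1:]):
        # shape: sparsely sampled distance of this boundary from the ILM
        shape_feats.append((bottom - ilm)[cols])
        # texture: simple per-layer statistics (4 per layer, as in the text)
        mask = np.zeros(image.shape, dtype=bool)
        for j in range(width):
            mask[int(top[j]):int(bottom[j]), j] = True
        texture_feats.append([image[mask].mean(), image[mask].std(),
                              smoothed[mask].mean(), smoothed[mask].std()])

    return np.concatenate(shape_feats), np.asarray(texture_feats).ravel()
```

With nine boundaries (eight layers) sampled at 26 positions this yields 8 × 26 = 208 shape features and 8 × 4 = 32 texture features, matching the counts quoted above.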

**X** is the original data matrix, as defined in Eq. (1); after the decomposition we can retain only L principal components and thereby project the data into a reduced-dimensionality space to obtain **Y**, Eq. (2). However, rather than PCA we used neural-network-based dimensionality reduction, since it offers nonlinear eigenvectors and can therefore represent nonlinearly distributed data more compactly than the linear representation obtained by PCA [17]. The shape features proved to be nonlinear, so nonlinear dimensionality reduction gave us a more compact representation than PCA. A neural network (NN) is a computational model based on principles found in biological neural networks: an interconnected group of artificial neurons in which each connection carries a weight that modulates the value passed across it. Training adjusts the weights until the network implements a desired function; once training is complete, the network can be applied to data outside the training set. It is useful to note that a special type of (inverse) neural network [18] can reconstruct the data **X** from the parameters **z** (equivalent to the model parameters of the active appearance model [16]).
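For reference, the linear PCA projection of Eqs. (1) and (2) can be sketched in a few lines of numpy; this shows only the linear baseline that the nonlinear neural-network reduction is compared against, not the reduction the paper actually uses.

```python
import numpy as np

def pca_project(X, L):
    """Project the rows of X (one training sample per row) onto the
    first L principal components, giving the reduced representation Y.
    Linear baseline only; the paper replaces this with a nonlinear
    (neural-network) reduction."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data: rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:L]            # L x n_features basis
    Y = Xc @ components.T          # reduced representation, Eq. (2)
    return Y, components, mean

def pca_reconstruct(Y, components, mean):
    """Map reduced coordinates back to the original feature space."""
    return Y @ components + mean
```

If the data truly lie in an L-dimensional linear subspace, the reconstruction is exact; the paper's point is that nonlinearly distributed shape features need fewer nonlinear components than linear ones for the same fidelity.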

Here, **x** is the shape vector (normalized by subtracting the mean shape and rescaling, Eq. (4)) and **g** is the texture vector obtained from an image **I** and the shape vector (it is also normalized), Eq. (5). The shape vector **s** is multiplied by the shape matrix **T**, where *b* is the number of boundaries and *w* is the image width; *T* is defined in Eq. (6).

The optimization searches for the model parameters **t** that are most likely to generate a given vector of texture features **g**. The first term of the objective function is the main measure for evaluating the model fit: the difference between the model texture parameters and the texture parameters extracted from the image regions defined by the model shape parameters. The second term penalizes deviations between the boundaries found by the initial three-boundaries algorithm and those produced by optimizing the statistical model. This is an important novelty compared with the standard AAM, as it helps constrain the optimization to valid solutions. Additionally, we do not start the optimization from the mean of the model; instead, we determine the median distance between the ILM and RPE boundaries found by the adaptive thresholding algorithm, as well as the ratio of the foveal pit distance to the greatest thickness found in the image, pick the closest example from the training set using these values, and use its parameters as the initial model position. This ensures faster and more robust convergence.
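The two-term objective described above can be sketched as follows. The model interface (`synthesize`, `extract_texture`) and the toy model are hypothetical stand-ins, not the authors' implementation; a real model would synthesize boundaries and texture from the statistical shape/texture model.

```python
import numpy as np

class ToyModel:
    """Minimal stand-in exposing the hypothetical interface used below."""
    def synthesize(self, t):
        shape = np.full(4, t, dtype=float)   # flat boundary at height t
        g_model = np.array([2.0 * t])        # texture the model predicts
        return shape, g_model

    def extract_texture(self, image, shape):
        # texture actually measured in the image (here: its global mean)
        return np.array([image.mean()])

def fitting_objective(t, model, image, initial_boundaries, lam=1.0):
    """Two-term objective following the description above (a sketch,
    not the authors' exact function):
      term 1 - mismatch between the model texture and the texture
               extracted from the image regions selected by the
               model's shape parameters
      term 2 - penalty on deviation from the boundaries found by the
               initial three-boundaries algorithm"""
    shape, g_model = model.synthesize(t)
    g_image = model.extract_texture(image, shape)
    texture_term = float(np.sum((g_model - g_image) ** 2))
    boundary_term = float(np.sum((shape - initial_boundaries) ** 2))
    return texture_term + lam * boundary_term
```

The weight `lam` (an assumed parameter) trades off texture fidelity against staying close to the initial boundaries, which is what keeps the search within valid solutions.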

### 2.4 The Mechanical Turk

## 3. Results and discussion

We compute the error for each boundary *i* separately, and from these we compute error measures for an entire B-scan or for an individual layer, Eq. (9). The *w* term provides normalization so that, in the special case when the two boundaries are equally distant from each other along their whole length (for every column *j*), the measure assumes a fixed reference value; *A* is the area between the top (ILM) and bottom (RPE/CH) boundaries. For each layer *k* separately, instead of summing across all boundary errors, only the errors of the two boundaries that define the layer are added and divided by the layer area as given by the automatic segmentation.
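A per-layer error in the spirit of this description can be sketched as below; this is a plain reading of the normalized measure, not the paper's exact Eq. (9).

```python
import numpy as np

def layer_error(auto_top, auto_bottom, man_top, man_bottom):
    """Unsigned per-layer error: the summed absolute column-wise
    differences of the two boundaries that define the layer, normalized
    by the layer area of the automatic segmentation. Each argument is a
    1-D array of row positions, one entry per image column."""
    diff = np.abs(auto_top - man_top) + np.abs(auto_bottom - man_bottom)
    area = float(np.sum(auto_bottom - auto_top))   # layer area in pixels
    return float(np.sum(diff)) / area
```

For identical boundaries the error is zero, and shifting both manual boundaries by one pixel in a layer ten pixels thick gives an error of 0.2, i.e. the measure scales with boundary disagreement relative to layer thickness.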

## 4. Conclusion


## Acknowledgements


**OCIS Codes**

(100.0100) Image processing : Image processing

(170.4500) Medical optics and biotechnology : Optical coherence tomography

(170.4580) Medical optics and biotechnology : Optical diagnostics for medicine

(100.3008) Image processing : Image recognition, algorithms and filters

**ToC Category:**

Medical Optics and Biotechnology

**History**

Original Manuscript: May 14, 2010

Revised Manuscript: June 20, 2010

Manuscript Accepted: June 21, 2010

Published: June 24, 2010

**Virtual Issues**

Vol. 5, Iss. 11 *Virtual Journal for Biomedical Optics*

**Citation**

Vedran Kajić, Boris Považay, Boris Hermann, Bernd Hofer, David Marshall, Paul L. Rosin, and Wolfgang Drexler, "Robust segmentation of intraretinal layers in the normal human fovea using a novel statistical model based on texture and shape analysis," Opt. Express **18**, 14730-14744 (2010)

http://www.opticsinfobase.org/vjbo/abstract.cfm?URI=oe-18-14-14730


### References

- W. Drexler, and J. G. Fujimoto, Optical Coherence Tomography: Technology and Applications (Springer, 2008).
- T. Fabritius, S. Makita, M. Miura, R. Myllylä, and Y. Yasuno, “Automated segmentation of the macula by optical coherence tomography,” Opt. Express 17(18), 15659–15669 (2009). [CrossRef] [PubMed]
- R. J. Zawadzki, S. S. Choi, S. M. Jones, S. S. Oliver, and J. S. Werner, “Adaptive optics-optical coherence tomography: optimizing visualization of microscopic retinal structures in three dimensions,” J. Opt. Soc. Am. A 24(5), 1373 (2007). [CrossRef]
- M. K. Garvin, M. D. Abramoff, R. Kardon, S. R. Russell, X. Wu, and M. Sonka, “Intraretinal Layer Segmentation of Macular Optical Coherence Tomography Images Using Optimal 3-D Graph Search,” IEEE Trans. Med. Imaging 27(10), 1495–1505 (2008). [CrossRef] [PubMed]
- D. Cabrera Fernández, H. M. Salinas, and C. A. Puliafito, “Automated detection of retinal layer structures on optical coherence tomography images,” Opt. Express 13(25), 10200–10216 (2005). [CrossRef] [PubMed]
- M. Mujat, R. Chan, B. Cense, B. Park, C. Joo, T. Akkin, T. Chen, and J. de Boer, “Retinal nerve fiber layer thickness map determined from optical coherence tomography images,” Opt. Express 13(23), 9480–9491 (2005). [CrossRef] [PubMed]
- D. Koozekanani, K. Boyer, and C. Roberts, “Retinal thickness measurements from optical coherence tomography using a Markov boundary model,” IEEE Trans. Med. Imaging 20(9), 900–916 (2001). [CrossRef] [PubMed]
- D. Tolliver, Y. Koutis, H. Ishikawa, J. S. Schuman, and G. L. Miller, “Unassisted Segmentation of Multiple Retinal Layers via Spectral Rounding,” in ARVO (2008).
- A. Mishra, A. Wong, K. Bizheva, and D. A. Clausi, “Intra-retinal layer segmentation in optical coherence tomography images,” Opt. Express 17(26), 23719–23728 (2009). [CrossRef]
- I. W. Selesnick, R. G. Baraniuk, and N. G. Kingsbury, “The Dual-Tree Complex Wavelet Transform,” IEEE Signal Process. Mag. 22(6), 123–151 (2005). [CrossRef]
- A. Mishra, A. Wong, D. A. Clausi, and P. W. Fieguth, “Quasi-random nonlinear scale space,” Pattern Recognit. Lett., in press (corrected proof).
- A. Wong, A. Mishra, K. Bizheva, and D. A. Clausi, “General Bayesian estimation for speckle noise reduction in optical coherence tomography retinal imagery,” Opt. Express 18(8), 8338–8352 (2010). [CrossRef] [PubMed]
- P. Thevenaz and M. Unser, “A pyramid approach to sub-pixel image fusion based on mutual information,” in Proceedings of the International Conference on Image Processing (1996), p. 265.
- C. O. S. Sorzano, P. Thevenaz, and M. Unser, “Elastic registration of biological images using vector-spline regularization,” IEEE Trans. Biomed. Eng. 52(4), 652–663 (2005). [CrossRef]
- A. K. Mishra, P. W. Fieguth, and D. A. Clausi, “Decoupled Active Contour (DAC) for Boundary Detection,” IEEE Trans. Pattern Anal. Mach. Intell., preprint.
- T. F. Cootes, G. J. Edwards, and C. J. Taylor, “Active Appearance Models,” IEEE Trans. Pattern Anal. Mach. Intell. 23(6), 681–685 (2001). [CrossRef]
- M. Scholz, M. Fraunholz, and J. Selbig, “Nonlinear Principal Component Analysis: Neural Network Models and Applications,” in Principal Manifolds for Data Visualization and Dimension Reduction (2007), pp. 44–67.
- M. Scholz, F. Kaplan, C. L. Guy, J. Kopka, and J. Selbig, “Non-linear PCA: a missing data approach,” Bioinformatics 21(20), 3887–3895 (2005). [CrossRef] [PubMed]
- A. A. Efros and W. T. Freeman, “Image quilting for texture synthesis and transfer,” in Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques (ACM, 2001), pp. 341–346.
