Biomedical Optics Express

  • Editor: Joseph A. Izatt
  • Vol. 3, Iss. 2 — Feb. 1, 2012
  • pp: 327–339

Exploratory Dijkstra forest based automatic vessel segmentation: applications in video indirect ophthalmoscopy (VIO)

Rolando Estrada, Carlo Tomasi, Michelle T. Cabrera, David K. Wallace, Sharon F. Freedman, and Sina Farsiu

Biomedical Optics Express, Vol. 3, Issue 2, pp. 327–339 (2012)
http://dx.doi.org/10.1364/BOE.3.000327
Abstract

We present a methodology for extracting the vascular network in the human retina using Dijkstra’s shortest-path algorithm. Our method preserves vessel thickness, requires no manual intervention, and follows vessel branching naturally and efficiently. To test our method, we constructed a retinal video indirect ophthalmoscopy (VIO) image database from pediatric patients and compared the segmentations achieved by our method and state-of-the-art approaches to a human-drawn gold standard. Our experimental results show that our algorithm outperforms prior state-of-the-art methods, for both single VIO frames and automatically generated, large field-of-view enhanced mosaics. We have made the corresponding dataset and source code freely available online.

© 2012 OSA

1. Introduction

Accurate segmentation and evaluation of the anatomical and pathological features of retinal vessels are critical for the diagnosis and study of many ocular diseases, including retinopathy of prematurity (ROP), a disorder of the retinal blood vessels that is a major cause of vision loss in premature neonates [1]. Important features of the disease include increased diameter (dilation) and increased tortuosity (wiggliness) of the retinal blood vessels in the portion of the retina centered on the optic nerve (the posterior pole). Increased dilation and tortuosity of the posterior-pole vessels (termed pre-plus in intermediate and plus in severe cases) is an important indicator of ROP severity [2]. Subjective assessment of plus and pre-plus disease leads to poor agreement between examiners [3]. Manual segmentation of retinal images is not only demanding for experts and excessively time-consuming for clinical use, but also inherently subjective: different annotators often yield different results [4]. To address these difficulties, various approaches for automated segmentation of retinal vessels have been tried, with varying levels of success.

Prior methods can be roughly classified into region-based and path-based methods. Region-based methods [5–13] classify image pixels directly into vessel and non-vessel pixels. Classification relies on local appearance, as measured by the responses of suitable filter banks at various scales and orientations. In unsupervised region-based approaches, these filter responses are combined into a new image, which is then appropriately thresholded to yield the final classification. Methods in this category employ matched filters [5], piecewise thresholding [14], local entropy [15], and quadrature filters [12]. Supervised region-based methods, on the other hand, assemble the filter responses into feature vectors that are fed to a classifier trained on hand-labeled data. Techniques used within this framework include ridge detection [10], Gabor wavelet filtering [9], line operators [8], and moment invariants [13]. Other region-based approaches have used region growing [16], mathematical morphology [17], and multiconcavity modeling [11].

The goal of path-based methods [18–25], on the other hand, is primarily to trace the centerline of individual vessels, rather than to classify every pixel in the image. Many path-based approaches also estimate vessel thickness as they track each branch, generally by determining the width of the cross-section perpendicular to the current path. Prior work on two-dimensional branch extraction has addressed this topological ambiguity semi-automatically by relying on user-supplied points, requiring either a single seed point [24] or a pair of start- and end-points [22]. User-supplied one-point methods generally employ ridge detection based on differential geometry [26], while two-point methods find a path between the points that minimizes a cost measure designed to penalize paths that stray from the middle of a vessel. Several of these methods rely on front-propagation algorithms, such as the fast marching method [27]. In contrast, as described in Section 2, our tracking methodology forgoes the need for external seed points by being robust to a particular tracker’s initial position.

In this paper, we propose a hybrid method that extends the path-based methodology into a region-based segmentation scheme for detecting retinal vessels. Our complete approach works in two stages, as illustrated in Fig. 1. The first stage pre-processes the input image to remove both lens and motion artifacts, and to construct a high-contrast vessel map. The second stage builds a forest of tree-like vessel regions through a sequence of exploration waves on the vessel map: the most vessel-like pixel s0 in the image is used as the starting point for an exploration wave that searches for the best tree-like vessel region around s0 by means of the single-source, multi-destination version of Dijkstra’s shortest-path algorithm [28]. This exploration returns an entire tree region for part of the vessel system; that is, it handles branching naturally and efficiently, and preserves vessel thickness. When this exploration ends, a new exploration begins at the best remaining starting point s1 in the unexplored part of the image, which yields a new vessel tree region. Our method stops constructing new regions when the best unexplored starting point is no longer likely to be part of the vessel system. Unlike existing single-source, single-destination vessel analysis methods [20–23], our single-source, multi-destination approach automatically explores the complete vasculature in a retinal image and requires no user intervention whatsoever.

Fig. 1 Proposed VIO vessel segmentation: In the first stage, VIO images are pre-processed with directional local-contrast filters (DLCF) and LoG-Gabor filters to eliminate artifacts and increase contrast. In the second stage, the best, unvisited vessel pixel in the image is repeatedly chosen as a starting point for a dynamic-programming exploration of the unvisited part of the image. The result of each exploration yields a new tree in the growing forest of vessels. Forest growth stops when the best, unvisited vessel pixel is worse than a predefined threshold.

Furthermore, the initial single-frame image enhancement step can optionally be replaced by a multi-frame image mosaicing technique. We have recently developed such a technique to combine several low-quality VIO frames into a high-quality, large field-of-view (FOV) composite [29]. As our results in Section 3 show, our approach obtains superior segmentation results on both types of VIO images (raw and composite) compared to current state-of-the-art segmentation methods.

The rest of this paper is organized as follows: we first detail our automated dynamic-programming segmentation method in Section 2 and then describe our experiments in Section 3. We present the experimental results in Section 4 and discuss their significance and explore future directions in Section 5.

2. Exploratory Dijkstra forest based vessel segmentation method

Thus, we use the single-source, multiple-destination version of Dijkstra’s shortest-path algorithm [28], rather than the single-source, single-destination version used in prior work. In other words, rather than connecting a start point with a destination point, our method explores the image outward from an (automatically selected) source point. This exploratory strategy has two advantages: it eliminates the need to select a destination point manually, and it finds vessels as tree-like image regions, thereby accounting for vessel branching naturally and efficiently.

The computational cost of this change of perspective is trivial, as the only difference between the single-destination and multi-destination algorithms is when they stop: the single-destination algorithm stops when it reaches the designated vertex, while the multi-destination algorithm stops when a target threshold on the path cost has been reached. Both versions of Dijkstra’s algorithm have the same computational complexity of O(|E| + |V| log |V|), where |·| denotes the cardinality of a set. This complexity is achievable with a heap-based priority-queue implementation [31].

2.1. Arcs and Arc Costs

We view each VIO color image as an X × Y × 3 matrix I. Prior to processing, we first remove the image’s artifacts by using directional local contrast filtering (DLCF), as defined in [29]. Figure 2 illustrates the effect of this image enhancement step. A pixel position in I is given by a two-dimensional vector of integers, p = [x, y]^T. The value at each pixel position is a three-dimensional vector I(p) of red, green, and blue values normalized between 0 and 1.

Fig. 2 DLCF exudate removal: (a) An image from the STARE dataset [14]. (b) The image after DLCF. (c) Matched filtering [5] applied to (a). (d) Matched filtering applied to (b). The non-vascular filter responses around the exudates have been eliminated in (d) without affecting the true vessel responses.

We define two features that determine the arc costs at each pixel p: the green-channel intensity Ig(p) and the inverted response F(p) to a Laplacian-of-Gaussian filter followed by a Gabor filter bank, or Laplace-Gabor filtering, as detailed in [29]. The vessel map F maximizes the discriminability of vessels, as illustrated in Fig. 3.

Fig. 3 LoG-Gabor filtering: (a) A sample VIO frame. (b) Frame after LoG-Gabor filtering. (c) A sample mosaic. (d) Mosaic after LoG-Gabor filtering. The isotropic LoG filtering enhances vessel contrast, while the anisotropic Gabor wavelets selectively enhance elongated structures.

To apply Dijkstra’s algorithm to I, we define a weighted lattice graph on the set V = {p} of all pixel locations in the image. There is an arc e = (v, v′) in the arc set E of this directed graph G = (V, E) for every ordered pair of 8-neighbors, that is, whenever
max(|x − x′|, |y − y′|) = 1.
(1)
A non-negative cost is defined on each arc, with the intent that arcs inside and along vessels cost less than arcs that have one or both endpoints outside any vessel. Specifically, we define the cost of arc e as the following convex linear combination:
c(e) = ∑_{m=1}^{4} w_m exp(α z_m(e)),  where  ∑_{m=1}^{4} w_m = 1,
(2)
and z_m(e) denotes the m-th element of the following four-dimensional feature vector:
z(e) = z(v, v′) = [Ig(v′), |Ig(v) − Ig(v′)|, F(v′), |F(v) − F(v′)|]^T.
(3)
Therefore, a low-cost arc is one whose destination point v′ is dark (Ig(v′) ≪ 1) and has a low inverted Laplace-Gabor response (F(v′) ≪ 1), and whose two endpoints are similar in both brightness (|Ig(v) − Ig(v′)| ≪ 1) and Laplace-Gabor response (|F(v) − F(v′)| ≪ 1).

The exponential in Eq. 2 provides a non-linear scaling of the arc’s features that emphasizes the divide between vascular and non-vascular feature values, and the scalar α controls the growth rate of the exponential term. In our experiments, we set the values of both α and the coefficients wm based on training images from our dataset, as explained in Section 3.
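For concreteness, Eqs. 2 and 3 can be evaluated as sketched below. This is an illustrative sketch, not our released code: `Ig` and `F` are assumed to be mappings from pixel coordinates to normalized feature values, and the weights and α stand in for the trained values described in Section 3.

```python
import math

def arc_features(Ig, F, v, v_prime):
    """Feature vector z(e) of Eq. 3 for the arc e = (v, v')."""
    return [
        Ig[v_prime],               # destination brightness (dark inside vessels)
        abs(Ig[v] - Ig[v_prime]),  # brightness similarity of the two endpoints
        F[v_prime],                # inverted Laplace-Gabor response at v'
        abs(F[v] - F[v_prime]),    # response similarity of the two endpoints
    ]

def arc_cost(z, w, alpha):
    """Arc cost of Eq. 2: c(e) = sum_m w_m * exp(alpha * z_m(e)).
    This is a convex combination, so the weights must sum to one."""
    assert abs(sum(w) - 1.0) < 1e-9, "weights must form a convex combination"
    return sum(wm * math.exp(alpha * zm) for wm, zm in zip(w, z))
```

An arc deep inside a dark, high-response vessel (all features near zero) then costs about 1, while a background arc costs exponentially more, which is what steers the exploration along vessels.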

2.2. Path Costs

A path γ between any two nodes v, v′ in V is composed of a sequence of neighboring lattice locations:
γ(v, v′) = (v = v_1, v_2, …, v_k = v′),
(4)
subject to the constraint that (v_i, v_{i+1}) ∈ E for i ∈ [1, k − 1]. In short, γ is a curve discretized as a sequence of neighboring pixels. The cost of γ is defined as the sum of the costs of its arcs:
c(γ) = ∑_{i=1}^{k−1} c(v_i, v_{i+1}).
(5)
The additive nature of this definition allows splitting a path’s total cost into disjoint sub-path costs at any point along γ:
c(γ(v, v′)) = c(γ(v, v_i)) + c(γ(v_i, v′))  for any i ∈ [2, k − 1],
(6)
with which we can efficiently determine the minimum-cost path between any two nodes v and v′. That is, we use Dijkstra’s algorithm to compute:
γ̃(v, v′) = argmin_{γ ∈ Γ(v, v′)} c(γ),
(7)
where Γ(v, v′) is the set of all possible paths between the two nodes.

2.3. Exploratory Dijkstra Segmentation

Dijkstra’s minimum-cost algorithm solves Eq. 7 for any graph with non-negative arc costs [28]. More generally, it finds a minimum-cost path γ̃(s, v) between a single source vertex s and (potentially) every other vertex v in the graph.

As discussed earlier, instead of simply connecting user-defined points, we employ an exploratory strategy by using the single-source, multiple-destination version of Dijkstra’s method. Starting from a single position s on a major vessel, this strategy enables us to segment that vessel and all the less prominent vessels that branch out of it, without setting any destination point. Instead, we set an exploration threshold τ on the cost of any path, and find all the minimum-cost paths γ̃ from s in G such that c(γ̃) < τ. Algorithm 1 outlines our exploratory Dijkstra vessel segmentation method.

Algorithm 1. Exploratory Dijkstra vessel segmentation: starting from a single pixel, the algorithm progressively explores the rest of the image such that every unvisited pixel has a higher minimum path cost than every visited pixel. The algorithm keeps adding pixels until a cost boundary is reached.

With the lattice arc costs defined in Eq. 2, the exploratory Dijkstra algorithm preferentially visits vascular pixels before exploring non-vascular ones, since the cost to reach the latter is generally much higher. When it stops, it will have visited the Dijkstra region:
Rτ(s) = {v | c(γ̃(s, v)) < τ}.
(8)
The segmentation’s accuracy thus depends on the value of τ. However, our choice of τ is made less sensitive by the exponential in Eq. 2, which increases the separation between the vascular and non-vascular pixel classes. This lower sensitivity reduces both the problem of “leakage”, in which a segmentation goes beyond the correct vessel boundary, and the problem of stopping too soon. For our experiments, we set τ based on the training-set images, as explained in Section 3.
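As a sketch of this exploratory strategy, the region of Eq. 8 can be computed with a heap-based Dijkstra expansion that stops at the cost boundary τ. The function and parameter names below are illustrative rather than taken from our implementation.

```python
import heapq

def dijkstra_region(source, neighbors, cost, tau):
    """Exploratory (single-source, multi-destination) Dijkstra sketch.
    Settles pixels in order of increasing minimum path cost from `source`
    and stops once the cheapest unsettled pixel reaches tau, so the result
    is the Dijkstra region of Eq. 8, mapping each reached pixel to its
    minimum path cost.  `neighbors(v)` yields the 8-neighbors of pixel v;
    `cost(v, u)` is the non-negative arc cost of Eq. 2."""
    dist = {source: 0.0}
    region = {}
    frontier = [(0.0, source)]
    while frontier:
        d, v = heapq.heappop(frontier)
        if d >= tau:        # every remaining pixel is at least this expensive
            break
        if v in region:     # stale queue entry: v was settled more cheaply
            continue
        region[v] = d
        for u in neighbors(v):
            nd = d + cost(v, u)
            if nd < dist.get(u, float("inf")):
                dist[u] = nd
                heapq.heappush(frontier, (nd, u))
    return region
```

Because arc costs are exponentially higher off the vasculature, the frontier hugs the vessels until the boundary τ is hit.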

2.4. Dijkstra Forest

The exploratory Dijkstra method outlined in Subsection 2.3 efficiently segments a Dijkstra region Rτ(s) given a single source vertex s. As Fig. 3(d) exemplifies, however, the retinal vasculature extends from more than one primary vessel. Furthermore, the low quality and blur of VIO frames can obscure large sections of the vascular network and break up the vasculature into several disconnected regions. Therefore, in order to better segment all visible vessels, we extend the single-source method to multiple sources.

To this end, we first generate the initial region R0 = Rτ(s0) from a first source point s0, as described above. We then select a new source vertex s1 from those vertices in V that are not part of R0, and generate a new region R1 from it, such that R0 ∩ R1 = ∅. By repeating this process, we form a Dijkstra forest:
R = {R0, R1, …, RK},  where F(s0) ≤ F(s1) ≤ … ≤ F(sK) ≤ ψ.
(9)

Here, ψ is a threshold on the highest allowable inverted Laplace-Gabor response at a source pixel. We stop adding new regions to the forest when the minimum response outside R exceeds ψ. Algorithm 2 outlines the complete Dijkstra forest computation. As with τ, we determine ψ in our experiments using the training set of images in our database (Section 3). In our experiments, each image requires around 10 source vertices.

Algorithm 2. Dijkstra forest vessel segmentation: The algorithm adds disjoint Dijkstra regions until the minimum inverted Laplace-Gabor response at the source pixel exceeds ψ. The operation V \ R represents {xV | xR}.
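The forest construction above can be sketched as the following loop. Here `grow_region` stands in for any single-source exploration such as the exploratory Dijkstra of Subsection 2.3, and all names are illustrative.

```python
def dijkstra_forest(pixels, F, grow_region, psi):
    """Sketch of the Dijkstra-forest loop (Algorithm 2, Eq. 9).
    `F` maps each pixel to its inverted Laplace-Gabor response (low values
    are vessel-like); `grow_region(s, unvisited)` returns the set of pixels
    reached from seed s within the unvisited part of the image."""
    unvisited = set(pixels)
    forest = []
    while unvisited:
        s = min(unvisited, key=lambda p: F[p])  # best remaining source pixel
        if F[s] > psi:                          # no vessel-like pixel left: stop
            break
        region = set(grow_region(s, unvisited)) | {s}
        forest.append(region)
        unvisited -= region                     # keep regions disjoint (V \ R)
    return forest
```

Each iteration removes an entire tree region from the unvisited set, so later seeds can only start in parts of the image not yet claimed by earlier trees.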

3. Experiments

To validate the effectiveness of our proposed segmentation method, we collected a new VIO retinal vessel dataset from pediatric patients and manually segmented the corresponding vascular system to produce the associated ground truth. In this section, we outline the dataset construction process and our methodology for comparing the various segmentation methods to the ground truth.

3.1. Benchmark dataset

Existing benchmark retinal vessel segmentation datasets, such as the DRIVE [32], STARE [14], and REVIEW [33] databases, do not include VIO images. The relatively lower quality of and artifacts in VIO images present a number of unique challenges for automated analysis methods; thus, there is a need for a benchmark VIO dataset. To address this issue, we constructed a thirty-two image database of VIO images, the Vessel Extraction in Video Indirect Ophthalmoscopy (VEVIO) dataset. VEVIO consists of sixteen manually selected frames and sixteen corresponding enhanced large-FOV mosaics from sixteen different premature infants. All images are of each patient’s right eye. Figure 4 showcases some of the frames and mosaics in the dataset. Four steps were needed to construct the VEVIO dataset: video recording, manual frame selection, automatic mosaicing, and manual vessel segmentation.

Fig. 4 VEVIO images: Two pairs of manually selected frames (a), (c) and two automatically generated mosaics (b), (d). Although each pair was obtained from the same video, the manual frames were not used to generate the mosaics. The mosaics were constructed using selected source frames as described in [29].

3.1.1. VIO recording

3.1.2. Manually selected frames

During the recording of the bedside examination, the assistant viewed a real-time feed of the video being recorded and manually screen-captured a number of frames, using Keeler’s recording software, when she considered that the video feed was well-centered and in focus. One of the authors (MTC) later examined each set of manually captured frames and selected the highest quality image of each right eye.

3.1.3. Automatic mosaics

To generate the corresponding sixteen mosaics, we applied our automatic mosaicing pipeline [29] to each video. The set of frames suitable for mosaicing into a single image was selected automatically by our method and did not rely on the manually captured frames. From the thousands of frames in each video, our method retained the twenty frames with the highest frame-quality scores. Each mosaic was constructed from five of those twenty frames. While it is possible to construct each mosaic directly from the five highest-scoring frames, we manually selected the final five frames to ensure that the mosaics had the widest possible field of view.

3.1.4. Manual vessel segmentation

In order to provide a quantitative assessment of the various automated methods’ performance, we produced a gold-standard segmentation by manually tracing all the visible retinal vessels in each of the thirty-two VIO images. MTC, a practicing ophthalmologist, traced each image in Adobe Photoshop CS3 (Adobe Systems Inc., San Jose, CA) using a Wacom Intuos3 graphics tablet (Wacom Co., Ltd., Kazo-shi, Saitama, Japan). This tablet uses a pressure-sensitive pen that mimics a real brush, allowing the user to dynamically alter the thickness of a pen stroke. The set of vessel tracings for each VIO image was then saved as a separate binary image mask.

3.2. Comparison to other methods

We divided the VEVIO dataset into a training set of ten images and a test set of twenty-two images. Each set included frame/mosaic pairs taken from the same videos, so that there was no overlap between the training and test patients. In order to compare our method to existing methods fairly, we contacted a large number of research groups who had developed methods for retinal vessel segmentation. The results presented here were all obtained using the source code of the groups that kindly made their methods available to us.

In this work, we were able to test both supervised and unsupervised state-of-the-art approaches. We obtained source code for the unsupervised methods of Chaudhuri et al. (matched filters) [5] and Chanwimaluang and Fan (local entropy) [15]. We also obtained code for the supervised classification based on Gabor responses of Soares et al. [9]. For the latter, we tested two types of classifiers: Gaussian mixture models (GMM) and K-nearest neighbors (KNN).

For the supervised methods, we trained the different classifiers on the training data using the learning code made available by Soares et al. [9]. For the unsupervised methods, we optimized their parameters by exhaustively determining the values that resulted in the best possible F-measure on the training set; we then kept the parameters fixed for the testing stage. The optimal thresholds for each method are summarized in Table 4. We tested each existing method on the manually selected frames in two ways: (1) using the raw frames directly captured from the video, and (2) using the frames after DLCF pre-processing. The raw frames capture how existing methods fare on VIO data as is, while the pre-processed images allowed us to gauge how our Dijkstra forest segmentation itself compares to other methods on the same source data.
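The exhaustive tuning step can be sketched as a grid search over candidate thresholds that maximizes the mean training-set F-measure. The helper names below are hypothetical, and a real sweep would cover every method parameter, not only a single threshold.

```python
def f_measure(pred, truth):
    """F-measure (harmonic mean of precision and recall) over parallel
    sequences of 0/1 pixel labels, with 1 denoting vessel."""
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, truth))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, truth))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def best_threshold(score_maps, truths, candidates):
    """Exhaustively pick the threshold maximizing the mean F-measure over
    the training images; `score_maps` hold each image's soft responses."""
    def mean_f(th):
        preds = [[1 if s >= th else 0 for s in sm] for sm in score_maps]
        return sum(f_measure(p, t) for p, t in zip(preds, truths)) / len(truths)
    return max(candidates, key=mean_f)
```

The selected threshold is then frozen before any test image is touched, so the test-set scores reflect generalization rather than tuning.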

4. Results

Table 1. Segmentation results on the test set *

Table 2. Segmentation results on the single (not mosaiced) test frames *

Table 3. Segmentation results on the test mosaics *

Each table includes the mean F-measure, Cohen’s Kappa [34], accuracy (Appendix A), and area under the ROC curve (Az) for each method, with the corresponding standard deviation in parentheses. For each image, each metric was calculated inside a region-of-interest (ROI) that only includes the image’s retinal pixels. We obtained each image’s ROI by applying our hue masking method outlined in [29]. In short, hue masking retains only those pixels that match the color profile of the current retina. Figure 5 illustrates the ROI mask for a particular mosaic.
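For reference, Cohen’s Kappa for a binary segmentation can be computed from the same pixel tallies used throughout this section. The sketch below is an illustrative Python implementation (the function name is hypothetical, not part of the released code), assuming boolean masks already restricted to the ROI:

```python
import numpy as np

def cohens_kappa(auto, truth):
    """Cohen's kappa between two binary masks (e.g., automatic vs.
    manual segmentation), computed over all supplied pixels."""
    a, t = auto.ravel(), truth.ravel()
    n = a.size
    tp = np.sum(a & t); tn = np.sum(~a & ~t)
    fp = np.sum(a & ~t); fn = np.sum(~a & t)
    p_obs = (tp + tn) / n  # observed agreement (i.e., accuracy)
    # agreement expected by chance, from the two label marginals
    p_exp = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    return (p_obs - p_exp) / (1 - p_exp)
```

Kappa is 1 for perfect agreement and 0 when agreement is no better than chance, which makes it more informative than raw accuracy under class imbalance.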

Fig. 5 VEVIO ROI: (a) The original mosaic. (b) The binary mask outlining the ROI for the mosaic in (a). (c) The corresponding manual gold standard. Only pixels that appear white in (b) are taken into account for the metrics tallied in our results.

Each metric was determined on a pixel-by-pixel basis. For a given automatic segmentation, a pixel is considered a true positive if both it and the matching pixel in the ground truth image are ones. If both are zero, it corresponds to a true negative. A mismatch in which the automatic segmentation produced a one and the ground truth had a zero is a false positive. The converse mismatch is a false negative.
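The per-pixel tallies just described can be sketched in a few lines of Python (the function name `confusion_in_roi` is illustrative, not part of the released code):

```python
import numpy as np

def confusion_in_roi(auto, truth, roi):
    """Tally true/false positives and negatives between an automatic
    segmentation and the ground truth, restricted to ROI pixels.
    All arguments are boolean arrays of identical shape."""
    a, t = auto[roi], truth[roi]
    tp = int(np.sum(a & t))    # both masks are one
    tn = int(np.sum(~a & ~t))  # both masks are zero
    fp = int(np.sum(a & ~t))   # automatic one, ground truth zero
    fn = int(np.sum(~a & t))   # automatic zero, ground truth one
    return tp, tn, fp, fn
```

All four metrics in the tables can be derived from these four counts.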

Each of the four metrics captures some form of similarity between a method’s output and the corresponding ground truth. The retinal vessel segmentation literature has traditionally favored accuracy and area under the ROC curve, Az, as the primary metrics [5–11, 13, 14, 32]. While Az is an adequate measure of classifier robustness, as we argue in Appendix A, we believe the F-measure is a much more appropriate measure than accuracy for analyzing segmentation results in this type of data. Due to the very low prior probability of a pixel being part of a vessel, methods that segment only a small fraction of each image can still obtain competitive accuracy scores. The F-measure, on the other hand, provides a ratio-independent summary of the overlap between two segmentations’ pixel labels. Therefore, the approach of labeling very few pixels as vascular will yield a very low F-measure score due to the large number of false negatives.

We applied a Wilcoxon signed-rank test between our proposed method’s F-measure distribution and the F-measures of every other method [35]. Methods for which the difference was statistically significant (p < 0.05) are marked with an * in each table.

5. Discussion

As Tables 1, 2, and 3 show, our proposed method compares favorably to existing supervised and unsupervised methods. Regardless of metric, our method consistently outperformed existing state-of-the-art approaches in our experiments by better balancing the likelihood of false positives and false negatives. In contrast, Fig. 6 illustrates how a method such as the GMM classifier has good recall but poor precision, while a more conservative method such as the KNN classifier has better precision but worse recall. In the first case, the segmentation includes too many non-vascular pixels, while the latter segmentation misses a significant portion of the vasculature. Our method’s connectivity constraints allow us to strike a good balance between these two objectives by better disambiguating between similarly valued pixels. In other words, our method is more likely to label a pixel as vascular if it can be directly connected to a large vascular region than if it is isolated, since the latter case is more indicative of noise than of an actual vessel.

Fig. 6 Vessel segmentation on a mosaic: (a) original image, (b) manual segmentation, (c) Dijkstra forest, (d) matched filters, (e) local entropy, (f) GMM classifier, (g) KNN classifier.

Finally, it is also worth noting how DLCF pre-processing has a sizable impact on the segmentation results of existing methods. All state-of-the-art methods performed significantly better on pre-processed frames than raw frames. As an extreme example, note in Table 2 the four-fold improvement in the F-measure of the local entropy method when using pre-processed frames.

In the future, we wish to expand our VEVIO database with more images from more patients. The presented experimental results highlight the challenges that VIO data present for vessel segmentation methods. The F-measures reported in this paper indicate significant room for further improvement. To encourage further research in this area, we have made the VEVIO dataset and the MATLAB code that we have developed for this project publicly available at http://www.duke.edu/~sf59/Estrada_BOE_2012.htm

Appendix A. F-measure vs. accuracy

Traditionally, accuracy has been one of the key metrics for evaluating vessel segmentation results. For a binary classification, this metric is defined as:

$$\mathrm{accuracy} = \frac{tp + tn}{tp + tn + fp + fn} \tag{A.1}$$

where $tp$ and $fp$ indicate true and false positives, respectively, while $tn$ and $fn$ tally true and false negatives. The unbiased F-measure, on the other hand, is given by:

$$F_1 = \frac{2 \cdot \mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}} \tag{A.2}$$

where precision and recall are defined as:

$$\mathrm{precision} = \frac{tp}{tp + fp}, \qquad \mathrm{recall} = \frac{tp}{tp + fn}. \tag{A.3}$$

Accuracy becomes less informative when one of the two classes is far more likely than the other, as is the case for vascular vs. non-vascular pixels. On average, vascular pixels comprise only about 5–10% of an image. This means that a classifier that labels all pixels as non-vascular can already boast a 90–95% accuracy. The F-measure, on the other hand, provides a better balance between labeling pixels correctly and incorrectly, since it is not affected by class sizes.
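To make this concrete, the following sketch (Python, with illustrative synthetic numbers: a random ground truth in which roughly 5% of pixels are "vascular") shows the degenerate all-negative classifier scoring high accuracy but zero F-measure:

```python
import numpy as np

# Synthetic ground truth: roughly 5% of pixels are "vascular".
rng = np.random.default_rng(0)
truth = rng.random((100, 100)) < 0.05

# Degenerate classifier: label every pixel non-vascular.
pred = np.zeros_like(truth)

tp = np.sum(pred & truth); tn = np.sum(~pred & ~truth)
fp = np.sum(pred & ~truth); fn = np.sum(~pred & truth)

accuracy = (tp + tn) / truth.size                 # high despite segmenting nothing
f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0   # zero: every vessel pixel missed
```

Here accuracy lands near 0.95 while the F-measure is exactly zero, which is why the tables in this paper emphasize the latter.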

Appendix B. Parameter values

Table 4. Parameter values

Acknowledgments

This work was supported in part by the Knights Templar Eye Foundation, Inc. pediatric ophthalmology research grant, Research to Prevent Blindness 2011 Duke’s Unrestricted Grant award, and a fellowship from the Duke University Center for Theoretical & Mathematical Sciences. We would like to thank Prof. Yuliya Lokhnygina for her invaluable help with the biostatistical analysis. We would also like to thank the research groups that kindly provided their source code to us, especially Dr. João V. B. Soares.

References and links

1. W. Tasman, A. Patz, J. A. McNamara, R. S. Kaiser, M. T. Trese, and B. T. Smith, “Retinopathy of prematurity: The life of a lifetime disease,” Am. J. Ophthalmol. 141, 167–174 (2006). [CrossRef] [PubMed]
2. G. A. Gole, A. L. Ells, X. Katz, G. Holmstrom, A. R. Fielder, A. Capone Jr, J. T. Flynn, W. G. Good, J. M. Holmes, J. A. McNamara, E. A. Palmer, G. E. Quinn, M. J. Shapiro, M. G. J. Trese, and D. K. Wallace, “The international classification of retinopathy of prematurity revisited,” Arch. Ophthalmol. 123, 991–999 (2011).
3. D. K. Wallace, G. E. Quinn, S. F. Freedman, and M. F. Chiang, “Agreement among pediatric ophthalmologists in diagnosing plus and pre-plus disease in retinopathy of prematurity,” J. Am. Assoc. Pediatric Ophthalmol. Strabismus 12, 352–356 (2008). [CrossRef]
4. S. J. Chiu, X. T. Li, P. Nicholas, C. A. Toth, J. A. Izatt, and S. Farsiu, “Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation,” Opt. Express 18, 19413–19428 (2010). [CrossRef] [PubMed]
5. S. Chaudhuri, S. Chatterjee, N. Katz, M. Nelson, and M. Goldbaum, “Detection of blood vessels in retinal images using two-dimensional matched filters,” IEEE Trans. Med. Imag. 8, 263–269 (1989). [CrossRef]
6. C. Kirbas and F. Quek, “A review of vessel extraction techniques and algorithms,” ACM Comput. Surv. 36, 81–121 (2004). [CrossRef]
7. Q. Li, J. You, L. Zhang, and P. Bhattacharya, “Automated retinal vessel segmentation using Gabor filters and scale multiplication,” in Proceedings of System, Man and Cybernetics (IEEE, 2006), pp. 3521–3527.
8. E. Ricci and R. Perfetti, “Retinal blood vessel segmentation using line operators and support vector classification,” IEEE Trans. Med. Imag. 26, 1357–1365 (2007). [CrossRef]
9. J. Soares, J. Leandro, R. Cesar Jr, H. Jelinek, and M. Cree, “Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification,” IEEE Trans. Med. Imag. 25, 1214–1222 (2006). [CrossRef]
10. J. Staal, M. Abràmoff, M. Niemeijer, M. Viergever, and B. van Ginneken, “Ridge-based vessel segmentation in color images of the retina,” IEEE Trans. Med. Imag. 23, 501–509 (2004). [CrossRef]
11. B. Lam, Y. Gao, and A. Liew, “General retinal vessel segmentation using regularization-based multiconcavity modeling,” IEEE Trans. Med. Imag. 29, 1369–1381 (2010). [CrossRef]
12. G. Lathen, J. Jonasson, and M. Borga, “Blood vessel segmentation using multi-scale quadrature filtering,” Pattern Recogn. Lett. 31, 762–767 (2010). [CrossRef]
13. D. Marín, A. Aquino, M. Gegúndez-Arias, and J. Bravo, “A new supervised method for blood vessel segmentation in retinal images by using gray-level and moment invariants-based features,” IEEE Trans. Med. Imag. 30, 146–158 (2011). [CrossRef]
14. A. Hoover, V. Kouznetsova, and M. Goldbaum, “Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response,” IEEE Trans. Med. Imag. 19, 203–210 (2002). [CrossRef]
15. T. Chanwimaluang and G. Fan, “An efficient blood vessel detection algorithm for retinal images using local entropy thresholding,” in Proceedings of the International Symposium on Circuits and Systems (IEEE, 2003), pp. 21–24.
16. M. Martínez-Pérez, A. Hughes, A. Stanton, S. Thom, A. Bharath, and K. Parker, “Retinal blood vessel segmentation by means of scale-space analysis and region growing,” in Proceedings of Medical Image Computing and Computer-Assisted Intervention (Springer, 1999), pp. 90–97. [CrossRef]
17. F. Zana and J. Klein, “Segmentation of vessel-like patterns using mathematical morphology and curvature evaluation,” IEEE Trans. Image Process. 10, 1010–1019 (2002). [CrossRef]
18. L. Pedersen, M. Grunkin, B. Ersboll, K. Madsen, M. Larsen, N. Christoffersen, and U. Skands, “Quantitative measurement of changes in retinal vessel diameter in ocular fundus images,” Pattern Recogn. Lett. 21, 1215–1223 (2000). [CrossRef]
19. M. Cree, D. Cornforth, and H. F. Jelinek, “Vessel segmentation and tracking using a two-dimensional model,” in Proceedings of Image and Vision Computing New Zealand (IVCNZ, 2005), pp. 345–350.
20. F. Benmansour and L. Cohen, “Tubular structure segmentation based on minimal path method and anisotropic enhancement,” Int. J. Comput. Vision 92, 192–210 (2011). [CrossRef]
21. H. Li and A. Yezzi, “Vessels as 4-D curves: Global minimal 4-D paths to extract 3-D tubular surfaces and centerlines,” IEEE Trans. Med. Imag. 26, 1213–1223 (2007). [CrossRef]
22. M. Pechaud, R. Keriven, and G. Peyre, “Extraction of tubular structures over an orientation domain,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE Computer Society, 2009), pp. 336–342.
23. O. Wink, W. Niessen, and M. Viergever, “Multiscale vessel tracking,” IEEE Trans. Med. Imag. 23, 130–133 (2004). [CrossRef]
24. S. Ahmad, D. Wallace, S. Freedman, and Z. Zhao, “Computer-assisted assessment of plus disease in retinopathy of prematurity using video indirect ophthalmoscopy images,” Retina 28, 1458–1462 (2008). [CrossRef] [PubMed]
25. A. Kiely, D. Wallace, S. Freedman, and Z. Zhao, “Computer-assisted measurement of retinal vascular width and tortuosity in retinopathy of prematurity,” Arch. Ophthalmol. 128, 847–852 (2010). [CrossRef] [PubMed]
26. T. Lindeberg, “Edge detection and ridge detection with automatic scale selection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE Computer Society, 1996), pp. 465–470.
27. J. Sethian, Level Set Methods and Fast Marching Methods (Cambridge University Press, 1999).
28. E. Dijkstra, “A note on two problems in connexion with graphs,” Numer. Math. 1, 269–271 (1959). [CrossRef]
29. R. Estrada, C. Tomasi, M. Cabrera, D. Wallace, S. Freedman, and S. Farsiu, “Enhanced video indirect ophthalmoscopy (VIO) via robust mosaicing,” Biomed. Opt. Express 2, 2871–2887 (2011). [CrossRef] [PubMed]
30. R. Bellman, Dynamic Programming (Dover, 2003).
31. T. Cormen, C. Leiserson, R. Rivest, and C. Stein, Introduction to Algorithms (MIT Press, 2001).
32. M. Niemeijer, J. Staal, B. van Ginneken, M. Loog, and M. Abramoff, “Comparative study of retinal vessel segmentation methods on a new publicly available database,” Proc. SPIE 5370, 648–656 (2004). [CrossRef]
33. B. Al-Diri, A. Hunter, D. Steel, M. Habib, T. Hudaib, and S. Berry, “REVIEW - A reference data set for retinal vessel profiles,” in Proceedings of the IEEE Conference on Engineering in Medicine and Biology Society (IEEE, 2008), pp. 2262–2265.
34. J. Cohen, “A coefficient of agreement for nominal scales,” Educ. Psychol. Meas. 20, 37–46 (1960). [CrossRef]
35. J. Gibbons and S. Chakraborti, Nonparametric Statistical Inference (CRC Press, 2003).

OCIS Codes
(100.0100) Image processing : Image processing
(100.2960) Image processing : Image analysis
(170.4470) Medical optics and biotechnology : Ophthalmology

ToC Category:
Image Processing

History
Original Manuscript: November 30, 2011
Revised Manuscript: January 2, 2012
Manuscript Accepted: January 2, 2012
Published: January 18, 2012

Citation
Rolando Estrada, Carlo Tomasi, Michelle T. Cabrera, David K. Wallace, Sharon F. Freedman, and Sina Farsiu, "Exploratory Dijkstra forest based automatic vessel segmentation: applications in video indirect ophthalmoscopy (VIO)," Biomed. Opt. Express 3, 327-339 (2012)
http://www.opticsinfobase.org/boe/abstract.cfm?URI=boe-3-2-327


