## Three-dimensional reconstruction of blood vessels extracted from retinal fundus images

Optics Express, Vol. 20, Issue 10, pp. 11451-11465 (2012)

http://dx.doi.org/10.1364/OE.20.011451


### Abstract

We present a 3D reconstruction of retinal blood vessel trees using two views of fundus images. The problem is addressed using well-known computer vision techniques, comprising: 1) the recovery of the camera-eyeball model parameters by an auto-calibration method, where the camera parameters are found by solving simplified Kruppa equations, based on correspondences obtained by LMedS-optimised correlation between pairs of eight different views; 2) the extraction of blood vessels and skeletons from two fundus images; 3) the matching of corresponding points of the two skeleton trees, which are previously labelled during the analysis of the 2D binary images; and finally, 4) the linear triangulation of matched corresponding points and surface modelling via generalised cylinders, using diameter measurements extracted from the 2D binary images. The method is nearly automatic, and it is tested with 2 sets of 10 retinal fundus images, each set taken from a different subject. Results of 3D vein and artery tree reconstructions are shown.

© 2012 OSA

## 1. Introduction


## 2. System calibration

### 2.1. Fundus camera distortion correction and calibration

The intrinsic parameters of the camera are encoded in the calibration matrix $K$, which is a 3 × 3 matrix having the well-known form:

$$K = \begin{bmatrix} \alpha_u & -\alpha_u\cot\theta & u_0 \\ 0 & \alpha_v/\sin\theta & v_0 \\ 0 & 0 & 1 \end{bmatrix},$$

where $\alpha_u$ and $\alpha_v$ correspond to the focal distances in pixels along the axes of the image, $\theta$ is the angle between the two image axes, and $(u_0, v_0)$ are the coordinates of the principal point. The method we use to find the coordinates of the principal point and the focal distances is that of Kanatani [11], which is briefly described in the following sections.
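As a concrete illustration (not part of the original paper; the numeric values below are placeholders), the calibration matrix above can be assembled directly from its parameters:

```python
import numpy as np

def intrinsic_matrix(alpha_u, alpha_v, theta, u0, v0):
    """Assemble the 3x3 calibration matrix K from the intrinsic
    parameters: focal distances in pixels, axis angle theta, and
    principal point (u0, v0)."""
    return np.array([
        [alpha_u, -alpha_u / np.tan(theta), u0],
        [0.0,      alpha_v / np.sin(theta), v0],
        [0.0,      0.0,                     1.0],
    ])

# Placeholder values; with orthogonal image axes (theta = pi/2)
# the skew term -alpha_u * cot(theta) vanishes.
K = intrinsic_matrix(alpha_u=1100.0, alpha_v=1100.0,
                     theta=np.pi / 2, u0=384.0, v0=288.0)
```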

#### 2.1.1. Principal point

#### 2.1.2. Focal distance

We assume square pixels, so that $\alpha_u = \alpha_v$. Two points $m$ and $m'$ are conjugate if and only if $m \cdot m' = 0$. The homogeneous coordinates of the point $m$ in the image plane can be defined as the vector $[x, y, f]^\top$. In terms of Cartesian coordinates, two points in the image plane with coordinates $(a,b)$ and $(a',b')$ are conjugate if and only if

$$aa' + bb' + f^2 = 0.$$

Since we do not know the value of the focal length, $f$, the homogeneous coordinates do not provide an explicit expression for the equivalent Cartesian coordinates on the image plane. For this reason we express conjugacy in terms of the change of a vector $m$ as the focal length of the system changes. Since the homogeneous coordinates of the point $m$ defined before represent an image point with respect to the focal length $f$, the point $m'$ that represents the same image point with respect to the focal length $f'$ is given by

$$m' = [x, y, f']^\top.$$

If $m$ and $m'$ are the vectors of the vanishing points of two mutually orthogonal space lines, $f$ is the tentative focal length, and $\hat{f}$ is the true focal length, then orthogonality requires $aa' + bb' + \hat{f}^2 = 0$, while $m \cdot m' = aa' + bb' + f^2$. From the last expression it follows that:

$$\hat{f} = \sqrt{f^2 - m \cdot m'}.$$
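The focal-length update from two orthogonal vanishing points can be sketched as follows (an illustrative snippet, not the paper's code; the sample vanishing points are invented):

```python
import numpy as np

def refine_focal(m, m_prime, f):
    """Update a tentative focal length f given the vanishing points
    m = [a, b, f] and m_prime = [a', b', f] of two mutually orthogonal
    space lines. Orthogonality requires a*a' + b*b' + f_hat**2 = 0 and
    m . m_prime = a*a' + b*b' + f**2, hence f_hat**2 = f**2 - m . m_prime."""
    return np.sqrt(f**2 - np.dot(m, m_prime))

# Invented example: the true focal length is 800 pixels but the
# tentative guess is 1000; one update recovers the true value.
f = 1000.0
m = np.array([100.0, 50.0, f])
m_prime = np.array([-6400.0, 0.0, f])
f_hat = refine_focal(m, m_prime, f)   # -> 800.0
```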

### 2.2. Auto-calibration of the “fundus camera-eyeball” system

Let $\omega$ be the projection of the absolute conic. The matching constraint can be expressed in the following form:

$$F\,\omega^{*}F^\top \simeq [e']_{\times}\,\omega^{*}\,[e']_{\times}^\top, \tag{5}$$

where $F$ is the fundamental matrix of the two views and $e'$ is the second epipole. The equality is up to scale, $\omega^{*} = \omega^{-1}$ is the dual image of the absolute conic, and $[e']_{\times}$ is the skew-symmetric matrix associated with the cross-product of $e'$. Equation (5) is called the *Kruppa equations*.

Letting $F = UDV^\top$, Eq. (5) is equivalent to:

$$\frac{r^2\, v_1^\top \omega^{*} v_1}{u_2^\top \omega^{*} u_2} \;=\; \frac{rs\, v_1^\top \omega^{*} v_2}{-\,u_2^\top \omega^{*} u_1} \;=\; \frac{s^2\, v_2^\top \omega^{*} v_2}{u_1^\top \omega^{*} u_1}, \tag{6}$$

where $r$ and $s$ are the non-zero singular values of $F$ (the square roots of the eigenvalues of the matrix $FF^\top$); $u_1, u_2, u_3$ are the column vectors of $U$; $v_1, v_2, v_3$ are the column vectors of $V$; and $U$, $V$ are two orthogonal matrices. The aim is to find $\omega^{*}$ from Eq. (6).

$\omega^{*}$ can be expressed as well in terms of the calibration matrix, $K$, as:

$$\omega^{*} = KK^\top, \tag{7}$$

therefore $K$ is extracted from Eq. (7) by computing the Cholesky decomposition of $\omega^{-1} = (KK^\top)^{-1} = K^{-\top}K^{-1}$, whose lower-triangular factor is $K^{-\top}$; this factor is then transposed and finally inverted to yield $K$.
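A small sketch of that extraction step (illustrative only; the synthetic $K$ below is invented to demonstrate the round trip):

```python
import numpy as np

def calibration_from_dual_conic(omega_star):
    """Recover the upper-triangular calibration matrix K from
    omega* = K K^T (Eq. (7)). The Cholesky factor of
    omega*^{-1} = K^{-T} K^{-1} is the lower-triangular K^{-T};
    transposing and inverting it yields K."""
    L = np.linalg.cholesky(np.linalg.inv(omega_star))  # L = K^{-T}
    K = np.linalg.inv(L.T)
    return K / K[2, 2]  # fix the arbitrary projective scale

# Invented ground truth to demonstrate the round trip.
K_true = np.array([[1000.0, 0.0, 320.0],
                   [0.0, 1000.0, 240.0],
                   [0.0,    0.0,   1.0]])
K_rec = calibration_from_dual_conic(K_true @ K_true.T)
```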

**1. Initial solution and point correspondences.** Unlike the algorithm of [9], the initial camera parameters we use are taken from the previous stage of classical fundus camera calibration described in Section 2.1. Assuming we have $M$ images that have been acquired with constant camera intrinsic parameters, a total of $N \le M(M-1)/2$ fundamental matrices can be defined to be used for the auto-calibration method.


For each feature point in image 1 we take a window of $m \times m$ pixels and correlate it with a window of the same size centred at every feature point in image 2. The correlation score is the normalised cross-correlation:

$$C(W_1, W_2) = \frac{\sum (W_1 - \bar{W}_1)(W_2 - \bar{W}_2)}{\sqrt{\sum (W_1 - \bar{W}_1)^2 \sum (W_2 - \bar{W}_2)^2}},$$

where $W_1$ and $W_2$ are the windows of data from images 1 and 2, respectively, whose centres are the feature points, and $\bar{W}_1$, $\bar{W}_2$ are their mean values; both windows are normalised in this way before comparison.
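A minimal sketch of such a normalised correlation score (illustrative, assuming the zero-mean normalised form):

```python
import numpy as np

def zncc(W1, W2):
    """Zero-mean normalised cross-correlation between two windows of
    equal size; the score lies in [-1, 1] and is invariant to affine
    brightness changes of either window."""
    a = W1 - W1.mean()
    b = W2 - W2.mean()
    return float((a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum()))

rng = np.random.default_rng(0)
W = rng.random((11, 11))
score_same = zncc(W, 2.0 * W + 10.0)   # brightness/contrast change -> 1.0
```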

In the LMedS scheme, $r_i = y_i - \hat{\theta}$ is the residual: the difference between what is actually observed, $y_i$, and what is estimated, $\hat{\theta}$. The robust standard deviation is based on the minimal median, multiplied by a finite-sample correction factor for the case of normal errors:

$$\hat{\sigma} = 1.4826\left[1 + \frac{5}{n-p}\right]\sqrt{\operatorname{med}_i\, r_i^2},$$

where $n$ is the number of points and $p$ is the dimension ($p = 1$). This scale factor is then used to determine a weight $k_i$ for the $i$th observation, namely:

$$k_i = \begin{cases} 1 & \text{if } |r_i/\hat{\sigma}| \le 2.5, \\ 0 & \text{otherwise.} \end{cases}$$

The correspondences having $k_i = 0$ are outliers and should not be taken into account further. The reader is referred to [13] for details.
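The robust scale and inlier weights just described can be sketched as (illustrative; the 2.5 cut-off follows the standard LMedS formulation):

```python
import numpy as np

def lmeds_weights(residuals, p=1, cutoff=2.5):
    """Robust scale from the median of squared residuals with the
    finite-sample correction 1.4826 * (1 + 5/(n - p)), and binary
    inlier weights k_i (0 marks an outlier)."""
    r = np.asarray(residuals, dtype=float)
    n = r.size
    sigma = 1.4826 * (1.0 + 5.0 / (n - p)) * np.sqrt(np.median(r**2))
    k = (np.abs(r / sigma) <= cutoff).astype(int)
    return sigma, k

# Invented residuals: eight small values and one gross outlier.
res = [0.10, -0.20, 0.15, 0.05, -0.10, 0.12, -0.08, 0.20, 100.0]
sigma, k = lmeds_weights(res)   # k = [1 1 1 1 1 1 1 1 0]
```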

**2. Non-linear optimisation.** The matrix $\omega^{*}$ of Eq. (5) is computed as the solution of a non-linear least squares problem. Let $\pi_{ij}(S_F, \omega^{*})$ denote the pairwise differences of the three ratios in Eq. (6); $\omega^{*}$ is computed as the solution of:

$$\min_{\omega^{*}} \sum_{F} \sum_{i<j} \pi_{ij}^2(S_F, \omega^{*}),$$

where $S_F$ is a vector formed by parameters from the SVD of $F$ and the outer sum runs over the estimated fundamental matrices. The method used to find $F$ is based on the one proposed in [14]; for a detailed description of this method please refer to [9].
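As an illustration of the residuals $\pi_{ij}$ (a sketch under the ratio form of Eq. (6); the synthetic camera below is invented), the three pairwise differences vanish for the true $\omega^{*} = KK^\top$:

```python
import numpy as np

def kruppa_residuals(F, omega_star):
    """Pairwise differences of the three ratios in the simplified
    Kruppa equations derived from the SVD F = U diag(r, s, 0) V^T.
    All differences are zero when omega_star is the true dual image
    of the absolute conic."""
    U, d, Vt = np.linalg.svd(F)
    r, s = d[0], d[1]
    u1, u2 = U[:, 0], U[:, 1]
    v1, v2 = Vt[0], Vt[1]
    g = [r * r * (v1 @ omega_star @ v1) / (u2 @ omega_star @ u2),
         r * s * (v1 @ omega_star @ v2) / -(u2 @ omega_star @ u1),
         s * s * (v2 @ omega_star @ v2) / (u1 @ omega_star @ u1)]
    return np.array([g[0] - g[1], g[0] - g[2], g[1] - g[2]])

# Synthetic two-view geometry: F = K^{-T} [t]x R K^{-1}.
K = np.array([[2.0, 0.0, 0.5], [0.0, 2.0, 0.3], [0.0, 0.0, 1.0]])
c, s_ = np.cos(0.3), np.sin(0.3)
R = np.array([[c, 0.0, s_], [0.0, 1.0, 0.0], [-s_, 0.0, c]])
t = np.array([0.5, 0.2, 1.0])
tx = np.array([[0.0, -t[2], t[1]], [t[2], 0.0, -t[0]], [-t[1], t[0], 0.0]])
Ki = np.linalg.inv(K)
F = Ki.T @ tx @ R @ Ki
res = kruppa_residuals(F, K @ K.T)
```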

## 3. Blood vessel extraction

Blood vessels are segmented from each fundus image using the method described in [2].

### 3.1. Analysis of binary images

The resulting binary images are analysed, and the vessel skeletons are extracted and labelled as trees following the semi-automatic procedure of [1].

## 4. 3D skeleton reconstruction

A set of point correspondences $x_i \leftrightarrow x'_i$ is given, such as the skeleton points of a tree. It is assumed that these correspondences come from a set of 3D points $X_i$, which are unknown. The reconstruction task is to find the camera projection matrices $P$ and $P'$, as well as the 3D points $X_i$, such that

$$x_i = PX_i, \qquad x'_i = P'X_i.$$

The reconstruction process consists of: 1) computing the fundamental matrix $F$ from the correspondences of tree skeleton points; 2) computing the camera projection matrices, $P$ and $P'$, from the fundamental matrix; and 3) for each pair of corresponding points $x_i \leftrightarrow x'_i$, computing the point in space that projects to these two image points by linear triangulation [12, 15].
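Step 3 is the standard DLT triangulation; a compact sketch (illustrative, with an invented two-camera setup):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: find the homogeneous 3D point X that
    best satisfies x1 ~ P1 X and x2 ~ P2 X in the algebraic sense, via
    the null space of the stacked cross-product constraints."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]    # right singular vector, smallest sigma
    return X / X[3]

# Invented cameras: P1 = [I | 0], P2 = [I | t]; the point (1, 2, 5)
# projects to (0.2, 0.4) and (0.0, 0.4) respectively.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X = triangulate(P1, P2, np.array([0.2, 0.4, 1.0]), np.array([0.0, 0.4, 1.0]))
```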

## 5. Surface modelling

The vessel surfaces are modelled as generalised cylinders [16]: the reconstructed 3D skeletons serve as the cylinder axes, and the vessel diameters measured from the 2D binary images define the cross-sections, scaled through the calibration matrix $K$ of the “fundus camera-eyeball” system.

## 6. Experimental results

With $\theta = \pi/2$, the fundus camera calibration matrix, $K_{mi}$, was obtained as an initial condition for the auto-calibration technique. Since each set of data (for each subject) was taken with a different CCD camera, we compute a $K_{mi}$ matrix for each one: $K_{m1}$ for the fundus camera with the Sony camera and $K_{m2}$ for the fundus camera with the Canon camera.

### 6.1. Camera auto-calibration matrix

The maximum search distance for correspondences was set to $d_{max} = 300$ pixels. The choice of these parameters depends on the image and feature size. To compensate for brightness differences between images, before applying the correlation we subtract from each image a smoothed version of itself, obtained with an averaging filter of the same size as the correlation window. Figures 6(a) and 6(b) show one of these image pairs with the feature points, Fig. 6(c) shows the putative points after correlation, and Fig. 6(d) shows the inlier points after LMedS minimisation.

Starting from the initial matrix $K_{mi}$ and solving the system of Eq. (5), we obtained the calibration matrix, $K_i$, for the whole “camera-eyeball” system: $K_1$ for the first set and $K_2$ for the second set. The $K_i$ retained was the one with the smallest covariance matrix among those similar to the matrix $K_{mi}$. We presume that this difference between the fundus camera matrix and the auto-calibrated matrix might be due to the optics of the eyeball, which is unknown and was not considered in the model of the system.

### 6.2. 3D skeleton reconstruction

Blood vessels are segmented and the skeleton trees labelled as described in Section 3 [1, 2]. Correspondences $x_i \leftrightarrow x'_i$ of continuous tree skeletons from a pair of images are extracted automatically. Figures 7(a) and 7(b) show the same tree extracted from the images of Figs. 1(b) and 1(d), taken from set 1, respectively. Figure 7(c) shows the corresponding matched tree skeletons. Note that a small branch at the top-centre of the tree shown in Fig. 7(a) is missing in image 2 of the same eye, shown in Fig. 7(b). This small branch is automatically discarded in the correspondences, as shown in Fig. 7(c). Figures 7(d) and 7(e) show the same tree taken from images of set 2, and Fig. 7(f) shows the matched tree skeletons.

_{i}*x*↔

_{i}*x*′

*for only one tree is needed in order to compute the two projection matrices*

_{i}*P*and

*P*′, as described in Section 4, with the use of the camera auto-calibration matrix

*K*calculated in Section 6.1. In Fig. 8 the epipolar lines corresponding to a subset of matching points for each of the two image pairs used to reconstruct the vascular trees are shown.

_{i}### 6.3. Surface visualisation

*i.e.*the set that describes the generalised cylinder axis, a polygon or cross–section is considered. Each polygon is centred along the axis position in space, and its orientation is set so that the vector normal to the polygon centre is tangent to the skeleton axis at that point. The coordinates of the polygon vertices are used as the mesh vertices. Each vertex is connected to the two neighbouring vertices in the same cross–section and to the corresponding vertices in the neighbouring cross-sections. The vertices of the cross-section at the extreme points of each segment are only connected to one neighbouring cross-section. Three 3D views of different surface model of blood vessel trees are shown in Fig. 10.
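The meshing procedure just described can be sketched as follows (an illustrative implementation under the stated conventions; the function and parameter names are ours, not the paper's):

```python
import numpy as np

def tube_mesh(axis_pts, radii, n_sides=8):
    """Generalised-cylinder surface: place an n-sided polygon
    (cross-section) at every skeleton point, oriented so that the
    polygon normal follows the local tangent of the axis; neighbouring
    cross-sections are joined by quadrilateral faces."""
    axis_pts = np.asarray(axis_pts, dtype=float)
    tangents = np.gradient(axis_pts, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    angles = np.linspace(0.0, 2.0 * np.pi, n_sides, endpoint=False)
    verts = []
    for centre, t, r in zip(axis_pts, tangents, radii):
        # orthonormal basis (e1, e2) of the plane normal to the tangent t
        ref = np.array([1.0, 0.0, 0.0])
        if abs(t @ ref) > 0.9:               # avoid a near-parallel reference
            ref = np.array([0.0, 1.0, 0.0])
        e1 = np.cross(t, ref)
        e1 /= np.linalg.norm(e1)
        e2 = np.cross(t, e1)
        for a in angles:
            verts.append(centre + r * (np.cos(a) * e1 + np.sin(a) * e2))
    faces = []                               # quads between cross-sections
    for i in range(len(axis_pts) - 1):
        for j in range(n_sides):
            a = i * n_sides + j
            b = i * n_sides + (j + 1) % n_sides
            faces.append((a, b, b + n_sides, a + n_sides))
    return np.array(verts), faces

# Straight test axis along z with constant unit radius.
verts, faces = tube_mesh([[0.0, 0.0, float(z)] for z in range(5)], [1.0] * 5)
```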

## 7. Conclusions


## Acknowledgments

## References and links

1. M. E. Martinez-Perez, A. D. Hughes, A. V. Stanton, S. A. Thom, N. Chapman, A. A. Bharath, and K. H. Parker, “Retinal vascular tree morphology: A semi-automatic quantification,” IEEE Trans. Biomed. Eng. **49**, 912–917 (2002).
2. M. E. Martinez-Perez, A. D. Hughes, S. A. Thom, A. A. Bharath, and K. H. Parker, “Segmentation of blood vessels from red-free and fluorescein retinal images,” Med. Image Anal. **11**, 47–61 (2007).
3. K. Deguchi, J. Noami, and H. Hontani, “3d fundus pattern reconstruction and display from multiple fundus images,” in *Proceedings 15th International Conference on Pattern Recognition* (IEEE, 2000), pp. 94–97.
4. K. Deguchi, D. Kawamata, K. Mizutani, H. Hontani, and K. Wakabayashi, “3d fundus shape reconstruction and display from stereo fundus images,” IEICE Trans. Inf. Syst. **E83-D**, 1408–1414 (2000).
5. T. E. Choe, G. Medioni, I. Cohen, A. C. Walsh, and S. R. Sadda, “2-D registration and 3-D shape inference of the retinal fundus from fluorescein images,” Med. Image Anal. **12**, 174–190 (2008).
6. D. Liu, N. Wood, X. Xu, N. Witt, A. Hughes, and S. Thom, “3D reconstruction of the retinal arterial tree using subject-specific fundus images,” in *Advances in Computational Vision and Medical Image Processing*, J. M. R. S. Tavares and R. M. N. Jorge, eds. (Springer, 2009), pp. 187–201.
7. M. E. Martinez-Perez and A. Espinosa-Romero, “3D reconstruction of retinal blood vessels from two views,” in *Proceedings of the 4th Indian Conference on Computer Vision, Graphics and Image Processing*, B. Chanda, S. Chandran, and L. Davis, eds. (Indian Statistical Institute, 2004), pp. 258–263.
8. J. Arnold, J. Gates, and K. Taylor, “Possible errors in the measurement of retinal lesions,” Invest. Ophthalmol. Vis. Sci. **34**, 2576–2580 (1993).
9. M. I. A. Lourakis and R. Deriche, “Camera self-calibration using the singular value decomposition of the fundamental matrix: from point correspondences to 3d measurements,” Research Report 3748, INRIA Sophia-Antipolis (1999).
10. M. Sonka, *Image Processing, Analysis, and Machine Vision* (Thomson, 2008).
11. K. Kanatani, *Geometry Computation for Machine Vision* (Oxford Science Publications, 1993).
12. R. Hartley and A. Zisserman, *Multiple View Geometry in Computer Vision* (Cambridge University Press, 2000).
13. P. J. Rousseeuw and A. M. Leroy, *Robust Regression and Outlier Detection* (John Wiley & Sons, 1987).
14. Z. Zhang, R. Deriche, O. Faugeras, and Q.-T. Luong, “A robust technique for matching two uncalibrated images through the recovery of the unknown epipolar geometry,” Research Report 2273, INRIA Sophia-Antipolis (1994).
15. R. I. Hartley, “Estimation of relative camera positions for uncalibrated cameras,” in *Proceedings of the 2nd European Conference on Computer Vision*, G. Sandini, ed. (Springer-Verlag, 1992), pp. 579–587.
16. J. Ponce, “Straight homogeneous generalized cylinders: Differential geometry and uniqueness results,” Int. J. Comput. Vis. **4**, 79–100 (1990).

**OCIS Codes**

(100.2000) Image processing : Digital image processing

(150.0150) Machine vision : Machine vision

(170.3010) Medical optics and biotechnology : Image reconstruction techniques

(170.4470) Medical optics and biotechnology : Ophthalmology

**ToC Category:**

Medical Optics and Biotechnology

**History**

Original Manuscript: November 16, 2011

Revised Manuscript: March 5, 2012

Manuscript Accepted: March 6, 2012

Published: May 4, 2012

**Virtual Issues**

Vol. 7, Iss. 7 *Virtual Journal for Biomedical Optics*

**Citation**

M. Elena Martinez-Perez and Arturo Espinosa-Romero, "Three-dimensional reconstruction of blood vessels extracted from retinal fundus images," Opt. Express **20**, 11451-11465 (2012)

http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-20-10-11451



### Multimedia

- Media 1: MPG (3153 KB)
- Media 2: MPG (2936 KB)
- Media 3: MPG (6384 KB)
- Media 4: MPG (2438 KB)
- Media 5: MPG (1653 KB)
- Media 6: MPG (4242 KB)
