OSA's Digital Library

Optics Express


  • Editor: C. Martijn de Sterke
  • Vol. 20, Iss. 10 — May. 7, 2012
  • pp: 11451–11465

Three-dimensional reconstruction of blood vessels extracted from retinal fundus images

M. Elena Martinez-Perez and Arturo Espinosa-Romero


Optics Express, Vol. 20, Issue 10, pp. 11451-11465 (2012)
http://dx.doi.org/10.1364/OE.20.011451



Abstract

We present a 3D reconstruction of retinal blood vessel trees using two views of fundus images. The problem is addressed using well-known computer vision techniques: 1) recovery of the camera-eyeball model parameters by an auto-calibration method, in which the camera parameters are found by solving simplified Kruppa equations based on correspondences obtained by LMedS-optimised correlation between pairs drawn from eight different views; 2) extraction of blood vessels and their skeletons from two fundus images; 3) matching of corresponding points between the two skeleton trees, which are labelled beforehand during the analysis of the 2D binary images; and finally 4) linear triangulation of the matched correspondence points and surface modelling via generalised cylinders using diameter measurements extracted from the 2D binary images. The method is nearly automatic and is tested on 2 sets of 10 fundus retinal images, each taken from a different subject. Results of 3D vein and artery tree reconstructions are shown.

© 2012 OSA

1. Introduction

Fig. 1 Fundus images, (a) and (c) are two different views from the same eyeball and (b) and (d) are their respective segmented binary images.

Photogrammetric analysis of features in images of the human ocular fundus is affected by various sources of error, such as aberrations of the fundus camera and of the eye optics. The magnification in a fundus image equals the focal length of the fundus camera divided by the focal length of the eye. This relationship, however, varies with the ametropic condition of the eye and with changes in the camera position relative to the subject's eye. This difference in magnification between fundus images of different subjects has been pointed out in [8]. As a result, it is not possible to make a direct comparison between measurements from different subjects, and the 3D metric reconstruction of retinal blood vessel trees remains a great challenge.

8. J. Arnold, J. Gates, and K. Taylor, “Possible errors in the measurement of retinal lesions,” Invest. Ophthalmol. Vis. Sci. 34, 2576–2580 (1993). [PubMed]

We have applied well-known computer vision techniques to obtain a nearly automatic approximation to a 3D reconstruction of the fundus vasculature. The method consists of the following procedures: 1) estimation of the intrinsic camera parameters for each “fundus camera-eyeball” system, 2) extraction of blood vessel trees from the 2D fundus images, 3) matching of corresponding vessel skeleton tree points between images, 4) computation of the fundamental matrix from the correspondences, 5) computation of the camera projection matrices, 6) computation of a point in space via linear triangulation for each correspondence, and finally 7) construction of a 3D surface model of the blood vessel tree.

Fig. 2 Flowchart of the whole 3D reconstruction process.

The rest of the paper is organised as follows: Section 2 describes the techniques used for lens distortion correction, calibration of the fundus camera and auto-calibration of the “fundus camera-eyeball” system. Section 3 deals with blood vessel extraction from 2D fundus images and the analysis of the resulting binary images. Section 4 explains the computation of the projection matrices and the linear triangulation procedure used to obtain 3D views of the vessel tree skeletons. Section 5 describes an approach to surface reconstruction, and finally Sections 6 and 7 present experimental results and conclusions.

2. System calibration

The calibration task we are dealing with is particularly difficult because we do not know the intrinsic and extrinsic parameters of the optical system formed by the cornea, lens and vitreous humour of the eyeball being examined. We therefore need an auto-calibration technique for the whole “fundus camera-eyeball” system, which uses a number of specific correspondences of branching blood vessel points. These points are taken automatically from a set of retinal images captured from different views of the same eyeball. A pre-processing stage is introduced first to eliminate the radial distortions of the fundus camera lens and second to obtain the fundus camera intrinsic parameters through a classical technique, in order to use them as initial values for the auto-calibration.

2.1. Fundus camera distortion correction and calibration

The images obtained with the fundus camera are affected by radial and tangential distortion, artifacts produced by the camera optical system. Since the 3D reconstruction methods used in this work assume a linear pinhole camera model, the distortion must be removed first. To this end we use a classical polynomial model fitted by a least-squares method [10].

10. M. Sonka, Image Processing, Analysis, and Machine Vision (Thomson, 2008).

For the calibration of the fundus camera we need to find the camera calibration matrix, K, a 3 × 3 matrix with the well-known form:
K = \begin{bmatrix} \alpha_u & -\alpha_u \cot\theta & u_0 \\ 0 & \alpha_v/\sin\theta & v_0 \\ 0 & 0 & 1 \end{bmatrix}
(1)
where α_u and α_v are the focal distances in pixels along the image axes, θ is the angle between the two image axes and (u_0, v_0) are the coordinates of the principal point. To find the coordinates of the principal point and the focal distances we use the method of Kanatani [11], briefly described in the following subsections.

11. K. Kanatani, Geometry Computation for Machine Vision (Oxford Science Publications, 1993).

2.1.1. Principal point

The principal point of the fundus camera, or optical centre, is calculated by finding the focus of expansion (FOE) of a set of spatial reference points while moving the camera along the optical axis. The FOE is defined as the vanishing point of the trajectories of points moving in the scene. If we move the camera along the optical axis, the trajectories of the reference scene points will be parallel to it and will therefore intersect the image plane at the principal point.

We used the corners of a frame as the spatial reference points, similar to the one shown in Fig. 3(a), which was held perpendicular to the camera main axis. Different pictures of the reference frame were captured while carefully moving the fundus camera forward along its main axis. By tracking the frame corners, finding the lines connecting corresponding corners in different images, and finding their intersections, we found an estimation of the camera principal point.
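The corner-tracking step described above amounts to a least-squares intersection of the trajectory lines. A minimal NumPy sketch of this idea follows; the function name and the representation of tracks as per-corner point lists are our own assumptions, not the authors' implementation:

```python
import numpy as np

def principal_point_from_foe(tracks):
    """Estimate the focus of expansion (principal point) as the least-squares
    intersection of the trajectory lines traced by tracked corner points.

    tracks: list of arrays of shape (n_frames, 2), one per tracked corner.
    A line is fitted to each track; the point minimising the sum of squared
    distances to all lines is returned.
    """
    A_rows, b_rows = [], []
    for pts in tracks:
        pts = np.asarray(pts, dtype=float)
        # Fit a line through the track: direction = principal axis of the
        # centred points, normal = perpendicular to that direction.
        centroid = pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts - centroid)
        direction = vt[0]
        normal = np.array([-direction[1], direction[0]])
        # Line equation: normal . x = normal . centroid
        A_rows.append(normal)
        b_rows.append(normal @ centroid)
    A = np.vstack(A_rows)
    b = np.asarray(b_rows)
    # Least-squares intersection of all trajectory lines.
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe
```

With noise-free tracks whose lines all pass through one point, the estimate is exact; with real tracking noise the least-squares solution averages the line intersections.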

Fig. 3 Examples of the vanishing point estimation process. (a) shows an image of the calibration frame captured by the fundus camera, (b) shows the edges of the calibration frame obtained from the image in (a), and (c) shows the rectangle fitted to the calibration frame in red lines, and the two pairs of lines whose vanishing points are used to estimate the focal length which are shown in blue.

2.1.2. Focal distance

In order to estimate the camera focal distance, also called the focal length, we rely on the fact that two space lines are orthogonal to each other if and only if their vanishing points are conjugate on the image plane (see Fig. 3(c)). This method assumes that the pixels of the camera sensor used to capture the images are square, hence αu = αv.

Two points in the image plane whose homogeneous coordinates are the vectors m and m′ are conjugate if and only if m · m′ = 0. The homogeneous coordinates of a point m in the image plane can be written as the vector [x, y, f]. In terms of Cartesian coordinates, two points in the image plane with coordinates (a, b) and (a′, b′) are conjugate if and only if
a a' + b b' + f^2 = 0
(2)
Since we do not know the value of the focal length, f, the homogeneous coordinates do not provide an explicit expression for the equivalent Cartesian coordinates on the image plane. For this reason we express conjugacy in terms of how a vector m changes as the focal length of the system changes. If the homogeneous coordinates of the point m defined above represent an image point with respect to the focal length f, the point m′ representing the same image point with respect to a focal length f′ is given by
m' = [m_1, m_2, (f'/f)\, m_3]
(3)

By substituting Eq. (3) into Eq. (2) we obtain the following expression: m_1 m_1' + m_2 m_2' + (\hat{f}^2/f^2)\, m_3 m_3' = 0, where m and m′ are the vectors of the vanishing points of two mutually orthogonal space lines, f is the tentative focal length, and f̂ is the true focal length. From the last expression it follows that:
\hat{f} = f \sqrt{ -\frac{m_1 m_1' + m_2 m_2'}{m_3 m_3'} }
(4)
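Equation (4) translates into a one-line computation. The following sketch (the helper name is ours) returns the corrected focal length given the vanishing points of two orthogonal space lines, written in homogeneous coordinates with a tentative focal length in the third component:

```python
import numpy as np

def true_focal_length(m, m_prime, f_tentative):
    """Focal length estimate from conjugate vanishing points (Eq. (4)).

    m, m_prime: homogeneous coordinates [m1, m2, m3] of the vanishing points
    of two mutually orthogonal space lines, expressed with the tentative
    focal length f_tentative in the third component.
    """
    num = m[0] * m_prime[0] + m[1] * m_prime[1]
    den = m[2] * m_prime[2]
    ratio = -num / den
    if ratio <= 0:
        # Orthogonal space lines give vanishing points on opposite sides of
        # the principal point, so the ratio must be positive.
        raise ValueError("vanishing points inconsistent with orthogonal lines")
    return f_tentative * np.sqrt(ratio)
```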

2.2. Auto-calibration of the “fundus camera-eyeball” system

In order to include the eye optics in the calibration process, we use an auto-calibration technique that covers the eye and the fundus camera optics together. This is possible because the feature points for calibration are taken directly from the fundus images.

Although the eyeball and the camera are separate units, each capable of independent movement, the variation in their relative position for the images captured in this article is small, due to the way the images are captured. The computation performed by the method described here depends on the number of correspondences between the image pairs, which requires a large overlap area between images. The procedure to capture the fundus images is as follows: the subject keeps the eye fixed on a beacon set in space in front of the camera (a fixed flashing red light) by the camera operator (see Fig. 4). The operator moves the camera around the subject's eye in order to frame the retinal area to be captured. To obtain large overlapping areas, the camera principal axis should not move by more than a few degrees: with the 50° FOV (field of view) used, we estimate while taking the images that the camera principal axis moves by no more than a quarter of the FOV, i.e. approximately 12.5°. Since the movements are minimal, and for the sake of simplicity, we consider it safe to assume a black-box “fundus camera-eyeball” system with fixed intrinsic parameters.

Fig. 4 Procedure to capture the fundus images.

The method we applied for camera auto-calibration is based on the work reported in [9], which employs a simplification of the Kruppa equations. The method relies solely on the singular value decomposition (SVD) of the fundamental matrix, which reduces the number of equations to be solved. It also avoids recovering noise-sensitive quantities such as the epipoles, whose accurate estimation is difficult in the presence of noise and/or degenerate motions.

9. M. I. A. Lourakis and R. Deriche, “Camera self-calibration using the singular value decomposition of the fundamental matrix: from point correspondences to 3D measurements,” Research Report 3748, INRIA Sophia-Antipolis (1999).

Kruppa equations can be considered as an epipolar matching constraint for the projections of quadrics or conics [12]. The image conics are identical when the images are captured with a camera having fixed intrinsic parameters, which is the case for any pair of views of the same eyeball taken with the fundus camera. Let ω be the projection of the absolute conic. The matching constraint can be expressed in the following form:
F \omega^* F^T = [e']_\times\, \omega^*\, [e']_\times^T
(5)
where F is the fundamental matrix of the two views and e′ is the second epipole. The equality holds up to scale (ω* = ω⁻¹ is the dual image of the absolute conic) and [e′]× is the skew-symmetric matrix associated with the cross product with e′. Equation (5) constitutes the Kruppa equations.

12. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision (Cambridge University Press, 2000).

Using the singular value decomposition (SVD) of the matrix F = U D V^T, Eq. (5) is equivalent to:
\frac{u_2^T \omega^* u_2}{r^2\, v_1^T \omega^* v_1} = \frac{-\,u_1^T \omega^* u_2}{r s\, v_1^T \omega^* v_2} = \frac{u_1^T \omega^* u_1}{s^2\, v_2^T \omega^* v_2}
(6)
where r and s are the non-zero singular values of F; u1, u2, u3 are the column vectors of U; v1, v2, v3 are the column vectors of V; and U, V are orthogonal matrices. The aim is to find ω* from Eq. (6). ω* can also be expressed in terms of the calibration matrix, K, as:
\omega^* = K K^T
(7)
K is therefore extracted from Eq. (7) by computing the Cholesky factor of (ω*)⁻¹, which equals K⁻ᵀ; transposing it yields K⁻¹, which is finally inverted to give K.
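The Cholesky-based extraction of K from ω* can be sketched in a few lines of NumPy (an illustrative implementation, not the authors' code):

```python
import numpy as np

def K_from_dual_conic(omega_star):
    """Recover the upper-triangular calibration matrix K from
    omega* = K K^T (Eq. (7)).

    The Cholesky factor of (omega*)^{-1} is the lower-triangular matrix
    L = K^{-T}; transposing gives K^{-1}, and inverting yields K.
    """
    omega_inv = np.linalg.inv(omega_star)
    # Symmetrise to guard against numerical asymmetry before Cholesky.
    omega_inv = 0.5 * (omega_inv + omega_inv.T)
    L = np.linalg.cholesky(omega_inv)   # L @ L.T == omega_inv, L = K^{-T}
    K = np.linalg.inv(L.T)
    # Fix the scale so that K[2, 2] == 1, as in Eq. (1).
    return K / K[2, 2]
```

Because the Cholesky factorisation of a positive-definite matrix with a positive-diagonal triangular factor is unique, the recovered K is exact up to the overall scale fixed in the last line.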

Feature points are selected automatically from each 2D image using the skeletons of the segmented images, keeping those points whose connectivity corresponds to either a bifurcation or a vessel crossing (see Figs. 5(a), 5(d) and [1] for more details). The correspondences between image feature points are obtained by a classical normalised correlation technique to get the putative matches, followed by the solution of a nonlinear minimisation problem based on the least-median-of-squares (LMedS) of the distances between putative points to eliminate the outliers.

1. M. E. Martinez-Perez, A. D. Hughes, A. V. Stanton, S. A. Thom, N. Chapman, A. A. Bharath, and K. H. Parker, “Retinal vascular tree morphology: A semi-automatic quantification,” IEEE Trans. Biomed. Eng. 49, 912–917 (2002).

Fig. 5 Process to extract a tree from the binary image. (a) and (d) show the skeleton and significant points marked over the original image, (b) and (e) show the automated tracking of the tree selected by the operator, and (c) and (f) show the extraction of the tree segments from the binary image. Upper row: subject set 1; lower row: subject set 2.

For every feature point in image 1 we extract a normalised data window of size m × m and correlate it with a window of the same size around every feature point in image 2. The correlation matrix is as follows:
Cor = \frac{W_1 * W_2}{\|W_2\|}
(8)
where W1 and W2 are the data windows from images 1 and 2, respectively, centred on the feature points. W1 must also be a window of normalised data, W_1 = W / \|W\|. This correlation matrix holds the correlation strength of every point relative to every other point, which is needed because we want to find pairs of points that correlate maximally in both directions (rows and columns).
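A sketch of this mutual-best normalised correlation matching is given below. The function name and window handling are illustrative assumptions (feature points are assumed to lie far enough from the image border, and per-window mean subtraction stands in for the brightness compensation used in the experiments):

```python
import numpy as np

def putative_matches(points1, points2, img1, img2, m=25):
    """Normalised correlation matching (Eq. (8)) between feature points.

    A window of size m x m is cut around every feature point, mean-subtracted,
    normalised to unit energy, and correlated against every window in the
    other image. Pairs that are each other's maximum over both rows and
    columns of the correlation matrix are kept as putative matches.
    """
    h = m // 2

    def window(img, p):
        r, c = int(p[1]), int(p[0])
        w = img[r - h:r + h + 1, c - h:c + h + 1].astype(float)
        w = w - w.mean()                  # compensate brightness offset
        n = np.linalg.norm(w)
        return w / n if n > 0 else w

    W1 = [window(img1, p) for p in points1]
    W2 = [window(img2, p) for p in points2]
    cor = np.array([[np.sum(a * b) for b in W2] for a in W1])
    matches = []
    for i in range(len(W1)):
        j = int(np.argmax(cor[i]))
        if int(np.argmax(cor[:, j])) == i:   # mutual best match
            matches.append((i, j))
    return matches
```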

After finding the putative matches, we eliminate the outliers using the least median of squares (LMedS) of the distances between matched putative points. The LMedS method estimates the parameters by solving a nonlinear minimisation problem which in one dimension reduces to:
\min_{\hat{\theta}}\; \operatorname{med}_i\, r_i^2
(9)
where r_i = y_i − θ̂ is the residual between what is actually observed, y_i, and what is estimated, θ̂. The robust standard deviation is based on the minimal median, multiplied by a finite-sample correction factor for the case of normal errors:
\hat{\sigma} = 1.4826 \left(1 + \frac{5}{n - p}\right) \sqrt{\operatorname{med}_i\, r_i^2(\hat{\theta})}
(10)
where n is the number of points and p is the dimension (p = 1). This scale factor is then used to determine a weight k_i for the ith observation, namely:
k_i = \begin{cases} 1 & \text{if } |r_i/\hat{\sigma}| \le 2.5 \\ 0 & \text{otherwise} \end{cases}
(11)
Correspondences having k_i = 0 are outliers and are not taken into account any further. The reader is referred to [13] for further explanation of robust estimators. Eliminating outliers with LMedS in this way works properly for our application because the rotation between fundus images is minimal; with very large rotations the method fails.

13. P. J. Rousseeuw and A. M. Leroy, Robust Regression and Outlier Detection (John Wiley & Sons, 1987).
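Equations (10) and (11) translate directly into code. A small sketch under the same assumptions (one-dimensional residuals, rejection threshold 2.5):

```python
import numpy as np

def lmeds_inliers(residuals, p=1, threshold=2.5):
    """LMedS-style outlier flagging (Eqs. (9)-(11)).

    residuals: distances between putative matched points after subtracting
    the candidate model. The robust scale (Eq. (10)) is derived from the
    median squared residual, and observations whose normalised residual
    exceeds `threshold` are flagged as outliers (weight 0).
    """
    r = np.asarray(residuals, dtype=float)
    n = r.size
    med = np.median(r ** 2)
    # Robust standard deviation with finite-sample correction, Eq. (10).
    sigma = 1.4826 * (1.0 + 5.0 / (n - p)) * np.sqrt(med)
    # Weight k_i per observation, Eq. (11).
    k = (np.abs(r / sigma) <= threshold).astype(int)
    return k
```

Because the scale comes from the median, a single gross outlier among mostly good residuals does not inflate σ̂ and is reliably rejected.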

Non-linear optimisation. Let π_ij(S_F, ω*) denote the differences of the ratios in Eq. (6) and σ²_{π_ij}(S_F, ω*) their variances. Matrix ω*, from Eq. (5), is computed as the solution of the non-linear least squares problem:
\omega^* = \arg\min_{\tilde{\omega}^*} \sum_{i=1}^{N} \frac{\pi_{12}^2(S_{F_i}, \tilde{\omega}^*)}{\sigma_{\pi_{12}}^2(S_{F_i}, \tilde{\omega}^*)} + \frac{\pi_{13}^2(S_{F_i}, \tilde{\omega}^*)}{\sigma_{\pi_{13}}^2(S_{F_i}, \tilde{\omega}^*)} + \frac{\pi_{23}^2(S_{F_i}, \tilde{\omega}^*)}{\sigma_{\pi_{23}}^2(S_{F_i}, \tilde{\omega}^*)}
(12)
where S_F is a vector formed by parameters from the SVD of F. The method used to find F is based on the one proposed in [14]. For a detailed description of the whole auto-calibration method please refer to [9].

14. Z. Zhang, R. Deriche, O. Faugeras, and Q.-T. Luong, “A robust technique for matching two uncalibrated images through the recovery of the unknown epipolar geometry,” Research Report 2273, INRIA Sophia-Antipolis (1994).

3. Blood vessel extraction

Blood vessels are automatically segmented and extracted from 2D fundus images using a previously described algorithm based on multi-scale analysis [2]. Two geometrical features based upon the first and second derivatives of the intensity image, the maximum gradient and the principal curvature, are obtained at different scales by means of Gaussian derivative operators. A multiple-pass region growing procedure then progressively segments the blood vessels using this feature information together with spatial information about the 8-neighbouring pixels. Figures 1(a) and 1(c) show two different views of a fundus image and Figs. 1(b) and 1(d) their segmented binary images, respectively. The optic disc region in this pair of images is at the bottom left; blood vessels are tracked from this area outwards.

2. M. E. Martinez-Perez, A. D. Hughes, S. A. Thom, A. A. Bharath, and K. H. Parker, “Segmentation of blood vessels from red-free and fluorescein retinal images,” Med. Image Anal. 11, 47–61 (2007).
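The two features can be computed with Gaussian derivative filters. The following is only an illustrative sketch (the scale-normalisation factors and scale set are our assumptions, not necessarily those of [2]):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vessel_features(image, scales=(2, 4, 8)):
    """Per-pixel maximum gradient magnitude and maximum principal curvature
    over a set of Gaussian scales, the two features driving region growing.
    """
    image = image.astype(float)
    grad = np.zeros_like(image)
    curv = np.zeros_like(image)
    for s in scales:
        # First derivatives via Gaussian derivative filters (scale-normalised).
        gy = gaussian_filter(image, s, order=(1, 0)) * s
        gx = gaussian_filter(image, s, order=(0, 1)) * s
        grad = np.maximum(grad, np.hypot(gx, gy))
        # Hessian entries; the principal curvature is its largest eigenvalue.
        hyy = gaussian_filter(image, s, order=(2, 0)) * s**2
        hxx = gaussian_filter(image, s, order=(0, 2)) * s**2
        hxy = gaussian_filter(image, s, order=(1, 1)) * s**2
        tmp = np.sqrt(((hxx - hyy) / 2) ** 2 + hxy ** 2)
        curv = np.maximum(curv, (hxx + hyy) / 2 + tmp)
    return grad, curv
```

For dark vessels on a brighter background the intensity dips along the vessel cross-section, so the principal curvature is large and positive on the vessel centreline, while the gradient magnitude peaks at the vessel edges.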

3.1. Analysis of binary images

4. 3D skeleton reconstruction

Suppose a set of point correspondences x_i ↔ x_i′ is given, such as the skeleton points of a tree. These correspondences are assumed to come from a set of unknown 3D points X_i. The reconstruction task is to find the camera projection matrices P and P′, as well as the 3D points X_i, such that
x_i = P X_i, \quad x_i' = P' X_i \quad \text{for all } i
(13)
The reconstruction process consists of: 1) computing the fundamental matrix F from the correspondences of tree skeleton points, 2) computing the camera projection matrices, P and P′, from the fundamental matrix, and 3) for each pair of corresponding points x_i ↔ x_i′, computing the point in space that projects onto these two image points by linear triangulation [12, 15].

15. R. I. Hartley, “Estimation of relative camera positions for uncalibrated cameras,” in Proceedings of the 2nd European Conference on Computer Vision, G. Sandini, ed. (Springer-Verlag, 1992), pp. 579–587.
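The linear triangulation of step 3 can be sketched with the standard DLT construction (a generic implementation, not the authors' code):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one correspondence (Eq. (13)).

    Builds the homogeneous system A X = 0 from x = P X and x' = P' X and
    solves it by SVD; the solution is the right singular vector associated
    with the smallest singular value.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]           # inhomogeneous 3D point
```

With exact correspondences the system has a one-dimensional null space and the recovered point is exact; with noisy points the SVD gives the algebraic least-squares solution.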

5. Surface modelling

Since the 3D reconstruction algorithm described in Section 4 is applied only to the skeleton points of each blood vessel tree, no 3D information about the actual shape of the vessel surface is recovered. We consider it a fair assumption to describe the blood vessel cross-section as a circle in order to generate a 3D surface. Each blood vessel segment is modelled by a generalised cylinder [16] whose axis is the 3D skeleton estimated earlier. The radius of each circular cross-section is kept constant along a vessel segment and is obtained from the measurements taken from the 2D fundus images [1]. The cylinders are scaled using the camera intrinsic parameters from the matrix K of the “fundus camera-eyeball” system.

16. J. Ponce, “Straight homogeneous generalized cylinders: Differential geometry and uniqueness results,” Int. J. Comput. Vis. 4, 79–100 (1990).

6. Experimental results

Two sets of 10 fundus images each were taken, belonging to two different subjects. Both sets of retinal images were taken using a fundus camera with a 50° field of view (Zeiss FF 450 IR). The first set of images was acquired with a CCD Sony camera (Sony Power HAD 3CCD Colour Video Camera) attached to the fundus camera; images were 768 × 576 pixels in size. For the second set a CCD Canon camera (Canon EOS 5D Mark II) was used; images were 783 × 522 pixels in size. For each set, 8 different views were used for the auto-calibration process and one further pair for the reconstruction example. The distortion of all captured images was corrected and the fundus camera was manually calibrated using the procedure described in Section 2.1. Under the general assumptions of a sensor with square pixels and an angle between image axes of θ = π/2, the following fundus camera calibration matrices, K_{m_i}, were obtained as initial conditions for the auto-calibration technique. Since each set of data (one per subject) was taken with a different CCD camera, a K_{m_i} matrix is computed for each one:
K_{m_1} = \begin{bmatrix} 909.68 & 0 & 391.28 \\ 0 & 909.68 & 295.61 \\ 0 & 0 & 1 \end{bmatrix}; \quad K_{m_2} = \begin{bmatrix} 639.79 & 0 & 460.75 \\ 0 & 639.39 & 247.02 \\ 0 & 0 & 1 \end{bmatrix}
K_{m_1} corresponds to the fundus camera with the Sony camera and K_{m_2} to the fundus camera with the Canon camera.

6.1. Camera auto-calibration matrix

For each set of images, eight different views of the fundus images were chosen for auto-calibration purposes. The branching points were detected automatically from the skeletons of the fundus images, as explained in Section 3.1, and were matched using a normalised correlation technique followed by a LMedS estimator as described in Section 2.2.

Since we have 8 images acquired with constant camera intrinsic parameters, a total of 28 fundamental matrices can be defined. However, due to the high disparity of some of these pairs, we performed a further selection, keeping only those pairs whose ROI (region of interest) overlap was larger than 70%, to ensure enough correspondence points. After this selection 12 pairs remained. The average overlap area for pairs in set 1 is 87.7%, and for pairs in set 2 it is 90.9%.

The correlation parameters considered for both experiments were: a window size of 25 × 25 and a search window distance dmax = 300 pixels. The choice of these parameters depends on the image and feature size. To compensate for brightness differences between images, before applying the correlation we subtract from each image a version of itself smoothed with an averaging filter of the same size as the correlation window. Figures 6(a) and 6(b) show one of these image pairs with the feature points, Fig. 6(c) shows the putative points after correlation and Fig. 6(d) shows the inlier points after LMedS minimisation.

Fig. 6 Matching process: (a) feature points on image 1 (n=55), (b) feature points on image 2 (n=59), (c) putative matches after normalised correlation (n=36) and (d) inlier points after LMedS minimisation (n=29). Images taken from set 1.

Finally, we set the initial parameters for auto-calibration equal to K_{m_i}. Solving the system of Eq. (5) we obtained the calibration matrix, K_i, for the whole “camera-eyeball” system as:
K_1 = \begin{bmatrix} 998.81 & 1 & 372.74 \\ 0 & 905.54 & 317.42 \\ 0 & 0 & 1 \end{bmatrix}; \quad K_2 = \begin{bmatrix} 462.89 & 0.0090 & 344.16 \\ 0 & 515.60 & 329.64 \\ 0 & 0 & 1 \end{bmatrix}
K_1 corresponds to the first set and K_2 to the second set.

Each auto-calibration matrix K_i shown above is the one with the smallest covariance among those closest to the corresponding matrix K_{m_i}. We presume that the difference between the fundus camera matrix and the auto-calibrated matrix may be due to the optics of the eyeball, which is unknown and was not considered in the model of the system.

6.2. 3D skeleton reconstruction

One pair of images was selected from each set and automatically segmented [2]. Skeletons of the extracted blood vessel trees, such as those shown in Fig. 5(c) (a vein from set 1) and Fig. 5(f) (a vein from set 2), are marked and measured. Measurements are kept in an ASCII table per tree for further morphological analysis [1].

Since continuous blood vessel tree skeletons are marked, they can be tracked along the tree. Using these tables, the correspondences (x_i ↔ x_i′) of a continuous tree skeleton from a pair of images are extracted automatically. Figures 7(a) and 7(b) show the same tree extracted from the images in Figs. 1(b) and 1(d), taken from set 1, respectively. Figure 7(c) shows the corresponding matched tree skeletons. Note that a small branch at the top centre of the tree shown in Fig. 7(a) is missing in the second image of the same eye shown in Fig. 7(b); this small branch is automatically discarded from the correspondences, as shown in Fig. 7(c). Figures 7(d) and 7(e) show the same tree taken from images of set 2 and Fig. 7(f) shows the corresponding matched tree skeletons.

Fig. 7 (a) T2 extracted tree from image 1 from set 1, (b) T2 extracted tree from image 2 from set 1 and (c) matched skeleton points between the two images. (d), (e) and (f) correspond to the same example using a pair of images from set 2.

A set of correspondences x_i ↔ x_i′ for only one tree is needed in order to compute the two projection matrices P and P′, as described in Section 4, using the camera auto-calibration matrix K_i calculated in Section 6.1. Figure 8 shows the epipolar lines corresponding to a subset of matching points for each of the two image pairs used to reconstruct the vascular trees.

Fig. 8 (a) and (b) show the epipolar lines drawn on the image pair from subject 1, used to recover the 3D vascular tree, while (c) and (d) show the same from the image pair from subject 2.

Once the projection matrices are obtained, the rest of the correspondences for each skeleton tree extracted from the image pair are triangulated. A total of 2 trees were reconstructed from each set. The corresponding skeleton points were smoothed by a Gaussian kernel and interpolated using splines in order to obtain a smoother, refined version of the reconstructed skeletons. Figure 9 shows different 3D views of the extracted blood vessel skeletons plotted in the same coordinate system. The first row of Fig. 9 corresponds to set 1 and the second row to set 2.

Fig. 9 (a) and (b) different 3D views of skeleton trees reconstruction from set 1. (c) and (d) same example from set 2.

Figures 9(b) and 9(d) show the curvature of the eyeball for each subject, respectively; these views correspond to about the 50° field of view of the fundus camera. Figures 9(a) and 9(c) are rotated versions of the former.

6.3. Surface visualisation

Using the surface model described in Section 5, and based on the 3D skeleton points and vessel segment diameter, the surface of each vessel segment is generated. Each tree is visualised using Geomview (http://www.geomview.org/), a 3D viewing program. Each tree is built as a set of disconnected segments such as those shown in Fig. 7(a) and 7(d), each of them with a defined spatial position and orientation.

Each vessel segment surface is described by a mesh of square patches wrapped in one direction so that it forms a cylinder. The mesh vertices for each segment are calculated as follows: for each 3D skeleton point, i.e. each point of the set that describes the generalised cylinder axis, a polygon or cross-section is considered. Each polygon is centred on the axis position in space, and its orientation is set so that the vector normal to the polygon centre is tangent to the skeleton axis at that point. The coordinates of the polygon vertices are used as the mesh vertices. Each vertex is connected to the two neighbouring vertices in the same cross-section and to the corresponding vertices in the neighbouring cross-sections. The vertices of the cross-sections at the extreme points of each segment are only connected to one neighbouring cross-section. Three 3D views of the surface models of blood vessel trees are shown in Fig. 10.
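The cross-section construction described above can be sketched as follows. This is an illustrative NumPy version; the choice of basis vectors for the cross-section plane and the use of central differences for the tangents are our assumptions:

```python
import numpy as np

def cylinder_mesh(axis_points, radius, n_sides=8):
    """Vertices of a generalised-cylinder mesh around a 3D skeleton.

    At every skeleton point a regular polygon of `n_sides` vertices and the
    given radius is placed in the plane whose normal is the local tangent of
    the axis; neighbouring polygons are then joined into square patches by
    the viewer.
    """
    axis_points = np.asarray(axis_points, dtype=float)
    # Tangents by central differences along the skeleton.
    tangents = np.gradient(axis_points, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    rings = []
    for c, t in zip(axis_points, tangents):
        # Build an orthonormal basis (u, v) of the cross-section plane.
        helper = np.array([1.0, 0.0, 0.0])
        if abs(t @ helper) > 0.9:
            helper = np.array([0.0, 1.0, 0.0])
        u = np.cross(t, helper)
        u /= np.linalg.norm(u)
        v = np.cross(t, u)
        ang = np.linspace(0, 2 * np.pi, n_sides, endpoint=False)
        ring = c + radius * (np.outer(np.cos(ang), u) + np.outer(np.sin(ang), v))
        rings.append(ring)
    return np.array(rings)            # shape (n_points, n_sides, 3)
```

The returned vertex rings map directly onto a quad mesh: vertex j of ring i connects to vertex j±1 of the same ring and to vertex j of rings i±1, matching the patch connectivity described above.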

Fig. 10 Blood vessel surfaces extracted from the pair of images. (a), (b) and (c) correspond to set 1; (d), (e) and (f) to set 2. Diameter measure of each vessel segment is obtained from the corresponded 2D binary images. (c) and (f) show both trees from each set respectively, where the curvature of each eyeball can be seen.

Figure 10(a) corresponds to the surface reconstruction of the tree (a vein) shown in Fig. 5(c). Figure 10(b) corresponds to the surface reconstruction of the tree (an artery) shown in Fig. 7(a), and Fig. 10(c) shows both trees together, rotated to show the eyeball curvature over about 50°. Note that in Figs. 10(a) and 10(b) the optic disc is at the top left of the coordinate system, and the blood vessel segments coming from that area are wider than those at the tips of the tree, as vessels naturally taper. Figure 10(d) is the vein shown in Fig. 5(f), Fig. 10(e) is an artery from the same pair and Fig. 10(f) shows both trees together. Figures 10(a)-10(c) correspond to set 1, whereas Figs. 10(d)-10(f) correspond to set 2.

7. Conclusions

We have presented a 3D model of a retinal blood vessel tree, reconstructed from two different views of fundus images through well known computer vision techniques: camera lens distortion correction, correspondence matching, auto-calibration procedure, 3D projection, triangulation and surface meshing.

By contrast, [3, 4] do not use an auto-calibration procedure; instead they employed two parallel planes in front of the fundus camera and a transparent acrylic plate with a grid in order to calibrate the camera. Additionally, they projected the correspondences and the fundus pattern onto a spherical surface, which means that even a vessel crossing point belongs to the same surface. In the case of [5], the retinal fundus was reconstructed as a whole, without distinguishing between the retinal fundus itself and the blood vessels. Finally, [6] used a self-calibration approach based on the essential matrix, but reconstructed only the skeleton lines of blood vessels using a 30° FOV, so the eyeball curvature is not clear. A more realistic result can be seen in Figs. 10(c) and 10(f), in which all vessel tree surfaces follow the same eyeball curvature but do not lie on a single spherical surface. Moreover, we reconstructed blood vessel trees rather than fundus patterns.

3. K. Deguchi, J. Noami, and H. Hontani, “3D fundus pattern reconstruction and display from multiple fundus images,” in Proceedings 15th International Conference on Pattern Recognition (IEEE, 2000), pp. 94–97.

4. K. Deguchi, D. Kawamata, K. Mizutani, H. Hontani, and K. Wakabayashi, “3D fundus shape reconstruction and display from stereo fundus images,” IEICE Trans. Inf. Syst. E83-D, 1408–1414 (2000).

5. T. E. Choe, G. Medioni, I. Cohen, A. C. Walsh, and S. R. Sadda, “2-D registration and 3-D shape inference of the retinal fundus from fluorescein images,” Med. Image Anal. 12, 174–190 (2008).

6. D. Liu, N. Wood, X. Xu, N. Witt, A. Hughes, and S. Thom, “3D reconstruction of the retinal arterial tree using subject-specific fundus images,” in Advances in Computational Vision and Medical Image Processing, J. M. R. S. Tavares and R. M. N. Jorge, eds. (Springer, 2009), pp. 187–201.

The core of the reconstruction presented in this paper is the auto-calibration technique, which is based on the work of [9]. Those authors reported experimental results from extensive simulations and several image sequences demonstrating that their method accurately estimates the intrinsic calibration matrices; their evaluation used synthetic and real data taken from buildings. Unlike [9], our initial camera parameters for the auto-calibration algorithm were calculated using a classical camera calibration algorithm [11], which makes our auto-calibration implementation converge faster, since the initial value is nearer to the solution.
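The self-calibration chain starts from an estimate of the fundamental matrix between the two views. As an illustration of that ingredient, the following is the textbook normalized 8-point estimator (in the spirit of [15]); it is a generic sketch, not the LMedS-based procedure or the SVD-based calibration of [9] used in the paper.

```python
import numpy as np

def fundamental_8point(x1, x2):
    """Estimate F from >= 8 correspondences (normalized 8-point algorithm).

    x1, x2 : (N, 2) arrays of matched points; rows satisfy x2_h^T F x1_h = 0.
    Returns F with unit Frobenius norm.
    """
    def normalize(pts):
        # Translate to the centroid and scale so the mean distance is sqrt(2).
        c = pts.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
        ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
        return ph, T

    p1, T1 = normalize(np.asarray(x1, float))
    p2, T2 = normalize(np.asarray(x2, float))
    # Each correspondence gives one row of the linear system A f = 0.
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1)),
    ])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2 (the epipolar constraint) on the estimate.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    # Undo the normalizing transforms.
    F = T2.T @ F @ T1
    return F / np.linalg.norm(F)
```

The centroid-and-scale normalization is what makes the linear estimate numerically stable; without it the columns of `A` differ by orders of magnitude and the solution degrades badly.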

Several questions still have to be addressed before this model can be used for Euclidean measurable applications. The most significant is the evaluation of the reconstructed blood vessel surfaces; this is difficult because, unlike buildings, the retinal fundus offers no easy access to ground-truth measurements. A possible source of ground truth is ultrasound or optical coherence tomography (OCT) data, and future work will attempt to use it. Another issue is that, since trees are extracted from individual vessel segments separately, branching regions are not considered in the mesh of the current model. At present, Figs. 9 and 10 only show projective views of the reconstruction. Finally, larger eyeball views could be obtained by using multiple-view geometry methods. Nonetheless, our results are promising and, once these issues are resolved, the 3D model obtained could be used for realistic applications.
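Because branching regions are not yet meshed, each vessel segment is swept independently as a generalized cylinder along its skeleton. A minimal sketch of such a sweep is shown below, assuming a per-point radius taken from the 2D diameter measurements; the function name and the choice of frame are illustrative, not the paper's code.

```python
import numpy as np

def tube_mesh(centerline, radii, n_sides=8):
    """Sweep a circle of varying radius along a 3D centerline
    (a simple generalized cylinder); returns vertices and triangle faces.

    centerline : (N, 3) skeleton points of one vessel segment.
    radii      : (N,) radius at each skeleton point (half the 2D diameter).
    """
    centerline = np.asarray(centerline, float)
    radii = np.asarray(radii, float)
    n = len(centerline)
    # Tangents along the skeleton by finite differences.
    tangents = np.gradient(centerline, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    verts = []
    for p, t, r in zip(centerline, tangents, radii):
        # Build an orthonormal frame (u, v) perpendicular to the tangent.
        a = np.array([1.0, 0.0, 0.0]) if abs(t[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        u = np.cross(t, a); u /= np.linalg.norm(u)
        v = np.cross(t, u)
        ang = 2 * np.pi * np.arange(n_sides) / n_sides
        ring = p + r * (np.outer(np.cos(ang), u) + np.outer(np.sin(ang), v))
        verts.append(ring)
    verts = np.vstack(verts)
    # Connect consecutive rings with two triangles per quad.
    faces = []
    for i in range(n - 1):
        for j in range(n_sides):
            a0 = i * n_sides + j
            a1 = i * n_sides + (j + 1) % n_sides
            faces.extend([(a0, a1, a0 + n_sides), (a1, a1 + n_sides, a0 + n_sides)])
    return verts, np.array(faces)
```

Stitching these per-segment tubes together at bifurcations is exactly the open issue noted above: the independent frames of adjacent segments do not agree at a branch point, so the rings there must be blended or re-meshed.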

The representation of vascular trees with 3D models offers many advantages for ophthalmology: it can provide a more realistic view for clinical and educational purposes, it permits extending to a 3D model all the geometrical and topological properties already measured in 2D images [1], and it can be used for blood flow simulation.

Acknowledgments

This work was supported in part by CONACYT Grant CB-2007-83088 and in part by SEP PIFI 2011-31MSU0098J-15.

References and links

1. M. E. Martinez-Perez, A. D. Hughes, A. V. Stanton, S. A. Thom, N. Chapman, A. A. Bharath, and K. H. Parker, “Retinal vascular tree morphology: A semi-automatic quantification,” IEEE Trans. Biomed. Eng. 49, 912–917 (2002).

2. M. E. Martinez-Perez, A. D. Hughes, S. A. Thom, A. A. Bharath, and K. H. Parker, “Segmentation of blood vessels from red-free and fluorescein retinal images,” Med. Image Anal. 11, 47–61 (2007).

3. K. Deguchi, J. Noami, and H. Hontani, “3d fundus pattern reconstruction and display from multiple fundus images,” in Proceedings 15th International Conference on Pattern Recognition (IEEE, 2000), pp. 94–97.

4. K. Deguchi, D. Kawamata, K. Mizutani, H. Hontani, and K. Wakabayashi, “3d fundus shape reconstruction and display from stereo fundus images,” IEICE Trans. Inf. Syst. E83-D, 1408–1414 (2000).

5. T. E. Choe, G. Medioni, I. Cohen, A. C. Walsh, and S. R. Sadda, “2-D registration and 3-D shape inference of the retinal fundus from fluorescein images,” Med. Image Anal. 12, 174–190 (2008).

6. D. Liu, N. Wood, X. Xu, N. Witt, A. Hughes, and S. Thom, “3D reconstruction of the retinal arterial tree using subject-specific fundus images,” in Advances in Computational Vision and Medical Image Processing, J. M. R. S. Tavares and R. M. N. Jorge, eds. (Springer, 2009), pp. 187–201.

7. M. E. Martinez-Perez and A. Espinosa-Romero, “3D Reconstruction of Retinal Blood Vessels From Two Views,” in Proceedings of the 4th Indian Conference on Computer Vision, Graphics and Image Processing, B. Chanda, S. Chandran, and L. Davis, eds. (Indian Statistical Institute, 2004), pp. 258–263.

8. J. Arnold, J. Gates, and K. Taylor, “Possible errors in the measurement of retinal lesions,” Invest. Ophthalmol. Vis. Sci. 34, 2576–2580 (1993).

9. M. I. A. Lourakis and R. Deriche, “Camera self-calibration using the singular value decomposition of the fundamental matrix: from point correspondences to 3d measurements,” Research Report 3748, INRIA Sophia-Antipolis (1999).

10. M. Sonka, Image Processing, Analysis, and Machine Vision (Thomson, 2008).

11. K. Kanatani, Geometry Computation for Machine Vision (Oxford Science Publications, 1993).

12. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision (Cambridge University Press, 2000).

13. P. J. Rousseeuw and A. M. Leroy, Robust Regression and Outlier Detection (John Wiley & Sons, 1987).

14. Z. Zhang, R. Deriche, O. Faugeras, and Q.-T. Luong, “A robust technique for matching two uncalibrated images through the recovery of the unknown epipolar geometry,” Research Report 2273, INRIA Sophia-Antipolis (1994).

15. R. I. Hartley, “Estimation of relative camera positions for uncalibrated cameras,” in Proceedings of the 2nd European Conference on Computer Vision, G. Sandini, ed. (Springer-Verlag, 1992), pp. 579–587.

16. J. Ponce, “Straight homogeneous generalized cylinders: Differential geometry and uniqueness results,” Int. J. Comput. Vis. 4, 79–100 (1990).

OCIS Codes
(100.2000) Image processing : Digital image processing
(150.0150) Machine vision : Machine vision
(170.3010) Medical optics and biotechnology : Image reconstruction techniques
(170.4470) Medical optics and biotechnology : Ophthalmology

ToC Category:
Medical Optics and Biotechnology

History
Original Manuscript: November 16, 2011
Revised Manuscript: March 5, 2012
Manuscript Accepted: March 6, 2012
Published: May 4, 2012

Virtual Issues
Vol. 7, Iss. 7 Virtual Journal for Biomedical Optics

Citation
M. Elena Martinez-Perez and Arturo Espinosa-Romero, "Three-dimensional reconstruction of blood vessels extracted from retinal fundus images," Opt. Express 20, 11451-11465 (2012)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-20-10-11451




Multimedia

» Media 1: MPG (3153 KB)
» Media 2: MPG (2936 KB)
» Media 3: MPG (6384 KB)
» Media 4: MPG (2438 KB)
» Media 5: MPG (1653 KB)
» Media 6: MPG (4242 KB)
