Joint iris boundary detection and fit: a real-time method for accurate pupil tracking

Marconi Barbosa and Andrew C. James


Biomedical Optics Express, Vol. 5, Issue 8, pp. 2458-2470 (2014)
http://dx.doi.org/10.1364/BOE.5.002458



Abstract

A range of applications in visual science rely on accurate tracking of the human pupil's movement and contraction in response to light. While the literature on independent contour detection and fitting of the iris-pupil boundary is vast, a joint approach, in which the pupil is assumed to have a given geometric shape, has been largely overlooked. We present here a global method for simultaneously finding and fitting an elliptic or circular contour against a dark interior, which produces consistently accurate results even under non-ideal recording conditions, such as reflections near and over the boundary, droopy eyelids, or the sudden formation of tears. The specific form of the proposed optimization problem allows us to write down closed analytic formulae for the gradient and the Hessian of the objective function. Moreover, both the objective function and its derivatives can be cast into vectorized form, making the proposed algorithm significantly faster than its closest relative in the literature. We compare methods in multiple ways, both analytically and numerically, using real iris images as well as idealizations of the iris for which the ground-truth boundary is precisely known. The method proposed here is illustrated under challenging recording conditions and shown to be robust.

© 2014 Optical Society of America

1. Introduction

One approach to boundary localization in gray-level images is to find a set of points that define the object by an edge-detecting mechanism and then find a geometric curve that best fits that set; see for example [1–3] and [4]. In these approaches no prior assumption is made as to what shape the boundary might have, even if one later tries to fit a simple geometric curve to the set of points found to describe it. We refer to this approach hereafter as find and fit methods. In such methods it is often assumed that the data points representing the geometric curve are uniformly sampled, which is hardly the case when an arc or section is missing. When part of a pupil image is uncertain, e.g. momentarily covered by eyelids or eyelashes, this uncertainty will propagate into the fit [5], often in a catastrophic way for tracking.
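For concreteness, the following is a minimal Python sketch of a find and fit pipeline of this kind: detect candidate edge points, then fit an ellipse to them by direct least squares. The use of OpenCV, the blur kernel, and the Canny thresholds are illustrative assumptions, not the implementation of any of the cited references.

# Minimal "find and fit" sketch: detect edge points, then fit an ellipse.
# Thresholds and the choice of OpenCV are illustrative, not taken from [1-4].
import cv2

def find_and_fit(gray):
    """Return ((cx, cy), (major, minor), angle) of an ellipse fitted to edges."""
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)   # suppress noise before edge detection
    edges = cv2.Canny(blurred, 40, 120)           # binary edge map (hypothetical thresholds)
    pts = cv2.findNonZero(edges)                  # Nx1x2 array of edge pixel coordinates
    if pts is None or len(pts) < 5:               # fitEllipse requires at least 5 points
        return None
    return cv2.fitEllipse(pts)                    # direct least-squares ellipse fit

Any spurious edge points, e.g. from an eyelid or a reflection crossing the boundary, enter the fit with the same weight as genuine boundary points, which is precisely the failure mode described above.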

With the notable exception of the integro-differential operator method for circles [6] and for elliptical contours [7], global methods, which enforce a priori knowledge of the geometric curve describing the boundary, have been mostly overlooked. The integro-differential operator method leverages the fact that the pupil is generally darker than the iris. By taking the absolute value of the optimization function, the method can be extended to detect pupils brighter than the iris, as occurs with abnormal lens opacity (cataract) or the occasional red-eye effect caused by coaxial illumination. However, for non-ideal, noisy images this method is very sensitive to artifacts, particularly reflections seen inside the pupil or at its border. In specific contexts this method can be successfully implemented [8], albeit with preprocessing (heuristics) to deal with artifacts, which further degrades its computational performance. Preprocessing has the potential to increase the number of parameters in the procedure to a level similar to those in find and fit methods, eroding its attractiveness for simple shapes.
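A minimal numerical sketch of the circular integro-differential operator of [6] is given below: for each candidate center, average the image intensity on circles of increasing radius, differentiate the resulting profile with respect to the radius, smooth it with a Gaussian, and keep the candidate with the largest absolute response. The discretization, search strategy, and smoothing width are assumptions made for illustration; the exact operator is the one given as Eq. (1).

# Sketch of the circular integro-differential operator of [6]: maximize the
# Gaussian-smoothed radial derivative of the mean intensity along a circle.
# Discretization and smoothing width are illustrative choices.
import numpy as np
from scipy.ndimage import gaussian_filter1d, map_coordinates

def circle_mean(img, cx, cy, r, n=64):
    """Mean image intensity along a circle of radius r centered at (cx, cy)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    ys = cy + r * np.sin(theta)
    xs = cx + r * np.cos(theta)
    return map_coordinates(img, [ys, xs], order=1, mode="nearest").mean()

def integro_differential(img, centers, radii, sigma=1.5):
    """Return the (cx, cy, r) with the largest smoothed radial derivative."""
    best, best_score = None, -np.inf
    for cx, cy in centers:
        means = np.array([circle_mean(img, cx, cy, r) for r in radii])
        score = np.abs(gaussian_filter1d(np.gradient(means, radii), sigma))
        i = int(np.argmax(score))
        if score[i] > best_score:
            best, best_score = (cx, cy, radii[i]), score[i]
    return best

A bright reflection lying on or near the candidate circle inflates the local radial derivative, which is why, as noted above, the operator is sensitive to such artifacts unless they are masked out beforehand.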

An analysis of the optimization problem for the proposed method and for the integro-differential operator method is carried out, including a stability analysis of their respective solutions. The performance of these methods is assessed using a numerically simulated pupil, where the true boundary is known beforehand. We also perform the same analysis using real images, where the true boundary is established (subjectively) by visual inspection. We illustrate the use of our method for tracking on real recorded data, highlighting a variety of dynamic artifacts that it can overcome.
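As a simple illustration of how such a simulated pupil with an exactly known boundary can be produced, the sketch below renders a dark disk on a brighter iris with an erf-smoothed edge, in the spirit of Eq. (9). All parameter values (gray levels, sharpness k, noise) are assumptions for illustration and are not the settings used in the paper.

# Simulated pupil with known ground truth: a dark disk on a brighter iris
# with an erf-smoothed boundary, following the form of Eq. (9).
# Parameter values are illustrative only.
import numpy as np
from scipy.special import erf

def simulated_pupil(size=128, cx=64.0, cy=64.0, r=24.0, k=2.0,
                    pupil_level=0.1, iris_level=0.6, noise=0.02, seed=0):
    yy, xx = np.mgrid[0:size, 0:size]
    dist = np.hypot(xx - cx, yy - cy) - r      # signed distance to the true boundary
    step = 0.5 * (1.0 + erf(k * dist))         # ~0 inside the pupil, ~1 in the iris
    img = pupil_level + (iris_level - pupil_level) * step
    rng = np.random.default_rng(seed)
    return img + noise * rng.standard_normal(img.shape)

Because (cx, cy, r) is known exactly, the RMSD of any fitted boundary can be computed directly against it.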

2. Optimization methods

2.1. Integro-differential operators

2.2. Proposed method

Fig. 1 An example of a pupil image showing a candidate fit (dashed line) and the far-off initial circle (solid line). The parameters for the pupil-iris boundary fit correspond to the minimum of the proposed objective function in Eq. (6). Image from [17].

3. Optimization landscape

Because most segmentation tasks can be very subjective, we first turn to pupil simulations in which we can control exactly where the center and boundary are located. In this section we study increasingly realistic pupil profiles, producing increasingly better approximations of the optimization landscape and giving a general outlook on the challenges the subsequent optimization algorithms will face.

3.1. Idealized pupil, analytic

In this section we examine in more detail the optimization problem in Eq. (6). An idealized boundary for the pupil border can be modeled by a smooth approximation to the Heaviside function such as
H(\delta z) = \lim_{k \to \infty} \frac{1 + \operatorname{erf}(k\,\delta z)}{2},  (9)

and Z becomes

Z = \int\!\!\int w(z)\, H(\delta z)\, dz\, d\theta = \pi \left[ e^{-z^{2}/2}\bigl(1 + \operatorname{erf}(k\,\delta z)\bigr) + \frac{\delta k\, \operatorname{erf}(\alpha z)}{\alpha} \right],  (10)

where α = (1/2 + k²δ²)^{1/2}.

Thus, as is the case with the integro-differential operator method of Eq. (1), we can get a glimpse of the objective function in Eq. (6) by using an idealized pupil profile approximation such as the one in Eq. (9). Figure 2 shows this artificial minimization valley: a manageable Gaussian-like profile with a clear minimum for a range of values of the product kδ.
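For completeness, the limiting behavior behind Eq. (9) can be written out explicitly; the display below is a restatement of the standard properties of the error function rather than an additional result of the paper.

\lim_{k\to\infty}\operatorname{erf}(k\,\delta z)=
\begin{cases}
-1, & \delta z<0,\\
0, & \delta z=0,\\
+1, & \delta z>0,
\end{cases}
\qquad\text{so}\qquad
\lim_{k\to\infty}\frac{1+\operatorname{erf}(k\,\delta z)}{2}=H(\delta z),

with the convention H(0) = 1/2. For finite k the transition in z has width of order 1/(kδ), which is why the product kδ is the relevant parameter in Fig. 2.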

Fig. 2 An approximation of the objective function Z in Eq. (6), as a function of z and for various values of the product kδ. The pupil Heaviside approximation parameter is k, while δ is the scale parameter in Eq. (5).

3.2. Idealized pupil, numeric

Fig. 3 The optimization landscape for the idealized pupil of Eq. (10). On the left are the landscape contours for the integro-differential operator method of Eq. (1), and on the right for the objective function of the method proposed in this paper, Eq. (6). Both the radius r and the center c are in pixels.
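The landscape contours in Figs. 3 and 4 can in principle be reproduced by brute force, evaluating either objective on a grid of candidate centers and radii. The sketch below assumes a generic scalar objective callable standing in for Eq. (1) or Eq. (6); it illustrates the scan only and is not the authors' code.

# Brute-force landscape scan: evaluate a scalar objective over a grid of
# candidate center positions and radii, as displayed in Figs. 3 and 4.
# 'objective' is a stand-in for Eq. (1) or Eq. (6).
import numpy as np

def landscape(objective, img, centers, radii):
    """Return a (len(centers), len(radii)) array of objective values."""
    Z = np.empty((len(centers), len(radii)))
    for i, c in enumerate(centers):
        for j, r in enumerate(radii):
            Z[i, j] = objective(img, c, r)   # score at center c and radius r
    return Z

A contour plot of the returned array then exposes the valley (or ridge) structure around the optimum, and how quickly it degrades away from it.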

3.3. Real pupil

Fig. 4 The optimization landscape for the real pupil image of Fig. 1. On the left are the landscape contours for the integro-differential operator method of Eq. (1) and on the right for the objective function of this paper, Eq. (6). The actual radius is r = 24 and the center c = 31, in pixels.

4. Stability analysis

4.1. Ideal pupil

We first removed the effect of distracting artifacts by using an iris simulation. We study the ability of the optimization algorithm, for both the method proposed here and the integro-differential operator method, to find the best solution from various initial positions and radii. For the sake of comparison we used the same Nelder–Mead simplex algorithm (see [18, 19]) for both optimization problems. Figure 5 shows the relative distance, in terms of RMSD, from the final output of the optimization to the known boundary of the pupil simulation, as a function of the various initial conditions. It is clear from Fig. 5(a) that, for the integro-differential operator method, the error increases consistently with an increasing initial radius, while being less sensitive to the offset angle and even less so to the offset, Fig. 5(b). For the method proposed here we can see in Figs. 5(c) and (d) that the error remains about an order of magnitude smaller for the whole range of initial conditions, regardless of type. This is consistent with what one would expect from the smoother landscape seen in Fig. 3.

Fig. 5 Stability analysis of the integro-differential method (left) and the method proposed here (right), using a simulated pupil and a range of initial radii. In the top row the deviation is shown as a function of the offset angle (averaged by offset), while in the bottom row it is displayed as a function of the offset (averaged by offset angle).
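The stability experiment of this section can be summarized by the following sketch: the same Nelder–Mead simplex optimizer [18, 19] is started from a grid of initial centers and radii, and the RMSD of the recovered boundary against the known one is recorded. SciPy's optimizer is used here as an assumed stand-in for the paper's Matlab implementation, and the objective is again a generic callable.

# Stability scan: run Nelder-Mead from many initial (cx, cy, r) guesses and
# record the boundary RMSD against the known ground truth.
# scipy.optimize is an assumed substitute for the Matlab implementation.
import numpy as np
from scipy.optimize import minimize

def boundary_rmsd(params, truth, n=360):
    """RMSD between two circular boundaries sampled at n matched angles."""
    (cx, cy, r), (tx, ty, tr) = params, truth
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    dx = (cx + r * np.cos(theta)) - (tx + tr * np.cos(theta))
    dy = (cy + r * np.sin(theta)) - (ty + tr * np.sin(theta))
    return np.sqrt(np.mean(dx**2 + dy**2))

def stability_scan(objective, img, truth, initial_guesses):
    """Return the RMSD reached from each initial (cx, cy, r) guess."""
    errors = []
    for x0 in initial_guesses:
        res = minimize(lambda p: objective(img, *p), np.asarray(x0, float),
                       method="Nelder-Mead")
        errors.append(boundary_rmsd(res.x, truth))
    return np.array(errors)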

4.2. Real pupil

Using the real pupil image of Fig. 1 again, we selected by visual inspection the center and radius that serve as the ground truth. Figure 6 shows the relative deviation, in RMSD terms, from the solution of the optimization problem to the subjectively collected ground-truth parameters. For the integro-differential operator method the error grows as the initial radius increases and is less sensitive to either the offset or its angle, Figs. 6(a) and (b). On the other hand, for the method proposed here, and as was the case for the artificial pupil, the fit error is still one order of magnitude smaller [see Figs. 6(c) and (d)] for the whole range of initial conditions. This is consistent with the more cluttered optimization landscape shown for a real iris and the fact that this landscape (see Fig. 4) deteriorates much less for the method proposed here.

Fig. 6 Stability analysis of the integro-differential method (left) and the method proposed here (right), using a real pupil and a range of initial radii. In the top row the deviation is shown as a function of the offset angle (averaged by offset), while in the bottom row it is displayed as a function of the offset (averaged by offset angle).

5. Tracking

5.1. Hardware

Dichoptic stimulation [16] was provided at 60 Hz via a pair of stereoscopically arranged LCD displays. The individual stimulus regions of the visual field were spatially low-pass filtered to contain no spatial frequencies above 1.5 cpd. This assisted in providing tolerance to mis-refraction of 2 to 3 D. Subjects were refracted to the nearest 1.5 D spherical equivalent. Each region received statistically independent stimulus presentations at a mean rate of 1/s per stimulus region. The aggregate presentation rate to the two eyes was thus 48 stimuli/s. The recording duration was 4 min for each test, divided into eight segments of 30 s duration. Pupil responses were recorded by video cameras under infrared illumination. The video sampling was at 30 frames/s, made synchronous (via software) with the LCD displays.
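As an illustration of the spatial low-pass step described above, the sketch below removes all spatial frequencies above 1.5 cpd from a stimulus region with a hard FFT cutoff. The pixels-per-degree value and the use of a hard cutoff are assumptions about the display geometry and the filter, not the authors' actual filter design.

# Spatial low-pass of a stimulus region to a 1.5 cpd cutoff via the FFT.
# pix_per_deg and the hard cutoff are illustrative assumptions.
import numpy as np

def lowpass_cpd(region, cutoff_cpd=1.5, pix_per_deg=20.0):
    """Zero out spatial frequencies above cutoff_cpd (cycles per degree)."""
    h, w = region.shape
    fy = np.fft.fftfreq(h, d=1.0 / pix_per_deg)   # cycles/degree along y
    fx = np.fft.fftfreq(w, d=1.0 / pix_per_deg)   # cycles/degree along x
    freq = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    spectrum = np.fft.fft2(region)
    spectrum[freq > cutoff_cpd] = 0.0             # discard everything above the cutoff
    return np.real(np.fft.ifft2(spectrum))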

5.2. Offline method

5.3. Results

Fig. 8 An example of tracking with numerous blinks and tear formation; see Media 1.
Fig. 9 An example of tracking with prominent droopy-eyelid interference; see Media 2.
Fig. 10 An example of tracking with a large dynamic range and uneven shape during dilation; see Media 3.

6. Discussion

We illustrate the quasi-real-time tracking ability of our method (see Figs. 8, 9, 10 and their associated Media 1, Media 2, Media 3) on real pupils in unfavourable conditions with an offline algorithm running at 0.05 s per frame in Matlab®. We take real time to mean a value close to 1/24 s per frame, so real-time performance is likely to be achieved by using a non-interpreted programming language. The Matlab® implementation of our algorithm will be made openly available.

7. Conclusion

References and links

1. E. S. Maini, "Robust ellipse-specific fitting for real-time machine vision," in Brain, Vision, and Artificial Intelligence, M. Gregorio, V. Maio, M. Frucci, and C. Musio, eds. (Springer Berlin Heidelberg, 2005), vol. 3704, pp. 318–327.
2. A. Fitzgibbon, M. Pilu, and R. B. Fisher, "Direct least square fitting of ellipses," IEEE Trans. Pattern Anal. Mach. Intell. 21, 476–480 (1999).
3. K. Kanatani, "Ellipse fitting with hyperaccuracy," IEICE Trans. Inf. Syst. E89-D, 2653–2660 (2006).
4. K. Kanatani, "Statistical bias of conic fitting and renormalization," IEEE Trans. Pattern Anal. Mach. Intell. 16, 320–326 (1994).
5. J. Porrill, "Fitting ellipses and predicting confidence envelopes using a bias corrected Kalman filter," Image Vis. Comput. 8, 37–41 (1990).
6. J. Daugman, "High confidence visual recognition of persons by a test of statistical independence," IEEE Trans. Pattern Anal. Mach. Intell. 15(11), 1148–1161 (1993).
7. S. A. C. Schuckers, N. A. Schmid, A. Abhyankar, V. Dorairaj, C. K. Boyce, and L. A. Hornak, "On techniques for angle compensation in nonideal iris recognition," IEEE Trans. Syst. Man Cybern. B Cybern. 37, 1176–1190 (2007).
8. W. Sankowski, K. Grabowski, M. Napieralska, M. Zubert, and A. Napieralski, "Reliable algorithm for iris segmentation in eye image," Image Vis. Comput. 28, 231–237 (2010).
9. H. Yuen, J. Princen, J. Illingworth, and J. Kittler, "Comparative study of Hough transform methods for circle finding," Image Vis. Comput. 8, 71–77 (1990).
10. H. Proenca, "Iris recognition: on the segmentation of degraded images acquired in the visible wavelength," IEEE Trans. Pattern Anal. Mach. Intell. 32(8), 1502–1516 (2010).
11. Z. He, T. Tan, Z. Sun, and X. Qiu, "Toward accurate and fast iris segmentation for iris biometrics," IEEE Trans. Pattern Anal. Mach. Intell. 31(9), 1670–1684 (2009).
12. Z. He, T. Tan, and Z. Sun, "Iris localization via pulling and pushing," in 18th International Conference on Pattern Recognition (ICPR 2006), vol. 4, pp. 366–369 (2006).
13. T. Camus and R. Wildes, "Reliable and fast eye finding in close-up images," in 16th International Conference on Pattern Recognition (ICPR 2002), vol. 1, pp. 389–394 (2002).
14. Y. Chen, M. Adjouadi, C. Han, J. Wang, A. Barreto, N. Rishe, and J. Andrian, "A highly accurate and computationally efficient approach for unconstrained iris segmentation," Image Vis. Comput. 28, 261–269 (2010).
15. A. Bell, A. C. James, M. Kolic, R. W. Essex, and T. Maddess, "Dichoptic multifocal pupillography reveals afferent visual field defects in early type 2 diabetes," Invest. Ophthalmol. Vis. Sci. 51, 602–608 (2010).
16. C. F. Carle, T. Maddess, and A. C. James, "Contraction anisocoria: segregation, summation, and saturation in the pupillary pathway," Invest. Ophthalmol. Vis. Sci. 52, 2365–2371 (2011).
17. J. Miles, www.milesresearch.com. Image use permission kindly granted by owner.
18. J. A. Nelder and R. Mead, "A simplex method for function minimization," Comput. J. 7, 308–313 (1965).
19. J. C. Lagarias, J. A. Reeds, M. H. Wright, and P. E. Wright, "Convergence properties of the Nelder–Mead simplex method in low dimensions," SIAM J. Optim. 9, 112–147 (1998).
20. G. Taubin, "Estimation of planar curves, surfaces, and nonplanar space curves defined by implicit equations with applications to edge and range image segmentation," IEEE Trans. Pattern Anal. Mach. Intell. 13, 1115–1138 (1991).
21. C. Broyden, "The convergence of a class of double-rank minimization algorithms 1. General considerations," IMA J. Appl. Math. 6, 76–90 (1970).
22. R. Fletcher, "A new approach to variable metric algorithms," Comput. J. 13, 317–322 (1970).
23. D. Goldfarb, "A family of variable-metric methods derived by variational means," Math. Comput. 24, 23–26 (1970).
24. D. F. Shanno, "Conditioning of quasi-Newton methods for function minimization," Math. Comput. 24, 647–656 (1970).

OCIS Codes
(100.0100) Image processing : Image processing
(150.0150) Machine vision : Machine vision
(170.4470) Medical optics and biotechnology : Ophthalmology
(330.2210) Vision, color, and visual optics : Vision - eye movements
(150.1135) Machine vision : Algorithms
(100.4999) Image processing : Pattern recognition, target tracking

ToC Category:
Image Processing

History
Original Manuscript: April 17, 2014
Revised Manuscript: June 20, 2014
Manuscript Accepted: June 22, 2014
Published: July 2, 2014

Citation
Marconi Barbosa and Andrew C. James, "Joint iris boundary detection and fit: a real-time method for accurate pupil tracking," Biomed. Opt. Express 5, 2458-2470 (2014)
http://www.opticsinfobase.org/boe/abstract.cfm?URI=boe-5-8-2458




Supplementary Material


Media 1: AVI (9271 KB)
Media 2: AVI (7780 KB)
Media 3: AVI (8360 KB)
