## Retinal motion estimation in adaptive optics scanning laser ophthalmoscopy

Optics Express, Vol. 14, Issue 2, pp. 487-497 (2006)

http://dx.doi.org/10.1364/OPEX.14.000487


### Abstract

We apply a novel computational technique known as the map-seeking circuit (MSC) algorithm to estimate the motion of the retina of the eye from a sequence of frames of data from a scanning laser ophthalmoscope. We also present a scheme to dewarp and co-add frames of retinal image data, given the estimated motion. The motion estimation and dewarping techniques are applied to data collected with an adaptive optics scanning laser ophthalmoscope (AOSLO).

© 2006 Optical Society of America

## 1. Introduction


## 2. Mathematical Model for Scanning Data

Let **x** = (*x*, *y*) denote lateral (horizontal and vertical) position in the retinal layer being scanned (if there were no lateral displacements due to eye movements), and let *E*(**x**) denote its reflectivity. Let **r**(*t*) = (*r*_H(*t*), *r*_V(*t*)) represent the known raster position at time *t*, and let **X**(*t*) = (*X*(*t*), *Y*(*t*)) denote the unknown lateral displacement of the retina. A continuous model for preprocessed, noise-free AOSLO scanning data is then

*d*(*t*) = *E*(**r**(*t*) + **X**(*t*)).

The recorded data take the discrete, noisy form *d*_i = *d*(*t*_i) + *η*_i, where *η*_i represents noise and the *t*_i denote discrete pixel recording times.

The raster components *r*_H and *r*_V are periodic sawtooths. Thus in the absence of retinal motion, the AOSLO would measure the reflectivity of the retinal layer at discrete, equispaced points on a rectangular grid. For the first-generation AOSLO, this grid is 512 pixels across by 480 pixels vertically, and it is resampled every 1/30 second. With retinal motion, the sampled grid moves and is distorted so that it is no longer rectangular; see [5].
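The scanning model above can be made concrete with a short numerical sketch. Only the 512 × 480 raster and the 1/30 s frame period come from the text; the reflectivity pattern, drift trajectory, and noise level below are hypothetical stand-ins chosen for illustration.

```python
import numpy as np

NX, NY = 512, 480          # raster size in pixels (first-generation AOSLO)
FRAME_T = 1.0 / 30.0       # frame scan period in seconds

def reflectivity(x, y):
    """Synthetic retinal reflectivity E(x, y) -- an illustrative test pattern."""
    return 0.5 + 0.25 * np.sin(0.1 * x) * np.cos(0.08 * y)

def raster_position(t):
    """Sawtooth raster r(t) = (r_H(t), r_V(t)): fast horizontal, slow vertical."""
    line_t = FRAME_T / NY                      # time to scan one raster line
    r_h = NX * ((t % line_t) / line_t)         # fast sawtooth across a line
    r_v = NY * ((t % FRAME_T) / FRAME_T)       # slow sawtooth down the frame
    return r_h, r_v

def displacement(t):
    """Hypothetical smooth retinal drift X(t) = (X(t), Y(t)) in pixels."""
    return 3.0 * np.sin(2 * np.pi * 4.0 * t), 2.0 * np.cos(2 * np.pi * 3.0 * t)

def scan_frame(noise_sigma=0.01, seed=0):
    """Sample d_i = E(r(t_i) + X(t_i)) + eta_i over one frame."""
    rng = np.random.default_rng(seed)
    t = np.arange(NX * NY) * FRAME_T / (NX * NY)   # pixel recording times t_i
    r_h, r_v = raster_position(t)
    dx, dy = displacement(t)
    d = reflectivity(r_h + dx, r_v + dy) + rng.normal(0, noise_sigma, t.size)
    return d.reshape(NY, NX)

frame = scan_frame()
```

Because each pixel is recorded at a different time *t*_i, a nonzero **X**(*t*) warps the sampled grid within a single frame, which is exactly the distortion the dewarping scheme of Section 4 must undo.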

## 3. Motion Retrieval Algorithms

### 3.1. Translational Motion Estimation Via Cross-Correlation

Given an offset **v**, the cross-correlation of a pair of images *E* and *E*′ is defined to be

*C*(**v**) = ∫ *E*(**x**) *E*′(**x** + **v**) d**x**.   (5)

By finding the discrete shifts (*k*, ℓ) which maximize the cross-correlation between consecutive pairs of discrete, rectangular AOSLO image frames, one can obtain an estimate for the frame-to-frame offset *τ*_f **v** in (4), and then extract the drift velocity **v** by dividing by the frame period *τ*_f; see [5].
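As a concrete illustration of this estimator, the shift maximizing the discrete cross-correlation of two frames can be found with FFTs; the only subtlety is mapping wrap-around indices to signed shifts. This is a standard FFT-correlation sketch, not necessarily the authors' implementation.

```python
import numpy as np

def xcorr_shift(ref, match):
    """Return the integer shift (dy, dx) maximizing the circular
    cross-correlation of two equal-size frames, such that
    match is approximately ref shifted by (dy, dx).
    Cost is proportional to N log N, N = number of pixels."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(match)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    ny, nx = corr.shape
    # map wrap-around indices to signed shifts
    if dy > ny // 2:
        dy -= ny
    if dx > nx // 2:
        dx -= nx
    return dy, dx
```

Dividing the recovered frame-to-frame offset by the frame period *τ*_f then gives a drift-velocity estimate, as described above.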

### 3.2. Motion Detection Using the MSC Algorithm

The MSC algorithm seeks a transformation *T*, from a given class of transformations, that maps a match image to a reference image. In the context of maximizing the cross-correlation (5), the transformations are translations and can be discretely parameterized as a horizontal shift of *k* pixels, which we denote by *T*_k^(1), followed by a vertical shift of *ℓ* pixels, denoted by *T*_ℓ^(2), so that

*T* = *T*_ℓ^(2) *T*_k^(1).   (6)

Rather than searching directly over the pairs (*k*, *ℓ*) to maximize the cross-correlation, we select coefficient, or "gain", vectors **g**^(1) = (*g*_1^(1), *g*_2^(1), …) and **g**^(2) = (*g*_1^(2), *g*_2^(2), …) which maximize the extended cross-correlation,

*C*(**g**^(1), **g**^(2)) = ⟨ *E*, Σ_ℓ *g*_ℓ^(2) *T*_ℓ^(2) Σ_k *g*_k^(1) *T*_k^(1) *E*′ ⟩.

While the computational expense of FFT-based cross-correlation maximization is proportional to *N* log *N*, where *N* is the number of pixels, the computational expense of MSC is proportional to *N*, and the proportionality constant is quite small. MSC's cost can be dramatically reduced further if one can obtain a "sparse encoding" of the information in the images, by which we mean a very compact representation which requires minimal storage.

Rotations can be handled by adding a third set of transformations *T*_m^(3) to the decomposition (6). The *T*_m^(3) represent a discretization of the rotations to be considered.
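The gain-vector iteration can be sketched for pure translations. The layered structure (per-layer superpositions with competing gains) follows the decomposition (6), but the specific update rule, constants, and iteration count below are illustrative choices, not the authors' MSC implementation.

```python
import numpy as np

def msc_translation(ref, match, max_shift=8, n_iter=20, alpha=0.3):
    """Illustrative map-seeking sketch for translations.

    Returns (k, l) such that shifting `match` by k pixels horizontally
    (T_k^(1)) and l pixels vertically (T_l^(2)) best aligns it with `ref`.
    """
    h_shifts = np.arange(-max_shift, max_shift + 1)   # candidates for T^(1)
    v_shifts = np.arange(-max_shift, max_shift + 1)   # candidates for T^(2)
    g1 = np.ones(h_shifts.size)                       # horizontal gains g^(1)
    g2 = np.ones(v_shifts.size)                       # vertical gains g^(2)

    for _ in range(n_iter):
        # forward superposition through layer 1: sum_k g1_k T_k^(1) match
        f1 = sum(g * np.roll(match, k, axis=1) for g, k in zip(g1, h_shifts))
        # layer-2 gains compete on correlation with the reference
        q2 = np.array([np.sum(ref * np.roll(f1, l, axis=0)) for l in v_shifts])
        g2 = np.maximum(g2 - alpha * (1 - q2 / q2.max()), 0)
        # backward superposition through layer 2: sum_l g2_l (T_l^(2))^-1 ref
        b2 = sum(g * np.roll(ref, -l, axis=0) for g, l in zip(g2, v_shifts))
        q1 = np.array([np.sum(b2 * np.roll(match, k, axis=1)) for k in h_shifts])
        g1 = np.maximum(g1 - alpha * (1 - q1 / q1.max()), 0)

    # after the gains have competed, the largest gain in each layer
    # identifies the best (k, l)
    return h_shifts[np.argmax(g1)], v_shifts[np.argmax(g2)]
```

Note that each iteration touches each pixel a small constant number of times per candidate shift, which is the source of the cost-proportional-to-*N* behavior described above.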

### 3.3. Image Preprocessing

## 4. Image Dewarping

Let *E* denote the reference image and let *E*′ denote the "match" image onto which the reference image is mapped via a transformation *T*. We map the match image back to the reference image using the inverse transformation to obtain a new image. If **x** lies on a regular reference grid, then **x**′ = *T***x** is unlikely to lie on this same regular grid. In a reciprocal manner, the match image *E*′ is implicitly collected on a rectangular array. Points **x**′ on this "match array" will not map back to the reference grid. See Fig. 1 for an illustration.

We maintain two arrays: *Earray*, which will contain an intensity value at each point in the reference grid, and *Warray*, which contains a weight at each point in the reference grid. We then sweep through the points in the match grid. For each point **x**′_i in the match grid, we apply the inverse transformation to obtain **x**″_i = *T*^(-1) **x**′_i. We next determine into which rectangle in the reference grid **x**″_i falls. For each of the four reference grid points **x**_j at the corners of this rectangle, we do the following. If *Warray*(*j*) = 0, we set the intensity value *Earray*(*j*) = *E*′(**x**′_i), and we set the weight

*Warray*(*j*) = (1 − |*x*″_i − *x*_j| / *h*_x)(1 − |*y*″_i − *y*_j| / *h*_y),

where *x*_j and *y*_j denote the x- and y-components of **x**_j, and similarly for *x*″_i and *y*″_i, and *h*_x and *h*_y denote the mesh spacings in the x- and y-directions. On the other hand, if the current weight *Warray*(*j*) ≠ 0, we compute a new weight as before, but we reset the intensity value *Earray*(*j*) to be a convex combination of the old intensity value and *E*′(**x**′_i), where the coefficients in the convex combination are determined by the current and new weights. We then add the new weight to the current weight *Warray*(*j*).
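The accumulation loop described above maps directly to code. This sketch assumes the back-transformed points **x**″_i are supplied in reference-grid coordinates and uses the bilinear weights stated above; the names `Earray` and `Warray` follow the text, and the function signature is an assumption for illustration.

```python
import numpy as np

def dewarp_accumulate(Earray, Warray, match, inv_points, hx=1.0, hy=1.0):
    """Scatter match-image intensities onto the reference grid.

    `match` holds intensities E'(x'_i) at the match-array points;
    `inv_points` holds the back-transformed locations x''_i = T^(-1) x'_i
    in reference-grid coordinates.  Each point deposits its intensity onto
    the four corners of the reference cell containing it, with bilinear
    weights; existing intensities are blended as a weight-determined convex
    combination, and the weights accumulate in Warray.
    """
    ny, nx = Earray.shape
    for val, (xq, yq) in zip(match.ravel(), inv_points):
        i0, j0 = int(np.floor(yq / hy)), int(np.floor(xq / hx))
        for di in (0, 1):
            for dj in (0, 1):
                i, j = i0 + di, j0 + dj
                if not (0 <= i < ny and 0 <= j < nx):
                    continue            # corner falls outside the grid
                w = (1 - abs(xq / hx - j)) * (1 - abs(yq / hy - i))
                if w <= 0:
                    continue
                if Warray[i, j] == 0:
                    Earray[i, j] = val          # first contribution wins outright
                    Warray[i, j] = w
                else:
                    # convex combination of old and new, weighted by Warray and w
                    tot = Warray[i, j] + w
                    Earray[i, j] = (Warray[i, j] * Earray[i, j] + w * val) / tot
                    Warray[i, j] = tot
    return Earray, Warray
```

Co-adding several dewarped frames amounts to calling this routine once per frame with the same `Earray`/`Warray` pair, so later frames blend into, rather than overwrite, earlier ones.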

## 5. Experimental Results

## 6. Discussion and Conclusions


The methods presented here estimate *relative motion*. For instance, one can "freeze" the reference frame (which was the first frame in the case presented above) and compute motion relative to the reference frame. However, the retina moves during the 1/30 second scan period during which the reference frame is recorded. Consequently, the estimated motion will be in error, or "biased", by an amount that depends on the retinal motion during the scan of the reference frame.

If a nonconstant reference frame bias is present in the estimated motion **X**(*t*), one can extract the bias by exploiting the fact that the bias component repeats in each of the scan intervals *nτ*_s ≤ *t* ≤ (*n* + 1)*τ*_s, *n* = 1, …, *N*. Here *τ*_s is the frame scan period, and **X**_bias(*t*), 0 ≤ *t* ≤ *τ*_s, is the (nonconstant) reference frame bias. The corrected motion for the *n*th scan period is then **X**(*t* + *nτ*_s) − **X**_bias(*t*), 0 ≤ *t* ≤ *τ*_s.
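One simple way to exploit the frame-periodicity of the bias is sketched below. It assumes, purely for illustration, that the true motion within each frame averages out across frames, so the frame-locked mean profile (after removing each frame's own mean) estimates **X**_bias; the authors' actual extraction procedure may differ.

```python
import numpy as np

def remove_frame_bias(x_hat, samples_per_frame):
    """Estimate and subtract a reference-frame bias that repeats every
    scan period tau_s from a sampled motion trace x_hat.

    Illustrative assumption: the bias is identical in every frame while
    the frame-to-frame motion varies, so the frame-locked average of the
    per-frame-detrended trace recovers X_bias(t), 0 <= t <= tau_s.
    """
    n_frames = x_hat.size // samples_per_frame
    frames = x_hat[: n_frames * samples_per_frame].reshape(n_frames, -1)
    detrended = frames - frames.mean(axis=1, keepdims=True)
    bias = detrended.mean(axis=0)      # estimate of X_bias(t)
    corrected = frames - bias          # X(t + n*tau_s) - X_bias(t)
    return corrected.ravel(), bias
```

When the within-frame motion does not average out (e.g., a steady drift whose within-frame ramp repeats every frame), that component is indistinguishable from the bias by this averaging argument, which is why the bias is described as a fundamental limitation of relative-motion estimation.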

## Acknowledgments

## References and links

1. A. Roorda, F. Romero-Borja, W.J. Donnelly, T.J. Hebert, H. Queener, and M.C.W. Campbell, "Adaptive Optics Scanning Laser Ophthalmoscopy," Opt. Express **10**, 405–412 (2002). [PubMed]

2. J. Liang, D. R. Williams, and D. Miller, "Supernormal vision and high-resolution retinal imaging through adaptive optics," J. Opt. Soc. Am. A **14**, 2884–2892 (1997). [CrossRef]

3. R. H. Webb, G. W. Hughes, and F. C. Delori, "Confocal scanning laser ophthalmoscope," Appl. Opt. **26**, 1492–1499 (1987). [CrossRef] [PubMed]

4. J.B. Mulligan, "Recovery of motion parameters from distortions in scanned images," in Proceedings of the NASA Image Registration Workshop (IRW97), NASA Goddard Space Flight Center, MD (1997).

5. S.B. Stevenson and A. Roorda, "Correcting for miniature eye movements in high resolution scanning laser ophthalmoscopy," in Ophthalmic Technologies XV, F. Manns, P. Soderberg, and A. Ho, eds., Proc. SPIE **5688A** (SPIE, Bellingham, WA, 2005), pp. 145–151. [CrossRef]

6. D. P. Wornson, G. W. Hughes, et al., "Fundus tracking with the scanning laser ophthalmoscope," Appl. Opt. **26**, 1500–1504 (1987). [CrossRef] [PubMed]

7. N. J. O'Connor, D. U. Bartsch, et al., "Fluorescent infrared scanning-laser ophthalmoscope for three-dimensional visualization: automatic random-eye-motion correction and deconvolution," Appl. Opt. **37**, 2021–2033 (1998). [CrossRef]

8. E. DeCastro, G. Cristini, et al., "Compensation of random eye motion in television ophthalmoscopy—preliminary results," IEEE Transactions on Medical Imaging **6**, 74–81 (1987). [CrossRef]

9. A. V. Cideciyan, "Registration of ocular fundus images—an algorithm using cross-correlation of triple invariant image descriptors," IEEE Engineering in Medicine and Biology Magazine **14**, 52–58 (1995). [CrossRef]

10. J. Modersitzki, *Numerical Methods for Image Registration* (Oxford University Press, 2004).

11. D.W. Arathorn, *Map-Seeking Circuits in Visual Cognition: A Computational Mechanism for Biological and Machine Vision* (Stanford University Press, 2002).

12. D. W. Arathorn, "Computation in higher visual cortices: Map-seeking circuit theory and application to machine vision," in Proceedings of IEEE Applied Imagery Pattern Recognition Workshop (2004), pp. 73–78.

13. D. W. Arathorn, "From wolves hunting elk to Rubik's cubes: Are the cortices compositional/decompositional engines?" in Proceedings of AAAI Symposium on Compositional Connectionism (2004), pp. 1–5.

14. D. W. Arathorn, "Memory-driven visual attention: An emergent behavior of map-seeking circuits," in *Neurobiology of Attention*, L. Itti, G. Rees, and J. Tsotsos, eds. (Academic Press/Elsevier, 2005).

15. D. W. Arathorn, "A cortically plausible inverse problem solving method applied to recognizing static and kinematic 3-D objects," in Proceedings of Neural Information Processing Systems (NIPS) Workshop (2005).

16. D. W. Arathorn and T. Gedeon, "Convergence in map finding circuits," preprint (2004).

17. S.A. Harker, T. Gedeon, and C.R. Vogel, "A multilinear optimization problem associated with correspondence maximization," preprint (2005).

18. http://www.math.montana.edu/~vogel/Vision/graphics/

19. J. A. Martin and A. Roorda, "Direct and non-invasive assessment of parafoveal capillary leukocyte velocity," Ophthalmology (in press).

20. T.N. Cornsweet and H.D. Crane, "Accurate two-dimensional eye tracker using first and fourth Purkinje images," J. Opt. Soc. Am. **63**, 921–928 (1973). [CrossRef] [PubMed]

**OCIS Codes**

(010.1080) Atmospheric and oceanic optics : Active or adaptive optics

(180.1790) Microscopy : Confocal microscopy

**ToC Category:**

Focus issue: Signal recovery and synthesis

**Virtual Issues**

Vol. 1, Iss. 2 *Virtual Journal for Biomedical Optics*

**Citation**

Curtis R. Vogel, David W. Arathorn, Austin Roorda, and Albert Parker, "Retinal motion estimation in adaptive optics scanning laser ophthalmoscopy," Opt. Express **14**, 487-497 (2006)

http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-14-2-487
