
Optics Express

  • Editor: Michael Duncan
  • Vol. 14, Iss. 2 — Jan. 23, 2006
  • pp: 487–497

Retinal motion estimation in adaptive optics scanning laser ophthalmoscopy

Curtis R. Vogel, David W. Arathorn, Austin Roorda, and Albert Parker


Optics Express, Vol. 14, Issue 2, pp. 487-497 (2006)
http://dx.doi.org/10.1364/OPEX.14.000487



Abstract

We apply a novel computational technique known as the map-seeking circuit algorithm to estimate the motion of the retina of the eye from a sequence of frames of data from a scanning laser ophthalmoscope. We also present a scheme to dewarp and co-add frames of retinal image data, given the estimated motion. The motion estimation and dewarping techniques are applied to data collected from an adaptive optics scanning laser ophthalmoscope.

© 2006 Optical Society of America

1. Introduction

The adaptive optics scanning laser ophthalmoscope [1] (AOSLO) is a scanning device which produces high-resolution optical sections of the retina of the living eye. This instrument combines adaptive optics [2], a set of techniques used to measure and correct for the aberrations that blur retinal images, with confocal scanning laser ophthalmoscopy [3]. After adaptive optics correction, microscopic details in the human retina can be observed directly. The usefulness of the AOSLO has been limited by motions of the eye that occur on time scales comparable to the scan rate. These motions can lead to severe distortions, or warping, of the AOSLO images. The distortions are particularly apparent in the AOSLO because, with the small field sizes used to achieve sufficient sampling rates for microscopic imaging (typically 1–3 degrees), the effects of eye movements are magnified. Removing the distortions is an important step toward providing high-fidelity visualizations of the retina, either as stabilized videos or as high signal-to-noise (S:N) frames. Since the signal in a single frame is limited by safe light-exposure limits, co-adding an undistorted sequence of frames is required to improve the S:N of static images. To correct for these distortions, one must first determine the retinal motion that caused them.

In this paper we apply a patch-based cross-correlation approach to estimate retinal motion. This approach is similar to that proposed by Mulligan [4] and has been successfully implemented on AOSLO data by Stevenson and Roorda [5]. It requires only a sequence of frames of scan data, and it allows one to estimate within-frame motion. Wornson et al. [6] and O’Connor et al. [7] apply frame-to-frame cross-correlation but do not estimate within-frame motion; Decastro et al. [8] require a larger static image in order to estimate retinal motion; and Cideciyan [9] discusses techniques to align static (snapshot) rather than scanned frames. It may be possible to adapt other standard techniques from image registration [10], e.g., landmark-based registration, to obtain within-frame motion estimates from scanned data.

Rather than applying standard Fourier-based techniques to compute the cross-correlations, we employ a novel computational technique known as the map-seeking circuit (MSC) algorithm [11–17]. MSC has much lower computational complexity than Fourier approaches; hence it may allow real-time motion estimation. In addition, MSC easily accommodates more general motions, such as rotations, that may arise in AOSLO imaging.

Once motion estimates are available, one can compensate for the motion-induced image distortions. This is called image “dewarping”. In this paper we introduce a weighting scheme that is based on linear interpolation to do the dewarping. This scheme can also be used to co-add multiple image frames in order to form image mosaics and to average out the noise.

This paper is organized as follows. In Section 2 we present a simple model for scanned image data from the AOSLO. Section 3 deals with motion estimation. We review the patch-based cross-correlation approach in the context of our data model, and we present the MSC algorithm as a computationally efficient alternative to fast Fourier transform based schemes to compute cross-correlation. In Section 4 we present our dewarping scheme. In Section 5 we present experimental results which demonstrate the effectiveness of both our MSC-based motion estimation scheme and our dewarping scheme. Finally, we present discussion and conclusions in Section 6.

2. Mathematical Model for Scanning Data

The first generation AOSLO device uses a pair of scan mirrors to sweep a beam across the retina. The fast scan mirror sweeps back and forth horizontally once every 63.5 microseconds; it is attached to a vibrating piezo-electric crystal and traces out a sinusoidal path in space-time. The scanning beam, which illuminates the retina and also drives the wavefront sensor for the AO system, is turned off for half of each scan period. Data at the extremes of the scan period are discarded, and some preliminary processing is done to remove sinusoidal effects, so that all of the remaining 512 processed pixels sample regions of the retina with the same horizontal extent. Each pixel subtends about 0.17 minutes of arc, or 0.88 microns of planar distance across the retina. The slow scan mirror sweeps vertically across the retina at a 30-hertz rate and traces out a skewed sawtooth pattern in space-time. The scan rate of the slow scan mirror is calibrated so that the pixels are nearly square. The slow scan return path takes about 10 percent of the total scan time, and the data recorded during the return path are discarded.

Let x = (x, y) denote lateral (horizontal and vertical) position in the retinal layer being scanned (if there were no lateral displacements due to eye movements), and let E(x) denote its reflectivity. Let r(t) = (r_H(t), r_V(t)) represent the known raster position at time t, and let X(t) = (X(t), Y(t)) denote the unknown lateral displacement of the retina. A continuous model for preprocessed, noise-free AOSLO scanning data is then

d(t) = E(r(t) + X(t)).
(1)

A model for recorded data is

d_i = E(r(t_i) + X(t_i)) + η_i,
(2)

where η_i represents noise and the t_i denote the discrete pixel recording times.
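The discrete model (2) can be exercised directly. The sketch below, assuming a synthetic reflectivity map and bilinear interpolation (both our illustrative choices, not specified by the paper), generates noisy scan samples d_i from known raster positions and a displacement history:

```python
import numpy as np

def simulate_scan(E, raster_xy, disp_xy, noise_sigma=0.01, rng=None):
    """Sample reflectivity E at raster positions shifted by the retinal
    displacement, per d_i = E(r(t_i) + X(t_i)) + eta_i  (Eq. 2).

    E         : 2-D array of reflectivity values on a regular grid
    raster_xy : (N, 2) array of known raster positions r(t_i), (col, row)
    disp_xy   : (N, 2) array of displacements X(t_i), (col, row)
    """
    rng = np.random.default_rng() if rng is None else rng
    pos = raster_xy + disp_xy                       # r(t_i) + X(t_i)
    x, y = pos[:, 0], pos[:, 1]
    # Bilinear interpolation of E at the (generally non-integer) positions.
    x0 = np.clip(np.floor(x).astype(int), 0, E.shape[1] - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, E.shape[0] - 2)
    fx, fy = x - x0, y - y0
    d = ((1 - fy) * (1 - fx) * E[y0, x0] + (1 - fy) * fx * E[y0, x0 + 1]
         + fy * (1 - fx) * E[y0 + 1, x0] + fy * fx * E[y0 + 1, x0 + 1])
    return d + rng.normal(0.0, noise_sigma, size=d.shape)  # add eta_i
```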

Since the data are preprocessed, we can assume both horizontal and vertical scan paths, r_H and r_V, are periodic sawtooths. Thus, in the absence of retinal motion, the AOSLO would measure the reflectivity of the retinal layer at discrete, equispaced points on a rectangular grid. For the first generation AOSLO, this grid is 512 pixels across by 480 pixels vertically, and it is resampled every 1/30 second. With retinal motion, the sampled grid moves and is distorted so that it is no longer rectangular; see [5].

3. Motion Retrieval Algorithms

3.1. Translational Motion Estimation Via Cross-Correlation

Retinal motion in the form of a drift with constant velocity v,

X(t) = X(t_0) + (t - t_0) v,
(3)

implies that, since the raster is periodic with frame period τ_f, the position sampled at time t_i + τ_f is offset from that sampled at t_i by τ_f v:

E(x(t_i + τ_f)) = E(x(t_i) + τ_f v) + noise.
(4)

Recall that the cross-correlation between a pair of discrete rectangular images E and E′ is defined to be

corr(E, E′)_{k,ℓ} = Σ_i Σ_j E(i + k, j + ℓ) E′(i, j).
(5)

By finding the indices k, ℓ which maximize the cross-correlation between consecutive pairs of discrete, rectangular AOSLO image frames, one can obtain an estimate for the offset τ_f v in (4), and then extract the drift velocity v.
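A minimal FFT-based sketch of this maximization (the standard Fourier approach, not the MSC) recovers the integer offset, from which the drift velocity follows as v ≈ offset/τ_f. Circular correlation is assumed for brevity; real data would need padding or windowing:

```python
import numpy as np

def argmax_xcorr(E, Eprime):
    """Find the integer shift (k, l) maximizing the cross-correlation of
    Eq. (5) via FFT-based circular correlation."""
    corr = np.fft.ifft2(np.fft.fft2(E) * np.conj(np.fft.fft2(Eprime))).real
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap circular indices into the signed range [-n/2, n/2).
    return tuple(int(i - n) if i > n // 2 else int(i)
                 for i, n in zip(idx, corr.shape))
```

For frames one scan period apart, the returned shift estimates τ_f v directly.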

The above cross-correlation approach may also be used to estimate translational motion between non-sequential image frames; pixels recorded at integer multiples of the frame scan period correspond as they do in the sequential-frame case, and the offsets are integer multiples of τ_f v.

There are several shortcomings to the basic cross-correlation approach outlined above. The motion within each frame period is typically not even close to a constant drift. This problem is addressed in Section 3.3 below. Another shortcoming is due to the fact that the reflectivity E at a particular point may vary with time. This problem may be dealt with by changing the reference frame (the fixed frame against which the other frames are cross-correlated) when large changes in reflectivity are detected.

The standard approach to cross-correlation of discrete images relies on the fast Fourier transform (FFT). Stevenson and Roorda have made use of FFT-based correlation for retinal motion detection [5].

3.2. Motion Detection Using the MSC Algorithm

MSC is a method for discovering the “best” transformation T, from a given class of transformations, that maps a match image to a reference image. In the context of maximizing the cross-correlation (5), the transformations are translations and can be discretely parameterized as

T_{k,ℓ} E(i, j) = E(i + k, j + ℓ).

Each of the translations can be decomposed into a horizontal shift by k pixels, which we denote by T_k^(1), followed by a vertical shift by ℓ pixels, denoted by T_ℓ^(2), so that

T_{k,ℓ} = T_ℓ^(2) T_k^(1).
(6)

We introduce the notation

⟨E, E′⟩ = Σ_i Σ_j E(i, j) E′(i, j),
(7)

so that the cross-correlation (5) can be expressed as

corr(k, ℓ) = ⟨T_ℓ^(2) T_k^(1) E, E′⟩.
(8)

A key idea underpinning MSC is that of superposition. Rather than explicitly selecting discrete indices (k, ℓ) to maximize the cross-correlation, we select coefficient, or “gain”, vectors g^(1) = (g_1^(1), g_2^(1), ...) and g^(2) = (g_1^(2), g_2^(2), ...) which maximize the extended cross-correlation,

corr(g^(1), g^(2)) = ⟨(Σ_ℓ g_ℓ^(2) T_ℓ^(2)) (Σ_k g_k^(1) T_k^(1)) E, E′⟩.
(9)

Note that when one picks the gain vectors to be standard unit vectors (all zeros except a single “1” entry), the extended cross-correlation reduces to the standard cross-correlation.

The MSC algorithm is an iterative scheme which makes use of efficiently computed gradients of the extended cross-correlation (9) to very quickly find the maximizer of the standard cross-correlation (8). Whereas standard approaches to maximizing correlation make use of the fast Fourier transform and have computational expense proportional to N log N, where N is the number of pixels, the computational expense of MSC is proportional to N, with a quite small proportionality constant. MSC’s cost can be reduced further if one can obtain a “sparse encoding” of the information in the images, i.e., a very compact representation which requires minimal storage.
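The superposition-and-competition idea can be illustrated with a toy two-layer circuit for integer translations. This is our sketch only: the gain-update rule, the circular shifts, and the parameters `max_shift`, `iters`, and `kappa` are illustrative assumptions, not the algorithm of [11]:

```python
import numpy as np

def msc_translation(E, Eprime, max_shift=8, iters=20, kappa=0.05):
    """Toy two-layer map-seeking circuit for integer translations.
    Each layer holds a gain vector over candidate shifts; forward and
    backward superpositions are correlated with the opposite side, and
    below-maximum gains are gradually suppressed until one shift per
    layer survives."""
    shifts = np.arange(-max_shift, max_shift + 1)
    g1 = np.ones(len(shifts))   # horizontal-layer gains
    g2 = np.ones(len(shifts))   # vertical-layer gains

    def hshift(im, k):          # T_k^(1): horizontal shift (circular)
        return np.roll(im, -k, axis=1)

    def vshift(im, l):          # T_l^(2): vertical shift (circular)
        return np.roll(im, -l, axis=0)

    for _ in range(iters):
        # Forward superposition through layer 1.
        f1 = sum(g * hshift(E, k) for g, k in zip(g1, shifts))
        # Layer-2 match scores: <T_l^(2) f1, E'>.
        q2 = np.array([np.sum(vshift(f1, l) * Eprime) for l in shifts])
        # Backward superposition through layer 2 (inverse shifts of E').
        b2 = sum(g * vshift(Eprime, -l) for g, l in zip(g2, shifts))
        q1 = np.array([np.sum(hshift(E, k) * b2) for k in shifts])
        # Competition: suppress gains with below-maximum responses.
        for g, q in ((g1, q1), (g2, q2)):
            q = q / (np.abs(q).max() + 1e-12)
            g *= np.clip(1.0 - kappa * (1.0 - q), 0.0, None)
            g /= g.max() + 1e-12
    return int(shifts[np.argmax(g1)]), int(shifts[np.argmax(g2)])
```

Note that the superpositions are formed once per iteration, so each sweep costs a constant number of passes over the pixels rather than one pass per candidate shift pair.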

MSC can easily handle more general classes of transformations. For instance, if one wishes to consider rotations as well as translations, one can simply add a rotational “layer” by adding another factor T_m^(3) to the decomposition (6). The T_m^(3) represent a discretization of the rotations to be considered.

3.3. Image Preprocessing

As we noted at the end of Section 3.1, retinal motion within the frame scan period is much more complex than a constant drift. However, during short time intervals, the motion can be well-approximated by drift. Since the horizontal scan rate is several orders of magnitude faster than the vertical scan rate, we first partition each image frame into several patches that extend horizontally across the frame but are some fraction (typically 1/8 or 1/16) of the vertical extent of the frame. Hence the patches are typically 512 pixels across by 60 or 30 pixels vertically. We then apply correlation between corresponding patches in the match and the reference frames to obtain approximations to the motion within frames.
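The strip partitioning can be sketched as follows, here with an FFT correlator per strip (any translation estimator, including the MSC, could be substituted); `n_strips` and the helper names are illustrative:

```python
import numpy as np

def strip_offsets(ref, match, n_strips=16):
    """Estimate one translation per horizontal strip of the match frame,
    approximating within-frame motion as piecewise drift. Uses circular
    FFT cross-correlation per strip (a sketch, not the MSC itself)."""
    offsets = []
    for r in np.array_split(np.arange(ref.shape[0]), n_strips):
        a, b = ref[r], match[r]          # corresponding strips
        corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
        k, l = np.unravel_index(np.argmax(corr), corr.shape)
        # Wrap circular indices to signed shifts.
        k = k - corr.shape[0] if k > corr.shape[0] // 2 else k
        l = l - corr.shape[1] if l > corr.shape[1] // 2 else l
        offsets.append((int(k), int(l)))
    return offsets
```

The per-strip offsets sample the motion at intervals of the strip scan time, i.e., 1/8 or 1/16 of the frame period.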

Some additional image preprocessing is required to increase the MSC convergence rate and to decrease the chance that MSC might converge to a spurious local maximizer. With AOSLO data, we decompose each image frame into several binary image “channels”, consisting for example of pixels of high intensity in one channel, middle intensity pixels in a second channel, and low intensity pixels in a third channel.

By recording only the locations of the nonzero pixels in the binary images, we can dramatically reduce the storage requirements of the MSC algorithm. Moreover, operation counts can be dramatically reduced by computing on only the nonzero pixels.
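A sketch of the channel decomposition and the sparse encoding follows; the quantile thresholds are an illustrative choice, since the paper does not specify how the intensity bands are chosen:

```python
import numpy as np

def binary_channels(frame, edges=(0.33, 0.66)):
    """Decompose a frame into low/middle/high intensity binary channels
    and store only the coordinates of the nonzero pixels (the sparse
    encoding). Thresholds here are intensity quantiles, an assumption."""
    lo, hi = np.quantile(frame, edges)
    masks = [frame < lo, (frame >= lo) & (frame < hi), frame >= hi]
    # Each channel is the (row, col) coordinate list of its "1" pixels.
    return [np.argwhere(m) for m in masks]
```

Correlation sums over a binary channel then reduce to counting coincidences between two coordinate lists, which is where the operation-count savings come from.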

4. Image Dewarping

Let E denote the reference image and let E′ denote the “match” image onto which the reference image is mapped via a transformation T. We map the match image back to the reference image using the inverse transformation to obtain a new image,

E″(x) = E′(x′), where x = T^{-1} x′.

Problems arise when the image data are discrete. If a point x lies on a regular reference grid, then x′ = T x is unlikely to lie on this same regular grid. In a reciprocal manner, the match image E′ is implicitly collected on a rectangular array. Points x′ on this “match array” will not map back to the reference grid. See Fig. 1 for an illustration.
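The interpolation-based weighting can be sketched as a “splatting” accumulation: each match-image sample, mapped back to a non-grid point, deposits its value onto the four surrounding reference-grid nodes with bilinear weights. This is our reading of the scheme, not the authors’ code; dividing accumulated values by accumulated weights dewarps one frame, and continuing to accumulate over frames co-adds them:

```python
import numpy as np

def splat_dewarp(values, points, shape):
    """Accumulate match-image pixel values onto the reference grid.
    values : (N,) sample values from the match image
    points : (N, 2) back-mapped positions x = T^{-1} x', as (col, row)
    shape  : (rows, cols) of the reference grid
    Returns the dewarped image and the weight map (0 where no sample
    reached a grid node)."""
    acc = np.zeros(shape)
    wgt = np.zeros(shape)
    x, y = points[:, 0], points[:, 1]
    x0 = np.clip(np.floor(x).astype(int), 0, shape[1] - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, shape[0] - 2)
    fx, fy = x - x0, y - y0
    # Distribute each sample to its four neighbors with bilinear weights.
    for dy, dx, w in ((0, 0, (1 - fy) * (1 - fx)), (0, 1, (1 - fy) * fx),
                      (1, 0, fy * (1 - fx)), (1, 1, fy * fx)):
        np.add.at(acc, (y0 + dy, x0 + dx), w * values)
        np.add.at(wgt, (y0 + dy, x0 + dx), w)
    with np.errstate(divide="ignore", invalid="ignore"):
        out = acc / wgt
    return out, wgt
```

`np.add.at` is used because ordinary fancy-indexed addition would drop repeated contributions to the same grid node.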

Fig. 1. Illustration of transformation effects. Under the transformation T, straight lines in the rectangular grid on the left map to curved lines on the right. Under the inverse transformation T^{-1}, equispaced grid points on the right (blue dots) map back to non-equispaced points on the left (red dots).

5. Experimental Results

The video clip in Fig. 2 shows a 24-frame sequence (0.8 seconds) of AOSLO data in which the sinusoidal warp has been removed (see the beginning of Section 2). Note that the frames have been clipped from the original 512×480 pixel size to a smaller 350×350 pixel size. In this video one observes that the predominant motion is a somewhat erratic downward drift followed by a jerky, short-duration, large-amplitude “snap-back” (called a micro-saccade), followed by more downward drift. One can also observe a high temporal frequency, low amplitude jitter, known as tremor, superimposed on the larger motions.

Fig. 2. Sample frame from the raw video clip SLD-AR.avi. This clip consists of 24 image frames and the file size is 3.1 MB. The image size is 350 × 350 pixels, or 1.02 × 1.02 degrees, or 300 × 300 microns. The fovea is located 400 microns up and to the left of the frame. [Media 1]

To detect motion relative to a fixed reference frame (taken to be the first frame), we implemented the patch-based cross-correlation scheme using the MSC algorithm, which was described in Section 3. Fig. 3 shows the horizontal and vertical motion estimates that were obtained. The positive direction in subplot (a) corresponds to motion to the right in the video, while the positive direction in subplot (b) corresponds to downward motion in the video. One can clearly see the downward drift and the microsaccade in subplot (b). In both the video and subplot (a), one can also see a drift to the right near the end of the recording period. Note that we have used simple linear interpolation to fill in the missing motion due to lack of recorded data during the return time, or “flyback”, of the slow scan mirror and due to the loss of data from clipping.

Given the motion estimates from Fig. 3, we apply the image dewarping scheme described in Section 4. Results obtained from this approach are shown in the video clip in Fig. 4. Note that individual photoreceptors, which appear as bright spots, are nearly stationary in the dewarped video. The motion of the retina is now manifested in the movement of the boundary.

Fig. 3. Horizontal and vertical motion estimates obtained from AOSLO data. One pixel corresponds to 0.17 minutes of arc, or 0.88 microns of planar distance across the retina. The 0.8-second duration of the motion corresponds to 24 frames of AOSLO data.

In both the raw and the dewarped videos, one can see bright “blobs”, which are leukocytes (white blood cells), moving through dark regions, which are capillaries. The invention of the AOSLO has made it possible to noninvasively measure blood flow in the retina [19], albeit manually. The image stabilization in the dewarped videos may facilitate automated blood-flow measurements.

Fig. 5 shows the effect of co-adding dewarped image frames. The top image in this figure is a single frame of AOSLO data from the video clip in Fig. 2. The bottom image in Fig. 5 was obtained by co-adding the dewarped frames that appear in the second video clip (Fig. 4). An examination of the region 100 pixels down and 100 pixels across from the upper left corner of the co-added image reveals a honeycomb structure known as the cone mosaic. This feature cannot be seen in the single raw image frame.

Fig. 4. Sample frame from dewarped video clip SLD-AR-dewarp.avi. This clip consists of 24 image frames and the file size is 3.5 MB. The image statistics are the same as in Fig. 2. [Media 2]

6. Discussion and Conclusions

By applying the MSC-based motion estimation scheme presented in Section 3 to AOSLO data, we were able to estimate retinal motion on time scales of 1/(30 × 16) sec ≈ 2 millisec. Given the motion estimates, we were then able to apply the dewarping scheme presented in Section 4 to remove motion-induced artifacts. We were also able to co-add dewarped data frames to enhance image quality to the point where previously unobservable features like the cone mosaic could be seen.

The fact that there was very little residual motion in the dewarped video suggests that the motion estimates are quite accurate. Additional evidence of the accuracy of the motion estimation scheme comes from a study conducted by Dr. Scott Stevenson at the University of Houston College of Optometry. Stevenson obtained retinal motion estimates using (i) our MSC-based cross-correlation approach applied to AOSLO data; (ii) a standard Fourier-based cross-correlation approach applied to the same AOSLO data; and (iii) simultaneous recordings of eye motion taken from a dual-Purkinje eye tracker [20]. Stevenson found the MSC and the Fourier-based motion estimates to be in very close agreement. These in turn agreed fairly well with the Purkinje data. Discrepancies with the Purkinje measurements can be accounted for by the fact that the Purkinje tracker measures the motion of the lens of the eye rather than that of the retina itself.

Note that schemes that are based on correlating patches within successive frames can detect only relative motion. For instance, one can “freeze” the reference frame (which was the first frame in the case presented above) and compute motion relative to the reference frame. However, the retina moves during the 1/30 second scan period during which the reference frame is recorded. Consequently, the estimated motion will be in error, or “biased”, by an amount that depends on the retinal motion during the scan of the reference frame.

To obtain the results presented in Section 5, we assumed that the average of the true motion,

(1/T) ∫_0^T X_true(t) dt,
(10)

tends to a constant for large T. Consequently, if a nonconstant reference frame bias is present in the estimated motion X(t), one can extract the bias from the fact that

(1/N) Σ_{n=1}^N X(t + n τ_s) → X_bias(t)

for large N. Here τ_s is the frame scan period, and X_bias(t), 0 ≤ t ≤ τ_s, is the (nonconstant) reference frame bias. The corrected motion for the nth scan period is then X(t + n τ_s) - X_bias(t), 0 ≤ t ≤ τ_s.
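Under assumption (10), the bias can thus be estimated by averaging the estimated motion at each within-frame phase across frames and subtracting it. A sketch, with an array layout (frames × within-frame samples × x/y components) that is our illustrative choice:

```python
import numpy as np

def remove_reference_bias(X):
    """Remove the (nonconstant) reference-frame bias from estimated motion.
    X has shape (N, M, 2): N frame periods, M within-frame sample times,
    and x/y components. Assuming the true motion averages to a constant
    over many frames, the per-phase mean over frames converges to the
    bias X_bias(t); subtracting it yields the corrected motion."""
    bias = X.mean(axis=0)          # (1/N) sum_n X(t + n*tau_s)
    return X - bias[None, :, :]
```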

Fig. 5. Raw image (top) and co-added image (bottom) obtained from AOSLO data. Image statistics are the same as in Fig. 2. Note the honeycomb structure known as a cone mosaic in the co-added image.

Acknowledgments

This research was supported in part by the Center for Adaptive Optics, an NSF-supported Science and Technology Center, through grant AST-9876783. Additional support comes from the NIH Bioengineering Research Partnership at the University of Rochester Center for Visual Science through grant EY014365 and from the Air Force Office of Scientific Research through grant F49620-02-1-0297.

We wish to thank Dr. Scott Stevenson of the College of Optometry at the University of Houston for supplying us with his simultaneously recorded AOSLO data and Purkinje tracker data, which we used to validate our motion estimation algorithm.

References and links

1. A. Roorda, F. Romero-Borja, W. J. Donnelly, T. J. Hebert, H. Queener, and M. C. W. Campbell, “Adaptive Optics Scanning Laser Ophthalmoscopy,” Opt. Express 10, 405–412 (2002). [PubMed]
2. J. Liang, D. R. Williams, and D. Miller, “Supernormal vision and high-resolution retinal imaging through adaptive optics,” J. Opt. Soc. Am. A 14, 2884–2892 (1997). [CrossRef]
3. R. H. Webb, G. W. Hughes, and F. C. Delori, “Confocal scanning laser ophthalmoscope,” Appl. Opt. 26, 1492–1499 (1987). [CrossRef] [PubMed]
4. J. B. Mulligan, “Recovery of motion parameters from distortions in scanned images,” Proceedings of the NASA Image Registration Workshop (IRW97), NASA Goddard Space Flight Center, MD, 1997.
5. S. B. Stevenson and A. Roorda, “Correcting for miniature eye movements in high resolution scanning laser ophthalmoscopy,” in Ophthalmic Technologies XV, F. Manns, P. Soderberg, and A. Ho, eds., Proc. SPIE 5688A, 145–151 (2005). [CrossRef]
6. D. P. Wornson, G. W. Hughes, et al., “Fundus tracking with the scanning laser ophthalmoscope,” Appl. Opt. 26, 1500–1504 (1987). [CrossRef] [PubMed]
7. N. J. O’Connor, D. U. Bartsch, et al., “Fluorescent infrared scanning-laser ophthalmoscope for three-dimensional visualization: automatic random-eye-motion correction and deconvolution,” Appl. Opt. 37, 2021–2033 (1998). [CrossRef]
8. E. Decastro, G. Cristini, et al., “Compensation of random eye motion in television ophthalmoscopy—preliminary results,” IEEE Trans. Med. Imaging 6, 74–81 (1987). [CrossRef]
9. A. V. Cideciyan, “Registration of ocular fundus images—an algorithm using cross-correlation of triple invariant image descriptors,” IEEE Eng. Med. Biol. Mag. 14, 52–58 (1995). [CrossRef]
10. J. Modersitzki, Numerical Methods for Image Registration (Oxford University Press, 2004).
11. D. W. Arathorn, Map-Seeking Circuits in Visual Cognition: A Computational Mechanism for Biological and Machine Vision (Stanford University Press, 2002).
12. D. W. Arathorn, “Computation in higher visual cortices: Map-seeking circuit theory and application to machine vision,” Proceedings of the IEEE Applied Imagery Pattern Recognition Workshop, 73–78 (2004).
13. D. W. Arathorn, “From wolves hunting elk to Rubik’s cubes: Are the cortices compositional/decompositional engines?” Proceedings of the AAAI Symposium on Compositional Connectionism, 1–5 (2004).
14. D. W. Arathorn, “Memory-driven visual attention: An emergent behavior of map-seeking circuits,” in Neurobiology of Attention, L. Itti, G. Rees, and J. Tsotsos, eds. (Academic Press/Elsevier, 2005), pp. 605–609.
15. D. W. Arathorn, “A cortically plausible inverse problem solving method applied to recognizing static and kinematic 3-D objects,” Proceedings of the Neural Information Processing Systems (NIPS) Workshop (in press).
16. D. W. Arathorn and T. Gedeon, “Convergence in map finding circuits,” preprint, 2004.
17. S. A. Harker, T. Gedeon, and C. R. Vogel, “A multilinear optimization problem associated with correspondence maximization,” preprint, 2005.
18. http://www.math.montana.edu/~vogel/Vision/graphics/
19. J. A. Martin and A. Roorda, “Direct and non-invasive assessment of parafoveal capillary leukocyte velocity,” Ophthalmology (in press).
20. T. N. Cornsweet and H. D. Crane, “Accurate two-dimensional eye tracker using first and fourth Purkinje images,” J. Opt. Soc. Am. 63, 921–928 (1973). [CrossRef] [PubMed]

OCIS Codes
(010.1080) Atmospheric and oceanic optics : Active or adaptive optics
(180.1790) Microscopy : Confocal microscopy

ToC Category:
Focus issue: Signal recovery and synthesis

Virtual Issues
Vol. 1, Iss. 2 Virtual Journal for Biomedical Optics

Citation
Curtis R. Vogel, David W. Arathorn, Austin Roorda, and Albert Parker, "Retinal motion estimation in adaptive optics scanning laser ophthalmoscopy," Opt. Express 14, 487-497 (2006)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-14-2-487





Multimedia

Media 1: AVI (3011 KB)
Media 2: AVI (3424 KB)
