Optics Express, Vol. 14, Iss. 4, pp. 1339–1352 (Feb. 20, 2006)
Editor: Michael Duncan

Wave front sensor-less adaptive optics: a model-based approach using sphere packings

Martin J. Booth

http://dx.doi.org/10.1364/OE.14.001339




Abstract

Certain adaptive optics systems do not employ a wave front sensor but rather maximise a photodetector signal by appropriate control of an adaptive element. The maximisation procedure must be optimised if the system is to work efficiently. Such optimisation is often implemented empirically, but further insight can be obtained by using an appropriate mathematical model. In many practical systems aberrations can be accurately represented by a small number of modes of an orthogonal basis, such as the Zernike polynomials. By heuristic reasoning we develop a model for the operation of such systems and demonstrate a link with the geometrical problems of sphere packings and coverings. This approach aids the optimisation of control algorithms and is illustrated by application to direct search and hill climbing algorithms. We develop an efficient scheme using a direct maximisation calculation that permits the measurement of N Zernike modes with only N + 1 intensity measurements.

© 2006 Optical Society of America

1. Introduction

Fig. 1. Schematic diagram of the adaptive system. The input wave front is incident from the left, whereupon it passes through the pupil plane of the lens. A phase element in this plane adds a chosen phase aberration to the input wave front. The wave front is then focussed by the lens onto a pinhole positioned at the nominal focus, where the intensity signal is measured by the photodetector.

The deterministic algorithms presented here differ from many commonly used methods, in which the steps between successive trial solutions adaptively change, usually in a stochastic manner, as the algorithm progresses, permitting convergence to a solution with arbitrary precision. We approach the algorithm design by specifying the desired precision and using it in conjunction with the mathematical model to predetermine the necessary sequence of steps. The results show that deterministic, non-adaptive algorithms can be effective in controlling wave front sensorless adaptive optics systems, if they are suitably formulated.

2. Mathematical model

In the conceptual measurement system shown in Fig. 1, the input wave front is incident from the left. A positive lens focuses the wave front onto an infinitely small pinhole photodetector situated at the nominal focal point of the lens. The phase aberration of the input wave front is described by the function Φ(r, θ), where r and θ are polar coordinates in the pupil plane of the lens. The coordinates are normalised such that the pupil has a radius of 1. In the pupil plane of the lens is a phase mask, which could in practice be an adaptive element, that subtracts a phase function Ψ(r, θ) from the input wave front. Fourier diffraction theory [12] shows that the signal measured by the photodetector is given by

F = I_0 \left| \frac{1}{\pi} \int_{\theta=0}^{2\pi} \int_{r=0}^{1} \exp\{ j [\Phi(r,\theta) - \Psi(r,\theta)] \}\, r \, dr \, d\theta \right|^2
(1)

where I_0 is proportional to the incident light power and j = \sqrt{-1}.
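Equation (1) can be evaluated numerically on a polar grid. The sketch below is an illustration, not part of the paper: it assumes a single Zernike mode, defocus Z4 = √3(2r² − 1), for both Φ and Ψ, so a perfectly corrected wave front (Φ = Ψ) returns the full signal I0 and any residual aberration reduces it.

```python
import numpy as np

def detector_signal(a4, b4, I0=1.0, n_r=200, n_th=200):
    """Midpoint-rule evaluation of Eq. (1) for a single defocus mode.

    a4, b4: coefficients of Z4 = sqrt(3)(2r^2 - 1) in Phi and Psi
    (a single-mode assumption made here for brevity).
    """
    r = (np.arange(n_r) + 0.5) / n_r                  # radial abscissae
    th = (np.arange(n_th) + 0.5) * 2 * np.pi / n_th   # angular abscissae
    R, _ = np.meshgrid(r, th, indexing="ij")
    Z4 = np.sqrt(3.0) * (2.0 * R**2 - 1.0)
    phase = (a4 - b4) * Z4                            # Phi - Psi
    dA = (1.0 / n_r) * (2.0 * np.pi / n_th)
    amp = (np.exp(1j * phase) * R).sum() * dA / np.pi
    return I0 * abs(amp) ** 2
```

With a4 = b4 the phase term vanishes and the function returns approximately I0; with a4 = 1 and no correction the signal drops well below I0.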

We assume that Φ and Ψ can each be expressed as a series of N Zernike polynomials [13], each denoted by Z_n(r, θ):

\Phi(r,\theta) = \sum_{n=1}^{N} a_n Z_n(r,\theta)
(2)
\Psi(r,\theta) = \sum_{n=1}^{N} b_n Z_n(r,\theta)
(3)

The normalisation and indexing scheme of the Zernike polynomials are explained in Appendix A. The aberration of the input wave front Φ can be represented by an N-element vector a, whose elements are the Zernike mode coefficients a_n. Similarly, the correction Ψ introduced by the adaptive element is represented by the vector b, whose elements are the coefficients b_n. We then define

F(c)=I0f(c),
(4)

where c = a-b and

f(\mathbf{c}) = \left| \frac{1}{\pi} \int_{\theta=0}^{2\pi} \int_{r=0}^{1} \exp\left\{ j \sum_{n=1}^{N} c_n Z_n(r,\theta) \right\} r \, dr \, d\theta \right|^2,
(5)

where c_n is the coefficient of the nth mode. The function f(c) is independent of the overall intensity and is equivalent to the Strehl ratio [13]. Due to the orthogonality of the Zernike modes, for small |c| we find that

f(\mathbf{c}) \approx 1 - |\mathbf{c}|^2,
(6)

where |c|² also represents the variance of the corrected wave front [13]. The system is considered to be well corrected if |c| < ε, where ε² is a small quantity equal to the maximum acceptable wave front variance. Equation (6) provides the equivalent requirement that f(c) > 1 − ε². From Eq. (6) we see that f(c) is isotropic for small arguments, a property that is illustrated further in Fig. 2. It follows that the contours of constant f(c) are N-dimensional spheres centred on the origin. This is a different expression of the well known result that the Strehl ratio depends only on the variance of the aberration and not on its form [13]. It is also important to note that for large arguments f(c) becomes small. This is a consequence of the rapid variation of the exponential term in the integral of Eq. (5). The global maximum at c = 0 is therefore much larger than any of the surrounding local maxima. These properties of f(c) can be used to our advantage when designing maximisation algorithms.
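The isotropy of f(c) can be checked numerically. The sketch below (an illustration, not from the paper) evaluates f from Eq. (5) for two specific modes in the unit-variance normalisation, defocus Z4 = √3(2r² − 1) and astigmatism Z5 = √6 r² sin 2θ; equal-magnitude aberrations of either mode give nearly the same Strehl ratio, close to 1 − |c|².

```python
import numpy as np

def strehl(cn, zernike, n_r=300, n_th=300):
    # f(c) from Eq. (5) for a single mode, by midpoint rule on a polar grid
    r = (np.arange(n_r) + 0.5) / n_r
    th = (np.arange(n_th) + 0.5) * 2 * np.pi / n_th
    R, TH = np.meshgrid(r, th, indexing="ij")
    dA = (1.0 / n_r) * (2.0 * np.pi / n_th)
    amp = (np.exp(1j * cn * zernike(R, TH)) * R).sum() * dA / np.pi
    return abs(amp) ** 2

defocus = lambda R, TH: np.sqrt(3.0) * (2.0 * R**2 - 1.0)    # Z4
astig   = lambda R, TH: np.sqrt(6.0) * R**2 * np.sin(2 * TH) # Z5

c = 0.3
f_def, f_ast = strehl(c, defocus), strehl(c, astig)
# both values lie close to 1 - c**2 = 0.91, and close to each other
```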

The discussion in this paper is based upon the measurement of Zernike aberration modes because of their widespread use in optics. However, the results are equally applicable to other orthonormal basis functions, for example the normalised eigenmodes of a deformable mirror.

3. Strategy for algorithm design

Fig. 2. The photodetector signal calculated as a function of aberration magnitude for random combinations of the six Zernike aberration modes i = 4 to i = 9. Each data point shows the mean and the 10th and 90th percentiles of a collection of 100 random samples.

One could attempt to cover the solution space ∑ using a random arrangement of candidate solutions; this would be equivalent to a random search. However, this would not guarantee fulfilling the objective by finding the actual solution; it is possible that such a scheme could miss the maximum entirely. It is also possible that it could over-sample some areas of ∑. One could, of course, increase the number of random candidates to give an acceptable probability of success, but this would be at the expense of efficiency. Similar arguments also apply to more advanced stochastic methods.

4. Sphere coverings and the representation

The problem of sphere coverings involves finding the most efficient way to cover R^N with equal, overlapping N-dimensional spheres (hereafter referred to simply as ‘spheres’). The efficiency of a covering is quantified in terms of its ‘thickness’, the average number of spheres that cover a point of the space. The covering problem therefore asks for the arrangement of spheres that has the minimal thickness, otherwise termed the thinnest covering. This is non-trivial and has been the subject of lengthy mathematical investigation. Optimal coverings of R^N, which may be either lattice or non-lattice arrangements of spheres, have only been proven for N ≤ 3, and optimal lattice coverings have only been found for N ≤ 5 [15, 16]. Conway and Sloane list the best known coverings for N ≤ 24, although these are not necessarily proven to be optimal [15].

We now apply the concepts of sphere coverings to the representation for our optimisation problem. Let us assume that ∑ is large, so we can apply the best known coverings for infinite regions. We first consider the trivial case N = 1, when only a single Zernike mode is present. The region bounded by a 1-dimensional sphere of radius ε is simply a line segment of length 2ε. The thinnest covering therefore consists of spheres arranged in a line with their centres spaced by 2ε; thus the candidate solutions in B should consist of integer multiples of 2ε. Since the spheres do not overlap, this is a perfect covering with a thickness of 1. One can think of this process as dividing the b₁-axis into sections of width 2ε in order to find the section containing the maximum intensity. The maximum error is clearly ε.

Extending this to the case N = 2, it is tempting to choose each element of a candidate solution b to be an integer multiple of 2ε. The vectors in B would then point to the vertices of a regular square grid, a scaled version of the integer lattice (usually referred to as Z^N) [15]. This is illustrated in Fig. 3(a). As discussed in Section 3, the correct solution will only be found if it lies within a sphere of radius ε centred on a candidate solution b. It can be seen that significant portions of the plane, and hence potential solutions, do not lie within one of the spheres. Indeed, only 79% of the plane is covered and the representation is therefore incomplete. We could overcome this by reducing the spacing of the lattice to √2ε, so that the gaps between the circles disappear. This is illustrated in Fig. 3(b). Although there is now overlap between the spheres, we can be sure that a lies within at least one sphere and the representation is complete. However, this scaled integer lattice (with thickness 1.57) does not provide the thinnest possible covering, which is instead given by the hexagonal lattice shown in Fig. 3(c) [17]. The hexagonal covering has thickness 1.21 and is 23% more efficient than the square arrangement.
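The 79% figure for the unscaled integer lattice can be reproduced with a short calculation (a sketch, not from the paper): sample one square lattice cell on a fine grid and count the fraction of points lying within distance ε of the nearest lattice point. For spacing 2ε the exact answer is π/4 ≈ 0.785; at spacing √2ε the cell is fully covered.

```python
import numpy as np

def covered_fraction(spacing, eps=1.0, n=400):
    # Fraction of one cell of the scaled integer lattice Z^2 lying within
    # distance eps of the nearest lattice point (grid approximation).
    x = (np.arange(n) + 0.5) * spacing / n
    X, Y = np.meshgrid(x, x)
    corners = [(0, 0), (spacing, 0), (0, spacing), (spacing, spacing)]
    d = np.min([np.hypot(X - cx, Y - cy) for cx, cy in corners], axis=0)
    return (d <= eps).mean()

print(covered_fraction(2.0))            # ~0.785 = pi/4: incomplete covering
print(covered_fraction(np.sqrt(2.0)))   # ~1.0: complete covering
```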

We can extend this analysis to include more modes. In general, the thickness Θ1 of the incomplete integer lattice covering in N dimensions is equivalent to the ratio between the volume of a sphere and that of the cube in which it is inscribed. The volume of a unit radius sphere is given by [15]

V_N = \frac{\pi^{N/2}}{\Gamma\!\left(\frac{N}{2} + 1\right)},
(7)
Fig. 3. Sphere packings in two dimensions: (a) incomplete covering based upon an integer lattice with lattice constant 2ε; (b) complete covering based upon an integer lattice with lattice constant √2ε; (c) thinnest possible covering based upon the hexagonal lattice. The points at the centre of each circle represent the candidate solutions b. Part (d) illustrates the body centred cubic arrangement, the optimal three dimensional covering.

where Γ is the gamma function. Since the unit sphere is inscribed in a cube of volume 2^N, the thickness follows as

\Theta_1 = \frac{V_N}{2^N}.
(8)

To ensure complete coverage, the spacing of the integer lattice must be reduced by a factor of √N. The thickness Θ2 is then given by the ratio of the volume of a sphere to that of its inscribed cube, which has volume (2/√N)^N. Hence,

\Theta_2 = \frac{V_N N^{N/2}}{2^N}.
(9)

For N ≤ 23, the best known coverings are provided by the lattice known as A_N^*. In three dimensions, A_3^* is equivalent to the body centred cubic lattice, illustrated in Fig. 3(d). This covering has thickness [15]

\Theta_3 = V_N \sqrt{N+1} \left( \frac{N(N+2)}{12(N+1)} \right)^{N/2}.
(10)

Figure 4 shows the thickness of these three coverings for different N. The difference between the thickness of the best integer lattice covering and the optimal known lattice covering increases markedly with N. Indeed, for N = 10 their ratio is approximately 50. With the incomplete lattice covering, it is clear that as N increases, the proportion of the possible solutions that are covered decreases dramatically. This trend continues for larger N, as can be seen from the asymptotic behaviours of these thicknesses, given by log(Θ1) ~ -N log(N), log(Θ2) ~ N and log(Θ3) ~ N.
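Equations (7)–(10) are straightforward to evaluate; a short sketch:

```python
from math import pi, gamma, sqrt

def V(N):
    # Eq. (7): volume of the unit-radius N-sphere
    return pi ** (N / 2) / gamma(N / 2 + 1)

def theta1(N):
    # Eq. (8): incomplete integer-lattice covering
    return V(N) / 2 ** N

def theta2(N):
    # Eq. (9): complete integer-lattice covering
    return V(N) * N ** (N / 2) / 2 ** N

def theta3(N):
    # Eq. (10): best known (A_N*) lattice covering
    return V(N) * sqrt(N + 1) * (N * (N + 2) / (12 * (N + 1))) ** (N / 2)

# N = 2 recovers the values quoted in Section 4:
# theta1(2) ~ 0.785 (79% coverage), theta2(2) ~ 1.571, theta3(2) ~ 1.209
```

For N = 10 the ratio theta2(10)/theta3(10) comes out at roughly 50, matching the figure quoted in the text.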

Fig. 4. Base 10 logarithmic plots of the covering thickness for different packings in N dimensions, corresponding to the efficiency of measurement of N modes: (i) Incomplete integer lattice covering (long dashed line), Θ1; (ii) Complete integer lattice covering (short dashed line), Θ2; (iii) Optimum known lattice covering (solid line), Θ3.

The AN* lattice provides the most efficient scheme whilst ensuring that the maximum measurement error does not exceed ε. We note that other lattices, known as quantisers, have been found that would alternatively minimise the mean square error of the measurement [15

15. J. H. Conway and N. J. A. Sloan, Sphere Packings, Lattices and Groups, 3rd Edition, (Springer-Verlag, 1998).

].

5. Exhaustive search using sphere coverings

Fig. 5. Base 10 logarithmic plots of the number of evaluations, K, required for the exhaustive search in N dimensions, based upon (i) the complete integer lattice (diamonds); (ii) the A_N^* lattice (stars); (iii) a three-level branch and bound exhaustive search using the A_N^* lattice (squares).

6. Exhaustive search with branch and bound

7. Steepest ascent hill climbing

The problem requires definition of the neighbourhood, the set of candidate solutions near to the present solution that are tested at each step. If the candidates are too close to the present solution, or there are too many of them, the algorithm will converge slowly; if they are too widely separated, then potential solutions will be missed. The neighbourhood of the present solution should therefore be adequately, but thinly, covered. There are various ways to define such a neighbourhood. Most simply, we could take it to be the surface of the sphere of radius ε centred on the present solution. This surface needs to be covered by the spheres centred on the adjacent candidate solutions. Just as the intersection of two spheres forms a circle, the intersection of two N-spheres forms an (N−1)-sphere. The problem of covering this neighbourhood is therefore equivalent to the problem of covering the surface of an N-sphere with an arrangement of (N−1)-spheres. The minimum number of neighbouring candidates that would be required whilst still spanning N dimensions is N + 1 and, if regularly spaced, they would be positioned at the vertices of an N-dimensional regular simplex (i.e. an equilateral triangle for N = 2, a regular tetrahedron for N = 3, etc.). The vectors representing the simplex vertices are derived in Appendix B. This arrangement covers the neighbourhood only if the distance, s, from the present solution to the candidates satisfies 0 < s ≤ 2ε/N (see Appendix C). The efficiency of such a SAHC algorithm is illustrated in Fig. 6 for s = 2ε/N and N ≤ 6. In each trial a randomly oriented vector of magnitude 1.5 was used as the initial candidate. For each data point, K was taken as the mean number of evaluated candidates over one hundred trials. Since the step size s ∝ 1/N and the number of evaluations per step varies as N, we expect K to vary as N²; this quadratic dependence is confirmed by Fig. 6. When s was increased beyond 2ε/N, the algorithm was observed to fail to find the solution from some initial candidates.

This approach guarantees convergence of the SAHC algorithm, but the efficiency can be improved further. For example, when using the simplex arrangement of candidates there is a strong directional dependence: convergence is much faster for initial candidates in the direction of the simplex vertices. This dependence can be reduced by reversing the orientation of the simplex at each step (Fig. 6). Further improvement could be achieved by randomising the orientation of the simplex at each step and by combining the algorithm with a branch and bound approach to perform optimisation on progressively finer scales.
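A minimal sketch of such a SAHC routine follows. Assumptions not taken from the paper: the objective exp(−|c − a|²) stands in for the photodetector signal (it shares the isotropic single peak of F; a real system would instead feed detector readings in), and the hidden aberration a and starting offset are arbitrary illustrative values. The simplex vertices follow the Appendix B construction, the fixed step s = 2ε/N follows Appendix C, and the simplex orientation is reversed at each step.

```python
import numpy as np

def simplex_vertices(N):
    # Unit vectors to the N+1 vertices of a regular N-simplex (Appendix B),
    # written with 0-based array indices.
    b = np.zeros((N + 1, N))
    for n in range(N + 1):
        for m in range(min(n, N)):
            b[n, m] = -b[m, m] / (N - m)
        if n < N:
            b[n, n] = np.sqrt(1.0 - b[n, :n] @ b[n, :n])
    return b

def sahc(f, b0, eps, max_steps=1000):
    """Steepest-ascent hill climbing with fixed step s = 2*eps/N."""
    N = len(b0)
    s = 2.0 * eps / N
    verts = simplex_vertices(N)
    b = np.asarray(b0, dtype=float)
    fb, sign = f(b), 1.0
    for _ in range(max_steps):
        cands = b + sign * s * verts        # simplex neighbourhood
        vals = np.array([f(c) for c in cands])
        k = int(np.argmax(vals))
        if vals[k] <= fb:                   # no neighbour improves: done
            break
        b, fb = cands[k], vals[k]
        sign = -sign                        # reverse simplex orientation
    return b

# Example: recover a hidden aberration a from a start 1.5 away
a = np.array([0.5, -0.7, 0.3])
f = lambda c: np.exp(-np.dot(c - a, c - a))
b_final = sahc(f, a + np.array([1.5, 0.0, 0.0]), eps=0.1)
```

At termination no neighbour improves on the current point, which for this step size implies the final solution lies within ε of the maximum.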

It is useful to note the difference between the method presented here and the commonly used ‘simplex method’ for function maximisation [19]. This latter method uses variable step sizes and orientations of the simplex to adapt to the local shape of the function as the algorithm progresses. Because of the known topology of f(c), we were able to use fixed step sizes to achieve convergence to the desired precision.

8. Direct maximisation

Fig. 6. Linear plots of K, the mean number of evaluations required for the hill climbing algorithm in N dimensions, using (i) a simplex with fixed orientation (diamonds); (ii) a simplex with reversals at each step (stars).

It can be seen in Fig. 2 that the function F(c) has a well defined maximum and is isotropic in the surrounding region. Since its value also becomes small away from the maximum, a good estimate of the location of the maximum can be found from a first moment (centre of mass) calculation. This estimate, denoted by the vector W, could be formulated as

\mathbf{W} = \frac{\int \cdots \int \mathbf{b}\, F(\mathbf{a}-\mathbf{b})\, dV}{\int \cdots \int F(\mathbf{a}-\mathbf{b})\, dV} \approx \mathbf{a},
(11)

where ∫⋯∫ represents an N-dimensional integral over a suitably large region ∑ and dV is the volume element at b. In practice, Eq. (11) must be approximated using a discrete numerical integration scheme where each integrand evaluation corresponds to a photodetector measurement. We therefore calculate W as

\mathbf{W} = \frac{\sum_{m=1}^{M} \gamma_m \mathbf{b}_m F(\mathbf{a}-\mathbf{b}_m)}{\sum_{m=1}^{M} \gamma_m F(\mathbf{a}-\mathbf{b}_m)}
(12)

where γ_m are integration weights that depend upon the particular numerical integration scheme [19]. The vectors b_m are the M integration abscissae, the locations where the integrand is evaluated, each of which represents the aberration introduced by the adaptive element for a particular intensity measurement. If a large number of appropriately distributed abscissae are used then W ≈ a. A simple integration scheme could, for example, involve the regular distribution of the b_m throughout ∑, based upon a lattice such as A_N^* or Z^N, with weights γ_m = 1. This would be a multidimensional analogue of single-variable block integration. More advanced schemes, such as Gaussian quadrature, might alternatively be employed [19].

The simplest way to distribute the b_m for measurement of N modes would be at the N + 1 vertices of a regular simplex. In this case, since a small number of abscissae are used, the approximation errors in Eq. (12) become significant and the approximate equality W ≈ a no longer holds. However, with suitable choice of the vector length |b_m| there remains a linear relationship between W and a, so we can calculate a as

Fig. 7. Aberration measurement accuracy for direct estimation using a simplex arrangement of abscissae for N = 6 and |b_m| = 0.5. The data points show the mean and standard deviation from a sample of 1000 input aberrations. The solid line is an even polynomial fit. The dashed line shows the measurement tolerance of E = 0.1.
\mathbf{a} \approx S^{-1} \mathbf{W},
(13)

where we use the matrix S, whose element Sik is defined as

S_{ik} = \left. \frac{\partial W_i}{\partial a_k} \right|_{\mathbf{a}=0}.
(14)

We can investigate the measurement accuracy by defining the measurement error E as

E = \left| S^{-1} \mathbf{W} - \mathbf{a} \right|.
(15)

As an example, Fig. 7 shows the variation of E with |a| for |b_m| = 0.5 and N = 6. Each data point shows the mean and standard deviation from a sample of 1000 randomly oriented input aberrations of a given magnitude. The mean value of E was within the measurement tolerance of 0.1 over the range |a| ≤ 1.09. The same analysis was performed for other combinations of modes for N ≤ 6. In all cases E showed a similar dependence on |a|, and the ranges over which the mean value of E was within the measurement tolerance all lay between |a| ≤ 1.05 and |a| ≤ 1.26.
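The scheme of this section can be sketched end to end. Assumptions not from the paper: exp(−|c|²) stands in for the measured signal F(c), the matrix S of Eq. (14) is obtained by central finite differences rather than analytically, N = 3 rather than 6, and a_true is an arbitrary small test aberration; |b_m| = 0.5 follows Fig. 7. N + 1 simulated "measurements" then yield an estimate of all N coefficients via Eq. (13).

```python
import numpy as np

def simplex_vertices(N):
    # Unit vectors to the N+1 vertices of a regular N-simplex (Appendix B)
    b = np.zeros((N + 1, N))
    for n in range(N + 1):
        for m in range(min(n, N)):
            b[n, m] = -b[m, m] / (N - m)
        if n < N:
            b[n, n] = np.sqrt(1.0 - b[n, :n] @ b[n, :n])
    return b

def W_est(F, a, abscissae):
    # Eq. (12) with unit weights: first moment of F over the abscissae
    vals = np.array([F(a - b) for b in abscissae])
    return (abscissae * vals[:, None]).sum(axis=0) / vals.sum()

N, blen = 3, 0.5
bs = blen * simplex_vertices(N)        # the N+1 measurement abscissae
F = lambda c: np.exp(-c @ c)           # stand-in for the detector signal

# Eq. (14): S_ik = dW_i/da_k at a = 0, here by central differences
h = 1e-4
S = np.zeros((N, N))
for k in range(N):
    e = np.zeros(N); e[k] = h
    S[:, k] = (W_est(F, e, bs) - W_est(F, -e, bs)) / (2 * h)

a_true = np.array([0.2, -0.1, 0.15])
a_hat = np.linalg.solve(S, W_est(F, a_true, bs))   # Eq. (13)
```

For this stand-in signal the estimate a_hat agrees with a_true to well within the E = 0.1 tolerance used in Fig. 7.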

9. Conclusions

Through construction of an appropriate mathematical model and by heuristic reasoning, we have designed and analysed deterministic aberration measurement algorithms for a wave front sensorless adaptive optics system. By using knowledge of the maximised function’s topology, we were able to optimise algorithms to provide higher efficiency. The results indicated that deterministic, non-adaptive algorithms could be effective in controlling these systems, if suitably formulated.

The discussion in this paper was based around the simple optical configuration of Fig. 1. However, similar mathematical models, demonstrating the same spherical symmetry, can be obtained for many other optical systems including those described in the Introduction. With minor adjustments, the results presented here are also applicable to these other systems.

Appendix A: Zernike polynomials

The Zernike polynomials used in this paper are listed in Table 1. They are defined such that the variance of each polynomial is normalised to unity [21]. Since the coefficient represents phase, a coefficient of value 1 corresponds to a wave front variance of 1 rad². The mode indexing schemes, using the single index i or the dual indices (n, m), are explained by Neil et al. [20].

Table 1. Zernike mode definitions


Appendix B: Simplex construction

The construction of a set of N + 1 unit-magnitude vectors b_n that represent the vertices of a regular N-dimensional simplex proceeds as follows. Firstly, note that when the centroid of the simplex is at the origin

\sum_{n=1}^{N+1} \mathbf{b}_n = 0.
(16)

We choose the initial vector to have a non-zero coordinate only in the first dimension, such that b_1 = (1, 0, 0, ...). Using Eq. (16) we find that

\sum_{n=2}^{N+1} (\mathbf{b}_n)_1 = -1,
(17)

where (b n)m represents the mth element of b n. The symmetry of the simplex means that the coefficients (b n)1 for n > 1 are equal, so we can determine the first coordinate of the remaining vectors as

(\mathbf{b}_n)_1 = -\frac{1}{N}, \qquad n > 1.
(18)

We choose the second vector b_2 to have non-zero coordinates only in the first two dimensions, so that it lies in the plane defined by the first two coordinates. The second coordinate is then obtained from the unit magnitude condition:

(\mathbf{b}_2)_2 = \sqrt{1 - (\mathbf{b}_2)_1^2} = \sqrt{1 - \frac{1}{N^2}}.
(19)

Continuation of this approach, including an extra dimension in each consecutive vector, leads to a general result for calculation of the vector coordinates as

(\mathbf{b}_n)_m = \begin{cases} -\dfrac{(\mathbf{b}_m)_m}{N - m + 1}, & m < n \\[1ex] \sqrt{1 - \sum_{p=1}^{m-1} (\mathbf{b}_n)_p^2}, & m = n \\[1ex] 0, & m > n \end{cases}
(20)
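A direct transcription of this construction (a sketch; 0-based array indices replace the 1-based m, n of the text) lets the result be verified numerically: the N + 1 vectors have unit magnitude, zero centroid, and equal pairwise inner products of −1/N, as required for a regular simplex.

```python
import numpy as np

def simplex_vertices(N):
    # Eq. (20) with 0-based indices: entry m of vertex n
    b = np.zeros((N + 1, N))
    for n in range(N + 1):
        for m in range(min(n, N)):
            b[n, m] = -b[m, m] / (N - m)              # case m < n
        if n < N:                                     # case m = n (diagonal)
            b[n, n] = np.sqrt(1.0 - b[n, :n] @ b[n, :n])
    return b                                          # entries m > n stay 0

v = simplex_vertices(4)
print(np.linalg.norm(v, axis=1))   # all 1: unit magnitude
print(v.sum(axis=0))               # ~0: centroid at origin, Eq. (16)
```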

Appendix C: Calculation of step size for SAHC method

An N-dimensional sphere of radius ε is centred at the origin. Its surface, denoted by T, must be covered by the N + 1 overlapping spheres, also of radius ε, centred on the vertices of a regular simplex whose centroid is positioned at the origin. The distance from the origin to each simplex vertex is s, and the vertices are described by the vectors s b_n, where we use the vectors derived in Appendix B. There are N + 1 similar points on T, given by the vectors −ε b_n, that are equidistant from the adjacent vertices (in N = 2 the directions of these vectors correspond to the midpoints of the triangle’s edges, and in N = 3 to the centroids of the tetrahedron’s faces). If we ensure that one of these points is covered, we can conclude that the whole of T is covered. To satisfy this, the distance between the point −ε b_1 and the vertex s b_2 must be no greater than ε. Hence,

\left| s \mathbf{b}_2 + \varepsilon \mathbf{b}_1 \right|^2 = s^2 - \frac{2 \varepsilon s}{N} + \varepsilon^2 \le \varepsilon^2.
(21)

For the inequality to be satisfied,

s \left( s - \frac{2\varepsilon}{N} \right) \le 0,
(22)

from which it follows that

0 \le s \le \frac{2\varepsilon}{N}.
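Equation (21) can be checked numerically (a sketch using the Appendix B vertices, with the text's b_1, b_2 corresponding to array rows 0 and 1): at the limiting step size s = 2ε/N, the point −εb_1 lies exactly on the sphere of radius ε centred at the vertex sb_2.

```python
import numpy as np

def simplex_vertices(N):
    # Regular N-simplex vertices from the Appendix B construction
    b = np.zeros((N + 1, N))
    for n in range(N + 1):
        for m in range(min(n, N)):
            b[n, m] = -b[m, m] / (N - m)
        if n < N:
            b[n, n] = np.sqrt(1.0 - b[n, :n] @ b[n, :n])
    return b

N, eps = 5, 0.1
b = simplex_vertices(N)
s = 2 * eps / N                                # limiting step size, Eq. (23)
dist = np.linalg.norm(s * b[1] + eps * b[0])   # |s b_2 + eps b_1|, Eq. (21)
# dist equals eps at this limit; any larger s leaves -eps*b_1 uncovered
```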
(23)

Acknowledgments

The author is a Royal Academy of Engineering/EPSRC Research Fellow. Thanks are due to M. Schwertner for helpful discussions concerning this paper.

References and links

1. J. W. Hardy, Adaptive Optics for Astronomical Telescopes (Oxford University Press, 1998).
2. M. J. Booth, M. A. A. Neil, R. Juškaitis, and T. Wilson, “Adaptive aberration correction in a confocal microscope,” Proc. Nat. Acad. Sci. 99, 5788–5792 (2002).
3. A. J. Wright, D. Burns, B. A. Patterson, S. P. Poland, G. J. Valentine, and J. M. Girkin, “Exploration of the optimisation algorithms used in the implementation of adaptive optics in confocal and multiphoton microscopy,” Microsc. Res. Technol. 67, 36–44 (2005).
4. L. Sherman, J. Y. Ye, O. Albert, and T. B. Norris, “Adaptive correction of depth-induced aberrations in multiphoton scanning microscopy using a deformable mirror,” J. Microsc. 206, 65–71 (2002).
5. P. N. Marsh, D. Burns, and J. M. Girkin, “Practical implementation of adaptive optics in multiphoton microscopy,” Opt. Express 11, 1123–1130 (2003).
6. O. Albert, L. Sherman, G. Mourou, T. B. Norris, and G. Vdovin, “Smart microscope: an adaptive optics learning system for aberration correction in multiphoton confocal microscopy,” Opt. Lett. 25, 52–54 (2000).
7. W. Lubeigt, G. Valentine, J. Girkin, E. Bente, and D. Burns, “Active transverse mode control and optimization of an all-solid-state laser using an intracavity adaptive-optic mirror,” Opt. Express 10, 550–555 (2002).
8. E. Theofanidou, L. Wilson, W. J. Hossack, and J. Arlt, “Spherical aberration correction for optical tweezers,” Opt. Commun. 236, 145–150 (2004).
9. A. C. F. Gonte and R. Dandliker, “Optimization of single-mode fiber coupling efficiency with an adaptive membrane mirror,” Opt. Eng. 41, 1073–1076 (2002).
10. M. Vorontsov, “Decoupled stochastic parallel gradient descent optimization for adaptive optics: integrated approach for wave-front sensor information fusion,” J. Opt. Soc. Am. A 19, 356–368 (2002).
11. M. J. Booth, T. Wilson, H.-B. Sun, T. Ota, and S. Kawata, “Methods for the characterisation of deformable membrane mirrors,” Appl. Opt. 44, 5131–5139 (2005).
12. T. Wilson and C. J. R. Sheppard, Theory and Practice of Scanning Optical Microscopy (Academic Press, London, 1984).
13. M. Born and E. Wolf, Principles of Optics, 6th Edition (Pergamon Press, 1983).
14. Z. Michalewicz and D. B. Fogel, How to Solve It: Modern Heuristics (Springer, Berlin, 2000).
15. J. H. Conway and N. J. A. Sloane, Sphere Packings, Lattices and Groups, 3rd Edition (Springer-Verlag, 1998).
16. T. C. Hales, “An overview of the Kepler conjecture,” http://xxx.lanl.gov/ math.MG/9811071 (1999).
17. R. Kershner, “The number of circles covering a set,” Am. J. Math. 61, 665–671 (1939).
18. E. Viterbo and J. Boutros, “A universal lattice code decoder for fading channels,” IEEE Trans. Inf. Theory 45, 1639–1642 (1999).
19. W. Press, S. Teukolsky, W. Vetterling, and B. Flannery, Numerical Recipes in C, 2nd Edition (Cambridge University Press, 1992).
20. M. A. A. Neil, M. J. Booth, and T. Wilson, “New modal wavefront sensor: a theoretical analysis,” J. Opt. Soc. Am. A 17, 1098–1107 (2000).
21. R. Noll, “Zernike polynomials and atmospheric turbulence,” J. Opt. Soc. Am. 66, 207–277 (1976).

OCIS Codes
(010.1080) Atmospheric and oceanic optics : Active or adaptive optics
(010.7350) Atmospheric and oceanic optics : Wave-front sensing

ToC Category:
Adaptive Optics

History
Original Manuscript: November 15, 2005
Revised Manuscript: January 20, 2006
Manuscript Accepted: February 13, 2006
Published: February 20, 2006

Virtual Issues
Vol. 1, Iss. 3 Virtual Journal for Biomedical Optics

Citation
Martin Booth, "Wave front sensor-less adaptive optics: a model-based approach using sphere packings," Opt. Express 14, 1339-1352 (2006)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-14-4-1339

