Optics Express

  • Editor: Andrew M. Weiner
  • Vol. 21, Iss. 13 — Jul. 1, 2013
  • pp: 15131–15143
Characterization of spatially varying aberrations for wide field-of-view microscopy

Guoan Zheng, Xiaoze Ou, Roarke Horstmeyer, and Changhuei Yang


http://dx.doi.org/10.1364/OE.21.015131



Abstract

We describe a simple and robust approach for characterizing the spatially varying pupil aberrations of microscopy systems. In our demonstration with a standard microscope, we derive the location-dependent pupil transfer functions by first capturing multiple intensity images at different defocus settings. Next, a generalized pattern search algorithm is applied to recover the complex pupil functions at ~350 different spatial locations over the entire field-of-view. Parameter fitting transforms these pupil functions into accurate 2D aberration maps. We further demonstrate how these aberration maps can be applied in a phase-retrieval based microscopy setup to compensate for spatially varying aberrations and to achieve diffraction-limited performance over the entire field-of-view. We believe that this easy-to-use spatially-varying pupil characterization method may facilitate new optical imaging strategies for a variety of wide field-of-view imaging platforms.

© 2013 OSA

1. Introduction

The characterization of optical system aberrations is critical in such applications as ophthalmology, microscopy, photolithography, and optical testing [1]. Knowledge of these different imaging platforms’ aberrations allows users to predict the achievable resolution, and permits system designers to correct aberrations either actively through adaptive optics or passively with post-detection image deconvolution. Digital aberration removal techniques play an especially prominent role in computational imaging platforms aimed at achieving simple and compact optical arrangements [2]. A recent important class of such platforms is geared towards efficiently creating gigapixel images with high resolution over a wide field-of-view (FOV) [2, 3]. Given the well-known linear scaling relationship between the influence of aberrations and imaging FOV [4], it is critical to characterize their effect before camera throughput can be successfully extended to the gigapixel scale.

Over the past half-century, many unique aberration characterization methods have been reported [5–17]. Each of these methods attempts to estimate the phase deviations or the frequency response of the optical system under test. Several relatively simple non-interferometric procedures utilize a Shack-Hartmann wavefront sensor [11–13], consisting of an array of microlenses that each focus light onto a detector. The local tilt of an incident wavefront across one microlens can be calculated from the position of its detected focal spot. Using the computed local tilts from the microlenses across the entire array, the amplitude and phase of the incident wavefront can be directly approximated. Despite offering high accuracy, measuring aberrations with a Shack-Hartmann sensor often requires considerable modification to an existing optical setup. For example, insertion and removal of the wavefront sensor from the imaging platform’s pupil plane requires additional relay lenses, each subject to their own aberrations and possible misalignments.
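The slope-to-wavefront step described above can be sketched as a zonal least-squares reconstruction. The following is a minimal illustration, not part of this paper's method: all names are hypothetical, and it simply stacks finite-difference equations that relate neighboring phase samples to the measured lenslet slopes, then solves them in the least-squares sense.

```python
import numpy as np

def reconstruct_wavefront(sx, sy, d):
    """Zonal least-squares wavefront reconstruction from slope maps.

    sx, sy : (n, n) arrays of local wavefront slopes measured by each lenslet
    d      : lenslet pitch (same length units as the wavefront)
    Returns an (n, n) phase map, defined up to an arbitrary piston term.
    """
    n = sx.shape[0]
    idx = lambda i, j: i * n + j          # flatten (row, col) -> unknown index
    rows, cols, vals, rhs = [], [], [], []
    eq = 0
    # forward-difference equations: (W[i, j+1] - W[i, j]) / d = average x-slope
    for i in range(n):
        for j in range(n - 1):
            rows += [eq, eq]; cols += [idx(i, j + 1), idx(i, j)]
            vals += [1.0 / d, -1.0 / d]
            rhs.append(0.5 * (sx[i, j] + sx[i, j + 1]))
            eq += 1
    # same along y: (W[i+1, j] - W[i, j]) / d = average y-slope
    for i in range(n - 1):
        for j in range(n):
            rows += [eq, eq]; cols += [idx(i + 1, j), idx(i, j)]
            vals += [1.0 / d, -1.0 / d]
            rhs.append(0.5 * (sy[i, j] + sy[i + 1, j]))
            eq += 1
    A = np.zeros((eq, n * n))
    A[rows, cols] = vals
    W, *_ = np.linalg.lstsq(A, np.array(rhs), rcond=None)
    return W.reshape(n, n)
```

Because the equations only constrain phase differences, the solution is determined up to a constant piston, which is irrelevant for imaging.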

Alternatively, wavefront aberrations can be inferred directly from intensity measurements by relying upon phase retrieval procedures [18–24]. A common phase retrieval-based strategy is to introduce phase diversity [18, 24] between multiple measurements of the intensity of an optical field. Phase diversity may be introduced either with additional optical elements or by simply inducing system defocus. Various methods for phase retrieval using defocus diversity have been reported in the literature, including transport-of-intensity equation (TIE) based methods [25–28], iterative algorithms [29], and other non-iterative methods [30, 31].

In this paper, we describe a characterization method that is able to map spatially varying aberrations in a robust, cost-effective and easy-to-implement manner. In brief, this method operates by collecting a set of intensity images of a calibration sample at various defocus planes. The sample must contain identical discretized objects spread over its entire viewing area. In combination with a phase-retrieval algorithm, our method first recovers the phase-and-amplitude profile of a target object located at the center of the FOV. This complex profile then serves as the ground truth image of the object (i.e., image with minimal aberration). Next, our method automatically identifies another target object at an off-axis location and initializes a set of aberration parameters at that location. We then use this set of aberration parameters, in combination with the recovered ground truth image, to generate a set of aberrated intensity images for the same number of defocus planes. For each off-axis location, we recover its associated aberration parameters by minimizing the difference between the generated aberrated intensity images and the collected experimental data. Finally, we apply the recovered off-axis aberration parameters (from ~350 locations in our experiment) to generate continuous 2D aberration function maps by parameter fitting.

To demonstrate the utility of the recovered 2D aberration maps, we experimentally show how they can be used in combination with a phase-retrieval method to render images with improved resolution performance – spatially varying aberrations can be compensated by using an information-preserving image deconvolution scheme.

This paper is structured as follows: In Section 2, we briefly review some of the concepts essential to the context of our work, including phase retrieval and spatially varying pupil aberrations. In Section 3, we describe our experimental setup and the calibration sample. In Section 4, we detail our procedure for pupil function recovery at one location off the optical axis. In Section 5, we explain how to automate the aberration characterization process, experimentally demonstrate the automated measurement of spatially varying aberration weights, and show how these weights can yield accurate 2D aberration function estimates. In Section 6, we demonstrate a specific application of these aberration function maps – improving the resolution performance of phase retrieval-based image rendering across the entire imaging FOV. Finally, we end with a discussion of some of the advantages and limitations of the reported method.

2. Overview of phase retrieval and spatially varying pupil aberrations

2.1 Phase retrieval and defocus diversity

In this work, we apply defocus diversity [29, 37] to perform phase retrieval within a conventional microscope. Two or more images must be captured with known defocus distances, as shown in Fig. 1(a). Based on these intensity measurements I(s) (s = −2, −1, 0, 1, 2 in Fig. 1(a)) at different defocus planes, we follow the multi-plane iterative algorithm outlined in Fig. 1(b) [29]. In this algorithm, we first initialize a complex estimate of the object function. This complex estimate is then propagated to one defocus plane (multiplication by a quadratic phase factor in the Fourier domain [42]). After propagation, the amplitude of the estimate is replaced by the square root of the corresponding measurement I(s), while the phase is kept unchanged. This propagate-and-replace process is repeated until the complex solution converges (see Section 4 for implementation details).

Fig. 1 Multi-plane phase retrieval with defocus diversity. (a) Multiple intensity images I(s) (s = −2, −1, 0, 1, 2) are captured at different defocus settings. (b) Multi-plane iterative phase retrieval algorithm presented in [29].
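The propagate-and-replace loop can be sketched in a few lines of numpy. This is a minimal illustration of the algorithm of [29], not the authors' code: an angular-spectrum propagator stands in for the quadratic-phase-factor propagation, and all names, grid sizes, and parameters are illustrative.

```python
import numpy as np

def propagate(field, z, wavelength, dx):
    """Angular-spectrum propagation of a sampled complex field by distance z."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * z), 0)  # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

def multiplane_phase_retrieval(stack, z_list, wavelength, dx, n_iter=10):
    """Propagate-and-replace phase retrieval from a defocus stack.

    stack[k] holds the intensity image measured at defocus z_list[k];
    the recovered complex field is returned at the z = 0 (in-focus) plane.
    """
    est = np.sqrt(stack[0]).astype(complex)   # flat initial phase
    z_cur = z_list[0]
    for _ in range(n_iter):
        for I, z in zip(stack, z_list):
            est = propagate(est, z - z_cur, wavelength, dx)  # go to next plane
            est = np.sqrt(I) * np.exp(1j * np.angle(est))    # swap amplitude, keep phase
            z_cur = z
    return propagate(est, -z_cur, wavelength, dx)            # back to focus
```

In practice convergence is monitored by the residual between the propagated amplitudes and the measured ones; here a fixed iteration count keeps the sketch short.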

2.2 Spatially varying pupil aberrations

An understanding of spatially varying pupil aberrations is also important to fully appreciate the impact of our work. In an aberration-free coherent imaging system, the light field distribution at the pupil plane (i.e., the back focal plane of the objective lens) is directly proportional to the Fourier transform of the light field at the object plane. Therefore, the spatial coordinates at the object plane and the pupil plane can be expressed as (x, y) and (kx, ky), respectively, with kx and ky the wave numbers in the x and y directions. Due to this Fourier relationship, the aberrations of an imaging platform are often characterized at the pupil plane for simplicity [42]. Different types of aberrations can be quantified as different Zernike modes at the pupil plane. For example, defocus aberration can be modeled as a phase factor p5·Z20(kx, ky), where Z20(kx, ky) denotes the corresponding Zernike polynomial for this aberration (here a quadratic function), while the coefficient p5 denotes the amount of defocus aberration (the subscript ‘5’ indicates the fifth Zernike mode).

A more complete aberration model uses the generalized pupil function W(kx, ky), whose phase factor is a summation of different Zernike modes with different aberration coefficients pm (pm denotes the weight of the mth Zernike mode; refer to Eq. (1) in Section 4). If the imaging platform is shift-invariant, each aberration coefficient pm is constant over the entire imaging FOV and the generalized pupil function W(kx, ky) is independent of the spatial coordinates x and y. However, as noted above, recent extreme-FOV computational imaging platforms push beyond the limits of conventional lens design and thus invalidate this shift-invariant assumption. In this case, the aberration coefficients pm become 2D functions of x and y, and the generalized pupil function must be expressed as a function of both (kx, ky) and (x, y), i.e., W(kx, ky, x, y). Our goal here is to characterize the aberration parameters pm (m = 1, 2, …) as a function of the spatial coordinates x and y. Based on pm(x, y), we can derive the generalized pupil function W(kx, ky, x, y) at any given spatial location (Section 5) and accurately perform post-detection image deconvolution (Section 6).
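Assembling a generalized pupil function from a set of Zernike weights can be sketched as follows. This is an illustrative construction only: the Zernike ordering and normalization below are one common convention, not necessarily the exact Eq. (1) of this paper, and the function names and parameters are assumptions.

```python
import numpy as np

# A few low-order Zernike polynomials on the unit disk; (u, v) are pupil
# coordinates normalized by the coherent cutoff frequency NA / wavelength.
ZERNIKES = {
    "x-tilt":    lambda u, v: u,
    "y-tilt":    lambda u, v: v,
    "x-astig":   lambda u, v: u**2 - v**2,
    "y-astig":   lambda u, v: 2 * u * v,
    "defocus":   lambda u, v: 2 * (u**2 + v**2) - 1,
    "x-coma":    lambda u, v: (3 * (u**2 + v**2) - 2) * u,
    "y-coma":    lambda u, v: (3 * (u**2 + v**2) - 2) * v,
    "spherical": lambda u, v: 6 * (u**2 + v**2)**2 - 6 * (u**2 + v**2) + 1,
}

def pupil_function(coeffs, n, na, wavelength, dx):
    """W(kx, ky) = circ * exp(i * sum_m p_m Z_m), cut off at the NA."""
    fx = np.fft.fftfreq(n, d=dx)          # spatial frequencies of an n-pixel patch
    FX, FY = np.meshgrid(fx, fx)
    f_cut = na / wavelength               # coherent cutoff frequency
    u, v = FX / f_cut, FY / f_cut         # normalized pupil coordinates
    phase = sum(p * ZERNIKES[name](u, v) for name, p in coeffs.items())
    return np.where(u**2 + v**2 <= 1, np.exp(1j * phase), 0)
```

With all coefficients zero, W reduces to the ideal circular low-pass pupil of the objective; nonzero weights add pure phase inside the pupil support.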

3. Experimental setup and sample preparation

In our experiment, we used a conventional upright microscope (BX 41, Olympus) with a 2X apochromatic lens (0.08 NA, Olympus) and a full-frame CCD camera (KAI-29050, Kodak). The tested objective lens has a relatively large FOV (~1.3 cm in diameter) with the potential to facilitate whole-slide imaging for a variety of applications [32]. However, scale-dependent geometric aberrations compound any attempt to directly capture images at a resolution commensurate with the specified NA uniformly across the entire image plane [4]. While aberrations are well-corrected near the optical axis, significant blur deteriorates image quality towards the FOV’s edge.

A microsphere target sample is easy to prepare, cost-effective, and accessible to the average microscopist; sample preparation takes less than 2 minutes in total. The standard deviation of the microspheres’ size is about 0.3 µm, and thus these calibration objects are effectively identical over the entire FOV. We note that while alternative fabrication methods such as e-beam or photo-lithography may also generate calibration samples, the aberrations of the lithography lens, the evenness of the photoresist, and the alignment of the mechanical stage would all need to be considered and jointly optimized to minimize unexpected target variations.

4. Off-axis pupil function recovery

With a proper calibration target prepared, we are now ready to detail our procedure for pupil function recovery at one location off the optical axis. Assuming the aberrations of the objective lens are minimal (i.e., they are well-corrected) at the center of its FOV, we use images of the object located near the FOV center to serve as the ground truth for other off-axis positions. The proposed characterization approach consists of two primary steps: 1) phase retrieval, and 2) pupil function estimation, as detailed below.

1) Phase retrieval. Following the general procedure outlined in Section 2, we displace the microscope stage from the focal plane at δ = 50 µm increments in either defocus direction, capturing a total of 17 images of the microsphere calibration target I(s), where s = (−8,…0,…8). The maximum defocus distance with such a scheme is 400 µm in either direction. For each image, the microsphere target is illuminated with a quasi-monochromatic collimated plane wave (632 nm).

Next, we create a 64 × 64-pixel cropped image set Ic(s) that contains one microsphere at the center of the FOV (see Fig. 2, left). We recover the complex profile of this centered microsphere using the multi-plane phase retrieval algorithm [29] from Section 2, detailed briefly as follows. First, an estimate of the complex field is initialized at the object plane: the initial estimate’s phase is set to a constant and its amplitude is set to the square root of the in-focus intensity measurement of the centered microsphere, Ic(0). Second, this complex field estimate is Fourier transformed and multiplied by a quadratic phase factor exp(i·kz·z), describing defocus of the field by axial distance z = s·δ. To begin, we set s = 1, corresponding to z = +50 µm of defocus. Third, after digitally defocusing, we replace the amplitude values of the complex field estimate with the square root of the intensity data from the recorded image Ic(1) (captured at z = +50 µm), while the estimate’s phase values remain unchanged. This digital propagate-and-replace process is repeated for all values of s (all 17 cropped intensity measurements from the captured focal stack). Finally, we iterate the entire phase retrieval loop approximately 10 times. The final recovered complex image, denoted as √Itruth·e^(iφtruth), serves as a “ground truth” estimate of the complex field from a minimally aberrated microsphere, which may be digitally refocused to any position of interest.

Fig. 2 Pupil function recovery at one off-axis position. Two cropped areas of one set of defocused intensity images are used for algorithm input. One cropped set Ic(s) is centered on a microsphere at the images’ central FOV (left), while the other cropped set Id(s) is centered on a microsphere at an off-axis position (right). Each cropped image set contains 17 intensity measurements (here only 5 are shown) at different defocus distances (−400 µm to +400 µm, 50 µm per step). We approximate an unknown pupil function W with 8 Zernike coefficients (x-tilt, y-tilt, x-astigmatism, y-astigmatism, defocus, x-coma, y-coma and spherical aberration). We use this pupil function estimate to modify the 17 “ground truth” images Ic(s) of the central microsphere to generate a new set of aberrated intensity images, Ia(s) (middle). We then adjust the values of the 8 unknown Zernike coefficients to minimize the difference between Ia(s) and the actual intensity measurements of the off-axis microsphere, Id(s) (right). The corresponding pupil function described by 8 Zernike coefficients is recovered when the mean-squared error difference between these two image sets is minimized.

5. Spatially varying aberration characterization over the entire FOV

Figure 3(a) shows a full-FOV image of the calibration target, in which ~350 microspheres are automatically identified, each denoted by a red dot. For each microsphere, we recover the same 8 location-specific Zernike coefficients. For example, Fig. 3(b) shows the pupil function W recovered following Eq. (3) at position (x1, y1), the center of the black square in Fig. 3(a). Figures 3(c1)-3(c5) are 5 of the 17 intensity measurements of the microsphere at position (x1, y1) under different amounts of defocus: Id(s = 0), Id(s = ±3), and Id(s = ±6). Figures 3(d1)-3(d5) display the corresponding aberrated image estimates Ia(s) generated by the recovered pupil function in Fig. 3(b). Following the convex form of Eq. (3), the applied generalized pattern search (GPS) algorithm successfully minimizes the mean-squared error difference between the measurements Id(s) and the estimates Ia(s).

Fig. 3 Off-axis aberration characterization with a calibration target. (a) ~350 microspheres are automatically identified on a microscope slide, each denoted by a red dot. (b) The recovered pupil function at position (x1, y1). (c1)-(c5) Intensity measurements Id(s) of the microsphere centered at (x1, y1) under different amounts of defocus. (d1)-(d5) The corresponding aberrated image estimates generated using the pupil function in Fig. 3(b).
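The coefficient search can be illustrated with a minimal derivative-free optimizer. The sketch below is a basic compass (pattern) search, a simplified stand-in for the generalized pattern search used in the paper; in practice the objective f(p) would compute the mean-squared error between the aberrated estimates Ia(s) generated with the pupil built from p and the measurements Id(s). All names here are illustrative.

```python
import numpy as np

def pattern_search(f, p0, step=0.5, tol=1e-3, max_iter=200):
    """Minimal compass/pattern search for a scalar objective f(p).

    Polls +/- step along each coordinate; moves to any improving point,
    otherwise halves the step, until the step falls below tol.
    """
    p = np.asarray(p0, float)
    fp = f(p)
    for _ in range(max_iter):
        improved = False
        for i in range(p.size):
            for s in (+step, -step):
                q = p.copy()
                q[i] += s
                fq = f(q)
                if fq < fp:          # accept any improving poll point
                    p, fp, improved = q, fq, True
        if not improved:
            step *= 0.5              # contract the mesh
            if step < tol:
                break
    return p, fp
```

Pattern search needs no gradients, which suits this problem: the objective is evaluated by simulating images through a candidate pupil, so derivatives with respect to the Zernike weights are not directly available.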

Following this aberration recovery pipeline, 8 Zernike coefficients are calculated for approximately 350 unique spatial locations across the microscope’s FOV. Figures 4(a)-4(f) plot the recovered second-, third- and fourth-order spatially varying aberrations of our tested 2X objective lens, corresponding to x-astigmatism, y-astigmatism, defocus, x-coma, y-coma and spherical aberration, respectively (first-order Zernike modes are normally not considered aberrations, and are thus not shown). The full-FOV image of our calibration target is displayed at the bottom plane of each plot, where the FOV diameter is 1.3 cm. Each blue dot in Fig. 4 represents the recovered coefficient for the corresponding Zernike mode, and the spatial location of each blue dot corresponds to one microsphere labeled in Fig. 3(a).

Fig. 4 Spatially varying aberrations of the 2X objective lens. Each data point, denoted by a blue dot, represents the extracted Zernike coefficient weight for one microsphere. ~350 microspheres are identified over the entire FOV and their corresponding parameters are fitted to a 2D surface for each type of aberration. (a)-(f) correspond to x-astigmatism, y-astigmatism, defocus, x-coma, y-coma and spherical aberration.

Finally, we fit these ~350 discrete values to a continuous polynomial function pm(x, y) for each Zernike mode, allowing us to accurately recover the pupil function at any location across the image plane (curved surfaces in Fig. 4). The order of each polynomial function can be predicted via aberration theory for a conventional imaging platform [1]. The aberrations of increasingly unconventional optical designs in computational imaging systems may not follow such predictable trends, which we can account for by using alternative fitting models and/or recovering coefficients at more than 350 unique spatial locations.
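Fitting scattered coefficient samples to a smooth surface reduces to an ordinary least-squares problem over a 2D polynomial basis. The sketch below is a generic illustration (names and the cubic default are assumptions, not the paper's exact fitting model):

```python
import numpy as np

def fit_surface(x, y, p, deg=3):
    """Least-squares fit of scattered samples p(x_i, y_i) to a 2D polynomial
    sum over i+j <= deg of c_ij * x^i * y^j; returns a callable p_m(x, y)."""
    terms = [(i, j) for i in range(deg + 1) for j in range(deg + 1 - i)]
    A = np.stack([x**i * y**j for i, j in terms], axis=1)  # design matrix
    c, *_ = np.linalg.lstsq(A, p, rcond=None)
    return lambda xq, yq: sum(ci * xq**i * yq**j
                              for ci, (i, j) in zip(c, terms))
```

Calling the returned function at an arbitrary (x, y) then yields the interpolated Zernike weight for that field position, from which the local pupil can be assembled.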

We verified the accuracy of our aberration parameter recovery process with an additional simple experiment. We defocused the calibration target by +50 µm along the optical axis and again implemented our aberration parameter recovery process (using the same ground truth images as before). For the tested wide-field microscope objective, Fig. 5 displays two of these fitted polynomial functions for spatially varying defocus - one computed for an in-focus target and one for the target under +50 µm of defocus. The major difference between the two polynomial fits is a constant offset corresponding to Δz = 48.9 µm, which is in good agreement with the experimentally induced +50 µm displacement. As a reference, the depth of focus of the objective lens is about 80 µm.

Fig. 5 Recovered defocus parameter function p5(x, y) with (color surface) and without (blue grid) +50 µm of sample defocus. The difference between these two surfaces corresponds to a defocus distance of +48.9 µm, in good agreement with the actual displacement distance.

6. Image deconvolution using the recovered aberration parameters

We will now demonstrate that our recovered 2D aberration maps can be used in an image deconvolution process to render images with improved resolution performance. The image deconvolution process consists of two main steps: 1) phase retrieval, and 2) segment decomposition and shift-invariant image deconvolution, as outlined below.

1) Full-FOV phase retrieval. We use the multi-plane phase retrieval algorithm described in Section 2 to recover the amplitude and phase of a sample over the microscope’s entire FOV. This complex image contains the objective lens’s spatially varying aberrations.

2) Segment decomposition and shift-invariant image deconvolution. We then divide the full-FOV complex image into smaller 128 × 128-pixel image segments, denoted by Iseg(n) (n = 1, 2, … 1600 for our employed detector). Aberrations within each small segment are treated as shift-invariant, a common strategy in wide-FOV image processing [45]. The pupil function W(kx, ky, xc(n), yc(n)) is then calculated for each small segment following Eq. (1), where (xc(n), yc(n)) represents the central spatial location of the nth segment. We then perform image deconvolution to recover the corrected image segment Icor(n) as follows:

Icor(n) = |ℱ⁻¹{ℱ(√Iseg·e^(iφseg)) / W(kx, ky, xc(n), yc(n))}|²,   (4)

where √Iseg·e^(iφseg) is the corresponding cropped segment of the complex field recovered in Step 1, and ℱ denotes the 2D Fourier transform. We note that, in the above equation, we only perform the division within the circular pupil of the objective lens; for regions outside the circular pupil, we set the spectrum to 0 in the Fourier domain. Furthermore, since our deconvolution process is applied to complex data, we avoid division by zero in the Fourier domain.
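The per-segment correction of Eq. (4) is a short operation in practice. The sketch below assumes the local pupil W has already been sampled on the segment's FFT frequency grid (with |W| = 1 inside the pupil and 0 outside, as for a pure-phase aberration); names are illustrative.

```python
import numpy as np

def deconvolve_segment(field_seg, W):
    """Coherent deconvolution of one shift-invariant segment (cf. Eq. (4)).

    Divides the segment's spectrum by the local pupil W inside the pupil
    support, zeros everything outside it, and returns the corrected
    intensity image.
    """
    spec = np.fft.fft2(field_seg)
    inside = np.abs(W) > 0                 # pupil support
    out = np.zeros_like(spec)
    out[inside] = spec[inside] / W[inside]  # invert the coherent transfer function
    return np.abs(np.fft.ifft2(out))**2
```

Because the division acts on the recovered complex field rather than on intensities, and only inside the pupil where |W| = 1, the inversion is numerically stable.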

To characterize the resolution performance of the above deconvolution process at different image plane locations, we perform a first experiment using a shifted USAF resolution target as our sample. Figures 6(b1)-6(d1) are the raw image segments Iseg directly captured using the aberrated objective lens, while Figs. 6(b2)-6(d2) are the corresponding processed images Icor using Eq. (4). From Figs. 6(b2)-6(d2), group 7, element 1 (line width of 3.9 µm) of the USAF target can be resolved, in good agreement with the 3.94 µm Abbe diffraction limit of our 0.08 NA objective lens. This simple experiment indicates that our aberration correction scheme can correct this particular objective’s aberration blur to yield diffraction-limited performance across its entire image FOV.

Fig. 6 Resolution characterization using a USAF resolution target. (a) The USAF resolution target is placed at 3 different locations, indicated by the colored arrows; the full FOV corresponds to a circular region 1.3 cm in diameter. The original images captured using the aberrated objective lens at the center (b1), 60% of the way from the center (c1), and 95% of the way from the center (d1). (b2)-(d2) The corresponding processed images using the deconvolution scheme. Group 7, element 1 (line width of 3.9 µm) of the USAF target can be resolved in the corrected images, in good agreement with the Abbe diffraction limit of 3.94 µm.

Based on Eq. (4), we can also recombine all the corrected image segments Icor(n) to form a corrected full-FOV image. Figure 7 and Fig. 8 show the results of a second experiment, in which full-FOV images of two samples are corrected. An alpha blending algorithm [46] is used to remove edge artifacts at the segment boundaries. Specifically, we cut away 2 pixels at the edge of each segment and use another 5 pixels to overlap with the adjacent segments. This blending comes at the small computational cost of processing the overlap regions twice.

Fig. 7 Full-FOV image deconvolution of the microsphere calibration target. (a) The aberration-corrected full-FOV image. (b1)-(e1) Recovered pupil functions corresponding to highlighted regions in (a). (b2)-(e2) The corrected images of highlighted regions in (a). (b3)-(e3) The original images of the test target without aberration correction.

Fig. 8 Full-FOV image deconvolution of a new test target, containing a mixture of microspheres with different diameters (5-20 µm). (a) The aberration-corrected full-FOV image. (b1)-(e1) Recovered pupil functions corresponding to highlighted regions in (a). (b2)-(e2) The corrected images of highlighted regions in (a). (b3)-(e3) The original images of the test target without aberration correction.
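The overlap blending can be illustrated in one dimension. The paper trims 2 pixels and overlaps 5 pixels per segment in 2D; the minimal sketch below only shows the linear-weight (alpha) idea for two adjacent strips, with hypothetical names.

```python
import numpy as np

def blend_1d(left, right, overlap):
    """Alpha-blend two adjacent 1-D strips sharing `overlap` samples.

    The weight of the left strip ramps linearly from 1 to 0 across the
    shared region, suppressing any step discontinuity at the seam.
    """
    a = np.linspace(1.0, 0.0, overlap)              # weight of the left strip
    mixed = a * left[-overlap:] + (1 - a) * right[:overlap]
    return np.concatenate([left[:-overlap], mixed, right[overlap:]])
```

In 2D the same ramp is applied along both seam directions, so each corrected segment fades smoothly into its neighbors.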

The sample in Fig. 7 is the calibration target discussed in Section 3, and the sample in Fig. 8 is a new test target with a mixture of microspheres of different diameters (5-20 µm) on a microscope slide. The 4 regions outlined by red squares in Fig. 7(a) and Fig. 8(a) are highlighted for detailed observation. The corresponding pupil functions of these four regions are shown in Figs. 7(b1)-7(e1) and Figs. 8(b1)-8(e1). Figures 7(b2)-7(e2) and Figs. 8(b2)-8(e2) display their associated corrected (i.e., deconvolved) images, while Figs. 7(b3)-7(e3) and Figs. 8(b3)-8(e3) display their original images without aberration correction. From these two examples, it is clear that our aberration characterization procedure can digitally compensate for the spatially varying aberrations across a microscope objective’s full FOV.

Finally, we note that the deconvolution scheme in Eq. (4) is based on inverting the coherent transfer function (i.e., the complex pupil function) of the objective lens. For the case of incoherent illumination, the incoherent optical transfer function can be calculated directly from the complex pupil function through a closed-form expression [42], and image deconvolution can be performed in the Fourier domain accordingly.
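The closed-form relation referenced above is that the incoherent OTF is the normalized autocorrelation of the pupil function, equivalently the Fourier transform of the squared magnitude of the coherent point spread function (Goodman [42]). A minimal FFT-based sketch, with illustrative names:

```python
import numpy as np

def incoherent_otf(W):
    """Incoherent OTF from a sampled complex pupil function W.

    Uses OTF = F[|h|^2] with h = F^-1[W] (the coherent amplitude PSF),
    which equals the normalized autocorrelation of the pupil.
    """
    psf = np.abs(np.fft.ifft2(W))**2   # incoherent PSF = |coherent PSF|^2
    otf = np.fft.fft2(psf)
    return otf / otf.flat[0]           # normalize zero frequency to 1
```

Note the resulting OTF extends to twice the coherent cutoff frequency, which is why the incoherent deconvolution must use this function rather than W itself.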

7. Conclusion

In our characterization scheme, we assume that the aberrations are well-corrected at the center of the FOV, so that an object located there can serve as the ground truth for off-axis positions. If the objective lens under test is not well-corrected at the center of the FOV, we can use other well-corrected optics (such as a high-NA, small-FOV objective) to capture the ground truth image. Finally, we note that future work will aim at extending the proposed aberration characterization pipeline beyond the recovery of 8 Zernike modes; for more unconventional imaging designs, 10-15 Zernike modes may be required for accurate aberration characterization. A GPU implementation of the proposed pipeline could significantly shorten the associated processing time. Furthermore, this work tested an objective lens with an assumed 100% transmissive circular back aperture. Using our framework to model objective lens apertures with imperfect transmission, or apertures containing apodizing filters or coded modulation masks, will be an additional future research direction.

Acknowledgments

We acknowledge funding support from the National Institutes of Health under grant no. 1R01AI096226-01.

References and links

1. H. Gross, W. Singer, M. Totzeck, F. Blechinger, and B. Achtner, Handbook of Optical Systems (Wiley Online Library, 2005), Vol. 2.
2. O. S. Cossairt, D. Miau, and S. K. Nayar, “Scaling law for computational imaging using spherical optics,” J. Opt. Soc. Am. A 28(12), 2540–2553 (2011). [CrossRef] [PubMed]
3. D. J. Brady, M. E. Gehm, R. A. Stack, D. L. Marks, D. S. Kittle, D. R. Golish, E. M. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature 486(7403), 386–389 (2012). [CrossRef] [PubMed]
4. A. W. Lohmann, “Scaling laws for lens systems,” Appl. Opt. 28(23), 4996–4998 (1989). [CrossRef] [PubMed]
5. F. Berny and S. Slansky, “Wavefront determination resulting from Foucault test as applied to the human eye and visual instruments,” in Optical Instruments and Techniques (Oriel, 1969), pp. 375–386.
6. S. Yokozeki and K. Ohnishi, “Spherical aberration measurement with shearing interferometer using Fourier imaging and moiré method,” Appl. Opt. 14(3), 623–627 (1975). [CrossRef] [PubMed]
7. M. Ma, X. Wang, and F. Wang, “Aberration measurement of projection optics in lithographic tools based on two-beam interference theory,” Appl. Opt. 45(32), 8200–8208 (2006). [CrossRef] [PubMed]
8. M. Takeda and S. Kobayashi, “Lateral aberration measurements with a digital Talbot interferometer,” Appl. Opt. 23(11), 1760–1764 (1984). [CrossRef] [PubMed]
9. J. Sung, M. Pitchumani, and E. G. Johnson, “Aberration measurement of photolithographic lenses by use of hybrid diffractive photomasks,” Appl. Opt. 42(11), 1987–1995 (2003). [CrossRef] [PubMed]
10. Q. Gong and S. S. Hsu, “Aberration measurement using axial intensity,” Opt. Eng. 33(4), 1176–1186 (1994). [CrossRef]
11. L. N. Thibos, “Principles of Hartmann-Shack aberrometry,” in Vision Science and its Applications (Optical Society of America, 2000).
12. J. L. Beverage, R. V. Shack, and M. R. Descour, “Measurement of the three-dimensional microscope point spread function using a Shack-Hartmann wavefront sensor,” J. Microsc. 205(1), 61–75 (2002). [CrossRef] [PubMed]
13. L. Seifert, J. Liesener, and H. J. Tiziani, “The adaptive Shack–Hartmann sensor,” Opt. Commun. 216(4-6), 313–319 (2003). [CrossRef]
14. R. G. Lane and M. Tallon, “Wave-front reconstruction using a Shack-Hartmann sensor,” Appl. Opt. 31(32), 6902–6908 (1992). [CrossRef] [PubMed]
15. D. Debarre, M. J. Booth, and T. Wilson, “Image based adaptive optics through optimisation of low spatial frequencies,” Opt. Express 15(13), 8176–8190 (2007). [CrossRef] [PubMed]
16. T. Čižmár, M. Mazilu, and K. Dholakia, “In situ wavefront correction and its application to micromanipulation,” Nat. Photonics 4(6), 388–394 (2010). [CrossRef]
17. M. J. Booth, “Adaptive optics in microscopy,” Philos. Trans. A Math. Phys. Eng. Sci. 365(1861), 2829–2843 (2007). [CrossRef] [PubMed]
18. R. G. Paxman, T. J. Schulz, and J. R. Fienup, “Joint estimation of object and aberrations by using phase diversity,” J. Opt. Soc. Am. A 9(7), 1072–1085 (1992). [CrossRef]
19. B. M. Hanser, M. G. Gustafsson, D. A. Agard, and J. W. Sedat, “Phase retrieval for high-numerical-aperture optical systems,” Opt. Lett. 28(10), 801–803 (2003). [CrossRef] [PubMed]
20. B. M. Hanser, M. G. Gustafsson, D. A. Agard, and J. W. Sedat, “Phase-retrieved pupil functions in wide-field fluorescence microscopy,” J. Microsc. 216(1), 32–48 (2004). [CrossRef] [PubMed]
21. J. R. Fienup, “Phase-retrieval algorithms for a complicated optical system,” Appl. Opt. 32(10), 1737–1746 (1993). [CrossRef] [PubMed]
22. J. R. Fienup, J. C. Marron, T. J. Schulz, and J. H. Seldin, “Hubble Space Telescope characterized by using phase-retrieval algorithms,” Appl. Opt. 32(10), 1747–1767 (1993). [CrossRef] [PubMed]
23. G. R. Brady and J. R. Fienup, “Nonlinear optimization algorithm for retrieving the full complex pupil function,” Opt. Express 14(2), 474–486 (2006). [CrossRef] [PubMed]
24. R. A. Gonsalves, “Phase retrieval and diversity in adaptive optics,” Opt. Eng. 21(5), 215829 (1982). [CrossRef]
25. L. Waller, L. Tian, and G. Barbastathis, “Transport of Intensity phase-amplitude imaging with higher order intensity derivatives,” Opt. Express 18(12), 12552–12561 (2010). [CrossRef] [PubMed]
26. N. Streibl, “Phase imaging by the transport equation of intensity,” Opt. Commun. 49(1), 6–10 (1984). [CrossRef]
27. T. E. Gureyev and K. A. Nugent, “Rapid quantitative phase imaging using the transport of intensity equation,” Opt. Commun. 133(1-6), 339–346 (1997). [CrossRef]
28. S. S. Kou, L. Waller, G. Barbastathis, and C. J. Sheppard, “Transport-of-intensity approach to differential interference contrast (TI-DIC) microscopy for quantitative phase imaging,” Opt. Lett. 35(3), 447–449 (2010). [CrossRef] [PubMed]
29. L. Allen and M. Oxley, “Phase retrieval from series of images obtained by defocus variation,” Opt. Commun. 199(1-4), 65–75 (2001). [CrossRef]
30. Y. Zhang, G. Pedrini, W. Osten, and H. J. Tiziani, “Reconstruction of in-line digital holograms from two intensity measurements,” Opt. Lett. 29(15), 1787–1789 (2004). [CrossRef] [PubMed]
31. B. Das and C. S. Yelleswarapu, “Dual plane in-line digital holographic microscopy,” Opt. Lett. 35(20), 3426–3428 (2010). [CrossRef] [PubMed]
32. Y. Kawano, C. Higgins, Y. Yamamoto, J. Nyhus, A. Bernard, H.-W. Dong, H. J. Karten, and T. Schilling, “Darkfield adapter for whole slide imaging: Adapting a darkfield internal reflection illumination system to extend WSI applications,” PLoS ONE 8(3), e58344 (2013). [CrossRef] [PubMed]
33. H. Nomura, K. Tawarayama, and T. Kohno, “Aberration measurement from specific photolithographic images: a different approach,” Appl. Opt. 39(7), 1136–1147 (2000). [CrossRef] [PubMed]
34. H. Nomura and T. Sato, “Techniques for measuring aberrations in lenses used in photolithography with printed patterns,” Appl. Opt. 38(13), 2800–2807 (1999). [CrossRef] [PubMed]
35. R. Gerchberg, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik (Stuttg.) 35, 237 (1972).
36. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982). [CrossRef] [PubMed]
37. J. Fienup and C. Wackerman, “Phase-retrieval stagnation problems and solutions,” J. Opt. Soc. Am. A 3(11), 1897–1907 (1986). [CrossRef]
38. J. R. Fienup, “Reconstruction of a complex-valued object from the modulus of its Fourier transform using a support constraint,” J. Opt. Soc. Am. A 4(1), 118–123 (1987). [CrossRef]
39. M. R. Bolcar and J. R. Fienup, “Sub-aperture piston phase diversity for segmented and multi-aperture systems,” Appl. Opt. 48(1), A5–A12 (2009). [CrossRef] [PubMed]
40. M. Guizar-Sicairos and J. R. Fienup, “Phase retrieval with transverse translation diversity: a nonlinear optimization approach,” Opt. Express 16(10), 7264–7278 (2008). [CrossRef] [PubMed]
41. B. H. Dean and C. W. Bowers, “Diversity selection for phase-diverse phase retrieval,” J. Opt. Soc. Am. A 20(8), 1490–1504 (2003). [CrossRef] [PubMed]
42. J. W. Goodman, Introduction to Fourier Optics (Roberts & Company Publishers, 2005).
43. C. Audet and J. E. Dennis Jr., “Analysis of generalized pattern searches,” SIAM J. Optim. 13(3), 889–903 (2002). [CrossRef]
44. X. Yang, H. Li, and X. Zhou, “Nuclei segmentation using marker-controlled watershed, tracking using mean-shift, and Kalman filter in time-lapse microscopy,” IEEE Trans. Circuits Syst. I: Reg. Papers 53, 2405–2414 (2006). [CrossRef]
45. B. K. Gunturk and X. Li, Image Restoration: Fundamentals and Advances (CRC Press, 2012), Vol. 7.
46. T. McReynolds and D. Blythe, Advanced Graphics Programming Using OpenGL (Morgan Kaufmann, 2005).

OCIS Codes
(100.0100) Image processing : Image processing
(170.0180) Medical optics and biotechnology : Microscopy

ToC Category:
Microscopy

History
Original Manuscript: May 15, 2013
Revised Manuscript: May 31, 2013
Manuscript Accepted: June 10, 2013
Published: June 17, 2013

Virtual Issues
Vol. 8, Iss. 8 Virtual Journal for Biomedical Optics

Citation
Guoan Zheng, Xiaoze Ou, Roarke Horstmeyer, and Changhuei Yang, "Characterization of spatially varying aberrations for wide field-of-view microscopy," Opt. Express 21, 15131-15143 (2013)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-21-13-15131


