Virtual Journal for Biomedical Optics
Vol. 7, Iss. 6 — May 25, 2012
Characterizing the 3-D field distortions in low numerical aperture fluorescence zooming microscope

Praveen Pankajakshan, Zvi Kam, Alain Dieterlen, and Jean-Christophe Olivo-Marin  »View Author Affiliations


Optics Express, Vol. 20, Issue 9, pp. 9876-9889 (2012)
http://dx.doi.org/10.1364/OE.20.009876


Abstract

In this article, we characterize the lateral field distortions in a low numerical aperture, large field-of-view (FOV) fluorescence imaging system. To this end, we study a commercial fluorescence MACROscope, which is a zooming microscope. The versatility of this system lies in its ability to image at different zoom settings, so that sample preparations can be examined in three dimensions at the cellular, organ and whole-body levels. Yet, we found that the imaging system’s optics are optimized only for high magnifications, where the observed FOV is small. When we studied the point-spread function (PSF) by using fluorescent polystyrene beads as “guide stars”, we noticed that the PSF is spatially varying due to field distortions. This variation was found to be laterally symmetric, and the distortions were found to increase with distance from the center of the FOV. In this communication, we investigate the idea of using the field at the back focal plane of an optical system to characterize distortions. As this field is unknown, we develop a theoretical framework to retrieve the amplitude and phase of the field at the back focal pupil plane from the empirical bead images. By using the retrieved amplitude, we can understand and characterize the underlying cause of these distortions. We also propose a few approaches to either avoid the distortions before acquisition or correct them at the optical design level.

© 2012 OSA

1. Introduction

In this article, we characterize the lateral field distortions of a low numerical aperture (NA) and large field-of-view (FOV) fluorescence imaging system. We used a commercial fluorescent zooming microscope (also known as a MACROscope), in which the FOV changes with the optical zoom, as it is well suited to illustrate the applicability of our characterization procedure. This setup from Leica™ (Fig. 1(a)) is a macro documentation system, combined with fluorescence techniques, for visualizing sample preparations at a range of zooms [1].

Fig. 1 (a) Schematic of a simple wide-field fluorescence MACROscope (Reproduced from [1]); Best of two worlds: maximum intensity projection along the optical axis of a Convallaria majalis sample taken using a Leica™ ZAPO16, fit with a confocal scanning head, at (b) a minimum zoom setting with lateral pixel size of 1.09 μm and (c) a sub-region of the sample at the maximum zoom setting with lateral pixel size of 0.89 μm (Courtesy of INRA). The scale bars are 100 μm in length.

As in a microscope, the emitted fluorescence from the sample is collected by an objective lens, but fine/coarse focusing is obtained by using a macro lens. The apochromatic macro lens is combined with the objective lens to image large fields (about 20 mm diagonal) and to provide large working distances (about 97 mm). Existing wide-field microscopes offer high resolution but a limited FOV, while stereomicroscopes offer a larger FOV but compromise on resolution. The MACROscope offers a larger FOV with good lateral resolution (for NA between 0.12 and 0.50, a lateral resolution of 1.65 μm to 0.39 μm, respectively). Variations of this setup are also available from other commercial vendors, such as the Axio Zoom V16 from Carl Zeiss or the AZC2 from Nikon. The principal difference between these commercial adaptations is the range of zooms at which they work.

A sample of the plant Convallaria majalis is used to highlight the MACROscope’s imaging capabilities under two different settings: minimum and maximum zoom. The three-dimensional (3-D) image volumes are shown in Figs. 1(b) and 1(c) as maximum intensity projections (MIP) along the optical axis.

1.1. Context

Most of the commercial MACROscopes are guaranteed to be telecentric. In telecentric images,
  • the apparent size of the object does not vary with distance from the camera,
  • the apparent shape of objects does not vary with distance from the center of FOV.

However, when observing the Convallaria majalis specimen in Fig. 1(b), we noticed that, with shifts in the focal plane, the specimen expands or contracts. The cell walls ‘appear’ tilted and thicker than expected. This effect is best illustrated in the observed image of a haemocytometer (Fig. 2).

Fig. 2 Haemocytometer grid used for illustrating and measuring the distortion in the field. The square area indicated in red is of size 1 mm2, in green is 0.0625 mm2, in yellow is 0.04 mm2 and finally the smallest in blue is 0.0025 mm2. Reproduced from Wikimedia Commons

In such a slide, the grid dimensions are calibrated, so that the image distortions can be quantified. For example, the square area indicated in red is of size 1 mm², in green 0.0625 mm², in yellow 0.04 mm², and the smallest, in blue, 0.0025 mm². The slide was illuminated from below, and the image was captured from above by a Photometrics CoolSNAP HQ2 cooled CCD camera (6.45 μm × 6.45 μm pixels). The radial pixel size at the 12.7× zoom is 390 nm, while the axial slice width was fixed at 50 μm to capture the entire volume. Figure 3 shows a single lateral focal plane and the projection along the y-direction for the transmitted image volume. The dimensions of the displayed volume are 343 × 343 × 6200 μm.

Fig. 3 The focal plane of the observed transmitted volume of the Haemocytometer (top) and the maximum intensity projection along the y-direction (bottom). The object is imaged using a 2x/air PlanApo objective fit to a Leica™ MacroFluo™ APOZ16. The zoom for this acquisition was set at 12.7x, the lateral sampling at 390nm, the slice thickness at 50 μm and the scale bar length is 50 μm. The total size of the displayed volume is 343×343×6200 μm.

From the acquired data (see Media 1), we noticed the following:
  • Two focal planes symmetric about the sharpest focus have their peripheral grid lines either stretched or contracted laterally with respect to the optical center. This is equivalent to a magnification change with focus.
  • The point that coincides exactly with the optic axis remains fixed, acting as a pivot, while all other points in the image plane are scaled relative to it. This lateral relative scaling was measured to be up to 344 nm for a 1 μm axial displacement.
  • These distortions are significant only at low zooms.
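The second observation can be turned into a simple first-order calibration. The sketch below (in Python) assumes a linear radial scaling model; the radius and grid-point positions are hypothetical values, chosen only to be consistent with the reported 344 nm per 1 μm figure.

```python
# Sketch: first-order model of the magnification change with defocus.
# A point at radius r from the optical center appears at r * (1 + alpha * z)
# after an axial displacement z; alpha is fit from two tracked positions.
# The radius below is hypothetical, chosen so the lateral shift matches
# the reported 344 nm per 1 um of axial displacement.

def scaling_per_micron(r_ref, r_defocus, dz_um):
    """Relative radial scale change per micron of defocus."""
    return (r_defocus - r_ref) / (r_ref * dz_um)

# A point 170 um off-center (hypothetical) shifts by 0.344 um per 1 um step:
alpha = scaling_per_micron(170.0, 170.344, 1.0)

# Predicted lateral shift of a point 100 um off-center after 10 um of defocus:
shift_um = alpha * 100.0 * 10.0
```

The model is only a pivot-and-scale description of the data; it says nothing yet about the optical cause, which is the subject of the following sections.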

1.2. Motivation and outline

The fundamental motivation underlying the study and characterization of distortions in any optical system is to identify the cause, understand the limits in system usage, and take necessary precautions to avoid or actions to correct it. The questions that we wish to answer as a result of this study are:
  • Can the optical system’s back aperture be an indicator of the cause of these distortions?
  • Can an analysis of the physics behind these distortions help in correcting them?
This article is organized as follows. In Sect. 2, we briefly introduce the basics of the scalar diffraction model and explain the roles that the pupil phase and amplitude play in defining the impulse response of the system, or point-spread function (PSF). As the field intensity at the back aperture gives information about changes in the light path through the optical system, its estimation can test our hypothesis on the cause of these distortions. We therefore estimate the amplitude and phase of this field from the observed fluorescence intensities [2], by adopting a Bayesian framework. We also show that the Gerchberg-Saxton (GS) algorithm [3] can be derived as a special case of this framework. Finally, in Sect. 3, the algorithm is used to estimate the amplitude and the phase from empirically obtained fluorescence point intensities. Based on these results, we discuss the implications for the distortion process.

1.3. Notations

Scalar variables used in this article are denoted by lowercase letters (x), vectors by boldface lowercase letters (x), and matrices by boldface uppercase letters (X). As the images are discrete, their spatial support is Ωs = {(x, y, z) : 0 ≤ x ≤ Nx − 1, 0 ≤ y ≤ Ny − 1, 0 ≤ z ≤ Nz − 1}. By 𝒪(Ωs) = {o = o(x, y, z) : Ωs ⊂ ℕ³ ↦ ℝ}, we refer to the set of possible observable objects, and we assign the function h : Ωs ↦ ℝ to the microscope PSF. The observed intensities are denoted by i(x) : x ∈ Ωs (bounded and positive), and a 3-D convolution between two functions is denoted by ‘∗’. When the same symbol is used as a superscript over a given function, as in h∗(x), it represents the Hermitian adjoint of h(x). For a complex function hA : Ωs ↦ ℂ, |hA(x)| and ∠hA(x) denote its magnitude and phase, while ℜ(hA(x)) and ℑ(hA(x)) denote its real and imaginary components. By Pr(·), we denote the probability density function.

The objective lenses of a microscope are defined by their magnification (M), numerical aperture (NA), and the medium in between the lens and the cover slip. For example, a lens of 5x magnification, 0.5 NA, and air as medium between the lens and cover slip is written as ‘5x/0.5 air’.

As mentioned earlier, we present 3-D images by their 2-D maximum intensity projection (MIP) along the optical axis in the 2-D XY plane or along the y-direction in the 2-D XZ plane.

2. Sensing the back aperture field

It is necessary to understand the conditions under which our imaging system performs optimally if we wish to extract the best from it. Nearly diffraction-limited (or aberration-free) performance can be obtained when the sources of these distortions are isolated. Once isolated, the objective is to restore telecentricity in the images by using post-acquisition computational methods.

2.1. PSF and role of the phase

The effective NA of the combined objective–zoom system is usually ≪ 0.7, and we work under near-paraxial conditions. The effect of polarization is neglected, and the incoherent PSF can be modeled by using the scalar diffraction model. From the Kirchhoff–Fraunhofer approximation [4], we can write the near-focus amplitude PSF, hA(x, y, z), in terms of the inverse Fourier transform of the two-dimensional (2-D) exit pupil function, P(kx, ky, z; NA), at each defocus z as
\[ h_A(x, y, z) = \mathcal{F}^{-1}_{2D}\{ P(k_x, k_y, z; \mathrm{NA}) \}, \tag{1} \]
where (x, y, z) ∈ Ωs and (kx, ky, kz) ∈ Ωf are the coordinates in the spatial and pupil domains, and NA is the effective numerical aperture of the optical system. If ni is the refractive index of the objective immersion medium and λex the excitation wavelength, the pupil function (including defocus and aberrations) can be written as [5]
\[ P(k_x, k_y, z; \mathrm{NA}, \varphi_a) = \begin{cases} \exp\!\left( jz\left( (k_0 n_i)^2 - (k_x^2 + k_y^2) \right)^{1/2} + j\varphi_a \right), & \text{if } (k_x^2 + k_y^2)^{1/2}/k_0 < \mathrm{NA}, \\ 0, & \text{otherwise}, \end{cases} \tag{2} \]
where k0 = 2π/λex is the angular wavenumber, and φa is the phase due to aberrations. As the medium between the lens and the specimen is air, ni = 1.0. In Eq. (2), the amplitude of the pupil function is assumed to be constant. The intensity PSF, h(x), can be written in terms of the excitation amplitude PSF, hA(x; λex), as
\[ h(x) = |h_A(x; \lambda_{ex})|^2. \tag{3} \]
By studying the expressions in Eqs. (1)–(3), we can state that the intensity distribution of a point source in image space is the squared magnitude of the inverse Fourier transform of the overall complex field distribution of the wavefront in the back aperture of the optical system.
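Under the scalar model, Eqs. (1)–(3) translate almost directly into code. The following is a minimal numerical sketch (grid size, sampling and wavelength are illustrative values, not the paper's): a constant-amplitude disc pupil carrying the defocus phase of Eq. (2) is inverse-Fourier-transformed at each z (Eq. (1)), and its squared magnitude is taken (Eq. (3)).

```python
import numpy as np

def psf_slice(z, na=0.17, lam_ex=0.525, n_i=1.0, n_pix=256, dk=0.02):
    """Intensity PSF at defocus z (um); lam_ex in um, dk in rad/um.

    All numerical values here are illustrative, not measured ones.
    """
    k0 = 2 * np.pi / lam_ex                     # angular wavenumber
    k = (np.arange(n_pix) - n_pix // 2) * dk    # pupil-plane frequency axis
    kx, ky = np.meshgrid(k, k)
    kr2 = kx**2 + ky**2
    inside = np.sqrt(kr2) < k0 * na             # pupil support (Eq. (2))
    kz = np.sqrt(np.maximum((k0 * n_i)**2 - kr2, 0.0))
    pupil = inside * np.exp(1j * z * kz)        # defocus phase, phi_a = 0
    h_a = np.fft.ifft2(np.fft.ifftshift(pupil)) # amplitude PSF (Eq. (1))
    return np.abs(np.fft.fftshift(h_a))**2      # intensity PSF (Eq. (3))

in_focus = psf_slice(0.0)
defocused = psf_slice(10.0)
```

By Parseval's theorem the total energy of each slice is the same since only the pupil phase changes with z; the peak intensity drops and the lateral spread grows as the plane moves away from focus.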

We observed that smooth variations in the amplitude of the pupil in Eq. (2) do not strongly affect the final PSF, while phase variations, such as defocus or aberrations, can produce an entirely different PSF. We use this as the basis for distortion characterization. We thus rephrase the question raised in Sect. 1.2: ‘Can the amplitude or the phase of the field at the back aperture of the optical system be an indication of the source of the distortions?’.

2.2. A Bayesian perspective

Although the phase information is not directly measurable in an incoherent imaging setup, it can be retrieved by choosing the PSF model that best fits the given bead image. This problem is also known as sensor-less wavefront sensing.

Wavefront sensing can also be accomplished computationally, for example by using the GS algorithm [3]. Wavefront sensing by phase retrieval [12] is the process of estimating the amplitude and phase of a pupil function from the observed 3-D intensities of an imaged point source. In the expression for h(x) in Eq. (3), the only unknown is the phase φa from the aberrations, so phase retrieval reduces to estimating the aberrated phase from the observed intensities. This problem is normally under-determined. However, as the phase to be estimated does not change with defocus, it can be estimated if images of a point source at multiple defocus positions are available. The only requirement is that these sections are sufficiently far from the focus: as the distance from the central focal plane grows to infinity, the intensity approaches that at the back pupil plane. In practice, however, the measurement of defocused beads becomes increasingly difficult at larger defocus because of the decaying fluorescence intensities. We remark that the distortions we observe are mainly amplitude aberrations; that is, they do not generate variations in the optical path difference (or phase) of the light, and so are not aberrations in the strict sense.

For the GS algorithm, there is no intrinsic smoothness term on the solution. To compare our approach with the GS algorithm, we drop the prior energy term in Eq. (5) (by assuming a uniform distribution). The amplitude PSF can then be estimated by the maximum-likelihood (ML) algorithm:
\[ \begin{aligned} \hat{h}_A(x) &= \arg\min_{h_A(x)} \mathcal{J}_{\mathrm{obs}}(h_A), && \text{s.t. } k_{\mathrm{MAX}} < \tfrac{2\pi}{\lambda_{ex}}\mathrm{NA} \\ &= \arg\min_{h_A(x)} -\log\left[\Pr(i\,|\,h_A)\right], && \text{s.t. } k_{\mathrm{MAX}} < \tfrac{2\pi}{\lambda_{ex}}\mathrm{NA} \\ &= \arg\min_{h_A(x)} \sum_{x \in \Omega_s} \left( |h_A(x)|^2 - i(x)\log\!\left(|h_A(x)|^2 + b(x)\right) \right), && \text{s.t. } k_{\mathrm{MAX}} < \tfrac{2\pi}{\lambda_{ex}}\mathrm{NA}. \end{aligned} \tag{9} \]
As there is no closed-form solution to the problem in Eq. (9), we use the following fixed-point iterative algorithm:
\[ \hat{h}_A^{(n+1)}(x) = \hat{h}_A^{(n)}(x) - \frac{\tau}{2} \nabla \mathcal{J}_{\mathrm{obs}}(h_A). \tag{10} \]
In Eq. (10), τ ∈ [0.5, 0.99] is a scaling factor. The cost function 𝒥obs(hA) is real, and ∇(·) is the complex gradient operation on it so that
\[ \nabla \mathcal{J}_{\mathrm{obs}}(h_A) = \frac{\partial \mathcal{J}_{\mathrm{obs}}(h_A)}{\partial \Re(h_A(x))} + j \frac{\partial \mathcal{J}_{\mathrm{obs}}(h_A)}{\partial \Im(h_A(x))} = 2 \left( h_A(x) - \frac{i(x)}{|h_A(x)|^2 + b(x)} \cdot h_A(x) \right), \quad \forall x \in \Omega_s. \tag{11} \]
The division is Hadamard, where each element of the matrices is divided element-wise, while ‘·’ denotes Hadamard element-wise multiplication. From Eqs. (10) and (11), we get the fixed-point iterative algorithm for the near-focus amplitude PSF as
\[ \hat{h}_A^{(n+1)}(x) = (1-\tau)\, \hat{h}_A^{(n)}(x) + \tau \left( \frac{i(x)}{|\hat{h}_A^{(n)}(x)|^2 + b(x)} \right) \hat{h}_A^{(n)}(x), \quad \forall x \in \Omega_s. \tag{12} \]
It is important to note that although the given observation is real, the final estimate ĥA(x) is complex. In practice, the optimization process in Eq. (12) respects certain constraints.
  • Relaxation constraints on the pupil function: An upper limit can be introduced on the field intensity at the back aperture of the optical system based on the effective NA. Thus, the initial pupil function, P̂(0)(kx, ky, z = 0), is chosen to be a unit disc with a maximum radius of kMAX and phase zero (cf. Eq. (2)). This is inverse Fourier transformed to get h^A(0)(x) (cf. Eq. (1)). For successive estimates, the above relaxation constraint on the bandwidth in the pupil domain is maintained.
  • Loose support on the magnitude of the coherent PSF hA(x): We assume that part of this magnitude is zero, i.e., that the PSF is confined to a region Ωh. That is, |hA(x)| ≥ ε, ∀x ∈ Ωh, where ε is a small value close to zero. For the lateral plane, we define the maximum permissible radius as 5 × 0.61λex/NA [14]. The idea of using a constraint on the PSF is to fit the model only to those regions in the observation where the fluorescence signal is strong. It also removes spurious background noise in the process.
Generally, the MAP approach incorporates prior knowledge simultaneously, but in the above case we add prior information about the solution sequentially. In reality, we find ourselves in a situation where only partial knowledge or partial certainty about the solution is available. We call such constraints ‘partial knowledge’ because we represent our knowledge about states of nature not necessarily in the form of probability distributions. The idea of representing partial knowledge by convex sets, as in this article, is not new. In [13], we have shown that such constraints on the solution space can be introduced elegantly in the form of a prior probability measure.

For the fixed-point iterative algorithm, the step size, τ, was chosen to be 0.6 in all our experiments. The iterations are continued until either the mean-squared error (MSE) between the phase estimates for two successive iterations is below a pre-defined threshold ε or a pre-defined maximum number of iterations is reached by the algorithm.
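To make the update of Eq. (12) and the MSE stopping rule concrete, here is a minimal Python sketch run on noiseless synthetic data. The "observation", background level and grid size are made up for illustration; the real algorithm additionally applies the pupil-bandwidth and support constraints listed above, which are omitted here.

```python
import numpy as np

def fixed_point(i_obs, b, h0, tau=0.6, tol=1e-14, max_iter=500):
    """Iterate the update of Eq. (12):
        h^(n+1) = (1 - tau) h^(n) + tau * (i / (|h^(n)|^2 + b)) * h^(n),
    until the MSE between successive estimates falls below tol or the
    maximum number of iterations is reached."""
    h = h0.astype(complex)
    for _ in range(max_iter):
        h_new = (1 - tau) * h + tau * (i_obs / (np.abs(h)**2 + b)) * h
        mse = np.mean(np.abs(h_new - h)**2)   # stopping criterion
        h = h_new
        if mse < tol:
            break
    return h

rng = np.random.default_rng(0)
b = 0.01                                # constant background (made up)
h_true = 0.5 + rng.random((32, 32))     # stand-in amplitude PSF
i_obs = h_true**2 + b                   # noiseless synthetic observation
h_est = fixed_point(i_obs, b, np.ones((32, 32)))
```

At convergence the estimate satisfies |ĥA(x)|² + b(x) = i(x), i.e. the modelled intensity reproduces the observation, which is the fixed point of Eq. (12).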

Algorithm 1: Proposed Algorithm.

2.3. Gerchberg-Saxton algorithm as a special case

The GS algorithm [3] is a technique to estimate the field at the back focal plane of the objective by iterating forward and inverse Fourier transforms of the observation. The fixed-point iterative algorithm and the GS algorithm are initialized in the same manner for the pupil function. After initialization, a suitable curvature is added to the phase of the complex pupil function to obtain the defocus-adjusted complex pupil function, P(kx, ky, z), at every defocus z [15]. This is inverse Fourier transformed (cf. Eq. (1)) to get the corresponding amplitude PSF, hA(x), at the different defocus planes. The magnitude of hA(x) is then replaced by the corresponding measured magnitudes (after background subtraction) at the different defocus planes. A Fourier transform of this modified hA(x) gives a new estimate of the defocus-adjusted complex exit pupil function, P̂(kx, ky, z), at the different defocus positions z. The resulting defocus-adjusted complex pupil functions are readjusted back to zero defocus and averaged to obtain a new estimate of the complex exit pupil function, P̂(kx, ky, z = 0). This process is repeated until the MSE criterion or the maximum number of iterations is reached. Some constraints introduced during the iterations can aid the convergence of the algorithm. The progress of the GS algorithm is the same as that of the fixed-point algorithm except for Step 6 in Algorithm 1. We see that when τ = 1 in Eq. (12), the factor i(x)/(|ĥA(n)(x)|² + b(x)) performs, at each iteration, the assigning operation of Step 6. This ratio also has a physical significance: it replaces the incorrect magnitude of hA(x) with the correct, experimentally obtained magnitude derived from i(x).
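The reduction to the GS assignment at τ = 1 can be checked in a few lines. The numbers below are arbitrary illustrative values.

```python
import numpy as np

# Sketch: with tau = 1, the update of Eq. (12) keeps the phase of the
# current estimate and rescales its magnitude by the measured-to-modelled
# intensity ratio i(x) / (|h(x)|^2 + b(x)) -- the assigning operation of
# Step 6. The values below are arbitrary illustrative numbers.

h = np.array([0.5 + 0.5j, 1.0 - 2.0j])   # current complex estimate
i_obs = np.array([1.2, 4.0])             # measured intensities
b = np.zeros(2)                          # background (zero here)

ratio = i_obs / (np.abs(h)**2 + b)       # real and positive
h_new = ratio * h                        # Eq. (12) with tau = 1

# Because the ratio is real and positive, the phase of h is preserved;
# at a fixed point (|h|^2 + b = i_obs) the estimate is left unchanged.
```

This makes explicit why the GS algorithm is a special case: the update touches only the magnitude, never the phase, of the current estimate.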

3. Experiments

Fig. 4 Empirical PSFs are shown at the different positions (denoted by a cross) in the lateral field of the lens. Here, the PSFs are shown as the MIP along the y-direction.

3.1. NA for convex relaxation

Fig. 5 The schematic to measure the maximum object spread for constraining the iterative algorithm.

3.2. Results

As the problem is under-determined, to introduce diversity, four defocus sections (M = 4 in Algorithm 1) were chosen symmetrically above and below the central focal plane of the 1.6× image. These sections lie at a distance of about 2–5 times the Rayleigh length from the focal plane. This bead was cropped from the periphery of the field, and these individual sections approximate the OOFH mentioned earlier. One of the defocus sections is shown in Fig. 6(a). The retrieved unwrapped phase of the pupil, φ̂a, is shown in Fig. 6(b), after about 32 cycles of the fixed-point algorithm with τ = 0.6. We allowed the algorithm to continue, although the solution for the electric field amplitude converged within 12–15 cycles. To reduce noisy estimates, it was suggested in [15] to filter the estimate obtained from the GS algorithm at each iteration by a Gaussian filter. To avoid such an ad hoc method, we propose as future work that, at each cycle of the fixed-point iterative algorithm, the field amplitude be regularized by a total-variation functional [13]. For reproducibility of the experiments, the complete source code in Matlab™ and the data are provided here: http://bioimageanalysis.org/praveen/code.zip.

Fig. 6 (a) The first section of the observed intensity, at z = −57 μm; the scale bar is 10 μm. (b) The retrieved unwrapped pupil phase, φ̂a. The bead image was cropped from the right-periphery intensity image of Fig. 4 at a zoom of 1.6× (radial sampling of 998.3 nm and axial sampling of 1000 nm). τ = 0.6, the maximum number of iterations is 32, and the phase scale is between [−π, +π] radians.

3.3. Discussion

From the estimated phase in Fig. 6(b), we see that the pupil function of the optical system is partially chopped. The chopping of the pupil is such that it resembles a ‘cat’s eye’ [6]. This could be the result of two limiting apertures (from the sizes of the lenses in the objective and the zoom) creating a vignetting effect in the peripheral regions of the field. The schematic in Fig. 7 illustrates such a distortion: the axial aperture is the complete circle, while the oblique aperture is vignetted. Our reconstruction of the cat’s eye in Fig. 6(b) is not sharp. This can be explained as follows. Every lens creates a limiting pupil due to its physical dimension. The imaging of these pupils through the optical system onto the back aperture plane produces diffused circles when they are far from the conjugated back aperture planes.

Fig. 7 A schematic showing the effect of two limiting apertures (here zoom and objective lenses) at the back focal plane of the optical system. Here the on-axis and off-axis positions are shown.
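The geometric picture of Fig. 7 can be simulated with two overlapping discs: for an off-axis point, the effective pupil is the intersection of the two limiting apertures, which shrinks from a full disc to a cat's-eye shape as the lateral offset grows. Radii, offset and grid resolution below are illustrative values, not measured ones.

```python
import numpy as np

def open_fraction(offset, r1=1.0, r2=1.0, n=512):
    """Fraction of the first aperture left open when the second
    limiting aperture is displaced laterally by `offset`.
    All values are in arbitrary (illustrative) units."""
    x = np.linspace(-2.0, 2.0, n)
    xx, yy = np.meshgrid(x, x)
    disc1 = xx**2 + yy**2 < r1**2              # first limiting aperture
    disc2 = (xx - offset)**2 + yy**2 < r2**2   # second, laterally shifted
    return (disc1 & disc2).sum() / disc1.sum() # open (unchopped) fraction

on_axis = open_fraction(0.0)    # complete circle: fraction 1
off_axis = open_fraction(0.8)   # chopped, cat's-eye-shaped pupil
```

The open fraction plays the same role as the chopping percentage quantified later from the retrieved amplitudes: it measures how much of the nominal pupil actually transmits light for a given field position.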

The retrieval of the phase allows us to understand the physics behind the field distortions. Although some of these setups are claimed to be telecentric, we found that the system is not telecentric over the entire zoom range but only at high zooms. In addition, based on the amplitude of the retrieved electric field, we can also see the extent of overlap of the apertures. For example, from the defocus intensities in Fig. 8, the algorithm was able to retrieve the back aperture amplitude shown in Fig. 9(b). Although the effective NA for this particular acquisition was calculated to be 0.17, the amplitudes were also retrieved with minor variations in the effective NA. We found that estimating the phase requires the exact NA value; the estimated amplitude, on the other hand, could still validate our hypothesis of vignetting in spite of an erroneous input NA (Figs. 9(a) and 9(c)). Even with an erroneous NA, the chopping could still be quantified to be between 84% and 89%.

Fig. 8 Diversity sections, i(x), taken at four symmetrical positions with defocus at (a) z = −36 μm, (b) z = −15 μm, (c) z = +15 μm and (d) z = +36 μm. The objective is a 2x/air PlanApo and the zoom is set at 4.6x. The slice width is fixed at 3 μm and the effective NA was calculated to be 0.17.
Fig. 9 Retrieved back focal pupil amplitude, |P̂(kx, ky, z = 0)|, from the defocus sections in Fig. 8. The algorithm was run with variations in the effective NA (a) 0.05, (b) 0.17, (c) 0.20.

The approach presented here can be thought of as adding a virtual wavefront or interferometric pupil plane sensor, and using it to retrieve the field information. By using fluorescence beads as ‘guide stars’, we can validate the causes of our distortions and quantify them. Although we have demonstrated our algorithm and experiments on a MACROscope, the field distortions studied here are present in any optical system working under low NA and low magnification conditions.

There are many ways to overcome the field distortions. Some of them are listed here:
  1. During acquisition, the FOV can be reduced so that only regions with minimal distortions are imaged. The complete image field can then be reconstructed by mosaicing together two overlapping images taken in sequence.
  2. In a zooming lens, the magnification change with defocus is proportional to the square root of the axial distance between the two lens groups [17]. For manufacturers of zooming microscopes, a possible way to minimize the distortions is to reduce the change in the back focus of the tube lens system.
  3. In [16], the author discusses a method to correct distortions in a telecentric zoom system. The distortion there, as in our case, is characterized by magnification changes with working distance. It is claimed that the distortions can be minimized by adjusting the first or the last optical component of the lens adjacent to the telecentric image or object space.

To avoid the radial distortions that can in turn be produced by such translations, the imaged object is moved in addition to the lens components. However, it is not clear what the defocus contribution of such an optical-element translation to the final output image would be. It is likely that the imaging plane needs to be moved as well, in addition to the optical elements.

Each of the above three methods has its advantages and difficulties. In the first case, the zoom system cannot be used at its full capacity, while the other two are suggestions for redesigning the setup at the optical level. Our future work is therefore aimed at correcting these field distortions computationally, after the images have been acquired.

Acknowledgments

This research was supported by the ANR DIAMOND project (http://www-syscom.univ-mlv.fr/ANRDIAMOND). The authors gratefully acknowledge Dr. Philippe Herbomel (Institut Pasteur, France) and Dr. Didier Hentsch (Imaging Center, IGBMC, Strasbourg) for making their MACROscope setups available to us. We also thank Dr. Gilbert Engler (INRA Sophia Antipolis, France) and Dr. Peter Kner (University of Georgia, Athens, GA, USA) for the interesting discussions.

References and links

1. P. Sendrowski and C. Kress, “Arrangement for analyzing microscopic and macroscopic preparations,” WO 2009/04711 (2009). PCT/EP2008/062749.
2. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21, 2758–2769 (1982).
3. R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of the phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).
4. M. Born and E. Wolf, Principles of Optics (Cambridge University Press, 1999).
5. P. A. Stokseth, “Properties of a defocused optical system,” J. Opt. Soc. Am. A 59, 1314–1321 (1969).
6. P. Pankajakshan, Z. Kam, A. Dieterlen, G. Engler, L. Blanc-Féraud, J. Zerubia, and J.-C. Olivo-Marin, “Point-spread function model for fluorescence macroscopy imaging,” in Proc. of Asilomar Conference on Signals, Systems and Computers (2010), 1364–1368.
7. L. Sherman, J. Y. Ye, O. Albert, and T. B. Norris, “Adaptive correction of depth-induced aberrations in multiphoton scanning microscopy using a deformable mirror,” J. Microsc. 206, 65–71 (2002).
8. M. J. Booth, M. A. Neil, R. Juškaitis, and T. Wilson, “Adaptive aberration correction in a confocal microscope,” Proc. Natl. Acad. Sci. USA 99, 5788–5792 (2002).
9. Z. Kam, P. Kner, D. Agard, and J. W. Sedat, “Modelling the application of adaptive optics to wide-field microscope live imaging,” J. Microsc. 226, 33–42 (2007).
10. M. J. Booth, “Adaptive optics in microscopy,” Philos. Transact. A Math. Phys. Eng. Sci. 365, 2829–2843 (2007).
11. R. Juškaitis and T. Wilson, “The measurement of the amplitude point spread function of microscope objective lenses,” J. Microsc. 189, 8–11 (1998).
12. P. Pankajakshan, A. Dieterlen, G. Engler, Z. Kam, L. Blanc-Feraud, J. Zerubia, and J.-C. Olivo-Marin, “Wavefront sensing for aberration modeling in fluorescence macroscopy,” in Proc. IEEE International Symposium on Biomedical Imaging (ISBI) (IEEE, Chicago, USA, 2011).
13. P. Pankajakshan, “Blind Deconvolution for Confocal Laser Scanning Microscopy,” Ph.D. thesis, Université de Nice Sophia-Antipolis (2009).
14. T. J. Holmes, D. Biggs, and A. Abu-Tarif, “Blind Deconvolution,” in Handbook of Biological Confocal Microscopy, 3rd ed., J. B. Pawley, ed. (Springer, New York, 2006), Chap. 24, pp. 468–487.
15. B. M. Hanser, M. G. Gustafsson, D. A. Agard, and J. W. Sedat, “Phase retrieval for high-numerical-aperture optical systems,” Opt. Lett. 28, 801–803 (2003).
16. J. E. Webb, “Distortion tuning of quasi-telecentric lens,” US Patent 7646543 (2010).
17. J. Winterot and T. Kaufhold, “Optical arrangement and method for the imaging of depth-structured objects,” US Patent 7564620 (2009).

OCIS Codes
(100.3190) Image processing : Inverse problems
(100.5070) Image processing : Phase retrieval
(100.6890) Image processing : Three-dimensional image processing
(180.2520) Microscopy : Fluorescence microscopy

ToC Category:
Image Processing

History
Original Manuscript: February 3, 2012
Revised Manuscript: March 30, 2012
Manuscript Accepted: April 3, 2012
Published: April 16, 2012

Virtual Issues
Vol. 7, Iss. 6 Virtual Journal for Biomedical Optics

Citation
Praveen Pankajakshan, Zvi Kam, Alain Dieterlen, and Jean-Christophe Olivo-Marin, "Characterizing the 3-D field distortions in low numerical aperture fluorescence zooming microscope," Opt. Express 20, 9876-9889 (2012)
http://www.opticsinfobase.org/vjbo/abstract.cfm?URI=oe-20-9-9876



References

  1. P. Sendrowski and C. Kress, “Arrangement for analyzing microscopic and macroscopic preparations,” WO 2009/04711 (2009). PCT/EP2008/062749.
  2. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21, 2758–2769 (1982).
  3. R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of the phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).
  4. M. Born and E. Wolf, Principles of Optics (Cambridge University Press, 1999).
  5. P. A. Stokseth, “Properties of a defocused optical system,” J. Opt. Soc. Am. 59, 1314–1321 (1969).
  6. P. Pankajakshan, Z. Kam, A. Dieterlen, G. Engler, L. Blanc-Féraud, J. Zerubia, and J.-C. Olivo-Marin, “Point-spread function model for fluorescence macroscopy imaging,” in Proc. Asilomar Conference on Signals, Systems and Computers (2010), pp. 1364–1368.
  7. L. Sherman, J. Y. Ye, O. Albert, and T. B. Norris, “Adaptive correction of depth-induced aberrations in multiphoton scanning microscopy using a deformable mirror,” J. Microsc. 206, 65–71 (2002).
  8. M. J. Booth, M. A. Neil, R. Juškaitis, and T. Wilson, “Adaptive aberration correction in a confocal microscope,” Proc. Natl. Acad. Sci. USA 99, 5788–5792 (2002).
  9. Z. Kam, P. Kner, D. Agard, and J. W. Sedat, “Modelling the application of adaptive optics to wide-field microscope live imaging,” J. Microsc. 226, 33–42 (2007).
  10. M. J. Booth, “Adaptive optics in microscopy,” Philos. Transact. A Math. Phys. Eng. Sci. 365, 2829–2843 (2007).
  11. R. Juškaitis and T. Wilson, “The measurement of the amplitude point spread function of microscope objective lenses,” J. Microsc. 189, 8–11 (1998).
  12. P. Pankajakshan, A. Dieterlen, G. Engler, Z. Kam, L. Blanc-Féraud, J. Zerubia, and J.-C. Olivo-Marin, “Wavefront sensing for aberration modeling in fluorescence macroscopy,” in Proc. IEEE International Symposium on Biomedical Imaging (ISBI) (IEEE, Chicago, USA, 2011).
  13. P. Pankajakshan, “Blind Deconvolution for Confocal Laser Scanning Microscopy,” Ph.D. thesis, Université de Nice Sophia-Antipolis (2009).
  14. T. J. Holmes, D. Biggs, and A. Abu-Tarif, “Blind deconvolution,” in Handbook of Biological Confocal Microscopy, 3rd ed., J. B. Pawley, ed. (Springer, New York, 2006), Chap. 24, pp. 468–487.
  15. B. M. Hanser, M. G. Gustafsson, D. A. Agard, and J. W. Sedat, “Phase retrieval for high-numerical-aperture optical systems,” Opt. Lett. 28, 801–803 (2003).
  16. J. E. Webb, “Distortion tuning of quasi-telecentric lens,” US Patent 7646543 (2010).
  17. J. Winterot and T. Kaufhold, “Optical arrangement and method for the imaging of depth-structured objects,” US Patent 7564620 (2009).


Supplementary Material


Media 1: AVI (23072 KB)
