High-resolution, wide-field object reconstruction with synthetic aperture Fourier holographic optical microscopy

Timothy R. Hillman, Thomas Gutzler, Sergey A. Alexandrov, and David D. Sampson

Optics Express, Vol. 17, Issue 10, pp. 7873-7892 (2009)
http://dx.doi.org/10.1364/OE.17.007873

Abstract

We utilize synthetic-aperture Fourier holographic microscopy to resolve micrometer-scale microstructure over millimeter-scale fields of view. Multiple holograms are recorded, each registering a different, limited region of the sample object’s Fourier spectrum. They are “stitched together” to generate the synthetic aperture. A low-numerical-aperture (NA) objective lens provides the wide field of view, and the additional advantages of a long working distance, no immersion fluids, and an inexpensive, simple optical system. Following the first theoretical treatment of the technique, we present images of a microchip target derived from an annular synthetic aperture (NA = 0.61) whose area is 15 times that due to a single hologram (NA = 0.13); they exhibit a corresponding qualitative improvement. We demonstrate that a high-quality reconstruction may be obtained from a limited sub-region of Fourier space, if the object’s structural information is concentrated there.

© 2009 Optical Society of America

1. Introduction

Synthetic aperture Fourier holographic optical microscopy [1,2] has recently been proposed for the wide-field characterization of sample microstructure. It has been suggested as a means for overcoming a significant problem in high-resolution optical microscopy: the severe system and sample-preparation constraints when high-numerical-aperture (NA) optics are utilized. One of the most significant of these is the limited objective field of view, a shortcoming which may necessitate the recording and tiling of multiple images of the target, which is common, for example, in histopathology.

Our proposed technique utilizes a low-NA optical system, whose resolving power would not be sufficient to characterize the samples of interest if conventional imaging methods were used. It shares this characteristic with two alternative techniques pioneered by our research group for addressing the same problem: spatially resolved angular scattering spectroscopy [2–4], and optical spectral encoding [5].

The current technique involves constructing a high-resolution image by sequentially combining multiple recorded holograms. Unlike the image-tiling approach, each hologram provides information about the entire sample, specific to a particular, limited region of its Fourier spectrum. The Fourier spectra are then superposed to generate a large synthetic aperture, from which the high-quality image reconstruction can be obtained.

A high-resolution, wide-field reconstructed image is characterized by a large space-bandwidth product. For a detector with a given pixel count, equal numbers of holographic exposures would be required to generate it whether recording was performed in the Fourier domain (as in our approach), or the direct-image domain. However, our method has a number of in-principle advantages over alternative direct-imaging techniques. A reconstructed image is generated with resolution that can vastly exceed the limit of the inexpensive, low-NA, conventional microscopic imaging system used to acquire it. The long working distances thus permitted, and the elimination of the need for fluid immersion at high NA, lessen the restrictions on the sample type, and its preparation requirements. Moreover, given a priori sample structural information, specific regions of its Fourier spectrum with high information density may be targeted, allowing high-quality, wide-field reconstructions to be rapidly generated, despite the fact that other Fourier-spectral regions may be excluded.

Our technique combines the principles of synthetic aperture and digital holographic microscopy. The synthetic aperture concept was first conceived of by Ryle in relation to radio telescopy [6]. Gabor’s invention of holography [7] allowed for complex amplitude profiles of propagating optical wavefields to be registered. The ability to represent holographic recordings digitally, and thereby perform image reconstructions numerically [8,9], greatly simplifies the task of applying image-processing operations to the holograms prior to reconstruction. Naturally, digital holography has benefited immensely from the continuous advances in modern personal computer power, and the recent advent of high-pixel-count, high-sensitivity array detectors.

In the field of microscopy, digital holography has been applied to the quantitative phase (and amplitude) measurements of samples [10–12]. The synthetic aperture principle has been previously applied to digital holography. Approaches have included recording multiple images after: translating the recording camera in order to capture a greater portion of the sample wave [13–15]; moving a variable-position spatial filter in the Fourier plane [16]; or tilting the sample [17]. Reference [18] reports using oblique object illumination to achieve an improvement in system NA, from 0.59 to 0.78.

The technique “imaging interferometric microscopy” [26,27] operates in transmission mode with highly off-axis illumination. The undiffracted light and a narrow cone of diffracted light are collected separately and recombined in a direct partial image whose resolution approaches the optical limit. Multiple such partial images, constituting different object spatial frequency components, are combined to form the complete image.

Reference [28] presented a theoretical description of a holographic approach for sequentially capturing regions of the three-dimensional (3D) Fourier spectrum of an object. The sample was to be placed on a rotating stage, and backscattered light collected for multiple different illumination/detection angle pairs.

The principal advantage of our synthetic aperture approach over the other reported methods is its ability to effectively synthesize the complex reflectance profile of a sample object over both a large range of spatial frequencies and a wide field of view, with a simple optical system. In the current implementation, two-dimensional (2D) objects are imaged in reflection mode; the CCD detection array is placed in a plane conjugate to the optical Fourier plane of the object. The range of captured object 2D spatial frequencies in each recording depends on the illumination conditions and the detector position.

In this paper, we provide a brief theoretical presentation of our technique, including a sequential treatment of the operations required to process each individual recorded hologram, which includes digital phase-distribution correction. It is then necessary to pairwise align and phase-match successive recorded holograms in order to synthesize the large aperture. When holograms are recorded at multiple illumination angles for 3D (as opposed to planar) objects, the issue of decorrelation is introduced to the process of constructing a synthetic aperture. This can be explained in terms of the range of three-dimensional spatial frequencies accessed by each recording. We quantify this effect, highlighting a recording scheme for which it is minimally problematic.

The procedure is elucidated by direct experimental demonstration, using a microprocessor target that contains high-resolution scattering and diffraction components. The improvement in reconstruction quality as successively greater numbers of holograms are synthesized is explicitly depicted. Our concluding discussion emphasizes anticipated future developments of the technique.

2. Theory

2.1. Fundamentals of the technique

Our technique was introduced in Ref. [1]; we reiterate and explicate its operating principles here. For the off-axis holographic approach, both “sample” and “reference” optical wavefields (waves), derived from the same highly coherent optical source, are incident upon the recording plane; the hologram is their (intensity) interference pattern. The sample wave has been back-scattered or back-reflected from the object of interest, prior to undergoing an optical Fourier transform operation, and the reference wave is an off-axis plane wave.

Fig. 1. (a) Depiction of the four spatial domains of the sample wave path from the input to the output plane. The illumination-wave (IW) and reference-wave (RW) polar angles, θi and θr, respectively, are shown; (b),(c) Definition of the coordinate-system and Fourier/inverse-Fourier transform conventions adopted in the paper. The illumination and reference-wave azimuthal angles, ϕi and ϕr, respectively, are displayed.

In order to describe the input and output field distributions, it is necessary to establish some notations and conventions to be adopted in the remainder of the paper. We describe the scalar optical field at each point in space with a complex amplitude (phasor), suppressing the time-dependent factor exp(−j2πνt), where ν represents optical frequency, and t time. We further prescribe that the Fourier transform operation, ℱ, be applied to the complex amplitude distributions in the input plane (and its respective conjugates), but the inverse Fourier transform operation, ℱ⁻¹, be applied to those in the output plane (and its conjugates). The spatial frequency variables are defined accordingly:

$\mathcal{F}: V_1(\xi,\eta) \rightarrow \mathcal{V}_1(\nu_\xi,\nu_\eta), \qquad \mathcal{F}^{-1}: U_4(x,y) \rightarrow \mathcal{U}_4(\nu_x,\nu_y).$
(1)

These relations are illustrated in Fig. 1(b) and (c), respectively. The Fourier transform operation, acting from domain (ξ, η) to domain (νξ, νη), is defined:

$\mathcal{V}_1(\nu_\xi,\nu_\eta) \equiv \mathcal{F}[V_1(\xi,\eta)] = \iint_{-\infty}^{\infty} V_1(\xi,\eta)\,\exp\!\left[-j2\pi(\xi\nu_\xi + \eta\nu_\eta)\right]d\xi\,d\eta.$
(2)

For the optical system of Fig. 1(a), under the Fresnel approximation the output-plane (Plane 4) distribution is a scaled replica of this Fourier transform of the input-plane (Plane 1) distribution:

$U_4(x,y) = \frac{j}{M}\,\mathcal{V}_1\!\left(\frac{x}{M},\,-\frac{y}{M}\right),$
(3)

where M = λf1f3/f2 is a scaling constant with units of squared length, λ is the optical wavelength, and f1, f2, and f3 are the focal lengths of lenses L1, L2, and L3, respectively.

The digital CCD recording array is placed in the output plane, centered on-axis. The effect of its limited size is, thus, to perform the action of a low-pass transfer function upon the Fourier spectrum of the input plane distribution. We assume that the object may be described by an amplitude reflectance function, r(ξ,η), the distribution we wish to extract. If the object is illuminated by a wavefield described by the complex amplitude distribution Ai(ξ, η), then the complex amplitude distribution in Plane 1 is given by

$V_1(\xi,\eta) = r(\xi,\eta)\,A_i(\xi,\eta).$
(4)

We assume that the illumination wave is plane, described by polar angle θi with respect to the optical axis and azimuthal angle ϕi with respect to the lateral coordinate system. Then Ai(ξ,η) = A0 exp[−j2π(γξξ + γηη)], where A0 is, in general, a complex constant, γξ² + γη² = (sin θi/λ)², and γη/γξ = tan ϕi. The effect of this wave is to impart a phase ramp to the distribution r(ξ,η), effectively shifting a “bandpass” range of spatial frequencies to “baseband”. That is, if the output-plane rectangular recording area is defined by the region |x| ≤ L/2, |y| ≤ H/2 (i.e., its dimensions are L × H), then the range of object spatial frequencies (νξ, νη) accessible to the recording is defined by the inequalities:

$\left|\nu_\xi - \gamma_\xi\right| \le \frac{L}{2M}, \qquad \left|\nu_\eta - \gamma_\eta\right| \le \frac{H}{2M}.$
(5)

Equation (5) is foundational to the synthetic aperture technique, demonstrating the dependence of the detectable spatial-frequency range upon the parameters λ, θi, and ϕi (through the auxiliary variables M, γξ, and γη). That is, by recording multiple holograms under different illumination-wave conditions, multiple regions of the object’s Fourier spectrum can be acquired and combined to generate high-resolution reconstructions. The illumination-wave directions θi, ϕi are illustrated in Fig. 1(a),(b). The equivalent parameters for the plane reference wave, θr and ϕr, are also displayed (in parts (a),(c)).
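
As a concrete illustration of Eq. (5), the short Python/NumPy sketch below evaluates the carrier frequency (γξ, γη) and the half-widths of the accessible spatial-frequency window for a given illumination direction and detector size. It is a minimal sketch rather than the authors' processing code; the numerical values are those quoted in Section 3, and the function name is illustrative.

```python
import numpy as np

def accessible_band(wavelength, theta_i, phi_i, L, H, f1, f2, f3):
    """Carrier frequency and half-widths of the Eq.-(5) window.

    All lengths are in metres and angles in radians."""
    M = wavelength * f1 * f3 / f2            # scaling constant (units of length^2)
    gamma = np.sin(theta_i) / wavelength     # modulus of the carrier frequency
    gamma_xi = gamma * np.cos(phi_i)
    gamma_eta = gamma * np.sin(phi_i)
    half_width_xi = L / (2.0 * M)            # |nu_xi  - gamma_xi|  <= L/(2M)
    half_width_eta = H / (2.0 * M)           # |nu_eta - gamma_eta| <= H/(2M)
    return (gamma_xi, gamma_eta), (half_width_xi, half_width_eta)

# Parameters reported in Section 3: lambda = 632.8 nm, theta_i = 62 deg,
# f1, f2, f3 = 40, 150, 400 mm, CCD active area 36 mm x 24 mm.
centre, half_widths = accessible_band(632.8e-9, np.deg2rad(62.0), 0.0,
                                      36e-3, 24e-3, 40e-3, 150e-3, 400e-3)
print(centre)        # in cycles/m: carrier modulus ~1.4e6 (i.e. ~1.4 per um)
print(half_widths)   # ~2.7e5 and ~1.8e5 cycles/m (~0.27 and ~0.18 per um)
```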

If the detector is not limited to the on-axis position, then, for given illumination conditions, regions of the object’s Fourier spectrum beyond the limited range of Eq. (5) may be accessed. Naturally, for object scattering or diffraction at large off-axis angles, the Fresnel approximation is no longer applicable, so Eq. (3) is not valid. This is evinced by the fact that the maximum spatial frequencies of V1(ξ, η) which can be detected in the “far field” are not infinite, no matter how far the recording plane is “extended”. Instead, they correspond to scattered propagating waves near-orthogonal to the optical axis [36], and have modulus equal to 1/λ. Thus, the detectable object spatial frequencies (νξ, νη) are those which satisfy the inequality:

$\left|(\nu_\xi,\nu_\eta) - (\gamma_\xi,\gamma_\eta)\right| < 1/\lambda.$
(6)

Since the carrier frequency (γξ, γη) is also restricted to values having modulus less than 1/λ, the object spatial frequencies which are accessible to the synthetic aperture approach, when the illumination and detection angles are allowed to vary freely in all reflection configurations, are those whose modulus is less than 2/λ. Considering scattering within the plane of incidence, as depicted in Fig. 2(a), the detected object spatial frequency corresponding to the plane-of-incidence illumination/detection-angle pair (θi, θd) is equal to:

$\nu_s = \frac{\sin\theta_i - \sin\theta_d}{\lambda}.$
(7)

(The quantity θd may range from −π/2 to π/2.) We make the observation here that the effect of varying the detector position can be simulated by keeping it fixed on-axis and tilting the object instead [17,26]. However, for the purposes of this paper, an on-axis detector configuration is used exclusively.

Part (b) of Fig. 2 depicts the “range of support”, or accessible object spatial frequencies, for both the on-axis and unrestricted detector configurations, and part (c) of the figure elucidates the Eq.-(5) relations.

By recording multiple holograms, a synthetic coherent transfer function (CTF) for the microscopic imaging system can be constructed; it corresponds to the near-spatially-invariant linear system relating the object structure to the output-plane complex amplitude distribution. Ideally, aberration and apodisation effects can be ignored or corrected for, so that the CTF magnitude and phase will be near-constant over the region of support.

The ability of off-axis holography to obtain the sample-wave complex amplitude distribution incident upon the detector is, of course, well known. An analysis appropriate to our optical setup has been provided in Ref. [4]; we do not repeat it here. We merely note that the accessible object field of view is restricted by the presence of the complex-conjugate “twin image” in the reconstruction. The size of the rectangular field stop must be chosen to avoid overlap between the two first-order image terms. (This constraint can be interpreted as a limitation in encoding a complex amplitude distribution into a real interference signal.) In order to avail the complex reconstruction of the full pixel count of the detector, phase stepping the reference arm should be introduced in order to remove the complex ambiguity. We intend to incorporate this ability into future implementations of our technique.

2.2. Procedure for optimizing individual-hologram reconstruction

Defocus by an axial distance Δz may be modeled by convolution with a free-space propagation kernel kΔz(ξ,η); under the Fresnel approximation, its (inverse-)Fourier transform is the quadratic phase factor:

$\mathcal{K}_{\Delta z}(\nu_\xi,\nu_\eta) \equiv \mathcal{F}[k_{\Delta z}(\xi,\eta)] = \exp(j2\pi\Delta z/\lambda)\,\exp\!\left[-j\pi\lambda\Delta z\left(\nu_\xi^2+\nu_\eta^2\right)\right].$
(8)
Fig. 2. (a) Depiction of the illumination wave, and scattered or diffracted waves in the plane of incidence. An off-axis detection solid angle is also shown; (b) Regions depicting the accessible spatial frequencies for different coherent imaging systems. Inner region (purple boundary): Circular range covered by a single 0.75-NA lens (for a conventional coherent imaging system); Region 2 (red solid boundary): Accessible region for the coordinates (γξ, γη); Region 3 (red dotted boundary): Range of spatial frequencies accessible to the synthetic aperture, if the rectangular detector is located on-axis; Outer region (green solid boundary): Range of accessible spatial frequencies when the on-axis restriction is removed; (c) Upper-right quadrant: The rectangular range of spatial frequencies accessible to a single holographic recording. The dependence of this region on the illumination-wave parameters is demonstrated in the other three quadrants.
Fig. 3. Effect of defocusing in object and reconstruction planes. Both quantities g and d, as displayed, are positive.

In Fig. 3(a), we maintain the convention that Planes 1 and 4 be situated in the respective focal planes of lenses L1 and L3. Now, we allow the object and the recording planes to be slightly defocused from these positions, as indicated, with respective complex amplitude distributions Vobj(ξ,η) and Urec(x,y). (The distances d and g, as depicted, are positive.) Then

$U_{\mathrm{rec}}(x,y) = \frac{j}{M}\,\mathcal{F}\left\{\mathcal{K}_d(\nu_x,\nu_y)\,\mathcal{F}^{-1}\!\left[\mathcal{V}_{\mathrm{obj}}\!\left(\frac{x}{M},-\frac{y}{M}\right)\mathcal{K}_g\!\left(\frac{x}{M},\frac{y}{M}\right)\right]\right\},$
(9)

where 𝒱obj = ℱ(Vobj). In writing Eq. (9), we have noted that ℱ⁻¹[kz(x,y)] = 𝓚z(νx, νy), that is, the Fourier transform of kz is equal to its inverse Fourier transform.

If the quantities d,g are known, then Eq. (9) indicates the procedure for obtaining 𝒱obj, and thus V obj, from the recording plane distribution U rec(x,y). That is, it is necessary to sequentially invert the nested sequence of operations on the right-hand side of the equation. As indicated by Eq. (8), defocus-correction in a given plane is achieved by multiplying the Fourier or inverse-Fourier transformed distribution by a circularly symmetric quadratic phase factor.
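
As a minimal numerical sketch of this inversion step (not the authors' implementation, which used Matlab), the Python/NumPy fragment below divides out a recording-plane defocus 𝓚d in the conjugate domain. It assumes Urec is available as a complex array sampled on the detector grid, with the discrete FFT standing in for the continuous operators of Eqs. (1)–(2); all names are illustrative.

```python
import numpy as np

def fresnel_tf(nu_x, nu_y, dz, wavelength):
    """Quadratic phase factor of Eq. (8) for a defocus distance dz."""
    return np.exp(1j * 2 * np.pi * dz / wavelength) \
         * np.exp(-1j * np.pi * wavelength * dz * (nu_x**2 + nu_y**2))

def remove_recording_plane_defocus(U_rec, pixel_pitch, d, wavelength):
    """Undo a recording-plane defocus d by dividing out K_d in the
    (nu_x, nu_y) domain, as required when inverting Eq. (9)."""
    ny, nx = U_rec.shape
    nu_x, nu_y = np.meshgrid(np.fft.fftfreq(nx, pixel_pitch),
                             np.fft.fftfreq(ny, pixel_pitch))
    spectrum = np.fft.ifft2(U_rec)                      # inverse transform of the output plane
    spectrum /= fresnel_tf(nu_x, nu_y, d, wavelength)   # unit-modulus factor, so division is exact
    return np.fft.fft2(spectrum)                        # defocus-corrected Plane-4 distribution
```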

We briefly consider the effects of defocusing on the quality of the reconstruction. Clearly, it is a “lossless” process, granted the ability of holography to describe a propagating wave-field based on the complex amplitude distribution recorded in any transverse plane. Recording-plane defocusing results in the merging of sample Fourier components, thereby increasing the Fourier-spectral range captured within each exposure. (The range includes components corresponding to the region immediately exterior to the L × H bounding rectangle in Plane 4.) The trade-off is the simultaneous reduction in the object’s field of view. This can be understood through the influence of the quadratic phase factor 𝓚d(νx, νy), which exhibits large local spatial frequencies when its arguments take on high values, ultimately exceeding the Nyquist limit set by the sampling rate of the (νx, νy)-distribution, which in turn is determined by the finite size of the recording-plane detector. We impose the restriction that the local spatial frequencies be much less than this Nyquist limit, over the entire object field of view. If D4 is a representative “maximum diameter” of the recording-plane detection region (i.e., D4 ≳ L, H), and D1 is a representative diameter of the object-plane field of view, then we obtain the following inequality for d (and similarly, for g):

$d \ll \frac{M D_4}{\lambda D_1}; \qquad g \ll \frac{M D_1}{\lambda D_4}.$
(10)

The advantages of defocusing when capturing a sample diffraction peak can be quantified by comparing the areas of the respective in-focus and defocused spots. Assuming that the exit pupil of the recording-plane optical imaging system is limited by the rectangular field stop (of side-length f2D1/f1), the spot sizes are determined by the diffraction limit and by geometrical optics, respectively. Their effective areas (defined as the ratio of total optical power to peak intensity) are (M/D1)² and (λdD1/M)², respectively. The defocusing dynamic-range advantage is given by the ratio of these quantities, provided the spots corresponding to separate diffraction orders do not overlap. The quantities may also be used to determine the number of detector pixels that sample each spot.

The effects of other corrupting or distorting influences within the optical system may be divided into two categories. In the first category, we consider distortions that are linear and spatially invariant, so that they may be described by the application of a convolution kernel to the complex amplitude distribution. Such distortions include the aforementioned defocus, as well as higher-order optical aberrations. In the second category, we consider distortions that can be represented by the application of a multiplicative phase factor to the distribution. Examples would include non-planarity of either the illumination or reference beams. Importantly, both categories of distortion can be modeled with a multiplicative factor, applied to either the distribution in the plane under consideration, or to its Fourier/inverse-Fourier transform. Thus, the effect of both categories, on both planes, can be absorbed into generalized functions 𝓚′d(νx, νy) and 𝓚′g(νξ, νη), which should be substituted for their non-primed equivalents in Eq. (9).

We note that the incorporation of all optical distortion effects into two functions 𝓚′d and 𝓚′g does not fully address the issue of the order in which these correction operations should be applied. This problem is not so severe as one might suppose because, if the effects are minor, the order in which a large-scale multiplicative factor and a small-scale convolution kernel are applied to a distribution is of negligible consequence. Thus, if we assume that defocus remains the dominant distorting effect, then Eq. (9), incorporating the generalized functions, appropriately describes the optical system. By the same assumption, the inequalities of Eq. (10) remain applicable.

We estimate the functions 𝓚′d, 𝓚′g using polynomial phase factors, restricted to the sixth order for the purposes of this paper. That is,

$\hat{\mathcal{K}}'_d(\nu_x,\nu_y) = \exp\!\left(j\sum_{s=2}^{6}\sum_{n=0}^{s} D_{n,s-n}\,\nu_x^{\,n}\,\nu_y^{\,s-n}\right), \qquad \hat{\mathcal{K}}'_g(\nu_\xi,\nu_\eta) = \exp\!\left(j\sum_{s=2}^{6}\sum_{n=0}^{s} G_{n,s-n}\,\nu_\xi^{\,n}\,\nu_\eta^{\,s-n}\right),$
(11)

where the “hat” notation indicates estimated value. Only second-order and greater terms are considered in the exponent; a constant phase factor (“piston”), or linear phase “ramps” (“tip” and “tilt”), are treated later when combining multiple holograms.

The effect of Eq. (11) is to replace 𝓚d and 𝓚g, each specified by a single independent parameter (d or g), with 𝓚̂′d and 𝓚̂′g, each specified by twenty-five independent parameters (the “Dn,m”s or “Gn,m”s). Since these parameters (with the possible exception of object-plane defocus) are properties of the optical system, not of the sample under investigation, they may, in general, be estimated using a target, or “control”, sample, and the values determined from this approach applied to more general sample choices. For example, in order to estimate 𝓚′d, one might choose to utilize a strongly diffracting, structured sample. Then 𝓚̂′d can be optimized by ensuring that the spots in the Fourier plane corresponding to multiple diffraction orders are as tightly focused as possible. To this end, we invoke a sharpness metric maximization algorithm [38]. In terms of our notation, based on Eq. (9), the estimated Plane-4 distribution is:

$\hat{U}_4(x,y) = \mathcal{F}\left\{\left[\hat{\mathcal{K}}'_d(\nu_x,\nu_y)\right]^{-1}\mathcal{F}^{-1}\!\left[U_{\mathrm{rec}}(x,y)\right]\right\}.$
(12)

Because the function 𝓚̂′d has modulus 1, by Parseval’s theorem the integral $\iint_{-\infty}^{\infty}|\hat{U}_4(x,y)|^2\,dx\,dy$ does not depend on it (conservation of total power). However, the integral $M_d = \iint_{-\infty}^{\infty}|\hat{U}_4(x,y)|^4\,dx\,dy$ will be greatest when the optical power is concentrated into a few sharp, bright points. Thus, maximizing this quantity will ensure that the focused peaks are maximally distinct from the background.

Maximization was achieved by utilizing a conjugate-gradient routine [38], as implemented in Matlab [39]. In practice, we parametrized the polynomials presented in Eq. (11) using a Legendre-polynomial expansion. The orthogonality of the basis functions over the rectangular arrays we utilize ensured minimal interdependence between the separate parameters in optimizing Md. The optimization procedure was initially applied to the lowest-order parameters alone, with the estimated results being used to initialize the routine as higher-order parameters were cumulatively incorporated in successive iterations. Indeed, the lower-order parameters (second and third) had the greatest influence in maximizing Md; higher-order parameters were fitted with diminishing significance and robustness. Nonetheless, their increasingly marginal impact was beneficial, which justified their inclusion.
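
A simplified sketch of this optimization loop is given below (Python/NumPy, with SciPy's conjugate-gradient minimizer standing in for the Matlab routine of Ref. [39]). It retains only a low-order separable Legendre parametrization of the phase estimate, and all function and variable names are illustrative.

```python
import numpy as np
from numpy.polynomial import legendre
from scipy.optimize import minimize

def sharpness_metric(coeffs, spectrum, u, v):
    """M_d of the text: the sum of |U_4|^4 after applying a trial phase estimate."""
    n = int(np.sqrt(coeffs.size))
    phase = legendre.legval2d(u, v, coeffs.reshape(n, n))   # separable Legendre surface
    U4 = np.fft.fft2(spectrum * np.exp(-1j * phase))        # cf. Eq. (12)
    return np.sum(np.abs(U4)**4)

def estimate_phase_coefficients(U_rec, order=2):
    """Maximize M_d over low-order coefficients (by minimizing its negative)."""
    ny, nx = U_rec.shape
    # Normalized coordinates on [-1, 1], over which the Legendre basis is orthogonal.
    u, v = np.meshgrid(np.linspace(-1, 1, nx), np.linspace(-1, 1, ny))
    spectrum = np.fft.ifft2(U_rec)
    x0 = np.zeros((order + 1)**2)
    result = minimize(lambda c: -sharpness_metric(c, spectrum, u, v),
                      x0, method='CG')                      # conjugate-gradient search
    return result.x
```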

Once 𝓚̂′d has been determined, the coefficients of 𝓚̂′g can be determined in a similar way, utilizing a metric Mg. (This assumes, of course, that the target object is sufficiently structured so that maximizing the image contrast is equivalent to bringing it into optimum focus.)

Inversion of Eq. (9), utilizing the primed operators, yields

$\hat{\mathcal{V}}_{\mathrm{obj}}\!\left(\frac{x}{M},-\frac{y}{M}\right) = \left[\hat{\mathcal{K}}'_g\!\left(\frac{x}{M},\frac{y}{M}\right)\right]^{-1}\mathcal{F}\left\{\left[\hat{\mathcal{K}}'_d(\nu_x,\nu_y)\right]^{-1}\mathcal{F}^{-1}\!\left[-jM\,U_{\mathrm{rec}}(x,y)\right]\right\},$
(13)

that is, a particular region of the object’s Fourier spectrum can be acquired.

2.3. Correlation between separately recorded holograms

It is necessary, in forming the synthetic aperture, to seamlessly “stitch” together the multiple regions 𝒱obj acquired from Eq. (13). A convenient way to achieve this is by ensuring that successive recorded holograms access partially overlapping regions of the object’s Fourier spectrum, so that they can be accurately aligned, and phase errors between them corrected. The purpose of the current sub-section is to describe circumstances under which this task is confounded by the fact that the ostensibly overlapping regions from separate holograms are uncorrelated. This will occur when the sample structure is not limited to a 2D reflecting plane, but instead, must be described as a 3D scattering distribution.

Our spatial-frequency analysis must be extended to three dimensions, also. If we assume that the object is a weak elastic scatterer, then its interaction with the illumination wave may be described in terms of the first Born approximation [29, Sub-section 13.1.2]. Under this assumption, each pair of illumination and detection wavevectors corresponds to a particular 3D spatial frequency component of the sample. That is, if the sample is illuminated by a monochromatic plane wave with wavevector k0, then the plane-wave component of the scattered light with wavevector k has complex amplitude proportional to the sample’s 3D angular-spatial-frequency component at K, where K = k − k0. Each such component will correspond to a point of the far-field complex amplitude distribution, or, alternatively, to a point in the Fourier-plane complex amplitude distribution.
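
The K = k − k0 mapping can be written directly in spatial-frequency units (cycles per unit length). The sketch below is a small illustrative helper, under one possible sign convention for the reflection geometry (the illumination wave travels towards the sample, the detected wave away from it); it is not taken from the paper, and the names are hypothetical.

```python
import numpy as np

def probed_spatial_frequency(theta_i, phi_i, theta_d, phi_d, wavelength):
    """3D object spatial frequency probed by one illumination/detection pair,
    i.e. K/(2*pi) with K = k - k0 (first Born approximation).  Angles in radians."""
    s_illum = np.array([np.sin(theta_i) * np.cos(phi_i),
                        np.sin(theta_i) * np.sin(phi_i),
                        -np.cos(theta_i)])            # unit direction of k0 (towards the sample)
    s_detect = np.array([np.sin(theta_d) * np.cos(phi_d),
                         np.sin(theta_d) * np.sin(phi_d),
                         np.cos(theta_d)])            # unit direction of k (back towards the detector)
    return (s_detect - s_illum) / wavelength          # tip lies on an Ewald sphere of radius 1/lambda

# On-axis detection with theta_i = 62 deg and lambda = 0.6328 um:
print(probed_spatial_frequency(np.deg2rad(62.0), 0.0, 0.0, 0.0, 0.6328))
# transverse component ~1.4 um^-1 in magnitude, axial component ~2.3 um^-1
```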

For a fixed k0, the locus of points corresponding to the tip of the vector K is a sphere, of radius k = |k0|, known as Ewald’s sphere of reflection [29, p. 701]. The union of all such spheres is the surface and interior of another sphere, the Ewald limiting sphere, which has radius 2k and is centered at the origin.

Our current holographic system, which operates in reflection mode, has restricted access to one hemisphere of the Ewald limiting sphere (and its interior). We are further limited to near-on-axis detection, which means that each hologram records information corresponding to a cap at the “apex” of one Ewald sphere [40]. This point is illustrated in Fig. 4. For consistency with the remainder of this paper, our depiction is in spatial frequency space, as opposed to angular spatial frequency space.

Each sample object should now be described in terms of its 3D scattering potential FS(ξ, η, z) [29, p. 696], or its Fourier spectrum ℱS(νξ, νη, νz) = ℱ(FS).

We present an idealized expression for the autocorrelation function of FS, which is:

$\Gamma_{F_S}(\xi,\eta,z;\xi',\eta',z') \equiv \overline{F_S(\xi,\eta,z)\,F_S^{*}(\xi',\eta',z')} = \delta(\xi-\xi',\,\eta-\eta',\,z-z')\,f_T(\xi,\eta)\,f_A(z).$
(14)

In the given form, the scattering/reflectance variations within the sample are assumed to be so rapid that they are delta-correlated. This is reasonable provided that many independent “correlation volumes” of the solid sample, or “correlation areas” of the rough surface sample, are contained within the field of view of the optical system. (Equation (14), which ostensibly describes a solid sample, may also be interpreted as describing a rough surface.) The positive-valued transverse function fT(ξ, η), and axial function fA(z), define the spatial extent of the sample. Specifically, they give the expected relative intensity reflectance profile of the sample (with appropriate length dimensions to cancel those of the delta function), due to the fact that ΓFS is the expected value of the product of two FS factors (which scale optical fields). For convenience, fT and fA have been chosen to be separable in this manner. The transverse function describes the lateral extent of the imaging area, and the axial function describes the extent of the height variations, which are assumed to be uniformly distributed over the field of view.

Based on ΓFS, we may define the autocorrelation function of ℱS but consider only variations in the νz direction, represented by the term ∆νz. That is,

$\Gamma_{\mathcal{F}_S}(\nu_\xi,\nu_\eta,\nu_z;\Delta\nu_z) \equiv \overline{\mathcal{F}_S(\nu_\xi,\nu_\eta,\nu_z+\Delta\nu_z/2)\,\mathcal{F}_S^{*}(\nu_\xi,\nu_\eta,\nu_z-\Delta\nu_z/2)}.$
(15)

We can then define a normalized correlation function:

$\mu_{\mathcal{F}_S}(\Delta\nu_z) \equiv \frac{\Gamma_{\mathcal{F}_S}(\nu_\xi,\nu_\eta,\nu_z;\Delta\nu_z)}{\Gamma_{\mathcal{F}_S}(\nu_\xi,\nu_\eta,\nu_z;0)} = \frac{\mathcal{F}_A(\Delta\nu_z)}{\mathcal{F}_A(0)},$
(16)

where ℱA(∆νz) = ℱ{fA(z)}, and the final equality follows from direct substitution of Eq. (14) into the defining equation for the Fourier transform. The modulus of the quantity μℱS(∆νz) ranges from 0 to 1, with a value of 1 representing fully correlated complex amplitudes, and a value of 0 fully uncorrelated complex amplitudes. Importantly, it depends only on the parameter ∆νz, the separation in spatial frequency space between the complex amplitudes of interest.

Two holograms recorded at polar illumination angles differing by ∆θi (with λ and the near-on-axis detection geometry held fixed) therefore access overlapping regions of (νξ, νη)-space whose axial separation is (see Fig. 4(a)):

$\Delta\nu_z = \frac{\left|\Delta(\cos\theta_i)\right|}{\lambda} \approx \frac{\sin\theta_i\,\Delta\theta_i}{\lambda}.$
(17)
Fig. 4. (a) Depiction of multiple Ewald hemispheres (as black semicircles) associated with incident wavevectors that lie within the depicted (νξ, νz)-plane. A full Ewald-sphere circle is depicted in magenta, along with its corresponding incident wavevector; the circle passes through the origin. Sphere caps corresponding to a narrow, on-axis detection solid angle are depicted in red. Two are highlighted (in blue and green), corresponding to different polar illumination angles θi. Their projections onto (νξ, νη) -space overlap, yet they are separated in 3D-space by the distance ∆νz; (b) The effect of increasing the wavelength λ on the accessible spatial frequencies in 2D- and 3D-space, if θi, ϕi are held fixed. (c) If λ, θi are held constant, but ϕi is varied over 2π, an annular synthetic aperture can be generated in 2D-space, with negligible 3D decorrelation effect.

The final approximation is valid provided that ∆θi is small.

We shall adopt the assumption (for example, of Ref. [41, Sub-section 4.5.4]) that the sample is a scattering surface with Gaussian height fluctuations, with variance σh². Then the expected axial intensity reflectance function fA(z) will be proportional to the probability density function (pdf) of a zero-mean Gaussian variable with this variance, that is:

$f_A(z) = C_0 \exp\!\left(-\frac{z^2}{2\sigma_h^2}\right),$
(18)

where C0 is a positive constant. Then (cf. [41, Eq. (5–60)]):

$\left|\mu_{\mathcal{F}_S}(\Delta\nu_z)\right| = \mu_{\mathcal{F}_S}(\Delta\nu_z) = \exp\!\left[-2\pi^2\sigma_h^2(\Delta\nu_z)^2\right].$
(19)

We consider an example. If the illumination polar angle θi = 45° and the wavelength λ = 632.8 nm, then the normalized correlation function modulus |μℱS| will not fall below 1/e provided that σh∆νz does not exceed 0.23. If the surface roughness is such that σh/λ = 5, then by Eq. (17), the maximum allowed polar angle deviation between measurements would be about 3.6°.
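
The numbers in this example follow directly from Eqs. (17) and (19); a brief check in Python using the values quoted above:

```python
import numpy as np

wavelength = 632.8e-9                  # m
theta_i = np.deg2rad(45.0)
sigma_h = 5 * wavelength               # rms surface-height fluctuation

# |mu| >= 1/e  <=>  2*pi^2*sigma_h^2*(dnu_z)^2 <= 1, i.e. sigma_h*dnu_z <= 1/(pi*sqrt(2)) ~ 0.23
dnu_z_max = 1.0 / (np.pi * np.sqrt(2.0) * sigma_h)

# Eq. (17): dnu_z ~ sin(theta_i)*dtheta_i/lambda for small dtheta_i
dtheta_i_max = dnu_z_max * wavelength / np.sin(theta_i)
print(np.degrees(dtheta_i_max))        # ~3.6 degrees
```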

Importantly, since Eq. (17) does not depend on illumination azimuthal angle ϕi, an effective method for evading the decorrelation issue is to vary this angle alone between holographic recordings, keeping the polar angle θi fixed. This limits the synthetic CTF to an annular-shaped region (see Fig. 4(c)), but it is the approach we adopt in the current paper. For such a CTF, the synthesized images will resemble those generated using dark-field coherent microscopy. We note finally that virtually all real objects, as opposed to hypothetical, ideal ones such as perfect phase gratings, scatter sufficiently to produce some spectral intensity over all regions of the measured frequency space. Thus, providing the 3D constraint is satisfied, separately recorded, overlapping spatial-frequency spectra should exhibit measurable correlation. For the same reason, images of highly scattering targets will demonstrate a quantitative resolution improvement with increasing synthetic aperture area, no matter what illumination polar angle is chosen.

3. Experimental setup and methodology

The schematic of the experimental setup used to acquire the holograms is depicted in Fig. 5. Light from the 33-mW HeNe laser source (λ = 632.8 nm) was split into sample and reference arms using the beamsplitter B1. A telescope system is used to expand the reference beam. The object is plane-wave illuminated off-axis and its scattered and diffracted light follows the optical path described in Section 2. Lenses L1, L2, and L3 have focal lengths 40 mm, 150 mm, and 400 mm, respectively. The objective (L1) is an infinity-corrected, long-working-distance Mitutoyo Plan Apo 5× lens, with NA = 0.14 and working distance 34 mm. The object is placed on a rotation stage; multiple holograms are recorded by rotating it clockwise in increments of 4°. That is, the illumination conditions were held fixed over the entire sequence of recordings; however, the azimuthal angle ϕi was effectively rotated anti-clockwise in increments of 4° relative to the sample. For this reason, the rectangle of Fig. 2(c) corresponding to the accessible region of the Fourier plane does not maintain the same orientation as the axes shown; instead, it also rotates about the origin. The polar angle of illumination selected was θi = 62°.

Fig. 5. Schematic of optical system, showing reference-arm and sample-arm paths. Multiple optical-ray trajectories are shown in the latter path. B1,2: beam-splitters; M1,2,3: mirrors; L1,2,3: lenses; P: pinhole; RFS: rectangular field stop; CCD: recording array.

The CCD camera is a Redlake MegaPlus II ES 11000 with a Kodak 11-megapixel, 12-bit monochrome imaging sensor. Its pixel size is 9 μm × 9 μm, resulting in an active imaging area of 36 mm × 24 mm (4008 × 2672 pixels). For each rotation angle, three exposures were taken: the hologram, and “reference” and “sample” recordings (each achieved by blocking the beam in the other arm). By subtracting the two last-mentioned recordings from the hologram, the non-interference terms in the reconstruction could be suppressed [4].
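
The subtraction itself is a single array operation; a minimal sketch (Python/NumPy, assuming the three exposures have been loaded as intensity arrays of identical size, with illustrative names):

```python
import numpy as np

def suppress_dc_terms(hologram, reference_only, sample_only):
    """Remove the non-interference terms: the recorded hologram is
    |R|^2 + |S|^2 + R S* + R* S, so subtracting the reference-only and
    sample-only exposures leaves only the interference (cross) terms."""
    return hologram.astype(float) - reference_only.astype(float) - sample_only.astype(float)
```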

The 90 sets of exposures covering a full circle took approximately 45 minutes to acquire, with the most time-consuming operations being the acquisition and saving of the data, rotation of the sample, and the programmed delays necessary to reduce vibration. Some time was spent manually adjusting optical-density filters according to the strength of the detected signal at each illumination angle. We intend to automate this last-mentioned process in future implementations of the technique. Rotation stage wobble, which to first order applies only a linear phase ramp to the object plane complex amplitude distribution, is negligible for our setup. The manufacturer’s (Newport) specifications report an average angular error of about 2.3 μrad over 720° of rotation.

The object that we used in the experiment was an Intel Pentium Pro processing unit, which is highly scattering, like virtually all reflection-mode targets, but is also notable for its regular structure in the image domain, which generates a regular pattern of diffraction peaks in a sub-region of the Fourier spectrum. The last-mentioned features allowed us to perform sharpness metric maximization (as described in Sub-section 2.2) in order to estimate the functions 𝓚′d, 𝓚′g. Based on the assumption that these functions are not sample-dependent, the median values of the Legendre polynomial coefficients obtained from multiple different holograms (illumination angles) were selected as those to be applied globally to the set of 90.

Thus, each hologram could be processed according to Eq. (13), in order to derive a digital representation of one specific region of the object’s complex Fourier spectrum. Each such region, we may assume, will have been corrected for defocus and other corrupting factors, including the recording-plane defocus d, which was deliberately set to 7 cm for the reasons given in Sub-section 2.2. Given that the rectangular field stop in Plane 3 limited the field of view of the object to 2.9 mm × 2.9 mm, and that the measured parameter g was always less than 100 μm, it is readily confirmed that the inequalities of Eq. (10) are easily satisfied. The choice of d enabled Fourier-plane diffraction-limited spots to be sampled by over 40,000 detector pixels instead of merely 7.
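
The pixel counts quoted here can be reproduced from the effective-area expressions of Sub-section 2.2 and the system parameters above; a quick check (Python, illustrative only, not the authors' processing code):

```python
import numpy as np

wavelength = 632.8e-9
f1, f2, f3 = 40e-3, 150e-3, 400e-3
M = wavelength * f1 * f3 / f2                   # scaling constant (m^2)
D1 = 2.9e-3                                     # object field of view set by the field stop
d = 7e-2                                        # deliberate recording-plane defocus
pixel_area = (9e-6)**2                          # 9 um x 9 um CCD pixels

focused_spot = (M / D1)**2                      # diffraction-limited effective area
defocused_spot = (wavelength * d * D1 / M)**2   # geometrical-optics effective area
print(focused_spot / pixel_area)                # ~7 pixels
print(defocused_spot / pixel_area)              # ~4.5e4 pixels ("over 40,000")
```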

To generate the synthetic aperture, it is firstly necessary to rotate the recovered complex amplitude distributions in order to compensate for the 4° offsets between them. Next, they must be translated with respect to each other to achieve alignment in both the Fourier-spectral and object-reconstruction domains. This is equivalent to applying first-order (linear) phase ramps to their respective “transform” domains.

The relative translation between successive (overlapping) Fourier spectra is determined by finding the global peak (maximum modulus) location of the cross-correlation of their (non-negative, real) amplitude distributions. The displacement of the peak from the origin is equal to the displacement between the spectra. Phase distributions are ignored at this stage, since they are affected by the still-uncorrected reconstruction-domain misalignments.
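
A minimal sketch of this step (Python/NumPy, FFT-based cross-correlation of the two magnitude maps; integer-pixel accuracy only, names illustrative):

```python
import numpy as np

def spectral_shift(spec_a, spec_b):
    """Displacement (rows, cols) between two overlapping Fourier-spectral
    regions, from the peak of the cross-correlation of their magnitudes."""
    a, b = np.abs(spec_a), np.abs(spec_b)
    xcorr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b)))
    peak = np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape)
    # Indices beyond the array mid-point correspond to negative displacements.
    return tuple(p if p < s // 2 else p - s for p, s in zip(peak, xcorr.shape))
```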

For highly structured samples, the reconstruction-domain translations can be determined in a similar way. However, since the distributions in this domain should not be identical over the region of overlap, this approach will not be successful for general samples. Instead, alignment may be performed by ensuring the phase difference between the overlapping regions in the Fourier-spectral domain is near-constant. Most importantly, phase ramps should be compensated for.

Before phase ramps can be removed, we allow for the possibility that a residual defocus difference ∆g exists between successive reconstructions. This can be corrected by applying the multiplicative factor 𝓚∆g/2(νξ, νη), from Eq. (8), to one Fourier spectrum, and the factor 𝓚−∆g/2(νξ, νη) to the other. The optimum “relative defocus” parameter ∆g between successive holograms is chosen so that the phase difference between their overlapping Fourier spectra is best approximated by a linear ramp. (Expressing the phase difference as the imaginary argument of an exponential function, this is equivalent to ensuring the modulus of the peak of its inverse Fourier transform is maximized.) Of course, the magnitude and orientation of the phase ramp correspond to the displacement between the reconstructions. Once it has been compensated for, the phase difference between the overlapping regions should be near-constant. One of the holograms should be multiplied by a constant phase factor to set this constant to zero.
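
A sketch of the ramp-and-constant matching for two already-aligned overlapping regions is given below (Python/NumPy). It assumes the overlaps are supplied as complex arrays over their common support and estimates only a whole-pixel ramp; the residual-defocus search over ∆g described above is omitted, and all names are illustrative.

```python
import numpy as np

def match_overlap_phase(overlap_a, overlap_b):
    """Return overlap_b with the linear phase ramp and constant offset that
    best map it onto overlap_a (estimated from the product a * conj(b))."""
    product = overlap_a * np.conj(overlap_b)            # phase = difference of the two phases
    kernel = np.fft.fft2(product)
    peak = np.unravel_index(np.argmax(np.abs(kernel)), kernel.shape)
    ny, nx = product.shape
    ky = np.fft.fftfreq(ny)[peak[0]]                    # ramp frequencies, cycles per sample
    kx = np.fft.fftfreq(nx)[peak[1]]
    yy, xx = np.mgrid[0:ny, 0:nx]
    ramp = 2 * np.pi * (kx * xx + ky * yy)
    constant = np.angle(np.sum(product * np.exp(-1j * ramp)))   # residual constant phase
    return overlap_b * np.exp(1j * (ramp + constant))
```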

Since the holograms are corrected pairwise around the annular synthetic aperture, errors in these processes will inevitably accumulate. The magnitude of these errors can be evaluated by comparing the translation/phase errors associated with the pair of holograms consisting of the first and the final in the sequence. They are, of course, linked by the chain consisting of all the intermediate holograms, but they also overlap in their own right. Any errors between this pair must be corrected for, completing the chain 1 → 2 → 3 → … → final → 1. The residual translation/relative-phase errors associated with the chain must be distributed evenly about it. A final position-dependent phase-correction factor can be applied to the annular synthetic aperture, corresponding to a slowly varying function with a single argument: polar angle. Its functional form should be describable using only a few parameters, in our case its values at integer multiples of π/4, which can be optimized using the sharpness metric maximization approach.

4. Results

Figure 6 presents a brightfield reflection image of the Pentium Pro target. A 4× objective was utilized, with NA = 0.13.

Fig. 6. Brightfield reflection microscope image of the sample target. A selected region of the image is magnified.

The holographic reconstructions of the target are presented in Fig. 7, a movie in which each of the 90 frames corresponds to a single hologram in the set. The left panel indicates the accessible region of the Fourier spectrum by enclosing it with a dark-red dotted line; its shape is defined by the intersection of the rectangular CCD array and the circular aperture of the objective lens. The magnitude of the Fourier spectrum (for this region) is indicated using a linear gray scale. The object reconstruction in the right panel is due to this hologram alone. It is plotted as an optical power distribution, the squared modulus of the object-plane complex amplitude. Also indicated with a faded-red dotted line in the left panel is the accessible region of the following hologram in the sequence. The information contained in this second hologram is not used to generate either the Fourier spectrum or object reconstruction. Instead, a map of the phase difference between the two Fourier-spectral regions, properly translated with respect to each other, is displayed as an inset to the left panel. (A low-pass filter was applied to the result in order to suppress pixel-scale, salt-and-pepper-noise effects.)

Fig. 7. (Media 1) Movie showing the object reconstructions due to the individual recorded holograms. The accessible region of the Fourier spectrum is depicted with a dark-red boundary in the left-hand panel. The faded-red boundary surrounds the equivalent region for the next hologram in the sequence; the phase difference between the two is presented in the inset. A linear grayscale was used for the spectrum and reconstruction, with its “saturation value” (“Max.”, in arbitrary units) invariant over all frames. The blue labels on the color bar refer to the “Phase difference” inset.
Fig. 8. Object reconstruction as the synthetic aperture is built up cumulatively from 45 holograms at 8° intervals. The region marked with a red square is magnified. The regions marked blue and green feature in Fig. 9. (Media 2)

The effect of combining multiple holograms to form a synthetic aperture is shown in the movie of Fig. 8. The aperture is cumulatively constructed from 45 holograms (every second one of the original set), and the object reconstruction is displayed. The “Magnification” region is marked with a red boundary square in the full reconstruction. The improvement in reconstruction quality as the aperture size is increased is most evident in this right-most panel. Not only are new features rendered visible as more holograms are added, but the fine structure of the regular or periodic components of the object is revealed. Of course, the visibility of the large-scale features (relative to the background scattering signal) would be increased, as in Fig. 6, for example, if the low-frequency components of the Fourier spectrum were captured in addition to the annular bandpass components.

Fig. 9. Magnified reconstructions due to different hologram subsets, with the top and bottom rows labelled green and blue corresponding to the similarly colored regions of Fig. 8 (middle panel). The left-most panels show 15 holograms, which are combined to generate the second of the three reconstructions presented (green or blue). (The reconstruction is derived from the “Displayed synthetic aperture.”) The first reconstruction (orange) is due to a single hologram, indicated in the left-most panel with an orange border, and the final reconstruction is due to the full set of 90 holograms. The magnified regions in the top row are 25 μm × 25 μm; those in the bottom row are 15 μm × 15 μm. The units for the spatial frequency-domain panel are μm⁻¹, and the color bar from the previous figures is applicable.

We note that the “areas” in spatial-frequency space covered by the respective apertures are 0.14 μm⁻², 0.47 μm⁻², and 2.1 μm⁻². These are equivalent to objective lens NAs (in the absence of a central aperture stop) of 0.13, 0.24, and 0.52. The maximum object spatial frequencies accessed by our synthetic aperture are equal to those of an objective with NA = 0.61. (The discrepancy between the values 0.52 and 0.61 is due to the fact that our synthetic aperture is an annulus, not a solid circle.) To convert the quantities to resolution values in the object-reconstruction domain, we consider the effective areas of the squared moduli (intensity distributions) of the apertures’ associated complex-amplitude point-spread functions. The effective area, defined in a similar manner to the identically named quantity in Sub-section 2.2, is equal to the ratio of the integral (over all space) of an intensity distribution to its peak value. Conveniently, by Parseval’s theorem, it is equal to the inverse of the aperture area. A one-dimensional resolution parameter can be equated with the diameter of a circle whose area is equal to the effective area. For the three cases above, the resolutions, thus defined, are 3.0 μm, 2.6 μm × 1.0 μm, and 0.77 μm, respectively. (The non-cylindrical symmetry of the intermediate case is represented through the major and minor diameters of an ellipse.) These resolutions represent ideal values, assuming that aberrations over the entire extent of the aperture have been fully compensated for. Clearly, this may not be the case even when the “final phase-correction factor” described at the end of Section 3 has been applied. This issue is discussed further in the following section.
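
The resolution figures quoted above follow from the reciprocal relationship between aperture area and point-spread-function effective area; a short check (Python, using the aperture areas given in the text; the intermediate case is reported as an equal-area 2.6 μm × 1.0 μm ellipse rather than a circle):

```python
import numpy as np

aperture_areas = np.array([0.14, 0.47, 2.1])     # um^-2, from the text
effective_areas = 1.0 / aperture_areas           # Parseval: PSF-intensity effective area (um^2)
equal_area_diameters = 2.0 * np.sqrt(effective_areas / np.pi)
print(equal_area_diameters)                      # ~[3.0, 1.6, 0.78] um
```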

5. Conclusions

The experimental results presented in this paper demonstrate how high-resolution, wide-field object reconstruction can be successfully performed using our synthetic aperture microscopy technique, provided that several issues are dealt with. Most importantly, aberration or phase correction procedures must be performed on each individual recording, and the holograms recorded in such a way as to ensure overlap and significant correlation between successively captured regions of Fourier space.

A key feature of our technique is that only a low-NA optical system is required to synthesize high-resolution images. This means that it enjoys the advantage of a long working distance in addition to its wide field of view; it further avoids the necessity to use high-refractive-index immersion fluids. Although the synthetic NA we generated experimentally (of 0.61) is less than that for conventional immersion-fluid objectives, extension to much greater values will be possible by increasing the polar illumination angle and varying the detector position (or tilting the sample).

Indeed, when our technique is to be used in an unrestricted-detector-position configuration, the accessible spatial frequency range is equal to twice that of conventional coherent imaging systems (with NA = 1), in both dimensions. The synthetic CTF has constant modulus over its entire extent, so that high-frequency object components are rendered with high visibility.

For a fixed detector position, different radial positions in Fourier space may be accessed by varying either the illumination polar angle or the wavelength. (It may be necessary to vary these parameters incrementally to avoid the decorrelation problem.) Importantly, the entire (solid circle) accessible region of the Fourier spectrum can be acquired merely by rotating the sample and sweeping (or tuning) the wavelength. No physical scanning of the optical system is required, meaning that it can be well-characterized using a suitable target.

We have noted that a priori targeting of particular Fourier-spectral frequency ranges will limit the number of holograms required to generate high-quality reconstructions. This is true for the target utilized in our current experiment; most of its spectral information was concentrated in the vertical or horizontal directions.

The main problem associated with pairwise sequentially phase-matching holograms is that small errors (aberrations) may accumulate over large apertures, leading to potential blurring, for example. We have proposed the application of a slowly varying, polar-angle-dependent phase factor as a means for correcting them. Its success would depend on the existence of well-defined sample features that can be brought into sharp focus. Greater robustness to the issue would be afforded to the technique, for more general samples, if multiple holograms were recorded along radial lines (rather than merely around a circle), since phase-matching constraints would be borne in two dimensions.

Two planned future developments of the technique, in addition to effective radial scanning of Fourier space, are the incorporation of phase-shifting interferometry, and the possibility of three-dimensional imaging. The former will allow for more accurate holographic phase measurements, and the elimination of the high-frequency carrier imposed by the off-axis reference wave. For the latter, which incorporates optical sectioning, reconstruction will be based on the 3D transfer function formalism.

In conclusion, we have demonstrated that high-resolution, wide-field images may be reconstructed from many tens of separately recorded partially overlapping Fourier holograms. The continued refinement and evolution of this technique could lead to unique regimes of operation currently inaccessible to other optical techniques.

Acknowledgment

The authors are grateful to Abhijit Patil for useful discussions.

OCIS Codes
(070.0070) Fourier optics and signal processing : Fourier optics and signal processing
(090.0090) Holography : Holography
(090.1000) Holography : Aberration compensation
(170.0180) Medical optics and biotechnology : Microscopy
(090.1995) Holography : Digital holography

ToC Category:
Microscopy

History
Original Manuscript: February 9, 2009
Revised Manuscript: April 15, 2009
Manuscript Accepted: April 23, 2009
Published: April 28, 2009

Virtual Issues
Vol. 4, Iss. 7 Virtual Journal for Biomedical Optics

Supplementary Material


» Media 1: MOV (6264 KB)     
» Media 2: MOV (8382 KB)     
