Optics Express

  • Editor: Michael Duncan
  • Vol. 13, Iss. 6 — Mar. 21, 2005
  • pp: 2160–2175
Multi-aperture Fourier transform imaging spectroscopy: theory and imaging properties

Samuel T. Thurman and James R. Fienup


Optics Express, Vol. 13, Issue 6, pp. 2160-2175 (2005)
http://dx.doi.org/10.1364/OPEX.13.002160



Abstract

Fourier transform imaging spectroscopy (FTIS) can be performed with a multi-aperture optical system by making a series of intensity measurements, while introducing optical path differences (OPD’s) between various subapertures, and recovering spectral data by the standard Fourier post-processing technique. The imaging properties for multi-aperture FTIS are investigated by examining the imaging transfer functions for the recovered spectral images. For systems with physically separated subapertures, the imaging transfer functions are shown to vanish necessarily at the DC spatial frequency. Also, it is shown that the spatial frequency coverage of particular systems may be improved substantially by simultaneously introducing multiple OPD’s during the measurements, at the expense of limiting spectral coverage and causing the spectral resolution to vary with spatial frequency.

© 2005 Optical Society of America

1. Introduction

Multi-aperture systems use a number of relatively small-aperture optics together in such a way that the resolution is comparable to that of a larger single-aperture system. Such systems include segmented-aperture telescopes and multiple-telescope arrays (MTA’s), an example of which is illustrated in Fig. 1. Such resolution can be achieved only when the optical path lengths through all of the subapertures (one segment of the aperture or a single telescope in the array) are equal. In a real system, this is accomplished by adjusting path-length control elements for each subaperture. In Fig. 1 these elements are shown as “optical trombones,” whose lengths may be adjusted by moving a corner mirror. Advantages of multi-aperture systems over comparable monolithic systems include lower weight and volume [1] and reduced cost [2]. Reduced weight and volume are especially important for space-deployed systems. For example, the design for NASA’s James Webb Space Telescope includes a segmented primary mirror that will be folded up during launch [3]. One challenging aspect of using multi-aperture systems is phasing the subapertures. Kendrick et al. [4] have demonstrated closed-loop phasing of a nine-aperture system while imaging an extended object using the phase-diversity technique [5]. If a multi-aperture system is sparse, additional tradeoffs include longer exposure times [6] and an increased need for image post-processing [7].

1. J. S. Fender, “Synthetic apertures: an overview,” in Synthetic Aperture Systems, J. S. Fender, ed., Proc. SPIE 440, 2–7 (1983).
2. S.-J. Chung, D. W. Miller, and O. L. de Weck, “Design and implementation of sparse aperture imaging systems,” in Highly Innovative Space Telescope Concepts, H. A. MacEwen, ed., Proc. SPIE 4849, 181–191 (2002).
3. D. Redding, S. Basinger, A. E. Lowman, A. Kissil, P. Bely, R. Burg, and R. Lyon, “Wavefront sensing for a next generation space telescope,” in Space Telescopes and Instruments V, P. Y. Bely and J. B. Breckinridge, eds., Proc. SPIE 3356, 758–772 (1998).
4. R. L. Kendrick, A. L. Duncan, and R. Sigler, “Imaging Fizeau interferometer: experimental results,” presented at Frontiers in Optics, Tucson, Arizona, 5–9 Oct. 2003 (post-deadline paper 15).
5. R. G. Paxman, T. J. Schultz, and J. R. Fienup, “Joint estimation of object and aberrations by using phase diversity,” J. Opt. Soc. Am. A 9, 1072–1085 (1992).
6. J. R. Fienup, “MTF and integration time versus fill factor for sparse-aperture imaging systems,” in Imaging Technologies and Telescopes, J. W. Bilbro, et al., eds., Proc. SPIE 4091, 43–47 (2000).
7. J. R. Fienup, D. Griffith, L. Harrington, A. M. Kowalczyk, J. J. Miller, and J. A. Mooney, “Comparison of reconstruction algorithms for images from sparse-aperture systems,” in Image Reconstruction from Incomplete Data II, P. J. Bones, et al., eds., Proc. SPIE 4792, 1–8 (2002).

Fourier transform spectroscopy [8] is a standard method for obtaining spectral data through post-processing of a series of polychromatic intensity measurements. The technique can be employed in an imaging system by relaying the image through a Michelson interferometer [9], which is used to introduce the optical path differences (OPD’s) necessary for performing the spectroscopy [10,11]. One alternative to the Michelson design is double Fourier transform interferometry [12,13], where the spectroscopy and imaging are performed by Fourier transforming temporal and spatial coherence measurements, respectively.

8. J. Kauppinen and J. Partanen, Fourier Transforms in Spectroscopy (Wiley-VCH, Berlin, 2001).
9. N. J. E. Johnson, “Spectral imaging with the Michelson interferometer,” in Infrared Imaging Systems Technology, Proc. SPIE 226, 2–9 (1980).
10. C. L. Bennett, M. Carter, D. Fields, and J. Hernandez, “Imaging Fourier transform spectrometer,” in Imaging Spectrometry of the Terrestrial Environment, G. Vane, ed., Proc. SPIE 1937, 191–200 (1993).
11. M. R. Carter, C. L. Bennett, D. J. Fields, and F. D. Lee, “Livermore imaging Fourier transform infrared spectrometer,” in Imaging Spectrometry, M. R. Descour, J. M. Mooney, D. L. Perry, and L. R. Illing, eds., Proc. SPIE 2480, 380–386 (1995).
12. K. Itoh and Y. Ohtsuka, “Fourier transform spectral imaging: retrieval of source information from three-dimensional spatial coherence,” J. Opt. Soc. Am. A 3, 94–100 (1986).
13. J.-M. Mariotti and S. T. Ridgeway, “Double Fourier spatio-spectral interferometry: combining high spectral and high spatial resolution in the near infrared,” Astron. Astrophys. 195, 350–363 (1988).

Fig. 1. Illustration of multiple-telescope array with four subaperture telescopes.

2. Imaging model

While an ideal spectroscopic system is reflective, our modeling is based on the simplified, equivalent thin-lens refractive system shown in Fig. 2. Shown are: (i) an object plane with coordinates (xo, yo), (ii) a collimating lens of focal length fo, (iii) a pupil plane with coordinates (ξ, η) containing the various subapertures and associated path-delay elements, (iv) an imaging lens of focal length fi, and (v) an image plane with coordinates (x, y). The subapertures are grouped together according to the path delays introduced during data collection. In general there are Q groups, indexed by the integer q ∈ [1, Q]. The amplitude transmittance of the pupil and associated delay elements is written as

$$T_{\mathrm{pup}}(\xi,\eta,\nu,\tau)=\sum_{q=1}^{Q}T_{q}(\xi,\eta,\nu)\,\exp(i2\pi\nu\gamma_{q}\tau),$$
(1)

where ν is the optical frequency, τ is a time-delay variable, and Tq(ξ,η,ν) and γq are, respectively, the amplitude transmittance and relative delay rate of the qth subaperture group. Each Tq(ξ,η,ν) is written as a function of ν to allow for aberrations. The path delay common to the qth group is cγqτ, where c is the speed of light (note that this restricts the model to delays that are linear in τ). Without loss of generality, the subaperture groups are organized such that γ1 = 0, γq+1 > γq, and γQ = 1. In this context, a conventional FTIS system based on a Michelson interferometer can be modeled as a system with two identical, overlapping subaperture groups (formed by the beamsplitter in a real system) with a path delay equal to the OPD between the arms of the interferometer.
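As a minimal numerical sketch of Eq. (1), the pupil transmittance can be built on a sampled grid. The grid size, sampling, and the three-subaperture geometry below are illustrative assumptions (borrowed from the Sec. 7 example), not part of the derivation:

```python
import numpy as np

# Sketch of Eq. (1): pupil transmittance for Q = 3 circular subaperture
# groups with relative delay rates gamma_q.  Grid and units are assumptions.
N = 256
xi = (np.arange(N) - N // 2) * (8.0 / N)      # pupil coordinates in units of R
XI, ETA = np.meshgrid(xi, xi)

R, r = 1.0, 1.5                               # subaperture radius and offset
centers = [(0.0, r), (np.sqrt(3) * r / 2, -r / 2), (-np.sqrt(3) * r / 2, -r / 2)]
gammas = [0.0, 1.0 / 3.0, 1.0]                # gamma_1 = 0, ..., gamma_Q = 1

def T_pup(nu, tau):
    """Complex pupil transmittance T_pup(xi, eta, nu, tau) of Eq. (1)."""
    out = np.zeros((N, N), dtype=complex)
    for (cx, cy), g in zip(centers, gammas):
        mask = (XI - cx) ** 2 + (ETA - cy) ** 2 <= R ** 2   # binary T_q
        out += mask * np.exp(1j * 2 * np.pi * nu * g * tau)
    return out

P = T_pup(nu=1.0, tau=0.5)
# The subapertures are separated, so |T_pup| is 0 or 1 everywhere.
print(np.unique(np.round(np.abs(P), 12)))
```

Because the binary subaperture masks do not overlap, the delay only changes the phase of each group, not the pupil magnitude.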

Fig. 2. Simplified refractive model for a multi-aperture optical system.

For a spatially incoherent object, the image plane intensity I(x,y,τ), which is a function of the time-delay variable τ, can be written in terms of the object spectral density So(xo,yo,ν) as

$$I(x,y,\tau)=\kappa\iiint\frac{1}{M^{2}}\,S_{o}\!\left(\frac{x'}{M},\frac{y'}{M},\nu\right)h(x-x',y-y',\nu,\tau)\,dx'\,dy'\,d\nu,$$
(2)

where κ is a constant, M=-fi/fo is the system magnification, x′=Mxo, y′=Myo , and h(x,y,ν,τ) is the monochromatic point spread function (PSF) (intensity impulse response) for the system, which can be written as

$$h(x,y,\nu,\tau)=\sum_{q=1}^{Q}h_{q,q}(x,y,\nu)+\sum_{p=1}^{Q}\sum_{\substack{q=1\\ q\neq p}}^{Q}h_{p,q}(x,y,\nu)\exp[i2\pi\nu(\gamma_{p}-\gamma_{q})\tau].$$
(3)

The terms hp,q (x,y,ν) are referred to as spectral point spread functions (SPSF’s) and are defined as

$$h_{p,q}(x,y,\nu)=t_{p}(x,y,\nu)\,t_{q}^{*}(x,y,\nu),$$
(4)

where tq (x,y,ν) is the coherent impulse response of the qth subaperture group, given by

$$t_{q}(x,y,\nu)=\frac{1}{\lambda^{2}f_{i}^{2}}\iint T_{q}(\xi,\eta,\nu)\exp\!\left[i\frac{2\pi}{\lambda f_{i}}(x\xi+y\eta)\right]d\xi\,d\eta.$$
(5)

The terms tq (x,y,ν) can be complex-valued since the subaperture groups are asymmetric about, or offset from, the optical axis in the pupil plane. Note that the path delays introduced for the spectroscopy are included in Eq. (3) as additional phase terms; any other phase terms (aberrations) are included in Tq (ξ,η,ν). The spectroscopy is based on temporal coherence effects, but the role of spatial coherence may not be immediately obvious. For this reason, the Appendix contains a derivation of Eq. (2) based on partial coherence theory.
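The Fourier-transform relation of Eq. (5), and the fact that an offset subaperture yields a complex-valued tq with unchanged magnitude, can be checked with a discrete transform. The grid and the pupil offset below are illustrative assumptions:

```python
import numpy as np

# Sketch of Eq. (5): t_q is (up to sampling constants) the Fourier transform
# of the subaperture transmittance T_q.  Grid and offset are assumptions.
N = 256
dxi = 8.0 / N
xi = (np.arange(N) - N // 2) * dxi
XI, ETA = np.meshgrid(xi, xi)

def t_q(cx, cy, R=1.0):
    """Discrete coherent impulse response of one circular subaperture."""
    Tq = ((XI - cx) ** 2 + (ETA - cy) ** 2 <= R ** 2).astype(complex)
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(Tq)))

t_on = t_q(0.0, 0.0)     # centered subaperture (jinc-like response)
t_off = t_q(1.5, 0.0)    # offset subaperture: same |t_q|, extra linear phase

print(np.allclose(np.abs(t_on), np.abs(t_off)))   # shift theorem
```

The offset of 1.5 is an exact multiple of the grid spacing here, so the shift theorem applies exactly: the pupil offset contributes only a linear phase to tq.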

The normalized monochromatic optical transfer function (OTF) for the system can be written as

$$H(f_{x},f_{y},\nu,\tau)=\frac{\iint h(x,y,\nu,\tau)\exp[-i2\pi(f_{x}x+f_{y}y)]\,dx\,dy}{\iint h(x,y,\nu,\tau)\,dx\,dy}$$
$$=\frac{[T_{\mathrm{pup}}\star T_{\mathrm{pup}}](\lambda f_{i}f_{x},\lambda f_{i}f_{y},\nu,\tau)}{\iint|T_{\mathrm{pup}}(\xi,\eta,\nu,\tau)|^{2}\,d\xi\,d\eta}$$
$$=\sum_{q=1}^{Q}H_{q,q}(f_{x},f_{y},\nu)+\sum_{p=1}^{Q}\sum_{\substack{q=1\\ q\neq p}}^{Q}H_{p,q}(f_{x},f_{y},\nu)\exp[i2\pi\nu(\gamma_{p}-\gamma_{q})\tau],$$
(6)

where the ⋆ symbol indicates a two-dimensional cross-correlation with respect to the spatial-frequency coordinates fx and fy, the second equality follows from Eqs. (4) and (5), and the terms Hp,q(fx,fy,ν) are referred to as spectral optical transfer functions (SOTF’s), defined as the normalized two-dimensional Fourier transforms of the corresponding SPSF’s, i.e.,

$$H_{p,q}(f_{x},f_{y},\nu)=\frac{\iint h_{p,q}(x,y,\nu)\exp[-i2\pi(f_{x}x+f_{y}y)]\,dx\,dy}{\iint h(x,y,\nu,\tau)\,dx\,dy}$$
$$=\frac{[T_{p}\star T_{q}](\lambda f_{i}f_{x},\lambda f_{i}f_{y},\nu)}{\iint|T_{\mathrm{pup}}(\xi,\eta,\nu,\tau)|^{2}\,d\xi\,d\eta}.$$
(7)

For the multi-aperture case, the denominator of this expression is independent of τ, and it equals the area of the entire pupil when the pupil is binary. Note that both the PSF and the OTF consist of a double summation of terms that are modulated with respect to the time-delay variable τ and a single summation of unmodulated terms. For a Michelson system, the SPSF and SOTF are equivalent to the usual PSF and OTF for incoherent imaging, since the subaperture groups are identical and overlapping.

3. Spectral data

Spectral information can be obtained from a series of image-plane intensity measurements by the standard Fourier technique: (i) subtracting the fringe bias at each image point and (ii) Fourier transforming the data along the τ-dimension to the ν′-domain. Starting from Eq. (2) and performing these steps yields the spectral image

$$S_{i}(x,y,\nu')=\kappa\sum_{p=1}^{Q}\sum_{\substack{q=1\\ q\neq p}}^{Q}\frac{1}{|\gamma_{p}-\gamma_{q}|}\iint\frac{1}{M^{2}}\,S_{o}\!\left(\frac{x'}{M},\frac{y'}{M},\frac{\nu'}{\gamma_{p}-\gamma_{q}}\right)$$
$$\times\,h_{p,q}\!\left(x-x',y-y',\frac{\nu'}{\gamma_{p}-\gamma_{q}}\right)dx'\,dy'.$$
(8)

Transforming this equation to the spatial frequency domain yields

$$G_{i}(f_{x},f_{y},\nu')=\kappa\sum_{p=1}^{Q}\sum_{\substack{q=1\\ q\neq p}}^{Q}\frac{1}{|\gamma_{p}-\gamma_{q}|}\,G_{o}\!\left(Mf_{x},Mf_{y},\frac{\nu'}{\gamma_{p}-\gamma_{q}}\right)H_{p,q}\!\left(f_{x},f_{y},\frac{\nu'}{\gamma_{p}-\gamma_{q}}\right),$$
(9)

where the spectral-spatial transforms Gi(fx,fy,ν′) and Go(fx,fy,ν) are the two-dimensional spatial Fourier transforms of the spectral image and the object spectral density, respectively. Notice that the spectral image in Eq. (8) is a double summation of the object spectral density convolved with each of the SPSF terms that are modulated in Eq. (3), i.e., terms for which γp−γq ≠ 0. Thus, each term contains unique spatial information, as it is convolved with a different SPSF. This is evident in Eq. (9) from the fact that the only spatial frequencies present in the recovered spectral data are those passed by SOTF terms that are modulated in Eq. (6). Also notice that the spectral dimension of each term in Eqs. (8) and (9) is scaled by the factor 1/(γp−γq). Thus, terms for which |γp−γq| ≠ 1 appear at scaled optical frequencies ν′ = (γp−γq)ν in the ν′-domain. This occurs only for Q ≥ 3; if there are just two groups of subapertures (Q = 2), there is a single value |γ1−γ2| = 1.
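A toy single-pixel calculation (all values illustrative) shows how the Fourier recovery step places data at the scaled frequencies (γp−γq)ν. Three unit-amplitude subaperture fields with delay rates (0, 1/3, 1) produce fringe components at ν0/3, 2ν0/3, and ν0:

```python
import numpy as np

# Toy single-pixel sketch (illustrative values): a monochromatic source at
# frequency nu0, observed through three subaperture groups with delay rates
# gamma = (0, 1/3, 1), yields fringe components at (gamma_p - gamma_q)*nu0.
nu0 = 1.0
gammas = np.array([0.0, 1.0 / 3.0, 1.0])
tau = -60.0 + 120.0 * np.arange(4096) / 4096     # time-delay scan

# Unit-amplitude fields from the three groups, summed and detected [Eq. (3)]:
field = np.exp(1j * 2 * np.pi * nu0 * np.outer(gammas, tau)).sum(axis=0)
I = np.abs(field) ** 2

# Standard Fourier technique: subtract the fringe bias, then FFT along tau.
spec = np.fft.fft(I - I.mean())
nu_p = np.fft.fftfreq(tau.size, d=tau[1] - tau[0])

peaks = nu_p[np.argsort(np.abs(spec))[-6:]]      # three +/- frequency pairs
print(np.sort(np.abs(peaks)))                    # -> multiples 1/3, 2/3, 1 of nu0
```

The six recovered peaks are the three delay-rate differences at positive and negative ν′, matching the scaling ν′ = (γp−γq)ν discussed above.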

When Q ≥ 3, it is desirable to map the data in each term of Eqs. (8) and (9) back to the base optical frequencies, thus forming a composite spectral image that contains all of the collected spatial information in each spectral band. Typically, a multi-aperture system is designed such that the OTF does not have any gaps with missing spatial frequencies. This implies that the SOTF terms Hp,q(fx,fy,ν) will overlap somewhat in the (fx,fy) plane, and one cannot completely separate the different terms in that plane. However, the terms can be separated with respect to ν′ by limiting the spectral bandwidth of the object and choosing the relative delay rates appropriately. To illustrate, suppose the object spectrum is limited to optical frequencies in the range ν1 ≤ ν ≤ ν2 by a spectral filter placed in the system during the measurements. Then spectral data will appear in Si(x,y,ν′) at multiple intervals in the ν′-domain given by ν1/(γp−γq) ≤ ν′ ≤ ν2/(γp−γq) for all unique, non-zero values of γp−γq. The band limits (ν1 and ν2) and the relative delay rates (γq’s) can be chosen such that these intervals do not overlap, making the data separable in the ν′-domain. For example, for Q = 3, γ1 = 0, γ2 = 1/3, and γ3 = 1, the spectra are separated in ν′-space if ν2 − ν1 < ν2/3. An example of this is shown in Sec. 7. Assuming this is the case, the data in each term of Eq. (8) can be mapped to the base optical frequencies ν to form a composite spectral image

$$S_{\mathrm{comp}}(x,y,\nu)=\sum_{\Delta\gamma>0}\Delta\gamma\,S_{i}(x,y,\Delta\gamma\,\nu)\quad\text{for }\nu_{1}\leq\nu\leq\nu_{2},$$
(10)

where Δγ = γp − γq denotes the relative delay-rate differences. Substituting from Eq. (8) yields

$$S_{\mathrm{comp}}(x,y,\nu)=\kappa\sum_{p=1}^{Q}\sum_{\substack{q=1\\ \Delta\gamma>0}}^{Q}\iint\frac{1}{M^{2}}\,S_{o}\!\left(\frac{x'}{M},\frac{y'}{M},\nu\right)$$
$$\times\,h_{p,q}(x-x',y-y',\nu)\,dx'\,dy'\quad\text{for }\nu_{1}\leq\nu\leq\nu_{2}.$$
(11)
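The separability condition behind this composite image is simple arithmetic; a sketch with the example values from the text (ν1 = 0.9ν0, ν2 = 1.1ν0, and γ = (0, 1/3, 1)) confirms that the three replicated bands do not overlap:

```python
# Band-separation check (values from the text): a band nu1 <= nu <= nu2 is
# replicated at [nu1/dg, nu2/dg] for each delay-rate difference dg; the data
# are separable in the nu'-domain when these intervals do not overlap.
nu0 = 1.0
nu1, nu2 = 0.9 * nu0, 1.1 * nu0        # nu2 - nu1 = 0.2*nu0 < nu2/3, as required
dgs = [1.0 / 3.0, 2.0 / 3.0, 1.0]      # differences for gamma = (0, 1/3, 1)

bands = sorted((nu1 / dg, nu2 / dg) for dg in dgs)
separable = all(hi < lo_next for (_, hi), (lo_next, _) in zip(bands, bands[1:]))
print(bands, separable)    # intervals (0.9, 1.1), (1.35, 1.65), (2.7, 3.3)
```

With these parameters the replicas sit at [0.9, 1.1], [1.35, 1.65], and [2.7, 3.3] (in units of ν0), so the terms can be sorted cleanly along ν′.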

4. Complex-valued spectral images

In general, the recovered spectral image is complex-valued, with Hermitian symmetry along the ν′-dimension, i.e.,

$$S_{i}(x,y,-\nu')=S_{i}^{*}(x,y,\nu'),$$
(12)

and the spectral-spatial transform is Hermitian about the origin of the (fx,fy,ν′) domain, i.e.,

$$G_{i}(-f_{x},-f_{y},-\nu')=G_{i}^{*}(f_{x},f_{y},\nu').$$
(13)

Note that the object spectral density So(xo,yo,ν) is a one-sided spectrum, i.e., non-zero only for positive frequencies ν > 0, but the spectral image cube has a two-sided spectrum. Referring to Eqs. (8) and (9), one can see that the spectral data at positive and negative temporal frequencies consist of terms for which γp−γq > 0 and γp−γq < 0, respectively. Equation (12) states that the spectral image values at negative optical frequencies are the complex conjugates of those at positive frequencies. This can be seen from Eq. (8) and from the fact that hp,q(x,y,ν) = h*q,p(x,y,ν) [see Eq. (4)]. The Hermitian symmetry in the (fx,fy,ν′) domain expressed by Eq. (13) is apparent in Eq. (9), since the Fourier transform of the real-valued object spectral density is Hermitian about the DC spatial frequency, i.e., Go(fx,fy,ν) = G*o(−fx,−fy,ν), and since Hp,q(fx,fy,ν) = H*q,p(−fx,−fy,ν) [see Eq. (7)]. At positive optical frequencies, the spectral image only contains spatial frequencies corresponding to vector separations oriented from subaperture group p toward subaperture group q, where γp−γq > 0. Spatial frequencies corresponding to the oppositely-oriented vector separations appear at negative optical frequencies.

In certain cases, which include Michelson-based systems, the spectral images are real-valued, because the subaperture groups possess a particular symmetry in the pupil plane. In this case, the spatial frequency data, and thus the SOTF terms, must possess Hermitian symmetry about the DC spatial frequency in each spectral band, i.e., Hp,q(fx,fy,ν) = H*p,q(−fx,−fy,ν). Along with Eq. (7), this condition implies that Hp,q(fx,fy,ν) = Hq,p(fx,fy,ν). Note that this relation holds for Michelson-based systems (with common-path aberrations only), since the subaperture groups are identical and overlapping. Also, real-valued spectral images imply that the fringe packets described by I(x,y,τ) are symmetric with respect to the time-delay variable τ, while complex-valued spectral images imply that the fringe packets are asymmetric. In systems like those based on the Michelson interferometer design, where the fringe packets are symmetric, measurements only need to be made for either positive or negative time delays. For a general multi-aperture system, however, the fringes will usually be asymmetric, and thus measurements need to be made for both positive and negative time delays.

The real part of the spectral image can be isolated as

$$S_{i}^{(\mathrm{Re})}(x,y,\nu')=\tfrac{1}{2}\left[S_{i}(x,y,\nu')+S_{i}^{*}(x,y,\nu')\right],$$
(14)

and its spatial Fourier transform is given by

$$G_{i}^{(\mathrm{Re})}(f_{x},f_{y},\nu')=\tfrac{1}{2}\left[G_{i}(f_{x},f_{y},\nu')+G_{i}^{*}(-f_{x},-f_{y},\nu')\right],$$
(15)

where it is emphasized that Gi(Re)(fx,fy,ν′) is ordinarily complex-valued. Using Eq. (9) and the fact that Go(fx,fy,ν) = G*o(−fx,−fy,ν) yields

$$G_{i}^{(\mathrm{Re})}(f_{x},f_{y},\nu')=\kappa\sum_{p=1}^{Q}\sum_{\substack{q=1\\ q\neq p}}^{Q}\frac{1}{|\gamma_{p}-\gamma_{q}|}\,G_{o}\!\left(Mf_{x},Mf_{y},\frac{\nu'}{\gamma_{p}-\gamma_{q}}\right)$$
$$\times\,\tfrac{1}{2}\left[H_{p,q}\!\left(f_{x},f_{y},\frac{\nu'}{\gamma_{p}-\gamma_{q}}\right)+H_{p,q}^{*}\!\left(-f_{x},-f_{y},\frac{\nu'}{\gamma_{p}-\gamma_{q}}\right)\right].$$
(16)

Similarly, the spatial transform Gi(Im)(fx,fy,ν′) of the imaginary part of the spectral image can be written as

$$G_{i}^{(\mathrm{Im})}(f_{x},f_{y},\nu')=\kappa\sum_{p=1}^{Q}\sum_{\substack{q=1\\ q\neq p}}^{Q}\frac{1}{|\gamma_{p}-\gamma_{q}|}\,G_{o}\!\left(Mf_{x},Mf_{y},\frac{\nu'}{\gamma_{p}-\gamma_{q}}\right)$$
$$\times\,\frac{1}{2i}\left[H_{p,q}\!\left(f_{x},f_{y},\frac{\nu'}{\gamma_{p}-\gamma_{q}}\right)-H_{p,q}^{*}\!\left(-f_{x},-f_{y},\frac{\nu'}{\gamma_{p}-\gamma_{q}}\right)\right].$$
(17)
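The decomposition in Eqs. (14)–(17) rests on the identity FT{Re s}(f) = ½[S(f) + S*(−f)], which can be verified on a discrete grid (random test image, grid size an assumption):

```python
import numpy as np

# Numerical check of the identity behind Eqs. (14)-(17): for any complex
# image s(x, y), the spatial transform of Re{s} is (1/2)[S(f) + S*(-f)].
rng = np.random.default_rng(0)
s = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))

S = np.fft.fft2(s)
# S*(-fx, -fy) on the DFT grid: reverse both axes, then roll so that
# index k maps to index (N - k) mod N.
S_flip = np.conj(np.roll(S[::-1, ::-1], 1, axis=(0, 1)))

lhs = np.fft.fft2(s.real)
print(np.allclose(lhs, 0.5 * (S + S_flip)))    # True
```

The analogous identity with a minus sign and a 1/(2i) factor gives the imaginary-part transform of Eq. (17).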

5. Imaging properties

In essence, the SOTF’s are the spatial transfer functions for the spectral images and thus determine the imaging properties of the system. According to Eq. (7), an SOTF is calculated as the cross-correlation between subaperture groups, rather than as the autocorrelation of the entire aperture, as is the OTF of a normal imaging system. Above, it was noted that the SOTF for a Michelson-based FTIS is equivalent to the traditional OTF, since that system can be described as two identical, overlapping subaperture groups. In a multi-aperture system, however, the subaperture groups are physically separated in the pupil plane, and thus the SOTF’s necessarily vanish at the DC spatial frequency and in some neighborhood around it. If the minimum separation in the pupil plane between two subaperture groups is d, then the SOTF vanishes for spatial frequencies below the cutoff frequency fc = d/(λfi). For this reason, spectral images from a multi-aperture system are zero-mean, high-pass-filtered versions of the object.
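The vanishing of the SOTF at DC can be confirmed numerically for two separated subapertures; the grid and the Sec. 7 geometry used below are illustrative assumptions:

```python
import numpy as np

# Numerical check (illustrative grid): the SOTF of two physically separated
# subapertures is their cross-correlation [Eq. (7)], so it vanishes at the
# DC spatial frequency (zero shift) because the masks do not overlap.
N = 512
dx = 8.0 / N
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)
R, r = 1.0, 1.5                                   # Sec. 7 geometry

T1 = (X ** 2 + (Y - r) ** 2 <= R ** 2).astype(float)
T2 = ((X - np.sqrt(3) * r / 2) ** 2 + (Y + r / 2) ** 2 <= R ** 2).astype(float)

# Cross-correlation via FFTs: corr = IFFT[FFT(T1) * conj(FFT(T2))].
corr = np.fft.fftshift(np.fft.ifft2(np.fft.fft2(T1) * np.conj(np.fft.fft2(T2)))).real

dc = corr[N // 2, N // 2]          # zero-shift value = overlap of T1 and T2
print(dc / corr.max())             # ~0: the unnormalized SOTF is zero at DC
```

The cross-correlation only becomes non-zero at shifts near the center-to-center separation of the two subapertures, which is the high-pass behavior described above.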

6. Spectral resolution

In practice, the image intensity can only be measured over a finite range of time-delay values, i.e., −τmax ≤ τ ≤ τmax. Taking this into account yields the following expression for the image spectral data in place of Eq. (8):

$$S_{i}(x,y,\nu')=\kappa\sum_{p=1}^{Q}\sum_{\substack{q=1\\ q\neq p}}^{Q}\iiint\frac{1}{M^{2}}\,S_{o}\!\left(\frac{x'}{M},\frac{y'}{M},\nu\right)h_{p,q}(x-x',y-y',\nu)$$
$$\times\,2\tau_{\max}\,\mathrm{sinc}\!\left[2\tau_{\max}(\gamma_{p}-\gamma_{q})\!\left(\frac{\nu'}{\gamma_{p}-\gamma_{q}}-\nu\right)\right]dx'\,dy'\,d\nu,$$
(18)

where sinc(ν) = sin(πν)/(πν). Notice that the spectral image is now convolved in the spectral dimension with a sinc function that limits the spectral resolution. If the object data is bandlimited in the spectral dimension to the interval ν1 ≤ ν ≤ ν2, and the data leakage between each of the intervals ν1/(γp−γq) ≤ ν′ ≤ ν2/(γp−γq) is negligible, then the composite spectral image is given approximately by

$$S_{\mathrm{comp}}(x,y,\nu)\approx\kappa\sum_{p=1}^{Q}\sum_{\substack{q=1\\ \Delta\gamma>0}}^{Q}\iiint\frac{1}{M^{2}}\,S_{o}\!\left(\frac{x'}{M},\frac{y'}{M},\nu''\right)h_{p,q}(x-x',y-y',\nu'')$$
$$\times\,2\tau_{\max}(\gamma_{p}-\gamma_{q})\,\mathrm{sinc}[2\tau_{\max}(\gamma_{p}-\gamma_{q})(\nu-\nu'')]\,dx'\,dy'\,d\nu''\quad\text{for }\nu_{1}\leq\nu\leq\nu_{2}.$$
(19)

In this equation, it is easy to see that each term in the summation is convolved with a sinc function having a zero-to-first-null width of 1/[2τmax(γp−γq)]. The spectral resolution of each term degrades as γp−γq decreases, because the effective time-delay range over which data is collected for that term is scaled by the same factor. By transforming Eq. (19) to the spatial frequency domain, it is easy to see that the spectral resolution varies with spatial frequency for Q > 2.
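The sinc-limited resolution can be seen directly in a one-line transform of a truncated fringe term; the numbers below are illustrative assumptions. In the ν′-domain the zero-to-first-null width is 1/(2τmax); after the ν′ → ν mapping of Eq. (10) it becomes 1/[2τmax(γp−γq)], as stated above:

```python
import numpy as np

# Sketch of Sec. 6 (illustrative numbers): truncating the delay scan to
# |tau| <= tau_max convolves the spectrum with a sinc whose zero-to-first-
# null width in the nu'-domain is 1/(2*tau_max).
nu0, tau_max, dg = 1.0, 20.0, 1.0 / 3.0
tau = np.linspace(-tau_max, tau_max, 4001)
dtau = tau[1] - tau[0]
signal = np.exp(1j * 2 * np.pi * nu0 * dg * tau)   # one modulated fringe term

def line_spectrum(nu_prime):
    """Magnitude of the tau -> nu' transform of the truncated fringe term."""
    return abs(np.exp(-1j * 2 * np.pi * nu_prime * tau) @ signal) * dtau

peak = line_spectrum(dg * nu0)                           # line center
null = line_spectrum(dg * nu0 + 1.0 / (2.0 * tau_max))   # first sinc null
print(peak, null)    # peak ~ 2*tau_max = 40, null ~ 0
```

A monochromatic line thus appears with peak height 2τmax and first null 1/(2τmax) away, exactly the sinc kernel of Eq. (18).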

7. Simulation examples

This section presents two multi-aperture FTIS simulations based on an aberration-free system having three subapertures in the equilateral-triangle arrangement shown in Fig. 3. Each subaperture is circular with radius R, and the displacement of each subaperture from the optical axis is r = 1.5R. The coordinates of the subaperture centers are (ξ1,η1) = (0, r), (ξ2,η2) = (√3r/2, −r/2), and (ξ3,η3) = (−√3r/2, −r/2), making the closest separation between two subapertures √3r − 2R. The subapertures are grouped individually with the following relative delay rates: γ1 = 0, γ2 = 1/3, and γ3 = 1. In both simulations the spectrum is assumed to be limited to the interval ν1 ≤ ν ≤ ν2, where ν1 = 0.9ν0, ν2 = 1.1ν0, and ν0 is the mean optical frequency.
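The geometry quoted above can be verified with a few lines (unit R assumed for illustration):

```python
import numpy as np

# Geometry of the simulated pupil (values from the text): three subapertures
# of radius R, centered at distance r = 1.5R from the axis on an equilateral
# triangle.
R = 1.0
r = 1.5 * R
centers = np.array([[0.0, r],
                    [np.sqrt(3) * r / 2, -r / 2],
                    [-np.sqrt(3) * r / 2, -r / 2]])

# Every pair of centers is sqrt(3)*r apart, so the closest edge-to-edge
# separation between two subapertures is sqrt(3)*r - 2R.
d01 = np.linalg.norm(centers[0] - centers[1])
d12 = np.linalg.norm(centers[1] - centers[2])
print(d01, d12, np.sqrt(3) * r - 2 * R)    # ~2.598, ~2.598, ~0.598
```

With r = 1.5R the edge separation √3r − 2R ≈ 0.598R is positive, so the subapertures are physically separated and the SOTF’s are high-pass, as discussed in Sec. 5.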

Fig. 4 shows a single frame of a movie that illustrates the effect of the OPD’s on the pupil function, the PSF, and the OTF of the three-telescope system used for the simulations, at the mean optical frequency ν0, over the range of time delays 0 ≤ τ ≤ 3/ν0. Fig. 4(a) indicates the magnitude of the relative phase delay modulo 2π for each subaperture by grayscale tone (white represents zero phase delay and black represents ±π phase delay). Fig. 4(b) shows the monochromatic PSF, where the circle represents the Airy-disk radius for a single subaperture at ν = ν0. The PSF can be viewed as a set of interference fringes underneath an Airy envelope function, which is the diffraction pattern of a single subaperture. As the time-delay variable changes, the fringes move under the envelope. Fig. 4(c) shows the magnitude of the real part of the OTF. Notice that only spatial frequencies corresponding to vector separations between subapertures are modulated during the movie, and the rate of modulation of the various spatial frequencies is proportional to the difference in the relative delay rates of each corresponding pair of subapertures.

Fig. 3. Pupil of optical system used in simulations.
Fig. 4. Movie (455KB) showing the effect of the OPD’s on the optical system at ν=ν 0 as the time-delay variable is changed from τ=0 to τ=3/ν 0: (a) the magnitude of the relative phase delay of each subaperture, where white represents 0 and black represents ±π, (b) the PSF, and (c) the magnitude of the real part of the OTF.
Fig. 5. Localization of FTIS signal in: (a) the raw intensity data cube, (b) the spectral image cube, and (c) the spectral-spatial transform cube. In each cube, the FTIS signal is localized to the darkly shaded regions.

Fig. 5 shows the localization of the FTIS signal in three transform domains for the example parameters above. Fig. 5(a) represents the intensity measurements I(x,y,τ). In this domain, the signal, which is essentially a fringe packet at each image point, occupies the whole domain. Fig. 5(b) represents the spectral image Si(x,y,ν′). In these examples, the signal is localized to six spectral bands along the ν′-dimension. By Hermitian symmetry about the plane ν′ = 0, the signal at negative ν′ is the complex conjugate of the signal at positive ν′. Of the three spectral bands for ν′ > 0, the one at largest ν′ represents spectral image data at the base optical frequencies (from the interaction of subapertures 1 and 3), the middle band represents data scaled to 2/3 of the base optical frequencies (from subapertures 2 and 3), and the third, at smallest ν′, represents data that appears at 1/3 of the base optical frequencies (from subapertures 1 and 2). Fig. 5(c) represents the spectral-spatial transform Gi(fx,fy,ν′). Here, the FTIS signal is further localized to the support of the SOTF terms. Each semi-transparent skewed cone represents the support of a SOTF term. From this figure we can see how the spectral and spatial frequency information can be separated. The sparsity of the data in this domain also makes significant noise filtering possible.

7.1. Point object

The purpose of this point-object example is to provide a physical understanding of the effects that contribute to the magnitude and phase of the recovered spectral data. This simulation is based on an object with the spectral density

$$S_{o}(x',y',\nu)=E\,\mathrm{rect}\!\left[\frac{\nu-\nu_{0}}{\nu_{2}-\nu_{1}}\right]\delta(x',y'),$$
(20)

where E is a constant with units of [W m−2 Hz−1], rect(ν) equals unity for |ν| ≤ 1/2 and vanishes elsewhere, and δ(x′,y′) is the two-dimensional Dirac delta function. This represents an on-axis point source with a uniform spectral exitance in the band of interest. The image intensity is obtained by substituting this expression into Eq. (2) and simplifying, which yields

$$I(x,y,\tau)=\kappa E\int_{\nu_{1}}^{\nu_{2}}h(x,y,\nu,\tau)\,d\nu,$$
(21)

where the PSF h(x,y,ν,τ) is given by Eqs. (3) and (4) with

$$t_{q}(x,y,\nu)=\frac{\pi R^{2}}{\lambda^{2}f_{i}^{2}}\,\mathrm{jinc}\!\left(\frac{2R}{\lambda f_{i}}\sqrt{x^{2}+y^{2}}\right)\exp\!\left[i\frac{2\pi}{\lambda f_{i}}(x\xi_{q}+y\eta_{q})\right],$$
(22)

where jinc(ρ) = 2J1(πρ)/(πρ), and J1 is the first-order Bessel function of the first kind. Figs. 6(a) and (b) show the calculated image intensity as a function of the time-delay variable at two points in the image plane: (i) Point A, with coordinates (xA,yA) = (0,0), and (ii) Point B, with coordinates (xB,yB) = (0, 0.61λ0fi/R), where λ0 = c/ν0. Note that Point A corresponds to the geometric image location of the point object, and the distance between the points is equal to the Airy-disk radius corresponding to the diffraction pattern of a single subaperture at the mean optical frequency ν0. The data in the figure are in units of I0, the intensity at Point A for τ = 0. The figure shows that the fringe packet at Point A is symmetric about τ = 0, while the fringe packet at Point B is asymmetric. Figs. 6(c), (d), and (e) show the intensity contributions at Point B due to the interference between each pair of subapertures. In general, each contribution is symmetric about some non-zero time delay, i.e., τp,q for the contribution from subaperture groups p and q. The following expression for τp,q can be obtained by substituting Eqs. (4) and (22) into Eq. (3) and solving for the time delay that yields zero phase for the (q,p) term at Point B:

$$\tau_{p,q}=\frac{x_{B}(\xi_{p}-\xi_{q})+y_{B}(\eta_{p}-\eta_{q})}{c\,f_{i}\,(\gamma_{p}-\gamma_{q})}.$$
(23)

Since each contribution has a different shift, the fringe packet at Point B is asymmetric. The fringe packet at Point A is symmetric, because each contribution is centered about τ = 0. All points in the image of an extended scene will have a mixture of the characteristics of Points A and B, especially for sparse-aperture systems, whose PSF’s have sidelobes much larger than those of conventional filled-aperture systems. Fig. 7 shows the spectral data at Points A and B for positive temporal frequencies in the ν′-domain. Notice that three scaled versions of the object spectral data are clearly visible. The data at the base optical frequencies are due to the interference between subapertures 1 and 3, since γ3−γ1 = 1; the data scaled to 2/3 of the base frequencies are associated with subapertures 2 and 3, since γ3−γ2 = 2/3; and the data closest to the origin, associated with subapertures 1 and 2, are scaled to 1/3 of the base optical frequencies, since γ2−γ1 = 1/3. The recovered spectra at Points A and B are real- and complex-valued, respectively, since the corresponding fringe packets are symmetric and asymmetric, respectively, as shown in Fig. 6. The spectral content at each point depends on the SPSF’s. Note that the spectral data at Point A is bluer than the actual object spectral density, since higher optical frequencies are focused more tightly onto the geometric image point than lower optical frequencies, as is the case for ordinary imaging of point objects. Also, the magnitude of the spectral data at Point B goes to zero at ν0/3, 2ν0/3, and ν0, since Point B is located where the SPSF’s (centered about Point A) vanish for ν0. The ringing artifacts in the spectral data are due to the convolution in the spectral dimension with a sinc function [see Eq. (18)]. These artifacts can be reduced by applying a window function to the intensity data in the τ-dimension before taking the Fourier transform.
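Equation (23) can be evaluated directly for the simulation geometry; the unit-normalized values c = fi = λ0 = R = 1 below are assumptions for illustration only:

```python
import numpy as np

# Fringe-packet centers at Point B from Eq. (23), using the Sec. 7 geometry
# with illustrative unit values c = f_i = lambda0 = R = 1 (so nu0 = 1).
c = f_i = lam0 = R = 1.0
r = 1.5 * R
xB, yB = 0.0, 0.61 * lam0 * f_i / R          # Point B, one Airy radius off axis
xi = {1: 0.0, 2: np.sqrt(3) * r / 2, 3: -np.sqrt(3) * r / 2}
eta = {1: r, 2: -r / 2, 3: -r / 2}
gam = {1: 0.0, 2: 1.0 / 3.0, 3: 1.0}

def tau_pq(p, q):
    """Time delay that centers the (p, q) interference contribution."""
    return (xB * (xi[p] - xi[q]) + yB * (eta[p] - eta[q])) / (c * f_i * (gam[p] - gam[q]))

shifts = {pq: tau_pq(*pq) for pq in [(2, 1), (3, 2), (3, 1)]}
print(shifts)    # the three shifts differ, so the packet at Point B is asymmetric
```

In this normalized geometry the 2–3 contribution happens to be centered at τ = 0 because Point B lies on the y-axis and subapertures 2 and 3 share the same η coordinate; the other two contributions are shifted, which is what makes the total fringe packet asymmetric.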

Fig. 6. Image intensity versus τ for the point object simulation: (a) at Point A, (b) at Point B, and contributions to the intensity at Point B due to the interference between subapertures: (c) 1 and 2, (d) 2 and 3, and (e) 1 and 3.
Fig. 7. Spectral data from point object simulation at positive temporal frequencies in the ν′-domain: (a) at Point A (real-valued) and (b) at Point B (real and imaginary parts).

7.2. Extended object

Fig. 8. The extended object simulation: (a) movie (582KB) of the object data versus ν (the still frame shows the data at ν = 1.03ν0), (b) size of the pupil in spatial frequencies corresponding to ν = 1.03ν0, and (c) movie (746KB) of the image intensity versus τ (the still frame shows the image intensity at τ = 0).
Fig. 9. Spectral image data from the extended-object simulation. The top row shows the real part of the spectral images at: (a) ν′ = 0.34ν0, (c) ν′ = 0.68ν0, and (e) ν′ = 1.03ν0. The bottom row shows the Fourier magnitude of each image. For the spectral images, note that dark grays represent negative values and light grays represent positive values.
Fig. 10. Composite spectral image data from the extended-object simulation: (a) the real part of the spectral image at ν = 1.03ν0, (b) the corresponding Fourier magnitude, (c) the imaginary part of the same spectral image, and (d) the corresponding Fourier magnitude. For the spectral images, note that dark grays represent negative values, middle gray represents zero, and light grays represent positive values.

8. Discussion

Fourier transform imaging spectroscopy can be performed with a multi-aperture optical system by using existing path-length control elements to introduce the required OPD’s. The theory presented shows that spectral data can be obtained from polychromatic intensity measurements by the standard Fourier technique, but the DC spatial frequency components are missing from the resulting spectral images. This is because the spatial transfer functions for these images, the SOTF’s, are given by the cross-correlations between the pupil functions of groups of subapertures that have different path delays during data collection. Since the subapertures do not normally overlap physically, the SOTF’s vanish in some finite region around the DC spatial frequency. Thus, the spectral images are also missing some low-spatial-frequency content. This poses an interesting image reconstruction problem. Linear algorithms, such as the Wiener-Helstrom filter [17], cannot reconstruct the missing low spatial frequencies. However, nonlinear algorithms may be able to reconstruct the missing data based on constraints and specific assumptions about the object. It is unclear whether superresolution algorithms [18], which are typically used to fill in missing high spatial frequencies, can fill in missing low-spatial-frequency data. However, we have had some success filling in the low spatial frequencies by maximizing a derivative-based sharpness metric [19], which assumes that the object consists of regions that are piecewise uniform in the spatial dimensions, subject to constraints that require the reconstruction to be consistent with the panchromatic fringe-bias data [20].

17. C. W. Helstrom, “Image restoration by the method of least squares,” J. Opt. Soc. Am. 57, 297–303 (1967).
18. B. R. Hunt, “Super-resolution of images: algorithms, principles, performance,” International Journal of Imaging Systems and Technology 6, 297–304 (1995).
19. S. T. Thurman and J. R. Fienup, “Fourier transform imaging spectroscopy with a multiple-aperture telescope: band-by-band image reconstruction,” in Optical, Infrared, and Millimeter Space Telescopes, J. C. Mather, ed., Proc. SPIE 5487-68 (2004).
20. S. T. Thurman and J. R. Fienup, “Reconstruction of multispectral image cubes from multiple-telescope array Fourier transform imaging spectrometer,” presented at Frontiers in Optics, Rochester, New York, 10–14 Oct. 2004, paper FTuB3.

For particular systems, the imaging properties can be improved significantly by introducing multiple OPD’s between the subapertures for each intensity measurement instead of a single OPD. This technique offers the ability to collect spectral data over a larger area of the spatial frequency plane, but it has two significant trade-offs: (i) the spectral bandwidth of the system must be limited, and (ii) the spectral resolution varies with spatial frequency.

Appendix

The multi-aperture FTIS is based on temporal coherence effects, but the role of spatial coherence effects may not be immediately obvious. For this reason, this section presents a derivation of Eq. (2) based on partial coherence theory. The cross-spectral density function is propagated through the system using Fresnel-like transforms and generalized transmission functions. An expression for the image intensity is given for a general partially coherent object, which is then simplified for a spatially incoherent object. The final result shows that spatial coherence effects do not play a role in the measurements.

The cross-spectral density in a plane z = constant is defined in Section 4.3.2 of Ref. [21] as

$$W^{(z)}(x_1,y_1,x_2,y_2,\nu)\,\delta(\nu-\nu') = \bigl\langle V(x_1,y_1,\nu)\,V^*(x_2,y_2,\nu')\bigr\rangle, \tag{24}$$

where V(x,y,ν) is the generalized temporal Fourier transform of the analytic-signal representation of the scalar electric field at the point (x,y) in the plane of interest. Note that this definition is the complex conjugate of the quantity in Ref. [21], in order to conform to the convention of Refs. [22,23]. The cross-spectral density obeys two Helmholtz equations and can be propagated from a plane z = 0 to a plane z = d > 0 by two applications of Rayleigh's first diffraction formula (see Sec. 4.4.2 of Ref. [21]). Making the standard paraxial physical-optics approximations, the propagation equation can be written in the form

$$W^{(d)}(x_1,y_1,x_2,y_2,\nu) = \frac{1}{\lambda^2 d^2} \iint dx_1'\,dy_1' \iint dx_2'\,dy_2'\; W^{(0)}(x_1',y_1',x_2',y_2',\nu)$$
$$\times \exp\!\left\{ \frac{i\pi}{\lambda d}\left[ (x_1-x_1')^2 + (y_1-y_1')^2 - (x_2-x_2')^2 - (y_2-y_2')^2 \right] \right\}, \tag{25}$$

where W^(0)(x1,y1,x2,y2,ν) and W^(d)(x1,y1,x2,y2,ν) represent the cross-spectral densities in the planes z = 0 and z = d, respectively, the distance between the planes is assumed to be many optical wavelengths (d ≫ λ), and the Fresnel approximation [24] has been used.
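A direct 1-D discretization of this propagation rule (illustrative parameters only; a sketch, not the authors' computation) applies one paraxial kernel per coordinate of the cross-spectral density. One sanity check: for a fully coherent input, W = V V* is rank one, and it remains rank one after propagation:

```python
import numpy as np

lam, d = 0.5e-6, 1.0                     # wavelength and distance [m] (hypothetical)
x = np.linspace(-1e-3, 1e-3, 200)        # transverse grid [m]
dx = x[1] - x[0]

# Fully coherent Gaussian input: W0(x1, x2) = V(x1) V*(x2) is rank one.
V0 = np.exp(-(x / 2e-4) ** 2)
W0 = np.outer(V0, V0.conj()).astype(complex)

# 1-D analogue of Eq. (25): one paraxial (Fresnel) kernel per coordinate,
# Wd(x1, x2) = dx^2 * sum_{x1', x2'} K(x1, x1') W0(x1', x2') K*(x2, x2').
K = np.exp(1j * np.pi / (lam * d) * (x[:, None] - x[None, :]) ** 2) \
    / np.sqrt(1j * lam * d)
Wd = dx ** 2 * (K @ W0 @ K.conj().T)

# Coherence is preserved: Wd is still (numerically) rank one, so the largest
# singular value carries essentially all of the energy.
s = np.linalg.svd(Wd, compute_uv=False)
rank1_fraction = s[0] / s.sum()
```

The rank-one structure surviving propagation is the discrete counterpart of the fact that Eq. (25) applies the same diffraction formula independently to each coordinate pair of W.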

The concept of a generalized pupil function for the scalar optical field can be applied to the cross-spectral density. If T(x,y,ν) describes the complex amplitude transmission in the plane z = 0, such that

$$V_{\rm trans}(x,y,\nu) = T(x,y,\nu)\,V_{\rm inc}(x,y,\nu), \tag{26}$$

where V_inc(x,y,ν) represents the field incident from the half-space z < 0 and V_trans(x,y,ν) represents the field transmitted into the half-space z > 0, then by substitution into Eq. (24), one can write

$$W^{(0)}_{\rm trans}(x_1,y_1,x_2,y_2,\nu) = T(x_1,y_1,\nu)\,T^*(x_2,y_2,\nu)\,W^{(0)}_{\rm inc}(x_1,y_1,x_2,y_2,\nu), \tag{27}$$

where W^(0)_inc(x1,y1,x2,y2,ν) and W^(0)_trans(x1,y1,x2,y2,ν) represent the incident and transmitted cross-spectral densities, respectively. The standard transmission function for a lens is given in Section 5.1 of Ref. [24], and the transmission function for the pupil plane, T_pup(ξ,η,ν,τ), is given in Eq. (1). Note that the pupil transmission function is written explicitly as a function of the time-delay variable.
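In a sampled picture, Eq. (27) is an elementwise modulation of the incident cross-spectral density by the outer product T(x1)T*(x2). A minimal 1-D sketch (hypothetical two-subaperture pupil mask, not the system of Fig. 2):

```python
import numpy as np

x = np.linspace(-1e-2, 1e-2, 400)        # pupil-plane coordinate [m]

# Hypothetical two-subaperture mask: two slits, the right one with a phase delay.
t = (np.abs(x + 5e-3) < 1e-3).astype(complex) \
    + (np.abs(x - 5e-3) < 1e-3) * np.exp(1j * 0.5)

# Incident cross-spectral density at one frequency nu (uniform, fully coherent).
W_inc = np.ones((x.size, x.size), dtype=complex)

# Eq. (27): W_trans(x1, x2) = T(x1) T*(x2) W_inc(x1, x2), an elementwise product
# of W_inc with the outer product of the transmission function.
W_trans = np.outer(t, t.conj()) * W_inc
```

On the diagonal x1 = x2 this reduces to |T|² times the incident spectral density, i.e., the transmitted intensity spectrum, as expected.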

The cross-spectral density is propagated through the multi-aperture FTIS system shown in Fig. 2 by repeated application of Eqs. (25) and (27). After simplification, the cross-spectral density in the image plane, W^(i)(x1,y1,x2,y2,ν,τ), can be expressed as

$$W^{(i)}(x_1,y_1,x_2,y_2,\nu,\tau) = \iint dx_1'\,dy_1' \iint dx_2'\,dy_2'\; \frac{1}{M^2}\, W^{(o)}\!\left(\frac{x_1'}{M},\frac{y_1'}{M},\frac{x_2'}{M},\frac{y_2'}{M},\nu\right)$$
$$\times \exp\!\left[\frac{i\pi}{\lambda f_o}\left(1-\frac{d_1}{f_o}\right)\left(\frac{x_1'^2+y_1'^2-x_2'^2-y_2'^2}{M^2}\right)\right]$$
$$\times \exp\!\left[\frac{i\pi}{\lambda f_i}\left(1-\frac{d_2}{f_i}\right)\left(x_1^2+y_1^2-x_2^2-y_2^2\right)\right]$$
$$\times \sum_{q=1}^{Q}\sum_{p=1}^{Q} t_q(x_1-x_1',y_1-y_1',\nu)\,t_p^*(x_2-x_2',y_2-y_2',\nu)\exp\!\left[i2\pi\nu(\gamma_q-\gamma_p)\tau\right], \tag{28}$$

and the image-plane intensity is obtained by setting x1 = x2 = x and y1 = y2 = y and integrating over frequency,

$$I(x,y,\tau) = \int W^{(i)}(x,y,x,y,\nu,\tau)\,d\nu. \tag{29}$$

For a spatially incoherent object, the object spectral density can be written as [25]

$$W^{(o)}(x_1',y_1',x_2',y_2',\nu) = \kappa\, S_o(x_1',y_1',\nu)\,\delta(x_1'-x_2',\,y_1'-y_2'), \tag{30}$$

where S_o(x′,y′,ν) is the spectral density of the object and κ = λ²/π for a perfectly incoherent object. Substituting Eqs. (28) and (30) into Eq. (29) and simplifying yields Eq. (2).
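The practical content of Eqs. (28)–(30) is that, for an incoherent object, the intensity recorded as a function of τ is a bias plus cosine fringes whose temporal frequencies are proportional to ν, so the standard Fourier post-processing recovers the spectrum. A scalar two-beam sketch (hypothetical spectrum and scan parameters, not the full multi-aperture model):

```python
import numpy as np

# Hypothetical source spectrum: two Gaussian lines (arbitrary units).
nu = np.linspace(4.0e14, 6.0e14, 2000)        # optical frequency grid [Hz]
dnu = nu[1] - nu[0]
S = np.exp(-((nu - 4.5e14) / 2e12) ** 2) + 0.5 * np.exp(-((nu - 5.5e14) / 2e12) ** 2)

c = 2.998e8                                   # speed of light [m/s]
opd = np.linspace(0.0, 60e-6, 512)            # OPD scan [m]
tau = opd / c                                 # time-delay samples [s]

# Two-beam interferogram: spectral bias plus one cosine fringe per component,
# I(tau) = integral S(nu) [1 + cos(2 pi nu tau)] d nu.
I = np.sum(S[None, :] * (1.0 + np.cos(2 * np.pi * nu[None, :] * tau[:, None])),
           axis=1) * dnu

# Standard Fourier post-processing: FFT of the bias-subtracted interferogram.
spectrum = np.abs(np.fft.rfft(I - I.mean()))
freqs = np.fft.rfftfreq(tau.size, d=tau[1] - tau[0])
peak = freqs[np.argmax(spectrum)]             # should fall near the 4.5e14 Hz line
```

The recovered spectral resolution is set by the maximum time delay, here roughly 1/τ_max ≈ 5 × 10¹² Hz, consistent with the usual FTIS resolution-versus-scan-length trade.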

Acknowledgment

This work was supported by Lockheed Martin Corporation.

References and Links

1. J. S. Fender, "Synthetic apertures: an overview," in Synthetic Aperture Systems, J. S. Fender, ed., Proc. SPIE 440, 2–7 (1983).
2. S.-J. Chung, D. W. Miller, and O. L. de Weck, "Design and implementation of sparse aperture imaging systems," in Highly Innovative Space Telescope Concepts, H. A. MacEwen, ed., Proc. SPIE 4849, 181–191 (2002).
3. D. Redding, S. Basinger, A. E. Lowman, A. Kissil, P. Bely, R. Burg, and R. Lyon, "Wavefront sensing for a next generation space telescope," in Space Telescopes and Instruments V, P. Y. Bely and J. B. Breckinridge, eds., Proc. SPIE 3356, 758–772 (1998).
4. R. L. Kendrick, A. L. Duncan, and R. Sigler, "Imaging Fizeau interferometer: experimental results," presented at Frontiers in Optics, Tucson, Arizona, 5–9 Oct. 2003 (post-deadline paper 15).
5. R. G. Paxman, T. J. Schultz, and J. R. Fienup, "Joint estimation of object and aberrations by using phase diversity," J. Opt. Soc. Am. A 9, 1072–1085 (1992).
6. J. R. Fienup, "MTF and integration time versus fill factor for sparse-aperture imaging systems," in Imaging Technologies and Telescopes, J. W. Bilbro et al., eds., Proc. SPIE 4091, 43–47 (2000).
7. J. R. Fienup, D. Griffith, L. Harrington, A. M. Kowalczyk, J. J. Miller, and J. A. Mooney, "Comparison of reconstruction algorithms for images from sparse-aperture systems," in Image Reconstruction from Incomplete Data II, P. J. Bones et al., eds., Proc. SPIE 4792, 1–8 (2002).
8. J. Kauppinen and J. Partanen, Fourier Transforms in Spectroscopy (Wiley-VCH, Berlin, 2001).
9. N. J. E. Johnson, "Spectral imaging with the Michelson interferometer," in Infrared Imaging Systems Technology, Proc. SPIE 226, 2–9 (1980).
10. C. L. Bennett, M. Carter, D. Fields, and J. Hernandez, "Imaging Fourier transform spectrometer," in Imaging Spectrometry of the Terrestrial Environment, G. Vane, ed., Proc. SPIE 1937, 191–200 (1993).
11. M. R. Carter, C. L. Bennett, D. J. Fields, and F. D. Lee, "Livermore imaging Fourier transform infrared spectrometer," in Imaging Spectrometry, M. R. Descour, J. M. Mooney, D. L. Perry, and L. R. Illing, eds., Proc. SPIE 2480, 380–386 (1995).
12. K. Itoh and Y. Ohtsuka, "Fourier transform spectral imaging: retrieval of source information from three-dimensional spatial coherence," J. Opt. Soc. Am. A 3, 94–100 (1986).
13. J.-M. Mariotti and S. T. Ridgeway, "Double Fourier spatio-spectral interferometry: combining high spectral and high spatial resolution in the near infrared," Astron. Astrophys. 195, 350–363 (1988).
14. M. Frayman and J. A. Jamieson, "Scene imaging and spectroscopy using a spatial spectral interferometer," in Amplitude and Intensity Spatial Interferometry, J. B. Breckinridge, ed., Proc. SPIE 1237, 585–603 (1990).
15. R. L. Kendrick, E. H. Smith, and A. L. Duncan, "Imaging Fourier transform spectrometry with a Fizeau interferometer," in Interferometry in Space, M. Shao, ed., Proc. SPIE 4852, 657–662 (2003).
16. Provided through the courtesy of the Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California (http://aviris.jpl.nasa.gov/).
17. C. W. Helstrom, "Image restoration by the method of least squares," J. Opt. Soc. Am. 57, 297–303 (1967).
18. B. R. Hunt, "Super-resolution of images: algorithms, principles, performance," Int. J. Imaging Syst. Technol. 6, 297–304 (1995).
19. S. T. Thurman and J. R. Fienup, "Fourier transform imaging spectroscopy with a multiple-aperture telescope: band-by-band image reconstruction," in Optical, Infrared, and Millimeter Space Telescopes, J. C. Mather, ed., Proc. SPIE 5487-68 (2004).
20. S. T. Thurman and J. R. Fienup, "Reconstruction of multispectral image cubes from multiple-telescope array Fourier transform imaging spectrometer," presented at Frontiers in Optics, Rochester, New York, 10–14 Oct. 2004, paper FTuB3.
21. L. Mandel and E. Wolf, Optical Coherence and Quantum Optics (Cambridge University Press, Cambridge, 1995).
22. M. Born and E. Wolf, Principles of Optics, 7th (expanded) ed. (Cambridge University Press, Cambridge, 2002), Sec. 10.2.
23. J. W. Goodman, Statistical Optics (Wiley, New York, 2000), Sec. 3.5.
24. J. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, New York, 1996).
25. M. J. Beran and G. B. Parrent, Jr., "The mutual coherence of incoherent radiation," Nuovo Cimento 27, 1049–1065 (1963).

OCIS Codes
(070.2580) Fourier optics and signal processing : Paraxial wave optics
(110.2990) Imaging systems : Image formation theory
(110.4850) Imaging systems : Optical transfer functions
(110.6770) Imaging systems : Telescopes
(120.3180) Instrumentation, measurement, and metrology : Interferometry
(120.6200) Instrumentation, measurement, and metrology : Spectrometers and spectroscopic instrumentation
(300.6300) Spectroscopy : Spectroscopy, Fourier transforms

ToC Category:
Research Papers

History
Original Manuscript: January 5, 2005
Revised Manuscript: March 10, 2005
Published: March 21, 2005

Citation
Samuel Thurman and James Fienup, "Multi-aperture Fourier transform imaging spectroscopy: theory and imaging properties," Opt. Express 13, 2160-2175 (2005)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-13-6-2160




Supplementary Material


» Media 1: AVI (455 KB)     
» Media 2: AVI (582 KB)     
» Media 3: AVI (746 KB)     
