Optics Express

  • Editor: Andrew M. Weiner
  • Vol. 21, Iss. 9 — May 6, 2013
  • pp. 10511–10525
Wigner function measurement using a lenslet array

Lei Tian, Zhengyun Zhang, Jonathan C. Petruccelli, and George Barbastathis


http://dx.doi.org/10.1364/OE.21.010511



Abstract

Geometrical–optical arguments have traditionally been used to explain how a lenslet array measures the distribution of light jointly over space and spatial frequency. Here, we rigorously derive the connection between the intensity measured by a lenslet array and wave–optical representations of such light distributions for partially coherent optical beams by using the Wigner distribution function (WDF). It is shown that the action of the lenslet array is to sample a smoothed version of the beam’s WDF (SWDF). We consider the effect of lenslet geometry and coherence properties of the beam on this measurement, and we derive an expression for cross–talk between lenslets that corrupts the measurement. Conditions for a high fidelity measurement of the SWDF and the discrepancies between the measured SWDF and the WDF are investigated for a Schell–model beam.

© 2013 OSA

1. Introduction

In order to extract the most information possible from an optical field, one can look for complete descriptions of the wave properties of a field that allow physically measurable quantities (e.g. intensity, energy density, Poynting vector) to be calculated over a region of free space. For most propagating optical fields, a description on some plane transverse to the optical axis is sufficient to determine the rest of the field; we will denote position on the plane by r = (x, y) and position along the optical axis by z. For monochromatic coherent light, the complex–valued field U(r; z) provides such a description. For a quasi–monochromatic partially coherent field, a common description is the mutual intensity, J(r1, r2; z), i.e. the statistical correlation of the field at all possible pairs of points over a plane [1]. The mutual intensity can be written as
J(\mathbf{r}_1,\mathbf{r}_2;z)=\langle U(\mathbf{r}_1;z)\,U^{*}(\mathbf{r}_2;z)\rangle,
(1)
where 〈·〉 denotes the ensemble average over statistical realizations of the field U, and the intensity is simply given by I(r) = J(r, r).
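The ensemble average in Eq. (1) can be made concrete numerically. The sketch below estimates a mutual intensity matrix from random field realizations and recovers the intensity from its diagonal; the Brownian-phase random process used to generate realizations is illustrative, not the paper's model.

```python
import numpy as np

# Estimate J(x1, x2) = <U(x1) U*(x2)> from an ensemble of random realizations,
# per Eq. (1), on a 1D grid. The phase-screen model below is purely illustrative.
rng = np.random.default_rng(0)
n_points, n_realizations = 64, 2000
x = np.linspace(-1.0, 1.0, n_points)
dx = x[1] - x[0]

# Unit-modulus fields with Brownian (cumulative random-walk) phase; the
# "coherence scale" sigma_c here is an assumed parameter of this toy model.
sigma_c = 0.2
phases = np.cumsum(rng.normal(0.0, np.sqrt(dx) / sigma_c,
                              (n_realizations, n_points)), axis=1)
fields = np.exp(1j * phases)

# J[i, j] = <U(x_i) U*(x_j)>, averaged over realizations.
J = fields.T @ fields.conj() / n_realizations

# The intensity is the diagonal of J: I(x) = J(x, x) (exactly 1 here, since
# each realization has unit modulus).
I = np.real(np.diag(J))
```

The decay of |J(x1, x2)| away from the diagonal then gives a direct picture of the coherence width of the simulated ensemble.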

The mutual intensity may be measured directly by using interferometry: two sheared versions of a field are overlapped in a Young [2], Michelson stellar [3] or rotational shear [4, 5] arrangement, and the fringe visibility gives the mutual intensity. However, interferometric methods are subject to mechanical scanning stability and signal–to–noise limitations. Other techniques use the fact that the intensity at any point in space may be computed from a pair of propagation integrals acting on J. By measuring the intensity over many different defocused planes, these integrals may be inverted to recover J through a process known as phase space tomography [6–11].

An alternative to the mutual intensity for describing an optical field is the radiance, a function that characterizes the distribution of power over position and direction. Let the radiance at some plane z be denoted by B(r, p; z) where p = (px, py) is the transverse component of the unit direction vector. Propagation of radiance is based on geometric optics and is simple to calculate:
B(\mathbf{r},\mathbf{p};z)=B(\mathbf{r}-z\mathbf{p},\mathbf{p};0).
(2)
The intensity at any point is given by the integral of B over all directions
I(\mathbf{r};z)=\int B(\mathbf{r},\mathbf{p};z)\,d^{2}p=\int B(\mathbf{r}-z\mathbf{p},\mathbf{p};0)\,d^{2}p.
(3)
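Equations (2) and (3) amount to a shear of the radiance in phase space followed by a projection. A minimal numerical sketch in one transverse dimension, with an assumed Gaussian initial radiance:

```python
import numpy as np

# Eq. (2): free-space propagation shears B(x, p); Eq. (3): intensity is the
# integral over direction p. The Gaussian initial radiance is illustrative.
x = np.linspace(-5.0, 5.0, 201)
p = np.linspace(-0.5, 0.5, 101)
dx, dp = x[1] - x[0], p[1] - p[0]
X, P = np.meshgrid(x, p, indexing="ij")

B0 = lambda x_, p_: np.exp(-x_**2) * np.exp(-(p_ / 0.2) ** 2)

z = 3.0
B_z = B0(X - z * P, P)                  # Eq. (2): B(x, p; z) = B(x - z*p, p; 0)
I_0 = np.sum(B0(X, P), axis=1) * dp     # Eq. (3) at z = 0
I_z = np.sum(B_z, axis=1) * dp          # Eq. (3) at z = 3
```

The shear conserves total power while broadening the intensity profile, which is the geometric-optics picture of free-space spreading.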
Use of the radiance predates the wave theory of light, and it was initially described by assigning non-negative values to all trajectories coming from source points. Such descriptions are insufficient to model wave effects, since these trajectories contain no information about constructive or destructive interference. However, in certain situations these wave effects can be safely ignored, and a lenslet array can then be used to obtain an estimate of the radiance, since it allows joint measurement of the spatial and directional distribution of light, as in Shack-Hartmann sensors [12], integral imaging systems [13–15] and light field cameras [16, 17].

Fig. 1 Illustration of the measurement under a single lenslet of the SWDF in one spatial dimension (r0 = x0 and u = ux). (a) A lenslet centered at x0 maps points (x0, ux) in the SWDF domain to positions x0 + λfux at the lenslet’s Fourier plane. (b) A convolution of the aperture WDF with the incident WDF forms the SWDF.

Since our goal is to accurately measure the incident WDF, we can interpret the construction of the SWDF in Eq. (5) as smoothing the incident WDF by convolution with the aperture WDF, as illustrated in Fig. 1(b). Ideally, then, one would seek an aperture for which 𝒲p(r, u) ∝ δ(r)δ(u), where δ is the Dirac delta function, in order to measure the incident WDF with perfect resolution. However, it is easily verified from Eq. (4) that such an aperture WDF is physically impossible; uncertainty relationships place limits on the product of the widths of 𝒲p in space and spatial frequency. The finite extent of 𝒲p limits the resolution in space and spatial frequency of the measurement of 𝒲i. Despite these limitations, in many cases the SWDF itself provides a useful direct estimate of the WDF [24, 26]. However, if the exact WDF is required, Eq. (6) shows that recovering the WDF from the SWDF generally requires deconvolution [25].

A periodic array of lenslets enables measurements at different r0 simultaneously and removes the need to scan. The advantage of using a lenslet array is that a detector pixel under a given lenslet at position r0 measures a point (r0, u) in the SWDF domain according to the mapping shown in Fig. 1(a). Therefore the geometry of an array of lenslets can be tailored to measure a desired region of the SWDF domain. In Section 2 we rigorously derive the relationship between measured intensity and the SWDF of the incident field for an array of lenslets. For simplicity, we consider scalar fields in one spatial dimension. We demonstrate that, in general, the unique mapping implied by Eq. (5) no longer holds. We show that the intensity at a detector pixel in general contains light from multiple lenslets, which we call cross–talk. Accurate measurement of the SWDF requires minimizing this cross–talk. In addition, both fully incoherent and fully coherent cases can have considerable amounts of cross–talk. In Section 3, we illustrate tradeoffs between coherence and fidelity using a numerical example, showing that there exists an optimal “Goldilocks” regime for array pitch, given the coherence width of the input light, such that cross–talk is reduced to a minimum without the need for additional barriers to block light between lenslets. It is in this optimal regime that each detector pixel corresponds to a single point in the SWDF domain, allowing lenslet array systems to measure the SWDF with high accuracy. Since our goal is direct measurement of the field’s coherence properties, we also consider the discrepancy between the SWDF and WDF for this example.

2. Theory

We now present a rigorous analysis of a 1D field passing through a 1D lenslet array. This analysis can be easily extended to a 2D rectangular array; other configurations, such as a 2D hexagonal array, require straightforward modifications. Consider a quasi–monochromatic paraxial field with wavelength λ incident upon an ideal lenslet array with 100% fill factor, as illustrated in Fig. 2. A detector is placed at the back focal plane of the lenslet array, and we will refer to the region directly behind each lenslet on the detector as that lenslet’s detector cell. We assume an array of 2N + 1 identical unaberrated thin lenses, each of width w and focal length f. The transmittance function of such an array is given by
T(x)=\sum_{l=-N}^{N}\mathrm{rect}\!\left(\frac{x-lw}{w}\right)\exp\!\left[-\frac{i\pi}{\lambda f}(x-lw)^{2}\right],
(7)
where rect(·) denotes a rectangular function. In this configuration, we assign an integer index l to each lenslet, with the center lenslet having index l = 0; the N lenslets above and N lenslets below the center lenslet take on positive and negative values of l, respectively. Thus, the center of each lenslet is located at x = lw. We have assumed an odd number of lenslets to simplify notation, although the results we obtain can easily be extended to an even number of lenslets.
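The transmittance of Eq. (7) is straightforward to evaluate on a grid. A minimal sketch, using the parameter values from the numerical example of Section 3 (other values would work equally well):

```python
import numpy as np

# Sample the complex transmittance of Eq. (7): a (2N+1)-lenslet array of
# width w and focal length f, each lenslet a rect aperture with a quadratic
# phase centered at x = l*w. Parameters follow Section 3 of the paper.
lam, f, w, N = 500e-9, 5e-3, 330e-6, 2

def lenslet_array_transmittance(x):
    T = np.zeros_like(x, dtype=complex)
    for l in range(-N, N + 1):
        inside = np.abs(x - l * w) <= w / 2     # rect aperture of lenslet l
        T[inside] = np.exp(-1j * np.pi / (lam * f) * (x[inside] - l * w) ** 2)
    return T

x = np.linspace(-(N + 0.5) * w, (N + 0.5) * w, 4001)
T = lenslet_array_transmittance(x)
```

With a 100% fill factor, |T(x)| = 1 everywhere across the array, and the phase resets to zero at each lenslet center.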

Fig. 2 Lenslet array geometry.

The mutual intensity immediately to the right of the lenslet array is [1]
J_{1}\!\left(\bar{x}+\frac{x'}{2},\,\bar{x}-\frac{x'}{2}\right)=J_{i}\!\left(\bar{x}+\frac{x'}{2},\,\bar{x}-\frac{x'}{2}\right)T\!\left(\bar{x}+\frac{x'}{2}\right)T^{*}\!\left(\bar{x}-\frac{x'}{2}\right),
(8)
where x̄ and x′ are the center and difference coordinates, respectively, and Ji is the mutual intensity of the illumination immediately before the lenslet array; the subscript i indicates that the associated function describes properties of the incident field at the input plane, and we will use this notation throughout the rest of the manuscript.

As a stepping stone to the full relationship between the incident field and the observed intensity behind the lenslet array, we will first consider a simpler system wherein we scan through the lenslets. That is, instead of letting light pass simultaneously through all the lenslets while recording the intensity image, we only let light pass through one lenslet at a time, cycling through all the lenslets while still recording a single image. This removes the effect of cross–lenslet interference, whose derivation we will consider later.

According to Eq. (5), each measurement samples the SWDF over spatial frequency with position fixed at the lenslet’s center, lw. The aperture of each lenslet is a rect function of width w, and thus the aperture WDF is given by
\mathcal{W}_{p}(x,u)=\frac{\sin[2\pi u(w-2|x|)]}{\pi u}\,\mathrm{rect}\!\left(\frac{x}{w}\right).
(9)
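The closed form of Eq. (9) can be checked against the defining Wigner integral of the rect aperture. A sketch, with w = 1 assumed for convenience:

```python
import numpy as np

# Check Eq. (9): the WDF of a rect aperture of width w equals
# sin[2*pi*u*(w - 2|x|)]/(pi*u) for |x| < w/2, and 0 outside.
w = 1.0

def aperture_wdf_closed_form(x, u):
    L = np.clip(w - 2 * np.abs(x), 0.0, None)   # overlap length of the two rects
    # Take the u -> 0 limit (value 2L) when u is essentially zero.
    return np.sin(2 * np.pi * u * L) / (np.pi * u) if abs(u) > 1e-12 else 2 * L

def aperture_wdf_numeric(x, u, n=4096):
    # W_p(x, u) = integral of rect((x+s/2)/w) rect((x-s/2)/w) e^{-2 pi i u s} ds
    s = np.linspace(-2 * w, 2 * w, n)
    ds = s[1] - s[0]
    rect = lambda t: (np.abs(t) <= 0.5).astype(float)
    integrand = rect((x + s / 2) / w) * rect((x - s / 2) / w) * np.exp(-2j * np.pi * u * s)
    return float(np.real(np.sum(integrand) * ds))

for x0, u0 in [(0.0, 0.3), (0.2, 1.1), (-0.35, 0.7)]:
    assert abs(aperture_wdf_numeric(x0, u0) - aperture_wdf_closed_form(x0, u0)) < 1e-2
```

Note that this WDF takes negative values for some (x, u), which is precisely why it cannot behave like the ideal δ(r)δ(u) sampling kernel discussed above.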
The total intensity at the detector plane is given by
I_{0}(x_{o})=\frac{1}{\lambda f}\sum_{l=-N}^{N}S[\mathcal{W}_{i},\mathcal{W}_{p}]\!\left(lw,\,\frac{x_{o}-lw}{\lambda f}\right),
(10)
where \frac{1}{\lambda f}S[\mathcal{W}_{i},\mathcal{W}_{p}]\left(lw,\,\frac{x_{o}-lw}{\lambda f}\right) is the contribution of light through a single lenslet.

It is clear from this equation that the SWDF is sampled spatially at intervals of w, the spacing of the lenslet centers. The sampling rate along the spatial frequency axis in the SWDF is determined by both the detector pixel size and the linear mapping u = (xo − lw)/(λf) between detector coordinate xo and spatial frequency coordinate u. This mapping follows from the fact that, in the small-angle approximation, (xo − lw)/f equals the angle between the ray reaching the detector pixel at xo and the optical axis of the lth lenslet. Note that if the angular spread of the SWDF is large enough, each detector cell will include contributions to intensity not only from the SWDF associated with its lenslet, but also from neighboring lenslets. This can be prevented by increasing the size of the lenslets or by decreasing the angular spread of the incident field, either by placing a main lens with finite numerical aperture in front of the array [17] or by inserting physical barriers between lenslets [27]. If we assume that each detector cell measures only light from its associated lenslet, then the detected intensity takes the form
I_{\mathrm{SWDF}}(x_{o})=\frac{1}{\lambda f}\sum_{l=-N}^{N}S[\mathcal{W}_{i},\mathcal{W}_{p}]\!\left(lw,\,\frac{x_{o}-lw}{\lambda f}\right)\mathrm{rect}\!\left(\frac{x_{o}-lw}{w}\right).
(11)
We refer to this expression as ISWDF because the intensity measured at xo maps uniquely to the point [l̂w, (xo − l̂w)/(λf)] in the SWDF domain, where l̂ is xo/w rounded to the nearest integer.
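This pixel-to-phase-space mapping is simple to implement. A sketch, again borrowing the Section 3 parameters (w = 330 μm, f = 5 mm, λ = 500 nm):

```python
import numpy as np

# Map a detector coordinate x_o to its SWDF point (l_hat*w, (x_o - l_hat*w)/(lam*f)),
# where l_hat is x_o/w rounded to the nearest integer. Parameters from Section 3.
lam, f, w = 500e-9, 5e-3, 330e-6

def pixel_to_swdf_point(x_o):
    l_hat = int(np.round(x_o / w))        # index of the lenslet covering x_o
    u = (x_o - l_hat * w) / (lam * f)     # local spatial frequency coordinate
    return l_hat * w, u

# A pixel at the center of a lenslet maps to u = 0; a pixel near the edge of a
# detector cell approaches the maximum unambiguous frequency w/(2*lam*f).
x_c, u_c = pixel_to_swdf_point(1 * w)
_, u_edge = pixel_to_swdf_point(1.4999 * w)
```

With these numbers, w/(λf) evaluates to 0.132 μm⁻¹, matching the spatial frequency support u0 quoted in the numerical example.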

To demonstrate the sampling described by Eqs. (10)–(12), an array containing three lenslets (centered at −w, 0, w) is shown in Fig. 3. According to Eq. (10), three line samples of the SWDF, taken parallel to the u–axis at spatial coordinates −w, 0, and w, are mapped to the detector plane (marked by different colors in Fig. 3). To ensure a one–to–one mapping, the maximum spatial frequency um of the lth line sample cannot exceed w/(2λf), as shown in case (a); otherwise, the points (lw, um) and [(l + 1)w, um − w/(λf)] in the SWDF domain will be measured by the same detector pixel at xo = lw + λfum, as shown in case (b).

Fig. 3 Sampling of the SWDF using an array of three lenslets. (a) One–to–one mapping from the SWDF to the detector coordinate according to u = (xo − lw)/(λf) when the angular spread of the SWDF is narrower than the numerical aperture of a lenslet. (b) Multiple points in the SWDF domain contribute to detector pixels in the cross–talk region when the angular spread of the incident field is wider than the numerical aperture of a lenslet, which produces the 0th order cross–talk.

So far, we have only considered the incoherent superposition of light from all of the lenslets, whereas light passing through all of the lenslets simultaneously should create additional interference terms. Since light from lenslets separated by a distance greater than the incident field’s coherence width will not create appreciable interference when mixed, it is useful to enumerate these cross–talk terms with an index n proportional to the lenslet separation. All possible pairs of lenslets with indices l′ and l″ such that |l′ − l″| = n > 0 contribute to the nth order cross–talk term Ic(n), given by
I_{c}^{(n)}(x_{o})=\frac{2}{\lambda f}\sum_{l=-N+\frac{n}{2}}^{N-\frac{n}{2}}\iint \mathcal{W}_{i}(\bar{x},u)\,\mathcal{W}_{p}\!\left(\bar{x}-lw,\,u-\frac{x_{o}-lw}{\lambda f}\right)\cos\!\left[2\pi\left(\frac{\bar{x}-x_{o}}{\lambda f}+u\right)nw\right]d\bar{x}\,du.
(13)
Note that when n is odd, l takes values halfway between two integers, and thus 𝒲p is centered at the edge between the (l − 1/2)th and (l + 1/2)th lenslets; when n is even, l takes integer values, and thus 𝒲p is centered at the lth lenslet. We expect the n = 1 term to be significant even in highly incoherent fields, since some points near the boundary between two neighboring lenslets are expected to lie within the coherence width of the field.
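The index bookkeeping behind Eq. (13) can be verified directly: every lenslet pair (l′, l″) with separation n has a midpoint l = (l′ + l″)/2, half-integer for odd n and integer for even n. A minimal sketch of this enumeration (indices only, no optics):

```python
# Enumerate lenslet pairs (l', l'') with separation n = l' - l'' > 0 for a
# (2N+1)-lenslet array, grouping their midpoints l = (l' + l'')/2 by order n.
N = 2
pairs_by_order = {}
for lp in range(-N, N + 1):
    for lpp in range(-N, N + 1):
        n = lp - lpp
        if n > 0:
            pairs_by_order.setdefault(n, []).append((lp + lpp) / 2)

# The midpoints of order n run from -N + n/2 to N - n/2, matching the
# summation limits of Eq. (13); odd n gives half-integer midpoints.
for n, centers in pairs_by_order.items():
    assert min(centers) == -N + n / 2 and max(centers) == N - n / 2
```

The highest order is n = 2N (the two outermost lenslets), which contributes a single term centered at l = 0.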

The total output intensity, considering all of the discussed effects, can be written as the sum of three components
I(x_{o})=I_{\mathrm{SWDF}}(x_{o})+I_{c}^{(0)}(x_{o})+\sum_{n=1}^{2N}I_{c}^{(n)}(x_{o}).
(14)
A detailed derivation of this result, obtained by performing Fresnel propagation integrals on Eq. (8), is given in Appendix B. Equation (14) demonstrates that if all orders of the cross–talk could be made small, then the measured intensity would be an accurate representation of the SWDF. For the cross–talk to be negligible, the angular spread of the SWDF should be small [for Ic(0)(xo)] and the coherence width should be less than the width of a single lenslet [for Ic(n)(xo)]. In order to optimally measure the SWDF, the angular and coherence widths of the SWDF should be balanced so that as much of each lenslet’s detector cell is utilized as possible while minimizing cross–talk. It should also be noted that even with minimal cross–talk, the measurement yields only the SWDF; recovery of the mutual intensity (or WDF) of the field still requires deconvolution of the SWDF with the aperture WDF.

3. Numerical example

We study the effect of coherence width on the quality of the resulting measurement by the following example. Let us consider a spatially homogeneous incident field of wide enough spatial extent compared to the width of the lenslet array that it can be approximated as infinitely wide. The transverse coherence of the field is described using a Gaussian–correlated Schell model, such that the mutual intensity is
J_{i}(x_{1},x_{2})=\exp\!\left[-\frac{(x_{2}-x_{1})^{2}}{2\sigma_{c}^{2}}\right].
(15)
The coherence width is proportional to the standard deviation σc of the coherence term. The WDF of the incident field is therefore independent of x̄ and is given by
\mathcal{W}_{i}(\bar{x},u)=\frac{1}{\sqrt{2\pi}\,\sigma_{u}}\exp\!\left[-\frac{u^{2}}{2\sigma_{u}^{2}}\right],
(16)
where σu = 1/(2πσc) quantifies the spatial frequency bandwidth of the WDF and is proportional to the angular spread of the field. The SWDF resulting from the convolution between the WDF of the input field and that of a rectangular aperture is
S[\mathcal{W}_{i},\mathcal{W}_{p}](\bar{x},u)=\frac{w^{2}}{\sqrt{2\pi}\,\sigma_{u}}\int \exp\!\left[-\frac{(u-u')^{2}}{2\sigma_{u}^{2}}\right]\left[\frac{\sin(\pi w u')}{\pi w u'}\right]^{2}du'.
(17)
Therefore the SWDF term is
I_{\mathrm{SWDF}}(x_{o})=\frac{w^{2}}{\sqrt{2\pi}\,\sigma_{u}\lambda f}\sum_{l=-N}^{N}\mathrm{rect}\!\left(\frac{x_{o}-lw}{w}\right)\int \exp\!\left\{-\frac{\left[(x_{o}-lw)/\lambda f-u\right]^{2}}{2\sigma_{u}^{2}}\right\}\left[\frac{\sin(\pi w u)}{\pi w u}\right]^{2}du,
(18)
and the 0th order cross–talk term is
I_{c}^{(0)}(x_{o})=\frac{w^{2}}{\sqrt{2\pi}\,\sigma_{u}\lambda f}\sum_{l=-N}^{N}\left[1-\mathrm{rect}\!\left(\frac{x_{o}-lw}{w}\right)\right]\int \exp\!\left\{-\frac{\left[(x_{o}-lw)/\lambda f-u\right]^{2}}{2\sigma_{u}^{2}}\right\}\left[\frac{\sin(\pi w u)}{\pi w u}\right]^{2}du.
(19)
The nth order cross–talk term, obtained by carrying out the integration in Eq. (13), is
I_{c}^{(n)}(x_{o})=\frac{2w^{2}}{\sqrt{2\pi}\,\sigma_{u}\lambda f}\sum_{l=-N+\frac{n}{2}}^{N-\frac{n}{2}}\int \exp\!\left\{-\frac{\left[(x_{o}-lw)/\lambda f-u\right]^{2}}{2\sigma_{u}^{2}}\right\}\cos(2\pi n w u)\,\frac{\sin[\pi w(nu_{0}+2u)/2]}{\pi w(nu_{0}+2u)/2}\,\frac{\sin[\pi w(nu_{0}-2u)/2]}{\pi w(nu_{0}-2u)/2}\,du,
(20)
where u0 = w/(λf) is the spatial frequency support of a single lenslet. The total output intensity measured at a detector pixel is the sum of the SWDF and all cross–talk terms, as given in Eq. (14).
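The SWDF term of Eq. (18) can be evaluated by simple quadrature. A sketch for the partially coherent case (σc = 0.1w) with the simulation parameters given below; the integration grid and its truncation at ±6σu are assumptions of this sketch:

```python
import numpy as np

# Evaluate the SWDF term of Eq. (18) for the Gaussian Schell-model example.
# Parameters follow Section 3; the quadrature grid is an assumption.
lam, f, w, N = 500e-9, 5e-3, 330e-6, 2
sigma_c = 0.1 * w                        # partially coherent case
sigma_u = 1.0 / (2 * np.pi * sigma_c)    # spatial-frequency bandwidth

u = np.linspace(-6 * sigma_u, 6 * sigma_u, 2001)
du = u[1] - u[0]
sinc2 = np.sinc(w * u) ** 2              # np.sinc(t) = sin(pi*t)/(pi*t)

def I_swdf(x_o):
    total = 0.0
    for l in range(-N, N + 1):
        if abs(x_o - l * w) <= w / 2:    # rect[(x_o - l*w)/w]
            g = np.exp(-((x_o - l * w) / (lam * f) - u) ** 2 / (2 * sigma_u**2))
            total += np.sum(g * sinc2) * du
    return w**2 / (np.sqrt(2 * np.pi) * sigma_u * lam * f) * total

x_det = np.linspace(-2.5 * w, 2.5 * w, 501)
profile = np.array([I_swdf(xd) for xd in x_det])
```

The resulting profile peaks at each lenslet center and falls off toward the cell edges, mirroring row (b) of Fig. 4; the cross-talk terms of Eqs. (19)–(20) would be added analogously.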

Three different cases with varying coherence widths are simulated based on the results in Eqs. (17)–(20). In the simulation, the wavelength of the incident field is 500 nm. Five lenslets are used in the array, each having width w = 330 μm and focal length f = 5 mm, yielding a spatial frequency support of u0 = 0.132 μm−1. The simulation results are shown in Fig. 4. For all three cases, the total output intensity in row (a) is composed of the SWDF term in row (b) and the total contribution of cross–talk in row (c). The total cross–talk is further analyzed by decomposing it into the 0th order term in row (d) and the total of higher order terms in row (e). Simulations on arrays with larger numbers of lenslets were also conducted; the results are not shown here because they are very similar to those in Fig. 4.

Fig. 4 Left: highly incoherent; middle: highly coherent; and right: partially coherent case. (a) Total output intensity is composed of (b) SWDF term and (c) total contribution from cross–talk terms. The total cross–talk is composed of (d) 0th order cross–talk and (e) total of higher order cross–talk. All the intensities are normalized to the maximum value in the total output. The horizontal axis is the spatial coordinate normalized by the width of a lenslet.

In the highly incoherent case shown in the left column (σc = 0.01w), higher order cross–talk is minimal. However, due to the large angular spread of the incident field, the measurement is corrupted by 0th order cross–talk. The opposite holds in the highly coherent case, shown in the middle column (σc = 20w): here, most of the cross–talk comes from higher order terms. The results for a partially coherent field (σc = 0.1w) are shown in the right column; cross–talk contributes minimally to the final intensity, although both 0th order and higher order terms are present.

Notice that although the SWDF itself is homogeneous in x, the intensities measured in Fig. 4 are not identical under each lenslet. The reason for this is cross–talk. As expressed in Eq. (14), cross–talk under a given lenslet results from light passing through neighboring lenslets. The number of neighboring lenslets contributing to the cross–talk depends on both the coherence width and the angular spread of the field. For a lenslet near the edge of the array, there are few neighbors in the direction of the edge, meaning less cross–talk from that direction. Since our simulated lenslet array consists of only 5 lenslets for simplicity, at least 2 of the lenslets are “edge lenslets,” and depending on the coherence width and angular spread, edge effects may influence all lenslets. Practical lenslet arrays are likely to contain considerably more lenslets, and generally most lenslets will have minimal contributions from edge effects. Because of this, in comparing the measured intensity to the WDF and SWDF of the incident field, we consider only the intensity under the central lenslet in our simulated array.

The spatial position under the central lenslet maps into the SWDF domain according to (x = 0, u = xo/λf). In Fig. 5 we compare the total output intensity under the central lenslet (dotted green lines) to the corresponding slice (u = xo/λf) of the SWDF (dashed blue lines) for each of the three simulated fields. The matching slice of the WDF (solid red lines) illustrates the effect of aperture convolution that generates the SWDF. In both the highly incoherent and partially coherent cases, the SWDF and WDF are very similar, since the WDF of the aperture is much smaller than any variations in the incident WDFs. In the highly coherent case, the incident WDF is narrower in u than the aperture WDF, and therefore the SWDF is significantly broadened by the convolution. In order to recover the WDF from the measured intensity, deconvolution is necessary [25].

Fig. 5 Comparison of WDF (solid red line), SWDF (dashed blue lines) and total output intensity (dotted green lines) for (a) highly incoherent (σc = 0.01w), (b) highly coherent (σc = 20w), and (c) partially coherent (σc = 0.1w) incident light.

We quantify how close the lenslet measurement is to the WDF of the incident field by comparing the total output intensity under the central lenslet to the incident WDF at those (x, u) coordinates using an error metric Rerror:
R_{\mathrm{error}}=\frac{\int\left|\,\text{total output intensity}-\text{incident WDF}\,\right|^{2}}{\int\left|\,\text{total output intensity}\,\right|^{2}}.
(21)
The total output intensity contains contributions both from the SWDF and from cross–talk. To compare the relative importance of these contributions, we quantify the cross–talk corruption in the output through the cross–talk intensity fraction Rcross–talk as
R_{\mathrm{cross\text{-}talk}}=\frac{\int\left|\,\text{total intensity in all cross-talk terms}\,\right|^{2}}{\int\left|\,\text{total output intensity}\,\right|^{2}}.
(22)
The SWDF itself is a smoothed version of the incident WDF. To quantify the discrepancy due to this smoothing, we define the convolution error Rconv as
R_{\mathrm{conv}}=\frac{\int\left|\,\text{SWDF}-\text{incident WDF}\,\right|^{2}}{\int\left|\,\text{total output intensity}\,\right|^{2}}.
(23)
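On sampled data, the three metrics of Eqs. (21)–(23) are squared-norm ratios over a common grid. A minimal sketch, with the input arrays understood as placeholders for the quantities computed in the simulation:

```python
import numpy as np

# Squared-norm ratio metrics of Eqs. (21)-(23), evaluated on discretely sampled
# curves that share one grid. The arrays passed in are placeholders.
def norm2(a):
    return float(np.sum(np.abs(a) ** 2))

def error_metrics(total_intensity, crosstalk, swdf, incident_wdf):
    denom = norm2(total_intensity)
    return {
        "R_error": norm2(total_intensity - incident_wdf) / denom,   # Eq. (21)
        "R_crosstalk": norm2(crosstalk) / denom,                    # Eq. (22)
        "R_conv": norm2(swdf - incident_wdf) / denom,               # Eq. (23)
    }

# Sanity check: with no cross-talk and SWDF equal to the WDF, all metrics vanish.
wdf = np.exp(-np.linspace(-3, 3, 101) ** 2)
m = error_metrics(wdf, np.zeros_like(wdf), wdf, wdf)
```

Normalizing each numerator by the same denominator makes the three curves of Fig. 6 directly comparable.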

All these error metrics are plotted in Fig. 6 as functions of the coherence width of the incident field normalized to the lenslet width (σc/w). As seen from the dashed green curve, the contribution from cross–talk increases quickly as the field becomes less coherent. As the field becomes more coherent, the contribution from cross–talk also increases, until it saturates at the point where the field is coherent across the whole array. There exists a partially coherent regime where the SWDF can be measured with minimal cross–talk corruption. Depending on accuracy requirements, this regime may provide acceptable measurements. For example, if less than 1% cross–talk can be tolerated, then the coherence width should satisfy 0.02w < σc < w. On the other hand, the convolution error increases as the field becomes more coherent, making the SWDF a less accurate estimate of the WDF in these situations. Rerror, which accounts for artifacts from both cross–talk and convolution, has a similar shape to the cross–talk curve. The measurement deviates from the original WDF except in a partially coherent region; if the error needs to be at most 1%, then we would need 0.02w < σc < 0.4w.

Fig. 6 Rerror in solid blue curve, cross–talk intensity fraction Rcross–talk in dashed green curve, and convolution error Rconv in red dotted curve as functions of the normalized coherence length of incident light σc/w.

4. Concluding Remarks

Although the numerical example was chosen explicitly to consider the effect of coherence width on the measurement of the SWDF using a lenslet array, this simple model can also provide useful insights for a much broader class of fields whose intensity varies slowly across the field, with features much wider than the coherence width. As a rule of thumb, higher order (coherent) cross–talk can be reduced by ensuring that the lenslet apertures are at least one coherence width in size. This makes intuitive sense, since an aperture larger than the coherence width will not cause the incident beam to diffract significantly, and any light that is diffracted from the aperture will not interfere with that from neighboring lenslets. Both 0th and higher order cross–talk can be reduced by ensuring the incident illumination’s angular spread is such that each lenslet primarily illuminates only the pixels lying within its detector cell, such that there is a nearly one–to–one mapping from SWDF space to each detector pixel.

It should also be noted that we have derived these results under the paraxial approximation and that both the 0th and higher order cross–talk can include contributions for which the light propagates highly non–paraxially from one lenslet to its neighbors. In these cases, we expect that a similar analysis can be performed using non–paraxial versions of the Wigner function [28, 29], although this is outside the scope of our current work.

As was discussed while analyzing the example, there are cases where the SWDF is not an accurate estimate of the WDF. As illustrated in Fig. 1(b), the SWDF can be interpreted as blurring the incident field’s WDF by convolution with the aperture WDF. The problem of recovering the incident WDF is therefore similar to improving the resolution of an optical imaging system through deconvolution. We therefore expect there to be significant difficulties especially when the incident WDF contains features smaller than the aperture WDF. Performing deconvolution to recover the WDF may benefit from techniques such as coded apertures [30, 31] and compressed sensing [10, 32].

Appendix A: Derivation of Eqs. (5) and (6)

Appendix B: Proof of Eq. (14)

Substituting the change of variables into the propagated intensity leads to
I(x_{o})=\frac{1}{\lambda f}\sum_{\substack{n=-2N\\ n\,\mathrm{even}}}^{2N}\sum_{l=-N+\frac{|n|}{2}}^{N-\frac{|n|}{2}}\iint \mathrm{rect}\!\left[\frac{\bar{x}-lw+(x'-nw)/2}{w}\right]\mathrm{rect}\!\left[\frac{\bar{x}-lw-(x'-nw)/2}{w}\right]J_{i}\!\left(\bar{x}+\frac{x'}{2},\,\bar{x}-\frac{x'}{2}\right)\exp\!\left[-\frac{i2\pi}{\lambda f}(x_{o}-lw)x'+\frac{i2\pi}{\lambda f}(\bar{x}-lw)nw\right]d\bar{x}\,dx' \\
+\frac{1}{\lambda f}\sum_{\substack{n=-2N+1\\ n\,\mathrm{odd}}}^{2N-1}\sum_{l=-N+\frac{|n|}{2}}^{N-\frac{|n|}{2}}\iint \mathrm{rect}\!\left[\frac{\bar{x}-lw+(x'-nw)/2}{w}\right]\mathrm{rect}\!\left[\frac{\bar{x}-lw-(x'-nw)/2}{w}\right]J_{i}\!\left(\bar{x}+\frac{x'}{2},\,\bar{x}-\frac{x'}{2}\right)\exp\!\left[-\frac{i2\pi}{\lambda f}(x_{o}-lw)x'+\frac{i2\pi}{\lambda f}(\bar{x}-lw)nw\right]d\bar{x}\,dx'.
(40)
Notice that a term of fixed n contributes a non–zero value to I(xo) only if the two rect–functions overlap. This implies that the separation x′ between the pair of correlating points on the incident field can only take certain values, as determined by the following inequalities
\left|\bar{x}-lw\right|<\frac{w}{2},
(41)
(n-1)w+2\left|\bar{x}-lw\right|<x'<(n+1)w-2\left|\bar{x}-lw\right|.
(42)
Eq. (42) implies that x′ is bounded to a region of width 2w − 4|x̄ − lw| centered at nw. Also recall that the magnitude of the mutual intensity is significantly larger than zero at large separation x′ only if the field is highly coherent. This implies that more terms in the summation over n need to be considered when the field is more coherent. To simplify Eq. (40), we relate I(xo) to the WDF of the incident field and the WDF 𝒲p(x̄, u) of a rectangular aperture of width w by completing the integration with respect to x′, yielding
I(x_{o})=\frac{1}{\lambda f}\left\{\sum_{\substack{n=-2N+1\\ n\,\mathrm{odd}}}^{2N-1}\sum_{l=-N+\frac{|n|}{2}}^{N-\frac{|n|}{2}}\iint \mathcal{W}_{i}(\bar{x},u)\,\mathcal{W}_{p}\!\left(\bar{x}-lw,\,u-\frac{x_{o}-lw}{\lambda f}\right)\exp\!\left[i2\pi\left(\frac{\bar{x}-x_{o}}{\lambda f}+u\right)nw\right]d\bar{x}\,du\right. \\
\left.+\sum_{\substack{n=-2N\\ n\,\mathrm{even}}}^{2N}\sum_{l=-N+\frac{|n|}{2}}^{N-\frac{|n|}{2}}\iint \mathcal{W}_{i}(\bar{x},u)\,\mathcal{W}_{p}\!\left(\bar{x}-lw,\,u-\frac{x_{o}-lw}{\lambda f}\right)\exp\!\left[i2\pi\left(\frac{\bar{x}-x_{o}}{\lambda f}+u\right)nw\right]d\bar{x}\,du\right\}.
(43)
Finally, by combining the complex conjugate terms in n, we arrive at Eq. (14).

Acknowledgments

The authors thank Laura A. Waller and Hanhong Gao for their helpful discussions. This research was funded by the Chevron Energy Technology Company and Foxconn Technology Group.

References and links

1.

L. Mandel and E. Wolf, Optical Coherence and Quantum Optics (Cambridge University, 1995).

2.

B. J. Thompson and E. Wolf, “Two-beam interference with partially coherent light,” J. Opt. Soc. Am. 47, 895 (1957) [CrossRef] .

3.

W. Tango and R. Twiss, “Michelson stellar interferometry,” Prog. Optics 17, 239–277 (1980) [CrossRef] .

4.

K. Itoh and Y. Ohtsuka, “Fourier-transform spectral imaging: retrieval of source information from three-dimensional spatial coherence,” J. Opt. Soc. Am. A 3, 94–100 (1986) [CrossRef] .

5.

D. L. Marks, R. A. Stack, and D. J. Brady, “Three-dimensional coherence imaging in the Fresnel domain,” Appl. Opt. 38, 1332–1342 (1999) [CrossRef] .

6.

M. G. Raymer, M. Beck, and D. McAlister, “Complex wave-field reconstruction using phase-space tomography,” Phys. Rev. Lett. 72, 1137–1140 (1994) [CrossRef] .

7.

K. G. Larkin and C. J. R. Sheppard, “Direct method for phase retrieval from the intensity of cylindrical wave fronts,” J. Opt. Soc. Am. A 16, 1838–1844 (1999) [CrossRef] .

8.

D. M. Marks, R. A. Stack, and D. J. Brady, “Astigmatic coherence sensor for digital imaging,” Opt. Lett. 25, 1726–1728 (2000) [CrossRef] .

9.

S. Cho and M. A. Alonso, “Ambiguity function and phase-space tomography for nonparaxial fields,” J. Opt. Soc. Am. A 28, 897–902 (2011) [CrossRef] .

10.

L. Tian, J. Lee, S. B. Oh, and G. Barbastathis, “Experimental compressive phase space tomography,” Opt. Express 20, 8296–8308 (2012) [CrossRef] [PubMed] .

11.

L. Tian, S. Rehman, and G. Barbastathis, “Experimental 4D compressive phase space tomography,” in “Frontiers in Optics,” (Optical Society of America, 2012), p. FM4C.4.

12.

B. C. Platt and R. Shack, “History and principles of Shack-Hartmann wavefront sensing,” J. Refract. Surg. 17 (2001) [PubMed] .

13.

G. Lippmann, “La photographie integrale,” Comptes-Rendus, Academie des Sciences 146 (1908).

14.

A. Stern and B. Javidi, “Three-dimensional image sensing and reconstruction with time-division multiplexed computational integral imaging,” Appl. Opt. 42, 7036–7042 (2003) [CrossRef] [PubMed] .

15.

J.-H. Park, K. Hong, and B. Lee, “Recent progress in three-dimensional information processing based on integral imaging,” Appl. Opt. 48, H77–H94 (2009) [CrossRef] [PubMed] .

16.

E. H. Adelson and J. Y. A. Wang, “Single lens stereo with a plenoptic camera,” IEEE Trans. Pattern Anal. Mach. Intell. 14, 99–106 (1992) [CrossRef] .

17.

R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Tech. Rep. CTSR 2005-02, Stanford (2005).

18.

A. T. Friberg, “On the existence of a radiance function for finite planar sources of arbitrary states of coherence,” J. Opt. Soc. Am. 69, 192–198 (1979) [CrossRef] .

19.

E. Wigner, “On the quantum correction for thermodynamic equilibrium,” Phys. Rev. 40, 749–759 (1932) [CrossRef] .

20.

L. Dolin, “Beam description of weakly-inhomogeneous wave fields,” Izv. Vyssh. Uchebn. Zaved. Radiofiz 7, 559–563 (1964).

21.

A. Walther, “Radiometry and coherence,” J. Opt. Soc. Am. 58, 1256–1259 (1968) [CrossRef] .

22.

Z. Zhang and M. Levoy, “Wigner distributions and how they relate to the light field,” in “IEEE International Conference on Computational Photography (ICCP),” (IEEE, 2009), pp. 1–10 [CrossRef] .

23.

H. Bartelt, K.-H. Brenner, and A. Lohmann, “The Wigner distribution function and its optical production,” Opt. Commun. 32, 32–38 (1980) [CrossRef] .

24.

A. Wax and J. E. Thomas, “Optical heterodyne imaging and Wigner phase space distributions,” Opt. Lett. 21, 1427–1429 (1996) [CrossRef] [PubMed] .

25.

H. N. Chapman, “Phase-retrieval X-ray microscopy by Wigner–distribution deconvolution,” Ultramicroscopy 66, 153–172 (1996) [CrossRef] .

26.

L. Waller, G. Situ, and J. Fleischer, “Phase–space measurement and coherence synthesis of optical beams,” Nat. Photonics 6, 474–479 (2012) [CrossRef] .

27.

H. Choi, S.-W. Min, S. Jung, J.-H. Park, and B. Lee, “Multiple-viewing-zone integral imaging using a dynamic barrier array for three-dimensional displays,” Opt. Express 11, 927–932 (2003) [CrossRef] [PubMed] .

28.

K. B. Wolf, M. A. Alonso, and G. W. Forbes, “Wigner functions for Helmholtz wave fields,” J. Opt. Soc. Am. A 16, 2476–2487 (1999) [CrossRef] .

29.

S. Cho, J. Petruccelli, and M. Alonso, “Wigner functions for paraxial and nonparaxial fields,” J. Mod. Opt. 56, 1843–1852 (2009) [CrossRef] .

30.

N. Lindlein, J. Pfund, and J. Schwider, “Algorithm for expanding the dynamic range of a shack-hartmann sensor by using a spatial light modulator array,” Opt. Eng. 40, 837–840 (2001) [CrossRef] .

31.

M. E. Gehm, S. T. McCain, N. P. Pitsianis, D. J. Brady, P. Potuluri, and M. E. Sullivan, “Static two-dimensional aperture coding for multimodal, multiplex spectroscopy,” Appl. Opt. 45, 2965–2974 (2006) [CrossRef] [PubMed] .

32.

Z. Zhang, Z. Chen, S. Rehman, and G. Barbastathis, “Factored form descent: a practical algorithm for coherence retrieval,” Opt. Express 21, 5759–5780 (2013) [CrossRef] [PubMed] .

OCIS Codes
(110.4980) Imaging systems : Partial coherence in imaging
(050.5082) Diffraction and gratings : Phase space in wave optics

ToC Category:
Imaging Systems

History
Original Manuscript: March 8, 2013
Revised Manuscript: April 12, 2013
Manuscript Accepted: April 13, 2013
Published: April 23, 2013

Citation
Lei Tian, Zhengyun Zhang, Jonathan C. Petruccelli, and George Barbastathis, "Wigner function measurement using a lenslet array," Opt. Express 21, 10511-10525 (2013)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-21-9-10511


