## Wave optics theory and 3-D deconvolution for the light field microscope

Optics Express, Vol. 21, Issue 21, pp. 25418-25439 (2013)

http://dx.doi.org/10.1364/OE.21.025418


### Abstract

Light field microscopy is a new technique for high-speed volumetric imaging of weakly scattering or fluorescent specimens. It employs an array of microlenses to trade off spatial resolution against angular resolution, thereby allowing a 4-D light field to be captured using a single photographic exposure without the need for scanning. The recorded light field can then be used to computationally reconstruct a full volume. In this paper, we present an optical model for light field microscopy based on wave optics, instead of previously reported ray optics models. We also present a 3-D deconvolution method for light field microscopy that is able to reconstruct volumes at higher spatial resolution, and with better optical sectioning, than previously reported. To accomplish this, we take advantage of the dense spatio-angular sampling provided by a microlens array at axial positions away from the native object plane. This dense sampling permits us to decode aliasing present in the light field to reconstruct high-frequency information. We formulate our method as an inverse problem for reconstructing the 3-D volume, which we solve using a GPU-accelerated iterative algorithm. Theoretical limits on the depth-dependent lateral resolution of the reconstructed volumes are derived. We show that these limits are in good agreement with experimental results on a standard USAF 1951 resolution target. Finally, we present 3-D reconstructions of pollen grains that demonstrate the improvements in fidelity made possible by our method.

© 2013 OSA

## 1. Introduction


*z*-plane in the case of a planar test target, we achieve up to an 8-fold improvement in resolution over previously reported limits (see Fig. 1). This is achieved by modeling the spatially varying point spread function of the LFM using wave optics and using this model to perform 3-D deconvolution. Such deconvolution mitigates the problem of decreased lateral resolution, thereby addressing one of the main drawbacks of light field microscopy.

## 2. Background

### 2.1. 3-D imaging with the light field microscope


### 2.2. Light field imaging as limited-angle tomography

The *N* × *N* pixels behind each lenslet will contain pinhole views at *N*² different angles covering the numerical aperture of the microscope objective. This suggests that light field microscopy is essentially a simultaneous tomographic imaging technique in which all *N*² projections are collected at once as pinhole views. Thus, successful deconvolution amounts to fusing these low resolution views to create a high resolution volumetric reconstruction.


### 2.3. Aliasing and computational super-resolution


At *z*-planes where samples are dense relative to the spacing between the lenslets, it is possible to combine pinhole views to recover resolution up to the band limit. However, there are depths where the samples are redundant, most notably at the native object plane (although partial redundancy can also be seen in the figure at *z* = 72 μm and *z* = 109 μm). At these depths the sampling requirement may not be met, and super-resolution cannot always be fully realized. This is why the *z* = 0 μm plane in Fig. 1(c) remains a low-resolution, aliased image despite having been processed by our deconvolution algorithm.

## 3. Light field deconvolution

We pose reconstruction as the linear inverse problem **f** = *H***g* (Eq. (1)), where the vector **f** represents the light field, the vector **g** is the discrete volume being reconstructed, and *H* is a measurement matrix modeling the forward imaging process. The coefficients of *H* are largely determined by the point spread function of the light field microscope. Our first task will be to develop a model for this point spread function.

### 3.1. The light field PSF


Here **p** ∈ ℝ³ is the position in a volume containing isotropic emitters whose combined intensities are distributed according to *g*(**p**). When imaged, this volume gives rise to a continuous 2-D intensity pattern *f*(**x**) at the image sensor plane. The optical impulse response *h*(**x**, **p**) is a function of both the position **p** in the volume being imaged as well as the position **x** ∈ ℝ² on the sensor plane.

The intensity |*h*(**x**, **p**)|² appears in Eq. (2) because fluorescence microscopy is an incoherent, and therefore linear, imaging process. Although light from a single point emitter produces a coherent interference pattern (and therefore the function *h*(**x**, **p**) is a complex field containing both amplitude and phase information), coherence effects between any two point sources average out to a mean intensity level due to the rapid, random fluctuations in the emission time of different fluorophores. As a result, there are no interference effects when light from two sources interacts; their contributions on the image sensor are simply the sum of their intensities.

First, our model assumes that all light has a single wavelength *λ*. This approximation is reasonably accurate for fluorescence imaging with a narrow wavelength emission filter. Second, our model adopts the first Born approximation [13], i.e. it assumes that there is no scattering in the volume. Once emitted, the light from a point is assumed to radiate as a perfect spherical wavefront until it arrives at the microscope objective. This approximation holds up well when imaging a volume free of occluding or heavily scattering objects. However, performance does degrade when imaging deep into weakly scattering samples or into samples with varying indices of refraction. Modeling these effects is the subject of future work.

The focal length of the tube lens *f*_{tl} varies by microscope manufacturer (*f*_{tl} = 200 mm for our Nikon microscope), and the focal length of the objective can be computed from the magnification of the objective lens: *f*_{obj} = *f*_{tl}/M.

The wavefront *U*_{i}(**x**, **p**) generated by a point source at **p** can be computed using scalar Debye theory [11]. For an objective with a circular aperture, a point source at **p** = (*p*_{1}, *p*_{2}, *p*_{3}) produces a wavefront at the native image plane described by the integral

*U*_{i}(*v*, *u*) ∝ ∫₀¹ *P*(*ρ*) exp(−i*u* *ρ*²/2) *J*₀(*v* *ρ*) *ρ* d*ρ*,

where *J*₀(·) is the zeroth order Bessel function of the first kind, and *ρ* is the normalized radius from the center of the pupil. The variables *v* and *u* represent normalized radial and axial optical coordinates. The half-angle of the numerical aperture *α* = sin^{−1}(NA/*n*) and the wave number *k* = 2*πn*/*λ* are computed using the emission wavelength *λ* and the index of refraction *n* of the sample. The function *P*(*θ*) is the apodization function of the microscope; for Abbe-sine corrected objectives, *P*(*θ*) = √(cos *θ*) [14, 15].

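As a concrete illustration, a Debye-type pupil integral of this kind can be evaluated with simple numerical quadrature. The sketch below is a generic numerical example of the in-focus (*u* = 0) case, not the authors' implementation; the apodization √(cos *θ*) is expressed in the normalized pupil radius *ρ*, and NA = 0.5 and *n* = 1.33 are assumed example values.

```python
import numpy as np

def j0(x):
    # Zeroth-order Bessel function of the first kind, via its integral
    # representation: J0(x) = (1/pi) * integral_0^pi cos(x sin t) dt.
    x = np.asarray(x, dtype=float)
    t = np.linspace(0.0, np.pi, 801)
    return np.cos(x[..., None] * np.sin(t)).mean(axis=-1)

def debye_intensity(v, na=0.5, n=1.33):
    # In-focus (u = 0) Debye integral over the normalized pupil radius rho,
    # with Abbe-sine apodization P = (1 - rho^2 sin^2 alpha)^(1/4),
    # i.e. sqrt(cos theta) written in terms of rho. Returns |U(v)|^2,
    # up to an overall constant.
    alpha = np.arcsin(na / n)            # half-angle of the numerical aperture
    rho = np.linspace(0.0, 1.0, 500)
    P = (1.0 - (rho * np.sin(alpha)) ** 2) ** 0.25
    integrand = P * j0(np.multiply.outer(np.asarray(v, float), rho)) * rho
    U = integrand.sum(axis=-1) * (rho[1] - rho[0])
    return U ** 2

v = np.arange(0.0, 6.0)   # normalized radial optical coordinate
I = debye_intensity(v)
I = I / I[0]              # normalize to the on-axis intensity
```

For a uniform pupil this integral reduces to the familiar Airy pattern; the intensity peaks on the optical axis and falls off toward the first zero near *v* ≈ 3.8.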

Next, the wavefront passes through the microlens array, which has focal length *f*_{μlens} and pitch *d*. A single lenslet can be modeled as an amplitude mask representing the lenslet aperture and a phase mask representing the refraction of light through the lenslet itself:

*ϕ*(**x**) = rect(*x*₁/*d*) rect(*x*₂/*d*) exp(−i*k*(*x*₁² + *x*₂²)/(2*f*_{μlens})).

The same amplitude and phase mask is applied in a tiled fashion to the rest of the incoming wavefront; application of the full, tiled microlens array can be described as a convolution of a 2-D comb function with *ϕ*(**x**).

Finally, the wavefront propagates a distance *f*_{μlens} from the native image plane to the sensor plane. The lenslets used in the LFM have a Fresnel number between 1 and 10, so Fresnel propagation is an accurate and computationally attractive approach for modeling light transport from the microlens array to the sensor [16, p. 55]. The final light field PSF can thus be computed using the Fourier transform operator ℱ{·} as

*h*(**x**, **p**) ∝ | ℱ^{−1}{ ℱ{*U*_{i}(**x**, **p**) *ϕ*(**x**)} exp(−i*λf*_{μlens}(*ω*_{x}² + *ω*_{y}²)/(4*π*)) } |²,

where the exponential term is the transfer function for a Fresnel diffraction integral, and *ω*_{x} and *ω*_{y} are spatial frequencies along the *x* and *y* directions in the sensor plane.

### 3.2. Discretized optical model

The continuous light field *f*(**x**) is sampled by a camera containing *N*_{p} pixels. To simplify the notation, we re-order these pixels into a vector **f**. Similarly, the volume *g*(**p**) is sub-divided into *N*_{v} = *N*_{x} × *N*_{y} × *N*_{z} voxels and re-ordered into a vector **g**. While the dimensionality of **f** is fixed by the number of pixels in the image sensor, the dimensionality of **g** (i.e. the sampling rate of the reconstructed volume) is adjustable in our algorithm. Clearly, the sampling rate should be high enough to capture whatever information can be reconstructed from the light field. However, oversampling will lead to a rapid increase in computational cost without any real benefit. We will be explicit about our choice of volume sampling rates when we present results in Section 4, but here we will establish the following useful definition. A volume sampled at the “lenslet sampling period” has voxels with a spacing equal to the lenslet pitch *d* divided by the objective magnification M. For example, when imaging with a 125 μm pitch microlens array and a 20× microscope objective, the lenslet sampling period would be 6.25 μm. In this paper, we will sample the volume more finely at a rate that is a “super-sample factor” *s* ∈ ℤ⁺ times the lenslet sampling rate (where the lenslet sampling rate is the reciprocal of the lenslet sampling period). The sampling rate of the volume is therefore *s*M/*d*. Continuing our example, a reconstruction with super-sample factor *s* = 16 would result in a volume with voxels spaced by 0.39 μm. We refer to this as a volume that is sampled at 16× the lenslet sampling rate.
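The sampling arithmetic above is easy to mechanize. A minimal sketch, using the example numbers from the text (125 μm pitch, 20× objective, *s* = 16):

```python
def lenslet_sampling_period(pitch_um, magnification):
    """Voxel spacing (in object space) at the lenslet sampling rate:
    the lenslet pitch d divided by the objective magnification M."""
    return pitch_um / magnification

def voxel_spacing(pitch_um, magnification, s):
    """Voxel spacing for a volume super-sampled by an integer factor s.
    The volume sampling rate is s * M / d, the reciprocal of this spacing."""
    return lenslet_sampling_period(pitch_um, magnification) / s

period = lenslet_sampling_period(125.0, 20)   # 6.25 um
spacing = voxel_spacing(125.0, 20, s=16)      # 0.390625 um, i.e. ~0.39 um
```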

The forward imaging process is encoded in a measurement matrix *H* whose coefficients *h*_{ij} indicate the proportion of the light arriving at pixel *j* from voxel *i*. Voxels in the reconstructed volume and pixels on the sensor have finite volume and area, respectively. Therefore the coefficients of *H* are computed via a definite integral over the continuous light field PSF, where *α*_{j} is the area of pixel *j*, and *β*_{i} is the volume of voxel *i*. We assume that pixels are square and have a 100% fill factor, which is a good approximation for modern scientific image sensors. Voxel *i* is integrated over a cubic volume centered at a point **p**_{i}. A window function *w*_{i}(**p**) that weights the PSF contribution at the center of a voxel more than at its edges is introduced for this purpose. In our implementation we use a Hann (i.e. raised cosine) window with width equal to two times the volume sampling period, although a different resampling filter could be used if desired. We found the exact choice of resampling filter to be incidental to the algorithm; once the voxel sampling rate surpasses the band limit discussed in Sec. 2.2, the choice of resampling filter has little impact on the quality of the reconstruction.
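A Hann window of total width two volume sampling periods, as described above, can be written as a simple function of the offset from the voxel center. The sketch below is generic (the normalization and the example period are arbitrary choices, not values prescribed by the text):

```python
import numpy as np

def hann_weight(delta, period):
    """Hann (raised cosine) window of total width 2 * period, centered on
    the voxel: full weight at the voxel center, zero at +/- one period."""
    delta = np.asarray(delta, dtype=float)
    w = 0.5 * (1.0 + np.cos(np.pi * delta / period))
    return np.where(np.abs(delta) <= period, w, 0.0)

T = 0.39  # example volume sampling period in um (16x case from the text)
```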

The columns of *H* contain the discrete versions of the light field point spread functions. That is, column *i* contains the *forward projection* generated when a single non-zero voxel *i* is projected according to Eq. (1). In Fig. 4(b) we see that the rows of *H* (or alternatively the columns of *H*^T) also have an interesting interpretation. We call these *pixel back projections* by analogy to back projection operators in tomographic reconstruction algorithms, where the transpose of the measurement matrix is conceptualized as a projection of the measurement back into the volume. The pixel back projection for column *j* of *H*^T shows the position and proportion of light in the volume that contributes to the total intensity recorded at pixel *j* in the light field. In essence, the pixel back projection allows us to visualize the relative weight of coefficients in a single row of *H* when it operates on a volume **g**.

### 3.3. Sensor noise model

We assume that the photon count recorded at the *i*th pixel follows Poisson statistics with a rate parameter equal to **f**_{i} = (*H***g**)_{i}, i.e. the light intensity incident on the *i*th pixel. Read noise can be largely ignored for modern CCD and sCMOS cameras, although it can be added to the model below if desired. If we also consider photon shot noise arising from a background fluorescence light field **b** measured prior to imaging, then the stochastic, discrete imaging model is given by

**f̂** ~ Pois(*H***g** + **b**),

where the measured light field **f̂** is a random vector with Poisson-distributed pixel values measured in units of photoelectrons *e*^{−}.
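The stochastic model is straightforward to simulate. Below is a toy sketch; the matrix `H`, volume `g`, and background `b` are made-up numbers, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: 4 sensor pixels observing 3 voxels (made-up values).
H = rng.uniform(0.0, 1.0, size=(4, 3))    # made-up measurement matrix
g = np.array([50.0, 0.0, 120.0])           # made-up volume (emitted photons)
b = np.full(4, 5.0)                        # made-up background rate (e-)

rate = H @ g + b               # expected photoelectrons per pixel
f_hat = rng.poisson(rate)      # one measured light field, f-hat ~ Pois(Hg + b)
```

Averaging many such draws recovers the underlying rate *H***g** + **b**, which is what the maximum-likelihood deconvolution in the next section exploits.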

### 3.4. Solving the inverse problem

The likelihood of observing a light field **f̂** given a particular volume **g** and background **b** is

Pr(**f̂** | **g**, **b**) = ∏_{i} ( (*H***g** + **b**)_{i}^{f̂_{i}} e^{−(*H***g** + **b**)_{i}} / f̂_{i}! ),

where *i* ∈ {1, …, *N*_{p}} is the sensor pixel index. As the Poisson likelihood is log-concave, minimizing the negative log-likelihood over **g** and **b** yields a convex problem with the following multiplicative update:

**g**^{(k+1)} = diag(*H*^T **1**)^{−1} diag(*H*^T (**f̂** ⊘ (*H***g**^{(k)} + **b**))) **g**^{(k)},

where the diag(·) operator returns a matrix with the argument on the diagonal and zeros elsewhere, **1** is the all-ones vector, and ⊘ denotes element-wise division. This is the well-known Richardson-Lucy iteration scheme.
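The Richardson-Lucy update can be exercised on a tiny dense problem as a sanity check. The 2×2 matrix, signal, and background below are made-up numbers; a real light field *H* is far too large to form explicitly and is instead applied matrix-free:

```python
import numpy as np

def richardson_lucy(f, H, b, iters=3000):
    """Multiplicative Richardson-Lucy updates for f ~ Pois(Hg + b):
    g <- g * H^T(f / (Hg + b)) / H^T(1)."""
    g = np.ones(H.shape[1])            # flat non-negative initial guess
    Ht1 = H.T @ np.ones(H.shape[0])    # column sums of H (normalization)
    for _ in range(iters):
        ratio = f / (H @ g + b)        # data / current model prediction
        g = g * (H.T @ ratio) / Ht1    # multiplicative (non-negative) update
    return g

H = np.array([[0.8, 0.2],
              [0.2, 0.8]])             # made-up, well-conditioned 2x2 system
g_true = np.array([1.0, 3.0])
b = np.array([0.1, 0.1])
f = H @ g_true + b                     # noiseless synthetic measurement
g = richardson_lucy(f, H, b)           # converges toward g_true
```

On noiseless data the iteration converges to the true volume; each update is multiplicative, so a non-negative initial guess stays non-negative throughout.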

Aside from the particular structure of *H*, which captures the unique geometry of the light field PSF, this model is essentially identical to those that have been proposed in image de-blurring and deconvolution algorithms in astronomy and microscopy [17, 18].


In practice, however, *H* is far too large to store in memory, much less apply in the iterative updates of Eq. (9). We therefore must exploit the specific structure of *H* in order to represent it as a linear operator that can be applied to a vector without explicitly constructing a matrix.

The columns of *H*, which contain the discrete versions of the point spread functions as described in Section 3.2, have sparse support on the camera sensor. Thus, *H* is sparse and can be applied efficiently using only its non-zero entries. More importantly, the repeating pattern of the lenslet array gives rise to a periodicity in the light field PSFs that dramatically reduces the computational burden of computing the coefficients of the *H* matrix. Consider a light field PSF *h*(**x**, **p**_{0}) for a point **p**_{0} = (*x*, *y*, *z*) in the volume. The light field PSF *h*(**x**, **p**_{1}) for any other point **p**_{1} = (*x* + *ad*/M, *y* + *bd*/M, *z*) with integers *a*, *b* ∈ ℤ is identical up to a translation of (*ad*, *bd*) on the image sensor. Therefore, for a fixed axial depth, the columns of *H* can be described by a limited number of repeating patterns. Consequently, the application of the columns of *H* corresponding to a particular *z*-depth can be efficiently implemented as a convolution operation on a GPU. This accelerates the reconstruction of deconvolved volumes from measured light fields, with reconstruction times on the order of seconds to minutes depending on the size of the volume being reconstructed. Note that the relatively heavy-weight 3-D deconvolution algorithm is carried out as a post-processing step; raw light field images can still be acquired at the frame rate of the camera sensor.



## 4. Experimental results

### 4.1. Experimental characterization of lateral resolution

We translated the target axially in *z* from +100 μm to −100 μm relative to the native object plane in 1 μm increments and collected a light field for each *z*-plane. In other words, we deliberately mis-focused the microscope relative to the target, but then captured a light field rather than a simple photograph. The question, then, is to what extent we can computationally reconstruct the target (despite this mis-focus) using our 3-D deconvolution algorithm, and what resolution we obtain.

For each *z*, the Richardson-Lucy scheme described in Section 3.4 was run for 30 iterations to reconstruct a volume sampled at 16× the lenslet sampling rate. The “volume” being reconstructed was restricted to the one *z*-plane known to contain the USAF target. In essence, this implicitly leveraged our prior knowledge of the *z*-position of the test target and the fact that it is planar (i.e. that there is no light coming from other *z*-planes). This approach, which we refer to as a “single-plane reconstruction,” allows us to see how much resolution can be recovered at a particular *z*-plane under ideal circumstances. We note that, although we knew the axial location of the target being reconstructed in these tests, this knowledge is probably not necessary. For example, our technique could be used for post-capture autofocusing when imaging a planar specimen at an unknown depth. This is a topic for future work.


We measured local contrast in regions of interest (ROIs) on the target, up to group 9.3 with a spatial frequency of 645 lp/mm. For each ROI, the local contrast is calculated as (*I*_{max} − *I*_{min})/(*I*_{max} + *I*_{min}), where *I*_{max} and *I*_{min} are the maximal and minimal signal levels along a line drawn perpendicular to the stripes in each ROI. The final contrast measure is the average of the contrast for the horizontally and vertically oriented portions of the ROI.

The highest resolution is achieved at *z* = −15 μm when imaging with the 20× configuration. Contrast in high resolution ROIs decreases gradually with increasing *z*-position. As previously discussed, enhanced resolution is not possible at the native object plane, and we can see this reflected in the reconstruction at *z* = 0 μm, where large, lenslet-shaped “pixels” are spaced equal to the lenslet sampling period. At the *z* = −15 μm plane in Fig. 5(b), one can also observe an apparent anisotropy in the resolution of the image. Spatial frequencies along the horizontal and vertical axes of the image are reconstructed at a higher resolution than those at other orientations. This artifact is due to the square aperture of our lenslets: the wider effective aperture measured across the diagonal of the lenslet results in a slightly lower band limit for spatial frequencies along that orientation. We note that this anisotropy could easily be avoided by using a microlens array with circular lenslet apertures.

With the 40× configuration, the highest resolution occurs at *z* = −10 μm, and resolution falls off twice as quickly as in the 20× configuration (the *z*-range plotted in Fig. 5(b) is ±50 μm, only half of that in Fig. 5(a)).

### 4.2. A theoretical band limit for the light field microscope

The depth-dependent band limit can be written as

*ν*_{lf}(*z*) = *d* / (c*λ*M|*z*|),

where *ν*_{lf} is the depth-dependent band limit (in cycles/m), c is a constant set by the chosen 2-point resolution criterion, M and NA are the magnification and numerical aperture of the objective, *λ* is the emission wavelength of the sample, *d* is the lenslet pitch, and *z* is the depth in the sample relative to the native object plane. The criterion applies only for depths where |*z*| ≥ *d*²/(2M²*λ*). These equations, which are based on simple geometric calculations, are derived in Appendix 1.
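The validity threshold |*z*| ≥ *d*²/(2M²*λ*) is easy to evaluate numerically. For instance, with the 125 μm-pitch, 20× configuration used in the experiments and an assumed emission wavelength of 525 nm (a typical green-emission value, not taken from the text):

```python
def min_valid_depth(d, M, wavelength):
    """Smallest |z| (same units as d and wavelength) at which the geometric
    band-limit criterion applies: |z| >= d^2 / (2 * M^2 * lambda)."""
    return d ** 2 / (2.0 * M ** 2 * wavelength)

# 125 um pitch, 20x objective, assumed 525 nm emission -> roughly 37 um
z_min = min_valid_depth(d=125e-6, M=20, wavelength=525e-9)  # meters
```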

As expected, the band limit falls off inversely with depth *z*. More surprisingly, the predicted band limit does not depend on numerical aperture, as the diffraction limit of a widefield microscope does. Instead, it is determined largely by the objective magnification and lenslet pitch. In Appendix 1, we explain why NA does not appear in the theoretical band limit. Here we will briefly mention that our microscope design assumes that NA is used to determine the optimal (diffraction limited) sampling rate behind each lenslet (i.e. the size of camera pixels relative to the pitch of the lenslet array). As such, NA plays an important role in determining the microscope’s optical design, but once this design choice is made it has no direct impact on lateral resolution. We have also found that increasing NA improves axial resolution and signal-to-noise ratio in a 3-D light field reconstruction, just as it does in a widefield microscope. Thus, NA is still an important optical design parameter, just not one that affects the theoretical band limit presented here.

Because the criterion holds only for sufficiently large |*z*|, the resolution very near the native object plane cannot be predicted using the theoretical band limit. As we have seen in the USAF experiments, this region is often subject to reconstruction artifacts that arise due to diffraction and sampling effects. However, we can still use the criterion to understand the highest predicted resolution (which we will call “peak” resolution in this figure) as well as the resolution fall-off (which we will define as the rate at which relative resolutions between two optical recipes change as *z* is varied). With these definitions in mind, we make the following observations.

### 4.3. Reconstruction of a 3-D specimen


Figure 8 shows *xy* slices from the two respective reconstruction methods. Resolution at *z* = 0 μm is still improved relative to Fig. 8(a), thanks to the removal of out-of-focus light by the deconvolution algorithm. The highest resolution in the deconvolved volume is achieved at *z* = −4 μm and *z* = 4 μm. However, these planes are still close enough to the native object plane to be subject to some reconstruction artifacts. These artifacts are no longer present at *z* = ±10 μm, although the resolution there is already somewhat reduced from its peak at *z* = ±4 μm.

When the volume is super-sampled, *H* is rank deficient and the inverse problem is underdetermined. Fortunately, the Richardson-Lucy algorithm favors sparse solutions to the inverse problem [22].


Although we reconstruct all *z*-planes in the volume at the same rate, planes farther from the native object plane have a lower band limit and can therefore be sampled at a lower rate. In a more efficient implementation of our algorithm, one could choose a different sampling rate for each *z*-plane based on its band limit. This would dramatically lower the total number of voxels to be reconstructed. The measurement matrix would then have fewer columns, resulting in a better conditioned inverse problem and a better performing algorithm.


## 5. Conclusion and future directions

*z*-planes in the volume. One such design using a pair of prisms was recently proposed for light field cameras [23]. Alternatively, a light field could be captured along with a normal, high-resolution widefield image. This would improve resolution at the native object plane and possibly at other planes as well if the two were combined as proposed in [24]. Finally, a lenslet array could be placed at the native image plane of a multi-focal microscope [25].


## Appendix 1: Derivation of the theoretical band limit

Consider a point **p**_{1} at a depth *z* on the object side of the microscope. In the limit as *z* → ∞, the image of this point behind the lenslet approaches a diffraction-limited spot whose size is determined by the numerical aperture of the system. Now consider a second point **p**_{2} at the same depth, but displaced by a distance *r* from the optical axis. If |*z*| is large enough to produce two diffraction limited spots, the two points will be just barely distinguishable if they are separated by a distance determined by an appropriate 2-point resolution criterion. This occurs when

*r* = c*λ*M²|*z*| / (2 NA *f*_{μlens}),    (12)

where c is a constant set by the chosen 2-point criterion.

The maximum spatial frequency passed by the objective is *ν*_{obj} = 2 NA/*λ* (see [27, p. 143]). This limit is set by the diameter of the microscope objective back aperture, i.e. the exit pupil of the system. Lenslets are focused on the back aperture, and thus each lenslet forms an image of this exit pupil. If their numerical apertures are matched, then *ν*_{obj} = M *ν*_{μlens}, where *ν*_{μlens} = *d*/(*λf*_{μlens}) is the maximum spatial frequency that can be represented on the focal plane behind a lenslet for coherent imaging [27, p. 103]. Combining these three equations and solving for the lenslet focal length yields

*f*_{μlens} = M*d* / (2 NA).    (13)

This is the focal length of a lenslet with pitch *d* that has been matched to the numerical aperture of a given microscope objective. Substituting this into Eq. (12) yields the distance *r* between **p**_{1} and **p**_{2} where they would be just discernible as two points on the image sensor: *r* = c*λ*M|*z*|/*d*.

The band limit *ν*_{lf} is the reciprocal of *r*. This is the spatial frequency where we can just barely discern features spaced a distance *r* at depth *z*.

This analysis holds when *z* is relatively large. Since the angular sampling rate is independent of NA, so too is our theoretical band limit.

Finally, we must check that |*z*| is sufficiently large that **p**_{1} and **p**_{2} form diffraction limited spots. This occurs approximately where the diameter of the blur disk *b* predicted via geometric optics is less than the diameter of a diffraction limited spot (Eq. (14)). Using similar triangles, we can compute *b* = *d*(*z*_{i} − *f*_{μlens}/M²)/(M*z*). Substituting this along with the lensmaker’s formula 1/*z*_{i} + 1/*z* = M²/*f*_{μlens} and Eq. (13) into Eq. (14), we see that the criterion applies only for depths where |*z*| ≥ *d*²/(2M²*λ*).

## Acknowledgments

## References and links

1. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” in Proceedings of ACM SIGGRAPH (2006), pp. 924–934. [CrossRef]

2. M. Levoy, Z. Zhang, and I. McDowell, “Recording and controlling the 4D light field in a microscope using microlens arrays,” Journal of Microscopy **235**, 144–162 (2009). [CrossRef] [PubMed]

3. S. Farsiu, D. Robinson, M. Elad, and P. Milanfar, “Advances and challenges in super-resolution,” International Journal of Imaging Systems and Technology **14**, 47–57 (2004). [CrossRef]

4. R. Ng, “Fourier slice photography,” in Proceedings of ACM SIGGRAPH (2005), pp. 735–744. [CrossRef]

5. M. Bertero and C. de Mol, “III Super-resolution by data inversion,” in *Progress in Optics* (Elsevier, 1996), pp. 129–178. [CrossRef]

6. T. Pham, L. van Vliet, and K. Schutte, “Influence of signal-to-noise ratio and point spread function on limits of superresolution,” Proc. SPIE **5672**, 169–180 (2005). [CrossRef]

7. S. Baker and T. Kanade, “Limits on super-resolution and how to break them,” IEEE Trans. Pattern Anal. Mach. Intell. **24**, 1167–1183 (2002). [CrossRef]

8. K. Grochenig and T. Strohmer, “Numerical and theoretical aspects of nonuniform sampling of band-limited images,” in *Nonuniform Sampling*, F. Marvasti, ed., Information Technology: Transmission, Processing, and Storage (Springer US, 2010), pp. 283–324.

9. T. Bishop and P. Favaro, “The light field camera: extended depth of field, aliasing and super-resolution,” IEEE Trans. Pattern Anal. Mach. Intell. **34**, 972–986 (2012). [CrossRef]

10. W. Chan, E. Lam, M. Ng, and G. Mak, “Super-resolution reconstruction in a computational compound-eye imaging system,” Multidimensional Systems and Signal Processing **18**, 83–101 (2007). [CrossRef]

11. M. Gu, *Advanced Optical Imaging Theory* (Springer, 2000).

12. D. A. Agard, “Optical sectioning microscopy: cellular architecture in three dimensions,” Annual Review of Biophysics and Bioengineering **13**, 191–219 (1984). [CrossRef] [PubMed]

13. M. Born and E. Wolf, *Principles of Optics* (Cambridge University Press, 1999).

14. M. R. Arnison and C. J. R. Sheppard, “A 3D vectorial optical transfer function suitable for arbitrary pupil functions,” Optics Communications **211**, 53–63 (2002). [CrossRef]

15. A. Egner and S. W. Hell, “Equivalence of the Huygens–Fresnel and Debye approach for the calculation of high aperture point-spread functions in the presence of refractive index mismatch,” Journal of Microscopy **193**, 244–249 (1999). [CrossRef]

16. J. Breckinridge, D. Voelz, and J. B. Breckinridge, *Computational Fourier Optics: A MATLAB Tutorial* (SPIE Press, 2011).

17. J. M. Bardsley and J. G. Nagy, “Covariance-preconditioned iterative methods for nonnegatively constrained astronomical imaging,” SIAM Journal on Matrix Analysis and Applications **27**, 1184–1197 (2006). [CrossRef]

18. M. Bertero, P. Boccacci, G. Desidera, and G. Vicidomini, “Image deblurring with Poisson data: from cells to galaxies,” Inverse Problems **25**, 123006 (2009). [CrossRef]

19. S. Shroff and K. Berkner, “Image formation analysis and high resolution image reconstruction for plenoptic imaging systems,” Applied Optics **52**, D22–D31 (2013). [CrossRef]

20. J. Rosen, N. Siegel, and G. Brooker, “Theoretical and experimental demonstration of resolution beyond the Rayleigh limit by FINCH fluorescence microscopic imaging,” Opt. Express **19**, 1506–1508 (2011). [CrossRef]

21. I. J. Cox and C. J. R. Sheppard, “Information capacity and resolution in an optical system,” J. Opt. Soc. Am. A **3**, 1152 (1986). [CrossRef]

22. R. Heintzmann, “Estimating missing information by maximum likelihood deconvolution,” Micron **38**, 136–144 (2007). [CrossRef]

23. P. Favaro, “A split-sensor light field camera for extended depth of field and superresolution,” in SPIE Conference Series.

24. C. H. Lu, S. Muenzel, and J. Fleischer, “High-resolution light-field microscopy,” in Computational Optical Sensing and Imaging, Microscopy and Tomography I (CTh3B) (2013).

25. S. Abrahamsson, J. Chen, B. Hajj, S. Stallinga, A. Y. Katsov, J. Wisniewski, G. Mizuguchi, P. Soule, F. Mueller, C. D. Darzacq, X. Darzacq, C. Wu, C. I. Bargmann, D. A. Agard, M. G. L. Gustafsson, and M. Dahan, “Fast multicolor 3D imaging using aberration-corrected multifocus microscopy,” Nat. Meth. 1–6 (2012).

26. M. Pluta, *Advanced Light Microscopy* (Elsevier, 1988).

27. J. Goodman, *Introduction to Fourier Optics* (Roberts & Company, 2005).

**OCIS Codes**

(100.1830) Image processing : Deconvolution

(100.3190) Image processing : Inverse problems

(100.6950) Image processing : Tomographic image processing

(180.2520) Microscopy : Fluorescence microscopy

(180.6900) Microscopy : Three-dimensional microscopy

**ToC Category:**

Image Processing

**History**

Original Manuscript: July 16, 2013

Revised Manuscript: September 30, 2013

Manuscript Accepted: October 4, 2013

Published: October 17, 2013

**Citation**

Michael Broxton, Logan Grosenick, Samuel Yang, Noy Cohen, Aaron Andalman, Karl Deisseroth, and Marc Levoy, "Wave optics theory and 3-D deconvolution for the light field microscope," Opt. Express **21**, 25418-25439 (2013)

http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-21-21-25418
