The use of Shack-Hartmann sensor based adaptive optics for wavefront aberration correction is well established in astronomy and microscopy, where it achieves diffraction-limited imaging of point-like objects [1–3]. It is currently an active field of research in optical coherence tomography/microscopy (OCT/OCM) [4–6]. Recently, adaptive optics via pupil segmentation using a spatial light modulator (SLM) was demonstrated in two-photon microscopy [7]. These results showed that the optical aberrations introduced by the sample, caused by refractive index variations with depth, can be reduced to recover diffraction-limited resolution, which in turn improves the achievable imaging depth in tissue. Such a segmented-pupil approach has also been demonstrated with scene-based adaptive optics [8]. Lately, Tippie and Fienup demonstrated subaperture correlation based phase correction as a post-processing technique for synthetic aperture digital off-axis holographic imaging of extended objects [9].
Especially for coherent imaging techniques, the advantage lies in the availability of phase information. This information has been successfully exploited to implement digital refocusing techniques in OCT by measuring the full complex field backscattered from the sample. Current methods, however, rely on two assumptions: first, that the sample exhibits an isotropic and homogeneous structure with respect to its optical properties, and second, that the aberrations, if present, are well defined or accessible. Whereas the first limitation has not been addressed so far, the second issue can be solved either by assuming simple defocus and applying spherical wavefront corrections, or by iteratively optimizing the complex wavefront with a merit function that uses image sharpness as its metric [10,11].
In this paper, we investigate the method of subaperture correlation based wavefront error detection and correction as a post-processing technique to achieve near diffraction-limited resolution in the presence of aberrations. This method has the advantage of directly providing the local wavefront gradient for each subaperture in a single step; as such, it operates as a digital equivalent of a Shack-Hartmann sensor. In section 2 we present the theory behind the algorithm, which is applicable to any coherent interferometric imaging system. We show that this algorithm does not require knowledge of any system parameters and, furthermore, does not rely on the assumption of sample homogeneity with respect to its refractive index. In section 3 we present simulation results demonstrating the practical implementation and performance of the algorithm. For our simulation we consider a full field swept source OCT system, which is a type of interference microscopy system. We also show how this method allows for a fast and simple way to correct for defocus aberration. Finally, in section 4, we present an experimental proof of the principle.
For the theoretical analysis we consider a system based on a Michelson interferometer as shown in Fig. 1, but the theory presented here is valid for any interferometric setup.

Fig. 1 Schematic diagram of the interferometric setup. Lenses L1 and L2 form a 4-f telecentric imaging system. Dotted rays show the imaging path of a point on the object in focus. The camera is at the focal plane of lens L1.

The interference of the light reflected from the object and the reference mirror is adequately sampled by a 2-D camera placed at the image plane of the object. For simplicity, we assume a 4-f telecentric imaging system, with the object field bandlimited by a square aperture at the pupil plane. In the 2-D interferometric imaging setup, the recorded signal I(x, y; k) at a point (x, y) of the detection plane and at the wavenumber k of the laser illumination is given by

I(x, y; k) = |E_S(x, y; k) + E_R(x, y; k)|²,  (1)

where E_S and E_R are the fields from the object and the reference arm, respectively, at the detector, expressed as

E_S(x, y; k) = U_S(x, y; k) exp(iθ_S),  (2)
E_R(x, y; k) = A_R(x, y) exp(iθ_R),  (3)

where the signal U_S, which contains the object field information, is given by the convolution of the geometrical image of the object field with the point spread function (PSF) of the system, A_R is the local reference field amplitude, Δz denotes the optical path length difference between the sample and the reference arm, and λ is the wavelength of light. The phase factors exp(iθ_S) and exp(iθ_R) in Eqs. (2) and (3) respectively represent the accumulated phase due to propagation of the field from the sample and from the reference mirror position to the detector plane, so that θ_R − θ_S = kΔz = 2πΔz/λ. In case of monochromatic illumination, with effectively a single k, the complex signal U_S, which contains the object field information we are interested in, can be retrieved via phase shifting methods in which Δz is varied [12,13]. Phase shifting methods are also used to detect the complex wavefront employing broadband light sources, which provide coherence gating [14]. The complex signal U_S can also be detected via frequency or wavelength diversity, as in the case of swept source OCT. In this case, the object is placed at a distance longer than that of the reference mirror position with respect to a common reference point, and the interference signal is recorded with varying k. Fourier transformation along the k dimension yields the tomographic signal separated from the autocorrelation, constant intensity offset (DC) and complex conjugate terms [15]. We assume that the aberrations present in the system primarily originate from the sample and the sample optics L2, and can be represented by an effective phase error at the pupil plane.
In case of FF SS-OCT, the complex signal U_S(mΔ, nΔ; z_j), obtained after Fourier transformation along k and containing field information about the object's j-th layer, can be written in discrete form as U_S(m, n; z_j) (Eq. (4)), where Δ is the detector pixel pitch, assumed to be the same in both directions, and (m, n) determine the pixel location in the image plane. Let the size of the sample be N × N pixels. After embedding the 2-D signal into an array of zeros twice its size (2N × 2N), we calculate the 2-D DFT of the signal on the computer to propagate it to the Fourier/pupil plane,

Ũ_S(p, q; z_j) = DFT_2D{ U_S(m, n; z_j) },  (5)

where (p, q) determine the pixel location in the Fourier plane. In Eq. (5) we have neglected the constant complex factor due to propagation, as it does not affect the reconstruction of the intensity images, which we are finally interested in. The extent of the spatial frequencies is limited by the square pupil aperture. Let the extent of the spatial frequencies in the Fourier plane be L × L pixels (L < 2N); then, according to the paraxial approximation, L = d/Δ_p, where d is the length of the side of the square aperture, Δ_p = λ_c f/(2NΔ) is the pixel pitch in the pupil plane, λ_c is the central wavelength, and f is the focal length of the lens. We window out the L × L spatial frequency data at the Fourier plane, embed it into the middle of an array of zeros of size 2N × 2N, and segment it into K × K subapertures of size ⌊L/K⌋ × ⌊L/K⌋ pixels, where ⌊·⌋ is the floor function. K is usually odd so that the central subaperture can be selected as a reference, as shown in Fig. 2
.

Fig. 2 Segmentation of the Fourier data into K × K subapertures with K = 3. Green square dots represent sampling points of the average local slope data due to non-overlapping subapertures. Red dotted boxes represent subapertures with about 50% overlap in both directions. Blue circular dots represent sampling points of the slope data due to overlapping subapertures. With overlapping subapertures we can increase the number of sampling points to reduce the error in phase estimation.

In the absence of aberration the wavefront has the same slope everywhere across the pupil plane, and hence all the subaperture images are formed at the same position in the image plane. In the presence of aberration, however, the slope of the wavefront varies across the pupil, which results in relative shifts between the images of the subapertures. Since we have a circularly centro-symmetric optical system, we choose the location of the image formed by the central subaperture as the reference position at which all subaperture images should be formed after aberration correction. We can also use aperture overlap, as shown in Fig. 2, to increase the number of sampling points for the local slope data.
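The zero-padding, pupil-plane DFT, windowing and segmentation steps described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation; the array sizes and the value of K are arbitrary example values.

```python
import numpy as np

def segment_pupil(u, L, K):
    """Propagate a complex en-face image to the pupil plane and split the
    windowed L x L spectrum into K x K subapertures (a sketch of the
    segmentation described above; L and K here are illustrative)."""
    N = u.shape[0]                                  # sample is N x N pixels
    padded = np.zeros((2 * N, 2 * N), dtype=complex)
    padded[:N, :N] = u                              # embed into 2N x 2N zeros
    pupil = np.fft.fftshift(np.fft.fft2(padded))    # 2-D DFT -> pupil plane
    c = N                                           # center of the 2N array
    window = pupil[c - L // 2: c - L // 2 + L,
                   c - L // 2: c - L // 2 + L]      # window out L x L region
    s = L // K                                      # subaperture side: floor(L/K)
    subs = [window[i * s:(i + 1) * s, j * s:(j + 1) * s]
            for i in range(K) for j in range(K)]
    return window, subs

u = np.random.randn(64, 64) + 1j * np.random.randn(64, 64)
window, subs = segment_pupil(u, L=60, K=3)
print(len(subs), subs[0].shape)   # 9 subapertures of 20 x 20 pixels
```

Overlapping subapertures, as in Fig. 2, would simply use a step smaller than the subaperture side when slicing `window`.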
We assume that in each subaperture the local phase error can be approximated by a first order Taylor expansion. We measure the translation of the intensity images formed by the different subapertures with respect to the intensity image formed by the central subaperture, after normalizing them to the same level [16]. We use this information to calculate the relative local wavefront slope in each subaperture,

s_x(u_i, v_j) = 2πΔx_ij/(2N),  s_y(u_i, v_j) = 2πΔy_ij/(2N),  (6)

where (Δx_ij, Δy_ij) are the shifts in terms of pixels in the horizontal and vertical directions in the image plane, and (u_i, v_j) are the center points of the (i, j)-th subaperture, to which the local slope value is assigned. We take the central subaperture as the reference. We see that the slope data obtained are independent of any system parameters. We use this slope information to reconstruct the estimated wavefront error using Taylor monomials as basis functions for modelling the phase error. The detailed description of the method is given in Appendix A.
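By the DFT shift theorem, a tilt of 2πΔx/(2N) radians per pupil pixel displaces the subaperture image by Δx pixels in a 2N-point array, which is why the slope data need no system parameters. As a minimal sketch, the subpixel registration of Ref. [16] is replaced here by integer-pixel cross-correlation; this is an assumption for illustration, not the paper's registration routine.

```python
import numpy as np

def registration_shift(ref, img):
    """Integer-pixel shift of img relative to ref via cross-correlation,
    a coarse stand-in for the subpixel registration of Ref. [16]."""
    xcorr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img))
    dy, dx = np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape)
    # map peaks beyond half the array size to negative shifts
    dy = dy - ref.shape[0] if dy > ref.shape[0] // 2 else dy
    dx = dx - ref.shape[1] if dx > ref.shape[1] // 2 else dx
    return dy, dx

def local_slope(shift_px, M):
    """Wavefront tilt (radians per pupil pixel) that displaces a
    subaperture image by shift_px pixels in an M-point DFT array."""
    return 2 * np.pi * shift_px / M

ref = np.random.rand(64, 64)
img = np.roll(ref, (3, -5), axis=(0, 1))
print(registration_shift(ref, img))   # (3, -5)
```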
After we have estimated the phase error φ_e(p, q), we apply a phase correction exp(−iφ_e(p, q)) to the Fourier data and then perform the 2-D IDFT to get the phase corrected image. This process is repeated for all the layers of a 3-D volume to obtain an aberration-free volume image. We assumed a 4-f system in which the pupil and the Fourier plane coincide. Since, as shown above, the method is independent of any system parameters, it can also be applied to the Fourier transform of the image even if the pupil is not at the Fourier plane. One should, however, make sure that the image is bandlimited; otherwise, aliasing of higher spatial frequencies might disturb proper reconstruction when performing the 2-D DFT on the image. Figure 3 shows the schematic diagram of the estimation of the phase error based on the split aperture method.

Fig. 3 Schematic of the subaperture correlation based phase correction method.
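The correction step itself is a single multiply-and-transform. A minimal NumPy sketch, using an assumed, known tilt as the phase error (standing in for the estimated φ_e):

```python
import numpy as np

N = 64
img = np.zeros((N, N)); img[20:30, 25:40] = 1.0   # toy intensity object

# an assumed, known phase error at the pupil plane: a tilt that shifts
# the image by 5 pixels (stands in for the estimated error phi_e)
fx = np.fft.fftfreq(N)
phi_e = 2 * np.pi * 5.0 * fx[None, :]

aberrated = np.fft.ifft2(np.fft.fft2(img) * np.exp(1j * phi_e))
# correction step: multiply the pupil data by exp(-i*phi_e), then 2-D IDFT
corrected = np.fft.ifft2(np.fft.fft2(aberrated) * np.exp(-1j * phi_e))

print(np.allclose(np.abs(corrected), img))   # True: the tilt is undone
```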
3. Computer simulation
For the computer simulation we consider the same optical setup as shown in Fig. 1, with lenses L1 and L2 having equal focal lengths f. We consider a sweeping laser source with center wavelength λ_c and bandwidth Δλ.

Fig. 4 (a) Normalized power spectrum, (b) amplitude of the spectral interferogram at a lateral pixel, (c) A-scan, (d) image slice at the peak location in the red dotted box in (c), (e) phase error in radians applied at the pupil plane, (f) aberrated image due to the phase error in (e).

Figure 4(a) shows the normalized power spectrum with respect to the wavenumber k. We assume that the sweeping is linear in k
. For simplicity, and just for demonstrating the working principle of the method, we consider only a 2-D object which gives diffuse reflection on illumination. Our object is a USAF bar target of N × N pixels, multiplied with circular complex Gaussian random numbers to simulate speckle. We zero-pad the object to a size of 2N × 2N pixels and apply the fast Fourier transform (FFT) to get to the pupil plane. We multiply the result with a square aperture, an array of ones of size L × L pixels zero padded to 2N × 2N pixels. To apply a phase error we multiply the result with a factor exp(iφ_e) constructed using Eq. (16), and then compute the IFFT to get to the image plane. In the last step we compute the IFFT instead of the FFT simply to avoid inverted images, without actually affecting the phase correction method. We multiply the resulting field at the image plane with a phase factor to take into account the propagation distance, then add the delayed reference on-axis plane wave with its own phase factor, and compute the squared modulus to obtain the simulated interference signal. This is done for each k in the sweep to create a stack of 2-D interference signals, with k varied in 256 equal steps. The reference wave amplitude was set to a multiple of the object field amplitude. Figure 4(b) shows the spectral interferogram at one lateral pixel.
For the reconstruction, we first calculate the FFT along k (256 spectral pixels, zero padded to 512 pixels) for each lateral pixel of the detection plane. The resulting sample depth profile (A-scan) is plotted in Fig. 4(c); it allows separating the DC, autocorrelation and complex conjugate terms from the desired object term (red dashed rectangle). Figure 4(d) shows the image slice from the reconstructed image volume at the position of the peak in Fig. 4(c) without any phase error. The simulated phase error consisted of Taylor monomials up to 6th order, with coefficients drawn from a Gaussian random distribution such that the peak-to-valley phase error across the aperture is 58.03 radians. Figure 4(e) shows the simulated phase error in radians, and Fig. 4(f) is the corresponding aberrated image obtained after applying it. We have assumed that the phase error is independent of the light frequency and that the system is compensated for any dispersion effects.
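The k-to-depth reconstruction at a single lateral pixel can be sketched as follows. The sweep range and path-length difference below are arbitrary illustrative numbers, not the simulation parameters; the point is that the FFT along k separates the object peak from the DC and complex-conjugate terms, as in Fig. 4(c).

```python
import numpy as np

Nk = 256                                   # spectral samples in the sweep
k = np.linspace(1.0, 1.1, Nk)              # linear-in-k sweep (arbitrary units)
dz = 2000.0                                # sample vs. reference path difference
E_obj = 0.2 * np.exp(1j * k * dz)          # object field at this lateral pixel
I = np.abs(1.0 + E_obj) ** 2               # spectral interferogram (unit reference)

# FFT along k (zero padded to 512 samples); subtracting the mean suppresses DC
ascan = np.abs(np.fft.fft(I - I.mean(), n=512))
peak = int(np.argmax(ascan[:256]))         # object term in the positive-depth half
print(peak)
```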
Fig. 5 (a), (b), (c) and (g) show the phase corrected images for non-overlapping subapertures with K equal to 3, 5, 7 and 9 respectively, and (d), (e), (f) and (j) are the respective residual phase errors in radians. (h) and (i) are the images for subapertures with 50 percent overlap for K equal to 3 and 5, and (k) and (l) are the respective residual phase errors in radians.

Figure 5 shows the results for the various cases of aperture segmentation. In Fig. 5(a) (K = 3) we have 9 sampling points, and hence we could fit only the first 9 Taylor monomials according to Eq. (16). Comparing the degraded image in Fig. 4(f) to the corrected image in Fig. 5(a), we immediately see the improvement in visual image quality. Except for the image obtained for K = 3 with 50 percent overlap, shown in Fig. 5(h), all the images obtained for the different cases are of similar visual quality. We found in our simulation that, in order to see any significant smearing or blurring in the images, the peak-to-valley (P-V) aberration should be more than 15 radians; otherwise the image is close to the diffraction limited image in visual quality. After phase corrections are applied under the different pupil splitting conditions, we find that the P-V residual phase errors are below 10 radians in all cases under consideration, except for K = 3 with 50 percent overlap, shown in Fig. 5(k). Hence, we judge the performance of the different pupil splitting conditions based on the residual phase error maps. In Fig. 5(b) (K = 5) we have 25 sampling points, and hence we could fit the first 25 Taylor monomials. We see that the residual phase error is further reduced as compared to the K = 3 case in Fig. 5(d). In Figs. 5(c) and 5(g) we have K = 7 and K = 9 respectively, but we fit only Taylor monomials up to 6th order to see the effect of increasing the number of sampling points. We see that the residual error decreases for K = 7 but increases for K = 9. This is because with increasing K the size of the subapertures becomes smaller, and hence the resolution of the images corresponding to these subapertures is reduced, leading to registration errors in the shift calculation. This results in false slope calculation and phase error estimation. Obviously, increasing the number of apertures in principle allows for higher order phase correction, but the smaller number of pixels in each subaperture leads to increasing phase errors. A possible solution is to use overlapping apertures, as shown in Fig. 2.
We used overlapping subapertures with 50 percent overlap in order to increase the number of sampling points with uniform spacing while maintaining the number of pixels per subaperture. A 50 percent overlap ensures the maximum number of sampling points without over-redundancy of the subaperture data due to the overlap. In the case of overlapping apertures, K no longer stands for the number of subapertures but defines the size of each subaperture as ⌊L/K⌋ × ⌊L/K⌋ pixels. Nevertheless, for K = 3 we have a higher residual phase error as compared to the non-overlapping case, also visible in the image in Fig. 5(h) as compared to Fig. 5(a). This is because we have assumed in our algorithm that the phase error in each subaperture can be described by a 1st order Taylor series. In the presence of higher order aberrations this assumption is no longer true if the subaperture is big enough to capture a large variation of the phase error. This can result in an error in the phase estimation, and overlapping large subapertures of the same size can further add up the errors. Ji et al. refer to errors induced by overlapping subapertures as “residual coupling between the subregions” [7]. This is what we see in the case of K = 3 with 50 percent overlap: for K = 3 the size of the subapertures is not optimal for the 1st order Taylor approximation of the local phase error. However, the residual error decreases rapidly for the overlapping case when K = 5. For K = 5 the subaperture size is small enough for the approximation to be valid, and we see a significant reduction in the residual phase error in Fig. 5(l).
Fig. 6 RMS residual phase error in radians for different values of K and different percentages of overlap between subapertures. For the same value of K, the size of the subapertures in the non-overlapping and overlapping cases is the same.

Figure 6 shows the plot of the rms residual phase error for different values of K, for both non-overlapping and overlapping subapertures with different percentages of overlap. We see that with overlapping subapertures of appropriate size with respect to the overall aberrations we obtain, in general, a lower residual phase error. Only if the subaperture size is chosen so large that it covers a significant amount of aberration, which cannot be approximated by the 1st order Taylor series, does the overlapping aperture approach perform worse. A smaller subaperture size ultimately leads to loss of resolution and increased registration error, which in consequence leads to an increase in the residual phase error. For bigger subapertures the wavefront is sampled with fewer points, and hence one can find coefficients only for the lower order Taylor monomials. The optimal number, size and percentage of overlap of the subapertures depend, of course, on the amount of aberration present. However, from the plots in Fig. 6 we can see that the overlapping subapertures do not yield significantly different results from the non-overlapping case for K = 5, 7, 9 and 11. Also, increasing the overlap between subapertures to more than 50 percent does not significantly reduce the residual phase error, due to the redundancy of the data caused by the overlap. We can see that for K = 3 the residual rms errors are lower for more than 50 percent overlap than for the 50 percent overlap case, but still higher than in the non-overlapping case. On the other hand, for K = 13 the residual rms errors for more than 50 percent overlap are higher than both the 50 percent overlap and the non-overlapping case. Since both K = 3 and K = 13 fall in regimes where the approximations and assumptions of the presented algorithm are no longer valid, it is difficult to explain the exact reasons behind the observed differences. Nevertheless, in Fig. 6 we can see that the condition K = 5 with 50 percent overlap performed best, with a residual rms error of 0.3412 radians, which is well within the Maréchal criterion of 0.449 radians rms for diffraction limited performance.
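For reference, the Maréchal criterion can be checked with the common extended-Maréchal approximation of the Strehl ratio, S ≈ exp(−σ²) for an rms wavefront error σ in radians, with S ≥ 0.8 taken as diffraction limited:

```python
import numpy as np

def strehl(sigma_rad):
    """Extended Marechal approximation of the Strehl ratio for an
    rms wavefront error sigma_rad given in radians."""
    return np.exp(-sigma_rad ** 2)

print(round(strehl(0.449), 2))    # ~0.82: the diffraction limited threshold
print(round(strehl(0.3412), 2))   # ~0.89: the best case above is well inside
```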
In the case of a symmetric quadratic phase error across the aperture, which results in defocus aberration, we can estimate the phase error with just two subapertures. The aperture can be split into two equal halves, vertically or horizontally. The phase error in each half has a linear slope of equal magnitude but opposite direction, causing the images formed by the two halves to shift in opposite directions relative to each other. Using the relative shift obtained by cross correlating the images formed by the two half apertures, one can derive an estimate of the coefficient a of the quadratic phase ap² as

a = 2πΔx/(M N_p),  (9)

where Δx is the relative shift in terms of pixels along the split direction, M is the total size of the data array in pixels and N_p is the size of the aperture in pixels. From Eq. (9) we can estimate the quadratic phase error. This method is simple and fast compared to defocus correction based on sharpness optimization, which requires many iterations.
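The two-subaperture defocus estimate can be illustrated with a 1-D sketch. The array sizes, the smoothed random "object" and the amount of defocus below are assumptions for the demo, the registration is plain integer-pixel cross-correlation, and the coefficient is recovered from the shift under the pixel conventions assumed here:

```python
import numpy as np

rng = np.random.default_rng(0)
M, Np = 512, 256                       # total array size, aperture size (pixels)
f = np.convolve(rng.random(256), np.ones(8), mode="same")  # smooth 1-D "object"
F = np.fft.fftshift(np.fft.fft(f, n=M))                    # centered spectrum

p = np.arange(M) - M // 2              # pupil pixel coordinate from center
ap = np.abs(p) < Np // 2               # aperture of Np pixels
a_true = 2 * np.pi * 4 / (Np * M)      # defocus coefficient -> ~4 px rel. shift
G = F * ap * np.exp(1j * a_true * p ** 2)

# intensity images formed by the two half apertures
I_left = np.abs(np.fft.ifft(np.fft.ifftshift(np.where(p < 0, G, 0)))) ** 2
I_right = np.abs(np.fft.ifft(np.fft.ifftshift(np.where(p >= 0, G, 0)))) ** 2

# relative shift via cross-correlation of the mean-subtracted images
xc = np.fft.ifft(np.conj(np.fft.fft(I_left - I_left.mean()))
                 * np.fft.fft(I_right - I_right.mean()))
dx = int(np.argmax(np.abs(xc)))
dx = dx - M if dx > M // 2 else dx     # signed relative shift in pixels
a_est = 2 * np.pi * abs(dx) / (Np * M)
```

The estimate `a_est` recovers `a_true` up to the integer-pixel registration error; subpixel registration would tighten it further.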
4. Experimental results
The schematic of the FF SS-OCT setup for the proof of principle study is shown in Fig. 7(a).

Fig. 7 (a) Schematic of the experimental setup: M is the mirror, B.S. is the beam splitter, MO is the 5× NIR microscope objective, L1 and L2 are lenses with focal length f = 200 mm, L3 and L4 are lenses with f = 150 mm and 75 mm, and P is the pupil placed at the focal plane of L3 and L4; (b) sample consisting of a layer of plastic sheet, a film of dried milk and the RTT; (c) image of the RTT surface obtained with the non-uniform plastic sheet; (d) Fourier transform of the image, showing that it is bandlimited; (e) zoomed-in image of (c) showing 390 × 390 pixels. Focus was placed on the RTT.

The system is a simple Michelson interferometer in Linnik configuration with the same type of microscope objective (5× Mitutoyo Plan Apo NIR Infinity-Corrected Objective, NA = 0.14, focal length f = 40 mm) in both the sample and the reference arm. The sample consisted of a layer of plastic, a film of dried milk and a USAF resolution test target (RTT). The thin film of dried milk was used to produce scattering and diffuse reflection. The plastic layer, of non-uniform thickness and structure to create random aberrations, was made by heating the plastic used for compact disc (CD) cases. The beam from the sweeping laser source (Superlum BroadSweeper 840-M) incident on lens L1 has a diameter of 10 mm, and the field of view on the sample is ~2 mm. The power incident on the sample is 5 mW. The RTT surface was in focus while imaging. The image of the sample formed by L2 is transferred to the camera plane using a telescope formed by lenses L3 and L4 with an effective magnification of 2.5. A circular pupil P of diameter 4.8 mm is placed at the focal plane of lenses L3 and L4 to spatially bandlimit the signal. Figures 7(c) and 7(e) show that the image obtained is speckled, due to scattering at the milk film, and aberrated by the plastic layer.
For imaging, the laser is swept over its wavelength range in equal steps, and the frames at each wavelength are acquired using a CMOS camera (Photon Focus MV1-D1312I-160-CL-12) at a frame rate of 108 fps, synchronized with the laser. A total of 512 frames are acquired. After zero padding and k mapping of the spectral pixels, a 1-D FFT is performed along the k dimension for each lateral pixel (the standard FD-OCT procedure), which provides the depth location of the different layers of the sample. The layer corresponding to the RTT is picked for aberration correction, and the corresponding en-face intensity image is displayed in Fig. 7(c). Figure 7(e) shows the zoomed-in image (390 × 390 pixels) of Fig. 7(c). The numbers 4 and 5 indicate the locations of the 4th and 5th group elements. We can barely resolve RTT element (4, 3). The horizontal bars are visually more affected by the aberration, due to the anisotropic nature of the distorted plastic layer. The FFT of the image after zero padding shows the spatial frequency information confined within a circular area. For further processing we filtered out a square area from the spatial frequency distribution, with side length chosen approximately equal to the diameter of the circular pupil (dotted square in Fig. 7(d)).
Fig. 8 Phase corrected versions of the image in Fig. 7, obtained using (a) non-overlapping subapertures with K = 3, (b) non-overlapping subapertures with K = 5 and (c) overlapping subapertures with K = 5. (d), (e) and (f) are the detected phase errors across the aperture in radians for the cases of (a), (b) and (c), respectively.

Figure 8 shows the phase correction results after subaperture processing. Taylor monomials up to 5th order are fitted to determine the phase error. Figure 8(a) shows the result obtained using 9 non-overlapping subapertures with K = 3. In this case we could fit only the first 9 monomial terms given by Eq. (16), and hence the monomial fitting is limited to the 4th order. We also tried non-overlapping subapertures with K = 5 and overlapping subapertures with 50% overlap for K = 5. Note that Figs. 8(d)-8(f) show only the estimate of the phase error that may be present. Unlike the simulation, where we calculated the residual phase error knowing the reference phase error, here we look for the improvement in image quality to judge the best pupil splitting condition. We can clearly appreciate the improvement in resolution, as the bars are more evident after phase correction. We can resolve horizontal bars up to element (5, 1) in Fig. 8(a) and up to element (5, 4) in Figs. 8(b) and 8(c). This corresponds to an improvement in resolution by a factor of 1.6 for Fig. 8(a) and a factor of 2 for Figs. 8(b) and 8(c) over the aberrated image, whose highest resolvable element is (4, 3). The improvement in resolution is calculated using the relation 2^(m/6), where m is the improvement in number of elements. The vertical bars in the corrected images appear much sharper after phase correction. The theoretically calculated diffraction limited resolution of our experimental setup is 6.5 µm; note that in our experiment the resolution is limited by the size of the pupil P. The best resolution obtained in the case of Figs. 8(b) and 8(c), corresponding to element (5, 4), is about 11 µm, which is still far from the theoretical limit. This is primarily because of the strong local variations of the wavefront across the pupil plane caused by the highly distorted plastic layer, leading to anisotropic imaging conditions. This may also be the reason why the phase maps shown in Figs. 8(d)-8(f) appear different; however, they have the same general profile.
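Since adjacent elements on a USAF 1951 target differ in spatial frequency by a factor of 2^(1/6), the quoted improvement factors follow directly:

```python
# Resolution improvement on a USAF 1951 target: each element step changes
# the bar spacing by 2^(1/6), so resolving m additional elements
# corresponds to an improvement factor of 2^(m/6).
def improvement(m):
    return 2 ** (m / 6)

print(round(improvement(4), 1))   # (4,3) -> (5,1): four elements, factor 1.6
```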
We also investigated the effect of a refractive index change within the sample. We replaced the non-uniform plastic layer of the sample with a uniform plastic layer of thickness ~1 mm. Without the plastic layer, the RTT was placed at the focal plane of the microscope objective while imaging. The presence of the plastic layer causes the image to be defocused, as seen in Fig. 9(a).

Fig. 9 (a) Aberrated image obtained using the uniform plastic sheet, (b) phase corrected image using non-overlapping subapertures with K = 3, (c) image obtained by defocus correction only, using two non-overlapping subapertures, (d) phase error in radians estimated for case (b), showing strong quadratic and fourth order terms, (e) quadratic phase error detected for case (c). Here defocus balances the spherical aberration.

Figure 9(b) shows the result using non-overlapping subapertures with K = 3. The corresponding estimated phase error in Fig. 9(d) shows the presence of quadratic and 4th order terms, which is expected due to the defocus and spherical aberration caused by the change in refractive index within the sample. Since spherical aberration can be balanced with defocus, we also applied the simple half aperture method to find the effective defocus error according to Eq. (9), as shown in Fig. 9(e). Successful refocusing is clearly evident in these images, as elements up to (6, 1) in Fig. 9(b) and up to (5, 6) in Fig. 9(c) can be resolved, corresponding to resolutions of 7.8 µm and 8.8 µm, respectively. These values are close to the calculated theoretical diffraction limited resolution of 6.5 µm. In the subaperture processing we filtered out a square aperture, whereas the actual pupil was circular, so the subapertures at the edges were not completely filled with spatial frequency data. This may have contributed to the residual phase error.
To demonstrate the method on a biological sample, we applied the defocus correction using two non-overlapping subapertures to a 3-D volume image of a grape. The digital refocusing method effectively achieves an extended depth of focus. The first layer of the grape was at the focal plane of the microscope objective. Since the theoretical depth of field is only 130 µm in the sample (assuming a refractive index of 1.5), the deeper, out-of-focus layers are blurred.

Fig. 10 (a) A tomogram of the grape sample, (b) 3-D image volume with the dotted line showing the location of the tomogram shown in (a), (c) en-face image obtained at a depth of 424.8 µm in the grape sample, indicated by the arrow in (a), (d) digitally refocused version of (c).

Figure 10(a) shows a tomogram of the grape sample, with an arrow indicating a layer at a depth of 424.8 µm from the focal plane. Figure 10(c) shows an en-face view of that layer, and Fig. 10(d) shows the defocus corrected image. We can appreciate the improvement in lateral resolution, as the cell boundaries are now clearly visible.
Fig. 11 Comparison of the coefficients of the defocus error with depth z, obtained theoretically and using the two subaperture method, for a grape sample.

In Fig. 11 the blue dotted plot shows the coefficients of the defocus error calculated using the two subaperture method, given by Eq. (9), for each layer of the grape sample, whereas the solid red line shows the theoretically calculated coefficients obtained using the knowledge of the optical distance of each layer of the grape sample from the focal plane,

a(z_j) = πλ_c z_j/(2NΔ)²,  (10)

where z_j = jδz denotes the optical distance of the j-th layer from the focal plane, with δz being the pixel pitch in depth and j the location of the layer in terms of pixels. The green dotted line indicates the theoretically calculated depth of focus. Here the number of pixels in the image is N × N and the pixel pitch of the camera is Δ. The pixel pitch in depth is calculated from the spectral sampling, assuming a uniform refractive index of 1.5 in the sample. Applying the theoretically calculated defocus correction is equivalent to Fresnel propagating the field of the out-of-focus layers to the focal plane. We can see from the plot that beyond a depth of about 380 µm there is increasing deviation of the result obtained using the two subaperture method from the theoretical values of Eq. (10). This may give the impression that the subaperture method is inaccurate in estimating the defocus error at greater depths. We therefore investigated the effect of the theoretically calculated defocus correction on the visual quality of the images and compared it with the defocus corrected images obtained using the two subaperture method. We found that the images obtained using both methods are of similar visual quality up to a depth of about 380 µm. At greater depths, however, the images obtained using the theoretically calculated defocus corrections appear blurry, whereas the images obtained with the two subaperture method remain in sharp focus. This shows that for deeper layers the theoretically calculated defocus correction requires further optimization or processing to achieve the level of performance demonstrated by the single step subaperture method. Media 1 shows a fly-through of the depth of the original and the digitally refocused 3-D volume images of the grape sample, where we can clearly see that the same lateral resolution is maintained throughout the depth after defocus correction.