Quantitative single-shot imaging of complex objects using phase retrieval with a designed periphery

Alexander Jesacher, Walter Harm, Stefan Bernet, and Monika Ritsch-Marte


Optics Express, Vol. 20, Issue 5, pp. 5470-5480 (2012)
http://dx.doi.org/10.1364/OE.20.005470


Abstract

Measuring transmission and optical thickness of an object with a single intensity recording is desired in many fields of imaging research. One possibility to achieve this is to employ phase retrieval algorithms. We propose a method to significantly improve the performance of such algorithms in optical imaging. The method relies on introducing a specially designed phase object into the specimen plane during the image recording, which serves as a constraint in the subsequent phase retrieval algorithm. This leads to faster algorithm convergence and improved final accuracy. Quantitative imaging can be performed by a single recording of the resulting diffraction pattern in the camera plane, without using lenses or other optical elements. The method allows effective suppression of the “twin-image”, an artefact that appears when holograms are read out. Results from numerical simulations and experiments confirm a high accuracy which can be comparable to that of phase-stepping interferometry.

© 2012 OSA

1. Introduction

Solutions to the problem of obtaining amplitude and phase of an electromagnetic field from a single intensity measurement are desirable in many areas of imaging research, predominantly in imaging modalities whose intrinsic radiation properties make it difficult to apply the established interferometric techniques of visible-light optics, such as X-ray, gamma or electron-beam imaging. Inline or Gabor holography [1] provides a solution to a certain extent. There, one illuminates the object with a sufficiently coherent beam and records the intensity of the interference pattern created by the scattered and unscattered wave parts. Owing to its simplicity and the availability of high-resolution digital image sensors, it is nowadays frequently used in various fields of research [2–6]. However, the loss of phase information in the recording process gives rise to an inherent ambiguity: when the hologram is read out (either optically or numerically), a potentially disturbing “twin-image” appears in addition to the original object. This twin-image represents the conjugated diffraction order of the recorded intensity hologram and thus differs from the original object in the sense that, at the position of the hologram, it is the conjugated version of the original object field. Hence, the twin-image resembles a mirrored version of the original and appears in a different axial plane if the object was placed at a finite distance from the image sensor during the recording. Since the invention of holography, efforts have been made to suppress the twin-image, first by optical means [7–9], later also numerically, for instance by linear filtering [10, 11] or by iterative methods [12–17] that often employ variants of the Gerchberg-Saxton algorithm [18].

The basic principle of such algorithms is to numerically propagate the light field back and forth between the recording plane and the object plane, applying certain constraints in each plane. Ideally, the algorithm ends up with a totally suppressed twin-image, leaving only the real object. In the recording plane, an obvious constraint is the recorded intensity, which is, however, a valid boundary condition for both the real image and its undesired twin. Therefore, in the object plane one must apply a constraint that breaks the ambiguity between real and twin-image in order to make the algorithm converge towards the true solution. Depending on the application, a variety of different constraints can be introduced [19], although it is in general not possible to find a method that guarantees convergence towards the true solution [19]. It often happens that the algorithm gets “trapped” in local error minima and ends up with an imprecise approximation of the original object. The final performance, i.e. the convergence speed and the accuracy of the result, depends crucially on the strength of the imposed boundary conditions.

Considering the approach of introducing a known object into the sample plane to serve as constraint, one might ask how it should ideally be structured to impose strong boundary conditions. It is well known that objects of asymmetric shape lead to better results than objects of symmetric shape [20], because phase retrieval is more “unique” in such cases. Furthermore, it appears reasonable that one has to maximise the “crosstalk” between the known periphery and the unknown object in order to enhance the “flow of information” from periphery to object that takes place at each iterative step. Since there is naturally no such crosstalk in the object plane, because object and periphery are by definition spatially separated, one can only maximise it in the recording plane, where both fields mix due to diffraction. To this end we propose to use a designed diffractive periphery that maximises the observable crosstalk, i.e. that alters the object’s diffraction pattern in the recording plane as much as possible by its presence. A value quantifying this crosstalk can be defined as follows:
M = \frac{2\,\langle |E_{\mathrm{ref}}|\,|E_{\mathrm{tot}}| \rangle}{\langle |E_{\mathrm{ref}}|^{2} \rangle + \langle |E_{\mathrm{tot}}|^{2} \rangle}.    (1)
Here, Eref is the reference field, i.e. the light field in the recording plane when no object is inserted, and Etot the entire field created by periphery and object. The brackets 〈...〉 denote spatial averaging over the entire recording plane. The expression above is essentially the normalised correlation of |Eref| with |Etot|. If the object has only little influence on the periphery diffraction pattern, we have Eref ≈ Etot and the expression takes a value close to one; in this case we expect an unsatisfactory algorithm performance. On the other hand, if the fields scattered by periphery and object interfere strongly at the recording plane, Eref and Etot will differ significantly. The correlation term 〈|Eref| |Etot|〉 is then reduced, M in Eq. (1) drops below one, and we expect an improved phase retrieval performance. The theoretical optimum of M is zero.
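As a minimal numerical sketch (array names and the helper function are illustrative, not taken from the paper), M can be evaluated from two sampled complex fields as follows:

```python
import numpy as np

def crosstalk_M(E_ref, E_tot):
    """Crosstalk measure M of Eq. (1): spatially averaged correlation of the
    field moduli, normalised so that M = 1 when E_tot equals E_ref."""
    num = 2.0 * np.mean(np.abs(E_ref) * np.abs(E_tot))
    den = np.mean(np.abs(E_ref) ** 2) + np.mean(np.abs(E_tot) ** 2)
    return num / den
```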

The principle is outlined in Fig. 1. The drawing on the left illustrates the case of plane-wave illumination. The wave parts passing beside and going through the object start to interfere as they propagate towards the recording plane. For lensless imaging applications aiming at high resolution, the recording plane is located relatively close to the object, hence the region where both fields overlap is quite small. This explains the general observation that twin-image suppression performs worse for large Fresnel numbers, a behaviour that has also been pointed out by other authors [22].

Fig. 1 Left: for plane wave illumination, the fields passing by and going through the object show only little overlap at the recording plane. Right: a designed periphery can maximise this overlap.

The sketch on the right depicts our proposal, where the periphery is designed to shape the illumination beam for maximal overlap with the object’s diffraction pattern in the recording plane. To achieve a large overlap, the peripheral structure could be similar to that of a diffractive lens or an axicon. Another possibility is to design the periphery using iterative hologram computation algorithms. In this case, the periphery phase adopts a kind of scrambled appearance, which is however not random but optimised to condense the light in the desired way. It should be noted that even a simple diffuser disc leads to enhanced field overlap in the recording plane and can thus be expected to improve phase retrieval performance. Because of the simplicity of this approach, we performed experiments with such discs and investigated the obtainable quality.

2. Experiment with a diffuser disc

Employing a polymer diffuser disc as peripheral object, we took single intensity recordings of the light field scattered by a transparent insect wing and applied a phase retrieval algorithm (see Section 3 for details of the algorithm implementation) to retrieve its amplitude and phase functions. We tested holographic diffusers from Edmund Optics (NT65-883, NT65-554 and NT47-996) with specified cut-off diffraction angles of 0.5°, 1° and 5°, respectively, and found the 1° scatterer to perform best. The value of M for this specific combination of specimen and diffuser was measured to be 0.988, compared to 0.993 when the diffuser disc was removed and the object was illuminated by a plane wave. Although the change in M appears marginal, it nevertheless has a great influence on the achievable quality.
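For numerical experiments, a diffuser with a given cut-off diffraction angle can be mimicked by a band-limited random phase screen; the sketch below is one such illustrative model (function name, grid parameters and the filtering approach are assumptions, not the authors' procedure):

```python
import numpy as np

def random_diffuser_phase(n, pixel_size, wavelength, cutoff_deg, seed=0):
    """Phase of a speckle field band-limited to f_cut = sin(theta_cut)/lambda,
    used as an approximate model of a diffuser with the given cut-off angle."""
    rng = np.random.default_rng(seed)
    white = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    fx = np.fft.fftfreq(n, d=pixel_size)
    FX, FY = np.meshgrid(fx, fx)
    f_cut = np.sin(np.deg2rad(cutoff_deg)) / wavelength
    speckle = np.fft.ifft2(np.fft.fft2(white) * ((FX**2 + FY**2) <= f_cut**2))
    return np.angle(speckle)

# e.g. a 1-degree diffuser on a 1024 x 1024 grid with 5.7 um pixels at 633 nm:
# phi = random_diffuser_phase(1024, 5.7e-6, 633e-9, 1.0)
```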

A sketch of the set-up is shown in Fig. 2(a). A hole of 3 mm diameter was drilled into the centre of the diffuser, which was then placed at a distance of about 15 cm from a CMOS sensor (Canon EOS 1000D) with a size of 22.2 × 14.8 mm² and a pixel side length of 5.7 μm. These dimensions define the achievable resolution to be on the order of 10 μm. The diffuser was illuminated with a helium-neon laser beam (633 nm wavelength) of Gaussian shape and 9 mm diameter (1/e²), such that part of the beam passed undiffracted through the hole in the diffuser. This part illuminated the object in the specimen plane, which was located a few millimetres behind the diffuser disc. At the recording plane, the field diffracted by the diffuser overlapped well with the remainder of the light. A typical recorded image is shown in Fig. 2(b). From this single image, the amplitude and phase of the insect wing could be reconstructed with high accuracy by the phase retrieval algorithm (see Fig. 2(c)).

Fig. 2 (a) Experimental set-up; (b) typical image recorded by the CMOS sensor; (c) amplitude and phase of the wing, reconstructed from the image shown in (b).

To successfully apply the phase retrieval algorithm, accurate knowledge of the light field scattered by the diffuser is required. This field corresponds to Eref in Eq. (1). For this purpose, a beam splitter cube was placed between the object and recording planes (not indicated in the figure), which allowed a reference wave to be introduced and brought to interference with the scattered field. Amplitude and phase of the scattered field were measured by stepping the phase of the reference beam (six equidistant phase steps in the interval [0, 2π]) and recording the corresponding interferograms. It should be noted that this interferometric measurement has to be conducted only once. Once the diffuser disc has been characterised, amplitude and phase information of arbitrary samples can be measured from single intensity recordings of their diffraction patterns, without using any imaging optics.
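One standard way to evaluate such equidistant phase steps is synchronous detection of the first harmonic; the following sketch assumes steps θk = 2πk/N and a known reference wave R, which may differ in detail from the authors' evaluation:

```python
import numpy as np

def field_from_phase_steps(frames, ref_wave):
    """Recover the complex field E from N interferograms
    I_k = |E + R * exp(1j * 2*pi*k/N)|**2 by synchronous detection of the
    first harmonic (assumes equidistant steps over one full 2*pi period)."""
    N = len(frames)
    thetas = 2.0 * np.pi * np.arange(N) / N
    # (1/N) * sum_k I_k * exp(1j*theta_k) equals E * conj(R) for equidistant steps
    corr = sum(I * np.exp(1j * t) for I, t in zip(frames, thetas)) / N
    return corr / np.conj(ref_wave)
```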

It is worth recalling that numerical refocussing becomes possible once the complex field has been obtained. Figure 3 contains images which demonstrate this ability. The image on the left shows the amplitude of the insect wing, which is also shown in Fig. 2(c). The right image shows the amplitude of the same field, but defocussed by seven millimetres, such that the holey scattering disc comes into focus. The development of the field amplitude in axial steps of 1 mm within a volume around the specimen is shown in a movie (Media 1).

Fig. 3 Once the complex field has been obtained, it is possible to perform numerical refocussing. The images contain field amplitudes that have been obtained via phase retrieval. Left: Focussed onto the wing. Right: Focussed onto the scattering disc, located seven millimetres in front of the wing. A movie is provided ( Media 1) that shows how the amplitude develops in the volume between these planes.
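Numerical refocussing amounts to scalar propagation of the retrieved field; a minimal sketch using the angular spectrum (spectrum of plane waves) method, with illustrative array names, sampling values and sign convention, could look like this:

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, pixel_size):
    """Propagate a sampled scalar field by a distance dz using the angular
    spectrum (spectrum of plane waves) method; evanescent waves are discarded."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    kz_sq = (2 * np.pi / wavelength) ** 2 - (2 * np.pi * FX) ** 2 - (2 * np.pi * FY) ** 2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))
    H = np.exp(1j * kz * dz) * (kz_sq > 0)        # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# e.g. refocus from the wing plane to the diffuser plane, 7 mm away
# (E_wing is a hypothetical retrieved field sampled at 5.7 um pixels):
# E_disc = angular_spectrum_propagate(E_wing, -7e-3, 633e-9, 5.7e-6)
```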

3. Phase retrieval algorithm

Our algorithm is based on the error-reduction algorithm [23], which is a variant of the Gerchberg-Saxton algorithm. Its inputs are the interferometrically measured complex field scattered by the periphery (referred to as the “reference field” Eref) and a single intensity recording of the field scattered by object plus periphery (referred to as the “total field” Etot), similar to that shown in Fig. 2(b). Both fields are measured in the recording plane, which is therefore also the plane where the algorithm starts. In an initial step, both fields are combined into a new field E, defined in the recording plane:
E(\mathbf{r}, z_{\mathrm{rec}}) = |E_{\mathrm{tot}}(\mathbf{r}, z_{\mathrm{rec}})|\,\frac{E_{\mathrm{ref}}(\mathbf{r}, z_{\mathrm{rec}})}{|E_{\mathrm{ref}}(\mathbf{r}, z_{\mathrm{rec}})|}.    (2)
The vector r represents the transverse coordinates and zrec the axial coordinate of the recording plane. Subsequently, the phase retrieval algorithm is executed, which iteratively performs the following four steps:
  1. The field E of Eq. (2) is numerically propagated into the object plane using an appropriate propagation operator F:
     E(\mathbf{r}, z_{\mathrm{obj}}) = F\{E(\mathbf{r}, z_{\mathrm{rec}})\}.    (3)
  2. The support constraint is applied by replacing E(r,zobj) by the reference field Eref(r,zobj) in the entire object plane except in the area occupied by the object. This area defines the object support, and the remaining area the periphery. Since the object’s boundaries cannot be clearly identified during the first couple of iterations, we initially choose the object support large enough to contain any possible sample. This initial support is referred to as Sini; for the experiment described in Section 2 it corresponds to the area defined by the hole in the diffuser. The resulting new field E′ can be expressed as:
     E'(\mathbf{r}, z_{\mathrm{obj}}) = \begin{cases} E(\mathbf{r}, z_{\mathrm{obj}}) & \text{if } \mathbf{r} \in S_{\mathrm{ini}} \\ E_{\mathrm{ref}}(\mathbf{r}, z_{\mathrm{obj}}) & \text{if } \mathbf{r} \notin S_{\mathrm{ini}}. \end{cases}
  3. E′(r,zobj) is numerically back-propagated to the recording plane, where we obtain:
     E'(\mathbf{r}, z_{\mathrm{rec}}) = F^{-1}\{E'(\mathbf{r}, z_{\mathrm{obj}})\}.    (4)
  4. Finally, we apply our second constraint, which is the recorded modulus of the total field Etot:
     E(\mathbf{r}, z_{\mathrm{rec}}) = |E_{\mathrm{tot}}(\mathbf{r}, z_{\mathrm{rec}})|\,\frac{E'(\mathbf{r}, z_{\mathrm{rec}})}{|E'(\mathbf{r}, z_{\mathrm{rec}})|}.    (5)

The above steps (i)–(iv) define one iteration of the phase-retrieval algorithm. After a couple of iterations, the object boundaries are typically very well defined. This allows the initial object support Sini to be replaced by a refined one (Sfinal) that closely matches the sample shape (see Fig. 4). This refinement is done by automated object segmentation and does not require any user interaction. It should be noted that redefining the object support also implies a redefinition of the periphery area: parts of the area that formerly belonged to the object support become part of the periphery. The object-support refinement is similar to that described in Ref. [17] and typically leads to significantly improved results.

Fig. 4 Automated object segmentation: The algorithm initially assumes a sufficiently large object support Sini that is automatically refined after a few iterations.
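A minimal sketch of one iteration, steps (i)–(iv), is given below; it assumes a scalar propagator such as the angular-spectrum routine sketched earlier, uses illustrative argument names, and omits the automated support refinement:

```python
import numpy as np

def error_reduction_iteration(E_rec, E_ref_obj, I_tot_sqrt, support, propagate, z):
    """One error-reduction iteration, steps (i)-(iv) of Section 3 (sketch).
    'propagate(field, dz)' is a scalar propagator, 'support' a boolean mask of
    the object support in the object plane, 'E_ref_obj' the reference field in
    the object plane, and 'I_tot_sqrt' the recorded modulus |E_tot|."""
    E_obj = propagate(E_rec, -z)                      # (i) to the object plane
    E_obj = np.where(support, E_obj, E_ref_obj)       # (ii) periphery constraint
    E_back = propagate(E_obj, +z)                     # (iii) back to recording plane
    return I_tot_sqrt * E_back / (np.abs(E_back) + 1e-12)  # (iv) modulus constraint

# Initialisation according to Eq. (2) and the iteration loop (hypothetical inputs):
# E = I_tot_sqrt * E_ref_rec / np.abs(E_ref_rec)
# for _ in range(110):
#     E = error_reduction_iteration(E, E_ref_obj, I_tot_sqrt, support, prop, z)
```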

Optionally, one may also include an additional constraint in step (ii): at positions within the object support where |E(r,zobj)| > |Eref(r,zobj)|, i.e. where the object seems to amplify the illumination intensity, one can replace E(r,zobj) by |Eref(r,zobj)| E(r,zobj)/|E(r,zobj)| [12]. This, however, restricts the method to objects that do not amplify light. This additional constraint was found to have only a weak effect on the performance of the algorithm for the objects investigated and was therefore not applied in the numerical simulations presented in this manuscript.
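Expressed as code, this optional amplitude clamp inside the support would amount to the following sketch (all names are illustrative):

```python
import numpy as np

def clamp_amplitude(E_obj, E_ref_obj, support):
    """Optional constraint in step (ii): inside the object support, limit the
    modulus of E to that of the illumination E_ref while keeping its phase."""
    too_bright = support & (np.abs(E_obj) > np.abs(E_ref_obj))
    clamped = np.abs(E_ref_obj) * E_obj / (np.abs(E_obj) + 1e-12)
    return np.where(too_bright, clamped, E_obj)
```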

4. Experimental comparison with alternative methods

All experimental results are summarised in Fig. 5. Each method corresponds to one column of the image matrix. The first two image rows contain the reconstructed amplitude and phase of the wing. The third row contains the modulus of the difference between the reconstructed and interferometrically measured fields. The phase-retrieval results were obtained by 110 iterations of the error-reduction algorithm. After the first 10 iterations, the preliminary result allowed for object segmentation and thus for an accurate localisation of the object’s boundaries. Taking the interferometry results (first column) as reference, a clear quality difference can be seen between the results of the other three methods. The random-phase approach clearly performs best, delivering a complex field that appears almost indistinguishable from the interferometric measurement. The plane-wave periphery delivers results that seem quite accurate at first glance but reveal significant reconstruction errors in both amplitude and phase on closer inspection. Finally, the inline hologram reconstruction returns, as expected, a superposition of real and twin-image, and therefore the worst results in terms of quantitative accuracy. Considering the simplicity and robustness of this approach, however, it remains a valuable tool in many applications. Furthermore, it should be emphasised that the twin-image does not necessarily represent a problem for inline holography; depending on the set-up parameters, its influence can vary from heavily disturbing to unrecognisable [24].

Fig. 5 Experimental comparison of different methods for complex field reconstruction. The object under investigation is an insect wing. 1st image column: interferometry; 2nd column: phase retrieval using random phase periphery; 3rd column: phase retrieval using plane wave periphery; 4th column: inline holography. The first two rows contain the reconstructed amplitude and phase images, the third row the difference of the reconstructed complex field to the interferometric measurement. The graphs in the lower half compare the reconstructed phase along different sections through the insect wing. The locations of the sections are indicated in the interferometry phase image.

5. Numerical simulations

So far we have shown that a randomly scattering periphery can lead to improved results. Let us now investigate optimised peripheries. We conducted a series of numerical simulations to provide further support for the role of M in phase retrieval performance. We investigated different peripheries with varying values of M, among them a random-phase periphery similar to that used in our experiments, but also peripheries that were specially designed to optimise M.

Although an object-independent definition of M is clearly impossible, it can still be evaluated for a typical object spectrum. Most objects scatter only little light into high diffraction angles, hence a reasonable approach to improving M is to design a periphery that shapes a Gaussian intensity distribution at the centre of the recording plane. Once this target field has been defined, an iterative Fourier transform algorithm such as the Gerchberg-Saxton algorithm can be used to compute a diffractive optical element (DOE) that realises the desired periphery in the object plane.
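A sketch of such an iterative design loop is shown below; the paper does not spell out the algorithm variant, so the propagator-based Gerchberg-Saxton loop and all names are illustrative assumptions:

```python
import numpy as np

def design_periphery_phase(target_amp, propagate, z, aperture, n_iter=50, seed=1):
    """Gerchberg-Saxton-type design of a phase-only periphery that shapes a
    given target amplitude in the recording plane (illustrative sketch).
    'propagate(field, dz)' is a scalar propagator, 'aperture' a boolean mask
    of the periphery area in the object plane."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amp.shape)   # random start phase
    for _ in range(n_iter):
        E_obj = aperture * np.exp(1j * phase)              # phase-only element
        E_rec = propagate(E_obj, z)                        # to the recording plane
        E_rec = target_amp * np.exp(1j * np.angle(E_rec))  # impose target modulus
        E_obj = propagate(E_rec, -z)                       # back to the object plane
        phase = np.angle(E_obj)                            # keep only the phase
    return phase
```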

The simulation parameters were chosen to resemble our experimental conditions. We assumed top-hat illumination with light of wavelength λ = 633 nm, a distance of 15 cm between the object and recording planes, and a circular periphery of 6 mm diameter with a 3 mm hole in its centre. Within this hole, we placed different phase objects (see insets in Fig. 6), each exhibiting a peak-to-valley phase of 2π. One of the objects was also assumed to be partly absorbing. The objects were deliberately chosen to differ in contrast and spatial frequency content. In contrast to the experimental work, where the object is unknown, it is not necessary in the simulations to define a large initial object support Sini that is later refined; instead, tight-fitting object supports were used right from the start of the phase retrieval algorithm. The approximate boundaries of these support regions are indicated by red lines around the objects in Fig. 6. The numerical grid spacing was 11.4 μm, which roughly matches the achievable resolution defined by λ/NA, where NA denotes the numerical aperture of the imaging system. We employed the scalar spectrum of plane waves (angular spectrum) method for simulating the light propagation.

Fig. 6 (a) Four different phase objects were chosen for the numerical simulations. Black curves: the residual errors after three iterations of the error-reduction algorithm as functions of the angular width w of the spot created by the designed peripheries; red curves: the corresponding values of M; (b) error convergence behaviour for one of the four objects (see inset), calculated for different peripheries: cyan: “plane wave” periphery; blue: 1° random scatterer; red: designed periphery. The corresponding values for M are stated above the curves.

To investigate the role of M, we designed a set of 16 different peripheries, each producing a spot of light with a Gaussian envelope and random phase at the centre of the recording plane. The spots have equal total power but different widths w, defined as the full width at the 1/e² intensity level and expressed in angular units as seen from the object plane. They therefore show different overlap with the object field and thus different values of M. For each periphery, we employed the error-reduction algorithm to retrieve all four objects from single intensity recordings. The algorithm was stopped after three iterations and an error parameter σ was calculated as the RMS difference between the original and reconstructed object fields:
\sigma = \sqrt{\left\langle |E_{\mathrm{orig}} - E_{\mathrm{recon}}|^{2} \right\rangle}.    (6)
The four diagrams in Fig. 6(a) show the remaining errors obtained for each object and each of the 16 peripheries (black curves). The smallest residual errors were achieved for peripheries that produce Gaussian spots of w = 0.8°; this applies approximately to all four objects. The diagrams also contain plots of the corresponding values of M (red curves), which correlate well with the residual errors.
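For completeness, the error metric of Eq. (6) and a sweep over peripheries could be sketched as follows; retrieve(), periphery and spot_widths_deg are hypothetical placeholders tying together the earlier sketches:

```python
import numpy as np

def rms_error(E_orig, E_recon):
    """Error parameter sigma of Eq. (6): RMS difference of the complex fields."""
    return np.sqrt(np.mean(np.abs(E_orig - E_recon) ** 2))

# Hypothetical sweep over designed peripheries of different spot widths w,
# where retrieve() would wrap three iterations of the error-reduction loop
# for one periphery/object combination:
# errors = {w: rms_error(E_true, retrieve(periphery[w], E_true, n_iter=3))
#           for w in spot_widths_deg}
```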

Fig. 7 Left: residual error after 30 iterations for different sizes of the scattering periphery. For the designed periphery, reasonable results are obtained if it is larger than the object by a factor of about 2. Right: phase of the reconstructed department logo after 30 iterations, framed by the designed scattering periphery which covers an area 1.6× larger than that of the logo. The areas of the scattering periphery Ascat and object support S are coloured in yellow and red, respectively.

6. Summary and discussion

We have demonstrated a method that significantly improves the performance of phase retrieval algorithms for measuring the full complex-valued structure of an unknown object from a single intensity recording. In the context of inline holography, the method is able to suppress the “twin-image”, which represents the disturbing conjugated diffraction order of recorded holograms. Our method is based on introducing a specially designed phase object into the specimen plane during the recording process; this can be thought of as a specific support constraint. If the structure of this additional (“peripheral”) object is used as constraint for the phase retrieval algorithm, its convergence speed is significantly improved and results of high precision can be obtained. The key to the achievable accuracy is the precision to which the peripheral object is known. By incorporating an interferometer into the imaging set-up, it can be characterised precisely and in situ, which has the advantage that no subsequent image registration steps are required [25]. In the phase retrieval procedure, the precise knowledge of the periphery is “transferred” into precise knowledge of the sample.

We have presented experimental results that directly demonstrate the proposed advantages. We experimentally compared the performance of two different peripheries: a plane wave and a plastic diffuser, the latter representing a cheap and readily available peripheral object. Results obtained from inline holography and phase-stepping interferometry were also included in the comparison. We find that the plastic diffuser (albeit not yet optimised or customised for the object) delivers the most accurate results, which can be comparable to the interferometric measurement.

We would like to emphasise that a more attractive way of putting our proposal into practice is to use patterned illumination rather than to introduce a real peripheral object into the specimen plane. For instance, one could use a DOE to shape the illumination wave such that it forms the desired periphery in the specimen plane. The use of dynamic devices such as spatial light modulators (SLMs) would even allow object-specific peripheries to be calculated and displayed in real time. Exploiting the potential of graphics processing units, the iterative phase retrieval can also be fast, with a single iteration taking only a few milliseconds.

Acknowledgments

This work was supported by the Austrian Science Foundation (FWF) Project No. P19582-N20 and the ERC Advanced Grant 247 024 catchIT.

References and links

1. D. Gabor, “A new microscopic principle,” Nature 161, 777–778 (1948).
2. I. Moon, M. Daneshpanah, A. Anand, and B. Javidi, “Cell identification with computational 3-D holographic microscopy,” Opt. Photonics News 22, 18–23 (2011).
3. W. Xu, M. H. Jericho, I. A. Meinertzhagen, and H. J. Kreuzer, “Digital in-line holography for biological applications,” Proc. Natl. Acad. Sci. U.S.A. 98, 11301–11305 (2001).
4. Q. Xu, K. Shi, H. Li, K. Choi, R. Horisaki, D. Brady, D. Psaltis, and Z. Liu, “Inline holographic coherent anti-Stokes Raman microscopy,” Opt. Express 18, 8213–8219 (2010).
5. P. Spanne, C. Raven, I. Snigireva, and A. Snigirev, “In-line holography and phase-contrast microtomography with high energy x-rays,” Phys. Med. Biol. 44, 741–749 (1999).
6. M. Kanka, R. Riesenberg, P. Petruck, and C. Graulig, “High resolution (NA=0.8) in lensless in-line holographic microscopy with glass sample carriers,” Opt. Lett. 36, 3651–3653 (2011).
7. W. L. Bragg and G. L. Rogers, “Elimination of the unwanted image in diffraction microscopy,” Nature 167, 190–191 (1951).
8. E. N. Leith and J. Upatnieks, “Wavefront reconstruction with continuous-tone objects,” J. Opt. Soc. Am. 53, 1377–1381 (1963).
9. O. Bryngdahl and A. Lohmann, “Single-sideband holography,” J. Opt. Soc. Am. 58, 620–624 (1968).
10. L. Onural and P. D. Scott, “Digital decoding of in-line holograms,” Opt. Eng. 26, 1124–1132 (1987).
11. K. A. Nugent, “Twin-image elimination in Gabor holography,” Opt. Commun. 78, 293–299 (1990).
12. G. Liu and P. D. Scott, “Phase retrieval and twin-image elimination for in-line Fresnel holograms,” J. Opt. Soc. Am. A 4, 159–165 (1987).
13. G. Koren, F. Polack, and D. Joyeux, “Iterative algorithms for twin-image elimination in in-line holography using finite-support constraints,” J. Opt. Soc. Am. A 10, 423–433 (1993).
14. F. Zhang, G. Pedrini, W. Osten, and H. J. Tiziani, “Image reconstruction for in-line holography with the Yang-Gu algorithm,” Appl. Opt. 42, 6452–6457 (2003).
15. T. Latychevskaia and H.-W. Fink, “Solution to the twin image problem in holography,” Phys. Rev. Lett. 98, 233901 (2007).
16. C. P. McElhinney, B. M. Hennelly, and T. J. Naughton, “Twin-image reduction in inline digital holography using an object segmentation heuristic,” J. Phys.: Conf. Ser. 139, 012014 (2008).
17. S. M. Raupach, “Cascaded adaptive-mask algorithm for twin-image removal and its application to digital holograms of ice crystals,” Appl. Opt. 48, 287–301 (2009).
18. R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of the phase from image and diffraction plane pictures,” Optik (Jena) 35, 237–246 (1972).
19. V. Elser, “Phase retrieval by iterated projections,” J. Opt. Soc. Am. A 20, 40–55 (2003).
20. J. R. Fienup, “Reconstruction of a complex-valued object from the modulus of its Fourier transform using a support constraint,” J. Opt. Soc. Am. A 4, 118–123 (1987).
21. J. R. Fienup, “Lensless coherent imaging by phase retrieval with an illumination pattern constraint,” Opt. Express 14, 498–508 (2006).
22. G. Koren, D. Joyeux, and F. Polack, “Twin-image elimination in in-line holography of finite-support complex objects,” Opt. Lett. 16, 1979–1981 (1991).
23. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21, 2758–2769 (1982).
24. J. Garcia-Sucerquia, W. Xu, S. K. Jericho, P. Klages, M. H. Jericho, and H. J. Kreuzer, “Digital in-line holographic microscopy,” Appl. Opt. 45, 836–850 (2006).
25. S. Bernet, W. Harm, A. Jesacher, and M. Ritsch-Marte, “Lensless digital holography with diffuse illumination through a pseudo-random phase mask,” Opt. Express 19, 25113–25124 (2011).

OCIS Codes
(100.5070) Image processing : Phase retrieval
(110.6150) Imaging systems : Speckle imaging
(090.1995) Holography : Digital holography

ToC Category:
Image Processing

History
Original Manuscript: January 4, 2012
Manuscript Accepted: February 7, 2012
Published: February 21, 2012

Citation
Alexander Jesacher, Walter Harm, Stefan Bernet, and Monika Ritsch-Marte, "Quantitative single-shot imaging of complex objects using phase retrieval with a designed periphery," Opt. Express 20, 5470-5480 (2012)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-20-5-5470



Supplementary Material


Media 1: AVI (3223 KB)
