Extending the depth of focus for enhanced three-dimensional imaging and profilometry: an overview

Alex Zlotnik, Shai Ben-Yaish, and Zeev Zalevsky


Applied Optics, Vol. 48, Issue 34, pp. H105-H112 (2009)
http://dx.doi.org/10.1364/AO.48.00H105


Abstract

We overview the benefits that extended depth of focus technology may provide for three-dimensional imaging and profilometry. The approaches for which the extended depth of focus benefits are examined include stereoscopy, light coherence, pattern projection, line scanning, speckle projection, and projection of axially varied shapes.

© 2009 Optical Society of America

Data sets associated with this article are available at http://hdl.handle.net/10376/1474. Links such as View 1 that appear in figure captions and elsewhere will launch custom data views if ISP software is present.

1. Introduction

There are several relevant approaches for extracting the topography of an object. The basic approach is based upon stereoscopy, where the object is viewed from different points of view and, by computing the relative shift between the various points of view, one may estimate the distance [1, 2, 3].

Other types of techniques involve active projection of patterns, for instance, projection of a grating and computing the gradients obtained in the image [4, 5, 6]. The main disadvantage is that the gradients are obtained only in locations with height changes, which are usually very limited in space and prone to shadowing. Since the height estimation in this approach is cumulative, missing a certain gradient (height change) accumulates an error. In addition, the technique obtains the height change only in the direction perpendicular to the projected grating; if the height change coincides with the grating direction, no gradient is obtained. Other techniques involve projection of a line on the object and scanning the object with that line. The height may be obtained from the curvature of the projected line [7, 8, 9]. The main problem with that approach is that the object must be static during the scanning process; otherwise, the height estimation is blurred. Such an approach will not work for motion estimation. In order to solve this, illumination with a 2D periodic pattern is possible: local shifts of the pattern can be translated to 3D information. However, the main drawback of this method is phase wrapping. Local shifts exceeding one period cannot be separated from shifts that are smaller than one period. To overcome this, one may encode the second spatial dimension by wavelengths [10] or by a special code [11], rather than by using the time domain degree of freedom (as done in the line scanning approach).

There is a large variety of extended depth of focus (EDOF) techniques. Some require digital postprocessing [14, 15]; others use aperture apodization by an absorptive mask [16, 17] or diffractive optical phase elements such as multifocal lenses or spatially dense distributions [18]. Other approaches tailor modulation transfer functions (MTFs) with high focal depth [19] or use logarithmic asphere lenses [20]. All-optical approaches are available as well and are based upon attachment of a phase-affecting binary optical element defining a spatially low-frequency phase transition that codes the entrance pupil of the imaging system [21].

Obviously there is a large variety of 3D or ranging approaches used in the scientific community that relate to digital holography [22, 23, 24], imaging through a plate of random pinholes [25, 26], laser-based radar (LIDAR) [27], and various image processing approaches that exploit geometrical transformations of straight lines in the image (e.g., vanishing points, the points toward which lines are drawn in order to induce perspective [28]). Those approaches are less relevant to the benefits that may be obtained when adding the EDOF feature to the imaging system.

2. Stereoscopy Based Techniques

The 3D reconstruction process from stereo images is usually composed of three phases: camera calibration, image rectification, and disparity map computation. In the calibration step, one determines the mutual camera orientations and certain camera parameters, such as focal length and geometrical aberrations. The latter should be corrected since a pinhole camera model is used. In the rectification phase, one should correct geometrical aberrations due to imperfections of the imaging system and transform the left image such that the left camera focal plane coincides with the right camera focal plane. Usually the transformation aligns the left and right images row-wise, which makes the third step easier. Finally, one should find the disparity map, i.e., the location shift of features appearing in both images. This is basically a feature matching process; to make it successful, the features should have decent contrast.

In Fig. 1 one may see a schematic sketch of the stereoscopic calculation, where x_r designates the location of a given feature in the right camera image, x_l is the location of the same feature in the left camera image, (x_l − x_r) is the resulting disparity, f is the focal length of the cameras, b is the distance between the two cameras, and Z is the estimated distance to the feature:
\[ Z = \frac{b\,f}{x_l - x_r}. \]
(1)
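As an illustration of Eq. (1), the following is a minimal sketch (not the authors' implementation) of converting a disparity map into a depth map; the baseline, focal length, and the synthetic disparity values are assumptions made only for the example.

```python
import numpy as np

# Stereo geometry; the numbers are illustrative assumptions, not values from the paper.
b = 0.05         # baseline between the two cameras [m]
f_pix = 1500.0   # focal length expressed in pixels

# In practice the disparity map (x_l - x_r) comes from a block- or feature-matching
# step applied to the calibrated, rectified image pair; here a tiny synthetic map is used.
disparity = np.array([[12.0, 10.5],
                      [ 8.0,  6.2]])          # [pixels]

# Eq. (1): Z = b * f / (x_l - x_r), guarding against zero disparity.
depth = np.where(disparity > 0, b * f_pix / np.maximum(disparity, 1e-9), np.inf)

print(depth)     # estimated distance [m] for every pixel
```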
In order to obtain an accurate estimation of the range, the object must be in focus in both images. For a large range of distances, one usually needs to reduce the aperture size, since the depth of focus is proportional to the square of the f-number, i.e., to the square of the ratio between the focal length and the aperture diameter of the imaging lens:
\[ \Delta z = 2\,F_{\#}\,\mathrm{COC} = \kappa_1 \lambda F_{\#}^{2} = \kappa_1 \lambda \frac{f^{2}}{D^{2}}, \]
(2)
where Δz is the depth of focus, COC is what the literature calls the circle of confusion, κ_1 is a constant, λ is the wavelength, F_# is the f-number, f is the focal length, and D is the diameter of the lens aperture.

However, such a reduction reduces the resolution of the imaging system; the smallest feature δ that may be imaged is proportional to the product of the wavelength and the f-number:
\[ \delta = \kappa_2 \lambda F_{\#}, \]
(3)
where κ_2 is a constant. Reducing the aperture diameter (increasing the f-number) also reduces the energetic efficiency, which is proportional to the area of the aperture. Therefore, an important benefit of EDOF in stereoscopic imaging is obtaining the same depth of focus as an increased f-number would provide, but without the loss of resolution or energetic efficiency that physically decreasing the aperture would cause.
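To make the trade-off described by Eqs. (2) and (3) concrete, the short sketch below evaluates the depth of focus, the smallest resolvable feature, and the relative collected energy for several f-numbers; the wavelength and the constants κ_1, κ_2 (set to 1) are assumptions made only for illustration.

```python
import numpy as np

lam = 0.55e-6            # wavelength [m] (assumed, visible light)
kappa1 = kappa2 = 1.0    # unknown proportionality constants of Eqs. (2)-(3), set to 1

for f_number in (2.8, 7.2, 12.5):
    depth_of_focus = kappa1 * lam * f_number**2    # Eq. (2)
    smallest_feature = kappa2 * lam * f_number     # Eq. (3)
    relative_energy = 1.0 / f_number**2            # aperture area scales as 1/F#^2 for a fixed focal length
    print(f"F#={f_number:5.1f}  DOF={depth_of_focus*1e6:7.2f} um  "
          f"delta={smallest_feature*1e6:5.2f} um  relative energy={relative_energy:.3f}")
```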

In the simulation presented in Fig. 2, performed using ZEMAX software, the through-focus MTF at a frequency of 40 cycles/mm was computed for a given stereoscopic system with F#=12.5 and no EDOF approach [Fig. 2a], F#=7.2 and no EDOF approach [Fig. 2b], and F#=7.2 with the addition of an EDOF element [Fig. 2c]. The EDOF element added to the simulation was designed following the technical description presented in Ref. [21]. It was an annular-like binary phase element extending the depth of focus by creating proper interference of the light passing through the different regions of the aperture of the lens. The phase given to the various parts of the aperture creates the desired constructive interference in a "focus channel," while destructive interference is created around it. The generated "focus channel" is the created EDOF. The phase-only EDOF element used in this simulation was designed along this operating principle; it had a single phase ring with an external diameter of about 200 μm, while the width of the phase ring was approximately 50 μm. The etching depth was less than 600 nm.

The considerations for the proper design of the phase transitions providing this EDOF are described in Ref. [21].
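To give a feel for how such an annular binary-phase pupil mask can be examined numerically, here is a rough Fourier-optics sketch (it is not the design procedure of Ref. [21]); the grid size, ring radii, phase depth, probed frequency, and defocus range are all illustrative assumptions.

```python
import numpy as np

# Illustrative pupil with a single annular binary-phase ring (assumed geometry).
N = 512
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
R = np.sqrt(X**2 + Y**2)

pupil = (R <= 1.0).astype(complex)            # clear circular aperture
ring = (R > 0.55) & (R <= 0.75)               # assumed location of the annular ring
pupil[ring] *= np.exp(1j * np.pi)             # binary pi-phase step on the ring

def mtf_at(defocus_waves, rel_freq=0.4):
    """Incoherent MTF at one normalized spatial frequency for a given defocus (in waves)."""
    p = pupil * np.exp(1j * 2 * np.pi * defocus_waves * R**2)   # defocus = quadratic pupil phase
    psf = np.abs(np.fft.fftshift(np.fft.fft2(p)))**2            # incoherent PSF
    otf = np.abs(np.fft.fftshift(np.fft.fft2(psf)))             # MTF = |FT of the PSF|
    otf /= otf.max()
    return otf[N // 2, N // 2 + int(rel_freq * N / 2)]

# Through-focus scan: the phase ring trades some in-focus contrast for a slower
# fall-off of the MTF with defocus (the "focus channel" effect).
for w in np.linspace(-2.0, 2.0, 9):
    print(f"defocus = {w:+.1f} waves  ->  MTF ~ {mtf_at(w):.3f}")
```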

One may see that, for a contrast threshold of about 20%, the system with the EDOF addition and F#=7.2 has the same depth of focus as the same system without the EDOF but with F#=12.5. This means that a gain of about 300% [=(12.5/7.2)²] is obtained in the overall energetic efficiency of the imaging system. Such a significant improvement in efficiency is especially important for biomedical applications, where low light conditions are common.

In Fig. 3 we present the obtained experimental results. The EDOF element used for the experimental validation was designed using the operating principle of Ref. [21]. The fabricated EDOF element had a single phase ring with an external diameter smaller than 1 mm, while the width of the phase ring was below 0.3 mm. The etching depth of the fabricated profile, which creates the proper phase delay between the various parts of the lens aperture, was below a micrometer.

In our experimental setup the inspected object was a set of letters positioned on a tilted plane, with the distance ranging linearly from 20 cm (right side of the letters) to 40 cm (left side of the letters). The focal length of the lenses of the two cameras was 4.5 mm, and the f-number was 2.8. The separation distance between the cameras was 5 cm. In Figs. 3a, 3b we present the images captured by the right and the left cameras, respectively, with no EDOF element added. Due to the lack of an EDOF element, some of the letters are defocused (right side of the image), which yields the wrong range estimation, as one may see in Fig. 3c. The depth map is in millimeters; the x,y axis units are pixels (3 μm per pixel).

In Figs. 3d, 3e we present the images captured by the right and the left cameras after adding the EDOF element. Now the images are all in focus, and therefore the range estimation presented in Fig. 3f coincides with the experimental conditions. In Fig. 3f as well, the depth map is in millimeters and the x,y axis units are pixels (3 μm per pixel). The EDOF element fabricated for the experiment follows the specifications of Ref. [21].

3. Light Coherence Based Techniques

Using the coherence of light can assist in ranging. One possible approach was presented in Ref. [29], where the contrast of the secondary speckles generated on the surface of the inspected object, when illuminated by a coherent spot of light, provided the indication of the range. The contrast can also be extracted by temporal scanning of the object [30]. Obviously, prior to operation one needs to map the variation of the contrast along the axial domain and to construct a lookup table. By comparing the measured local contrast to the lookup table, one may estimate the topography.

For clarity, Eq. (4) gives the definition of the coherence function, where P designates the spatial coordinate, τ is the temporal axis, u is the electrical field, and ⟨·⟩ denotes an ensemble average operation:
\[ \Gamma_{11}\bigl(\tau(P)\bigr) = \bigl\langle u\bigl(P_1, t+\tau(P)\bigr)\, u^{*}(P_1, t) \bigr\rangle. \]
(4)
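As a purely illustrative companion to Eq. (4), the following sketch estimates the magnitude of a temporal coherence function by averaging over an ensemble of simulated field realizations; the phase-diffusion field model, the coherence time, and all numerical values are assumptions made only for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

n_realizations = 2000     # ensemble size for the <...> average
n_t = 1024                # temporal samples per realization
dt = 1e-15                # time step [s] (assumed)
coherence_time = 50e-15   # assumed coherence time of the source

# Simple partially coherent field model: unit-amplitude carrier whose phase
# diffuses on the scale of the coherence time.
phase_steps = rng.normal(0.0, np.sqrt(dt / coherence_time),
                         size=(n_realizations, n_t))
u = np.exp(1j * np.cumsum(phase_steps, axis=1))

def gamma(tau_samples):
    """Eq. (4): Gamma(tau) = <u(t + tau) * conj(u(t))>, averaged over time and ensemble."""
    shifted = u[:, tau_samples:]
    return np.mean(shifted * np.conj(u[:, :n_t - tau_samples]))

for k in (0, 20, 50, 100, 200):
    print(f"tau = {k * dt * 1e15:6.1f} fs  |Gamma| = {abs(gamma(k)):.3f}")
```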
By proper coherence function shaping, one may generate a desired (and different) function for every lateral position coordinate P [31]. This generation of a desired distribution can benefit from EDOF techniques, not for a direct extension of the depth of focus, but rather to obtain better control of, or improved proximity to, the desired (e.g., axial) shaping of the coherence function of the illuminating beam. Shaping the coherence function can assist in range estimation, as it was demonstrated to be helpful in increasing resolution [32] or in field of view enlargement [33]. In Refs. [32, 33], proper coherence shaping allowed illuminating the inspected object in such a way that every spatial region of the object was coded by an orthogonal or uncorrelated coherence distribution. That way, after multiplexing and mixing of the various lateral regions, and after their transmission through a resolution-limited imaging system, they could still be separated and used to reconstruct a high-resolution image over the full field of view. Since improved imaging capabilities (higher resolution) affect the 3D estimation capabilities of various approaches, we have mentioned this topic as well in this overview paper.

4. Patterns Projection Based Techniques

The pattern projection 3D approaches include a projector and an imaging camera. The projector is responsible for projecting special patterns upon the object, and the object's topography is extracted from the camera image by comparing the original reference pattern with the modification generated in the projected pattern on top of the object.

The physical operation principle of all the approaches described in this section that involve pattern projection is similar to triangulation (explained in Section 2). For all the 3D approaches described in this section, the depth of focus aspects are relevant both for the projector and for the imager that needs to estimate the topography from the imaged pattern reflected from the object. For the projector, the EDOF is important in order to generate the desired distribution over a large axial range. For the imager, the EDOF is important in order to observe an adequately focused image with all the spatial details that are relevant for the topography estimation.

The defocusing reduces the resolution at which an imaging system perceives an object, and it likewise degrades the resolution at which a projected structure reaches the illuminated object.

In the case of using a laser to project the patterns (coherent rather than incoherent illumination), one needs to consider the coherent transfer function (CTF) rather than the optical transfer function (OTF). The CTF describes the transmission of the spatial frequencies of the field (rather than of the intensity). As previously mentioned, the defocusing is expressed as the addition of a quadratic phase to the spatial spectral transmission of the relevant field distribution.
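In standard Fourier-optics notation (a textbook formulation, not an expression taken from this paper), the defocus can be folded into a generalized pupil as a quadratic phase, from which both transfer functions follow; here a denotes the aperture radius and W_m the peak defocus aberration:
\[ P_{\mathrm{def}}(x,y) = P(x,y)\,\exp\!\left[\, i\,\frac{2\pi}{\lambda}\, W_m\, \frac{x^{2}+y^{2}}{a^{2}} \right], \]
\[ H_{\mathrm{coh}} \propto P_{\mathrm{def}}, \qquad H_{\mathrm{inc}} \propto P_{\mathrm{def}} \star P_{\mathrm{def}} \ \text{(normalized autocorrelation)}. \]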

We will now briefly review and explain some typical pattern projection based techniques, and then we will focus on a 2D grating projection approach and validate experimentally the added value gained by the addition of EDOF technology into this 3D estimation approach.

4A. Projection of 2D Periodic Patterns

The operation principle of this approach is schematically described in Fig. 4a, where it is assumed that at a certain spatial position the object has a height change of Δh in comparison to the reference height plane. The illumination pattern will be shifted by the amounts Δx and Δy (in the transversal plane, which we also refer to as the object plane) in comparison to the pattern obtained when the reference plane is illuminated [see Fig. 4a]:
\[ \Delta h = \Delta x \tan\alpha_x, \qquad \Delta h = \Delta y \tan\alpha_y, \]
(9)
where α_x and α_y are the known angles between the pattern's illumination source and the horizontal and vertical axes in the object plane determined by the camera, respectively. Thus, by measuring the transversal shifts of the projected lines, one may estimate the local height distribution of the imaged object.
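A minimal numerical reading of Eq. (9), with made-up projection angles and measured shifts (none of these numbers come from the paper), is sketched below.

```python
import numpy as np

alpha_x = np.deg2rad(35.0)   # assumed projection angle relative to the horizontal axis
alpha_y = np.deg2rad(35.0)   # assumed projection angle relative to the vertical axis

# Local transversal shifts of the projected pattern measured at a few image
# locations (illustrative values only), in millimeters.
dx = np.array([0.0, 0.4, 0.9])   # [mm]
dy = np.array([0.0, 0.3, 0.8])   # [mm]

# Eq. (9): each shift gives an independent height estimate; averaging the two
# estimates is one simple way to combine them.
h_from_x = dx * np.tan(alpha_x)
h_from_y = dy * np.tan(alpha_y)
height = 0.5 * (h_from_x + h_from_y)

print("height estimates [mm]:", np.round(height, 3))
```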

An experimental example of how the projected lines are shifted due to the elevation of the object can be seen in Fig. 4b, where we have marked by white arrows the shift of the projected lines. The object used for this demonstration may be seen in the upper left corner of Fig. 4b.

4B. Scanning Line

The aforementioned approach provides the topography estimation over the full field of view, but it suffers from a phase wrapping problem [7]. In one of the approaches applied to overcome it, scanning with a line projected on top of the object is performed. The operation principle is exactly the same as in the case of grating projection: the deformation of the line provides the relevant information to extract the topography. Here the phase wrapping problem is replaced with time multiplexing, and thus this technique is practical mainly for static objects.

4C. Speckle Projection

Another way to avoid the phase wrapping is to project a random rather than a periodic pattern [34, 35]. The simplest choice is to project speckles, i.e., to illuminate the object through a diffuser. The lack of periodicity solves the phase wrapping problem. The local shifts of the random spots are detected by correlating the imaged pattern with the projected pattern, and the shifts, as in triangulation, are translated to 3D topography. In this case the EDOF is important to maintain the same random pattern over a large axial range.
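The local-shift detection step can be sketched as windowed cross-correlation between the reference speckle image and the captured one; the window size, search range, and synthetic images below are illustrative assumptions, not the authors' processing chain.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "reference" speckle image and a "captured" image in which one
# region is shifted to mimic an elevated surface (purely illustrative).
ref = rng.random((128, 128))
cap = ref.copy()
cap[40:80, 40:80] = np.roll(ref, shift=(0, 3), axis=(0, 1))[40:80, 40:80]

def local_shift(ref_img, cap_img, y, x, win=16, search=6):
    """Find the integer shift maximizing normalized correlation of a window around (y, x)."""
    patch = cap_img[y:y + win, x:x + win]
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = ref_img[y + dy:y + dy + win, x + dx:x + dx + win]
            score = np.sum(patch * cand) / np.sqrt(np.sum(cand**2) + 1e-12)
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

# A window inside the "elevated" region shows the shift; one outside does not.
print("inside object :", local_shift(ref, cap, 50, 50))
print("outside object:", local_shift(ref, cap, 10, 10))
```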

In order to demonstrate the operation principle of this approach, we present the experimental results of Fig. 5. The left image of Fig. 5 shows the projected speckles reflected from the reference surface without the inspected object. In the right part of the figure the inspected object was added. The darker regions of the right part of Fig. 5 correspond to the elevated surface of the object. These regions have a shifted speckle distribution, as one may see by comparing the right image with the reference image presented in the left part of Fig. 5. Note that the white circles designate identical regions in both pictures.

4D. Projection of Z-Varied Patterns

In the case that an axially varying speckle pattern is generated, almost the same height decoding algorithm as in Subsection 4C is applied. The difference is that instead of estimating the relative shifts between the reference pattern and the pattern captured when the object is in place, one compares the captured image (with the object in place) with a set of N axially varying reference patterns and finds the reference pattern that is closest to the captured image. Since each reference pattern corresponds to a different axial range, the height of that specific lateral region can be estimated. In Fig. 6 one may see a schematic example of how a Z-varied line distribution can be generated (see Ref. [36]) and used for ranging and profile estimation. In the schematic sketch of Fig. 6, at each axial distance, lines with different tilting angles are projected. The angles of the lines reflected from the object can be used for ranging when compared with the projected reference distribution. Obviously, instead of rotating lines, any other Z-varied distribution may be generated, even a random speckle distribution, as described in Ref. [37].
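The decoding step described above can be sketched as a nearest-reference search: for each lateral region, the captured patch is compared against the N axially varying reference patches, and the best-matching axial index is taken as the range. Everything below (patch size, the random reference stack, the similarity measure) is an assumption made for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

N = 8                      # number of axially varying reference patterns
patch = (32, 32)           # lateral region size [pixels] (assumed)

# Stack of N reference patterns, one per calibrated axial distance.
references = rng.random((N,) + patch)

# Simulate a captured patch: reference pattern #5 plus some measurement noise.
captured = references[5] + 0.1 * rng.standard_normal(patch)

def best_axial_index(captured_patch, reference_stack):
    """Return the index of the reference pattern most correlated with the patch."""
    c = captured_patch - captured_patch.mean()
    scores = []
    for ref in reference_stack:
        r = ref - ref.mean()
        scores.append(np.sum(c * r) / (np.linalg.norm(c) * np.linalg.norm(r)))
    return int(np.argmax(scores))

idx = best_axial_index(captured, references)
print("estimated axial index:", idx)   # expected: 5
```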

4E. Experimental Validation: Usage of EDOF

In Fig. 7 we present experimental results demonstrating the benefits gained by adding EDOF technology to the pattern projection 3D approaches. Figure 7a shows the experimental setup: a grating projector located on the left side of the picture and an imaging camera positioned on its right side. The period of the projected grating was about 1.5 mm in the object plane. The distance between the projector and the camera was 15 cm. The distance between the projector and the object was about 17 cm, and the object we used was the curved edge of a page, seen in the central upper part of Fig. 7a. The maximal curving was on the left side of the object and equaled approximately 2 mm. The imaging camera had a focal length of 4.5 mm and an f-number of 2.8. In Figs. 7b, 7c we present the projected grating as it is imaged by the camera without and with the inspected object, respectively. One may see that the lines of the grating are defocused. The 3D estimation obtained from the images of Figs. 7b, 7c is seen in Fig. 7d.

The same experiment was repeated, but this time after adding an EDOF element, following the design of Ref. [21], to the imaging lens of the camera. In Figs. 7e, 7f one may see the projected grating as it is imaged without and with the object, respectively. In Fig. 7g one may see the 3D reconstruction, which clearly visualizes the improvement obtained due to the addition of the EDOF. The gradually varying profile of the tilted page is clearly seen. In both Figs. 7d and 7g, each pixel corresponds to 3 μm, and the reconstruction is presented in the camera plane.

Note that in Fig. 7, the x,y axis units are pixel values. The depth maps (color bar) are in relative units. The EDOF element used in this experiment was the same element that was used for the experiment of Fig. 3.

5. Conclusions

In this paper we overviewed several three-dimensional (3D) imaging and profile extraction approaches that may benefit from the addition of extended depth of focus technology. The paper explained the general benefit that may be gained by each 3D technique, and for several selected 3D approaches, numerical and experimental investigations were presented to illustrate this gain.

Fig. 1 Schematic sketch of stereoscopic system.
Fig. 2 Through-focus MTF for a frequency of 40 cycles/mm: (a) F#=12.5, no EDOF; (b) F#=7.2, no EDOF; (c) F#=7.2, with EDOF.
Fig. 3 Experimental results for letters positioned on a tilted plane (x and y axis units are pixels, where each pixel is 3 μm): (a)–(c) without EDOF (View 1), (d)–(f) with EDOF (View 2); (a) and (d), image captured by the right camera; (b) and (e), image captured by the left camera; (c) and (f), the resulting depth map.
Fig. 4 (a) Schematic description of the operation principle of 3D estimation by illuminating the object with a periodic structure; (b) experimental demonstration of the operation principle.
Fig. 5 Projection of speckles for 3D estimation.
Fig. 6 Schematic sketch of Z-varied patterns projection.
Fig. 7 (a) Experimental setup (x and y axis units are pixels, where each pixel is 3 μm); (b) imaged grating without EDOF and without the inspected object; (c) imaged grating without EDOF and with the inspected object; (d) 3D reconstruction for the case of without EDOF (View 3) (color bar values are relative); (e) imaged grating with EDOF and without the inspected object; (f) imaged grating with EDOF and with the inspected object; (g) 3D reconstruction with EDOF (View 4) (color bar values are relative).
OCIS Codes
(110.6880) Imaging systems : Three-dimensional image acquisition
(110.2945) Imaging systems : Illumination design

History
Original Manuscript: July 22, 2009
Revised Manuscript: September 16, 2009
Manuscript Accepted: September 18, 2009
Published: October 9, 2009

Virtual Issues
(2009) Advances in Optics and Photonics
Digital Holography and 3-D Imaging: Interactive Science Publishing (2009) Applied Optics




References

  1. T. Sawatari, “Real-time noncontacting distance measurement using optical triangulation,” Appl. Opt. 15, 2821-2827 (1976). [CrossRef] [PubMed]
  2. G. Hausler and D. Ritter, “Parallel three-dimensional sensing by color-coded triangulation,” Appl. Opt. 32, 7164-7170 (1993). [CrossRef] [PubMed]
  3. R. G. Dorsch, G. Hausler, and J. M. Herrmann, “Laser triangulation: fundamental uncertainty in distance measurement,” Appl. Opt. 33, 1306-1312 (1994). [CrossRef] [PubMed]
  4. A. M. Bruckstein, “On shape from shading,” Comput. Vis. Graph. Image Process. 44, 139-154 (1988). [CrossRef]
  5. Y. G. Leclerc and A. F. Bobick, “The direct computation of height from shading,” in Proceedings of IEEE Computer Vision and Pattern Recognition (IEEE, 1991), pp. 552-558. [CrossRef]
  6. R. Zhang and M. Shah, “Shape from intensity gradient,” IEEE Trans. Syst. Man Cybern. 29, 318-325 (1999). [CrossRef]
  7. M. Asada, H. Ichikawa, and S. Tsuji, “Determining of surface properties by projecting a stripe pattern,” in Proceedings of the International Pattern Recognition Conference (IEEE, 1986), pp. 1162-1164.
  8. M. Asada, H. Ichikawa, and S. Tsuji, “Determining surface orientation by projecting a stripe pattern,” IEEE Trans. Pattern Anal. Mach. Intell. 10, 749-754 (1988). [CrossRef]
  9. R. Kimmel, N. Kiryati, and A. M. Bruckstein, “Analyzing and synthesizing images by evolving curves with the Osher-Sethian method,” Int. J. Comput. Vis. 24, 37-56 (1997). [CrossRef]
  10. L. Zhang, B. Curless, and S. M. Seitz, “Rapid shape acquisition using color structured light and multi pass dynamic programming,” in Proceedings of 1st International Symposium on 3D Data Processing Visualization and Transmission (3DPVT) (IEEE Computer Society, 2002), pp. 24-37. [CrossRef] [PubMed]
  11. E. Horn and N. Kiryati, “Toward optimal structured light patterns,” in Proceedings of the International Conference on Recent Advances in 3-D Digital Imaging and Modeling (IEEE Computer Society, 1997), pp. 28-37.
  12. J. Rosen and A. Yariv, “General theorem of spatial coherence: application to three-dimensional imaging,” J. Opt. Soc. Am. A 13, 2091-2095 (1996). [CrossRef]
  13. J. M. Schmitt, “Optical coherence tomography (OCT): a review,” IEEE J. Sel. Top. Quantum Electron. 5, 1205-1215 (1999). [CrossRef]
  14. E. R. Dowski and W. T. Cathey, “Extended depth of field through wave-front coding,” Appl. Opt. 34, 1859-1866 (1995). [CrossRef] [PubMed]
  15. J. van der Gracht, E. Dowski, M. Taylor, and D. Deaver, “Broadband behavior of an optical-digital focus-invariant system,” Opt. Lett. 21, 919-921 (1996). [CrossRef] [PubMed]
  16. J. O. Castaneda, E. Tepichin, and A. Diaz, “Arbitrary high focal depth with a quasi optimum real and positive transmittance apodizer,” Appl. Opt. 28, 2666-2669 (1989). [CrossRef]
  17. J. O. Castaneda and L. R. Berriel-Valdos, “Zone plate for arbitrary high focal depth,” Appl. Opt. 29, 994-997 (1990). [CrossRef]
  18. E. Ben Eliezer, Z. Zalevsky, E. Marom, and N. Konforti, “All-optical extended depth of field imaging system,” J. Opt. A, Pure Appl. Opt. 5, S164-S169 (2003). [CrossRef]
  19. A. Sauceda and J. Ojeda-Castaneda, “High focal depth with fractional-power wavefronts,” Opt. Lett. 29, 560-562 (2004). [CrossRef] [PubMed]
  20. W. Chi and N. George, “Electronic imaging using a logarithmic asphere,” Opt. Lett. 26, 875-877 (2001). [CrossRef]
  21. Z. Zalevsky, A. Shemer, A. Zlotnik, E. Ben-Eliezer, and E. Marom, “All-optical axial super resolving imaging using low-frequency binary-phase mask,” Opt. Express 14, 2631-2643 (2006). [CrossRef] [PubMed]
  22. C. Iemmi, A. Moreno, and J. Campos, “Digital holography with a point diffraction interferometer,” Opt. Express 13, 1885-1891 (2005). [CrossRef] [PubMed]
  23. J. Garcia-Sucerquia, W. Xu, S. K. Jericho, P. Klages, M. H. Jericho, and H. J. Kreuzer, “Digital in-line holographic microscopy,” Appl. Opt. 45, 836-850 (2006). [CrossRef] [PubMed]
  24. I. Yamaguchi and T. Zhang, “Phase shifting digital holography,” Opt. Lett. 22, 1268-1270 (1997). [CrossRef] [PubMed]
  25. Z. Zalevsky and A. Zlotnik, “Axially and transversally super resolved imaging and ranging with random aperture coding,” J. Opt. A, Pure Appl. Opt. 10, 064014 (2008). [CrossRef]
  26. A. Stern and B. Javidi, “Random projections imaging with extended space-bandwidth product,” J. Display Technol. 3, 315-320 (2007). [CrossRef]
  27. M. Pfennigbauer, B. Möbius, and J. Pereira do Carmo, “Echo digitizing imaging LIDAR for rendezvous and docking,” Proc. SPIE 7323, 732302 (2009). [CrossRef]
  28. J. A. Shufelt, “Performance evaluation and analysis of vanishing point detection techniques,” IEEE Trans. Pattern Anal. Mach. Intell. 21, 282-288 (1999). [CrossRef]
  29. Z. Zalevsky, O. Margalit, E. Vexberg, R. Pearl, and J. Garcia, “Suppression of phase ambiguity in digital holography by using partial coherence or specimen rotation,” Appl. Opt. 47, D154-D163 (2008). [CrossRef] [PubMed]
  30. T. Dresel, G. Hausler, and H. Venzke, “Three-dimensional sensing of rough surfaces by coherence radar,” Appl. Opt. 31, 919-925 (1992). [CrossRef] [PubMed]
  31. Z. Zalevsky, D. Mendlovic, and H. M. Ozaktas, “Energetic efficient synthesis of mutual intensity distribution,” J. Opt. A, Pure Appl. Opt. 2, 83-87 (2000). [CrossRef]
  32. Z. Zalevsky, J. García, P. García-Martínez, and C. Ferreira, “Spatial information transmission using orthogonal mutual coherence coding,” Opt. Lett. 30, 2837-2839 (2005). [CrossRef] [PubMed]
  33. V. Micó, J. García, C. Ferreira, D. Sylman, and Z. Zalevsky, “Spatial information transmission using axial temporal coherence coding,” Opt. Lett. 32, 736-738 (2007). [CrossRef] [PubMed]
  34. J. Garcia and Z. Zalevsky, “Range mapping using speckle decorrelation,” U.S. patent 7,433,024 (October 2008); World Intellectual Property Organization publication WO/2007/096893 (27 February 2007).
  35. A. Shpunt and Z. Zalevsky, “Three-dimensional sensing using speckle patterns,” World Intellectual Property Organization publication WO/2007/105205 (8 March 2007).
  36. D. Sazbon, Z. Zalevsky, and E. Rivlin, “Qualitative real-time range extraction for preplanned scene partitioning using laser beam coding,” Pattern Recogn. Lett. 26, 1772-1781 (2005). [CrossRef]
  37. J. García, Z. Zalevsky, P. García-Martínez, C. Ferreira, M. Teicher, and Y. Beiderman, “Three-dimensional mapping and range measurement by means of projected speckle patterns,” Appl. Opt. 47, 3032-3040 (2008). [CrossRef] [PubMed]
