
Depth from automatic defocusing

V. Aslantas and D. T. Pham


Optics Express, Vol. 15, Issue 3, pp. 1011-1023 (2007)
http://dx.doi.org/10.1364/OE.15.001011


Abstract

This paper presents a depth recovery method that gives the depth of any scene from its defocused images. The method combines depth from defocusing and depth from automatic focusing techniques. Blur information in defocused images is utilised to measure depth in a way similar to determining depth from automatic focusing but without searching for sharp images of objects. The proposed method does not need special scene illumination and involves only a single camera. Therefore, there are no correspondence, occlusion and intrusive emissions problems. The paper gives experimental results which demonstrate the accuracy of the method.

© 2007 Optical Society of America

1. Introduction

The depth of a visible surface of a scene is the distance between the surface and the sensor. Recovering depth information from two-dimensional images of a scene is an important task in computer vision that can assist numerous applications such as object recognition, scene interpretation, obstacle avoidance, inspection and assembly.

Various passive depth computation techniques have been developed for computer vision applications [1, 2]. They can be classified into two groups. The first group operates using just one image. The second group requires more than one image, which can be acquired using either multiple cameras or a camera whose parameters and positioning can be changed.

Single-image depth cues such as texture gradients and surface shading require heuristic assumptions. Therefore, they cannot be used to recover absolute depth. Multiple-image depth cues, such as stereo vision and motion parallax, usually require a solution to the correspondence problem of matching features amongst the images and all suffer from the occlusion problem where not everything that can be viewed from one position can be seen from another. These problems are computationally expensive and difficult to solve.

Depth from automatic focusing (DFAF) techniques may necessitate large changes in the camera parameters or large movements of the camera (or object) to obtain a sharp image. These cause alterations to the image magnification and mean image brightness, which in turn result in feature shifts and edge bleeding. DFAF searches for the sharpest image position of an object by comparing the sharpness values of images. Because there is no information on the sharpness value that a sharp image will have, local optima can cause miscalculation of the position of the sharp image. These problems will, in turn, affect the computed object distance.

Depth from defocusing (DFD) methods do not require an object to be in focus in order to compute its depth. If an image of a scene is acquired by a real lens, points on the surface of the scene at a particular distance from the lens will be in focus, whereas points at other distances will be out of focus by varying degrees depending on their distances. DFD methods make use of this information to determine the depth of an object [8, 13–36].

DFD techniques have drawbacks such as restrictions on the camera parameters and the appearance of objects, restrictions on the form of the point-spread function (PSF) of the camera system, limited range of effectiveness and high noise sensitivity [8]. The main source of depth errors in DFD is inaccurate modelling of the PSF.

The technique developed in this paper, called Depth from Automatic Defocusing (DFAD), is a combination of DFD and DFAF. Unlike DFD techniques, DFAD uses blur information without modelling or assuming the PSF of the camera system. The technique computes depth in a similar manner to DFAF but does not require the sharp image of an object or large alterations in camera settings. In contrast to DFAF, the sharpness value to be found is known in DFAD. Therefore, DFAD is more accurate and reliable than DFAF and DFD techniques.

DFAD does not need special scene illumination and involves only a single camera. Therefore, there are no correspondence and occlusion problems as found in stereo vision and motion parallax or intrusive emissions as with active depth computation techniques.

The remainder of the paper comprises four sections. Section 2 explains the theory underlying the proposed technique. Section 3 analyses several general issues such as the selection of the camera parameters, criterion function and evaluation window size, which should be determined before implementing DFAD. Edge bleeding, which is a problem with DFAF techniques, is also discussed in this section. Section 4 presents the results obtained. Section 5 concludes the paper.

2. Theory of depth from automatic defocusing

2.1 Basics of DFD and DFAF

Figure 1 shows the basic geometry of image formation. All light rays that are radiated by the object O and intercepted by the lens are refracted by the lens to converge at point If on the focal plane. Each point in a scene is projected onto a single point on the focal plane, causing a focused image to be formed on it. For a camera with a thin convex lens of focal length F, the relation between the distance DOL from a point in a scene to the lens and the distance DLF from its focused image to the lens is given by the Gaussian lens law:

$$\frac{1}{D_{OL}} + \frac{1}{D_{LF}} = \frac{1}{F} \qquad (1)$$
Fig. 1. Basic image formation geometry

However, if the sensor plane does not coincide with the focal plane, the image Id formed on the sensor plane will be a circular disk known as a “circle of confusion” or “blur circle” with diameter 2R, provided that the aperture of the lens is also circular. By using similar triangles, a formula can be derived to establish the relationship between the radius of the blur circle R and the displacement δ of the sensor plane from the focal plane:

$$R = \frac{L\,\delta}{2\,D_{LF}} \qquad (2)$$

where L is the diameter of the aperture of the lens. From Fig. 1 which shows the object behind the plane of best focus (PBF), an equation for δ can be derived as:

$$\delta = D_{LS} - D_{LF} \qquad (3)$$

where DLS is the distance between the lens and the sensor plane. The quantities DLS, L and F together are referred to as the camera parameters. The aperture diameter L of a lens is often given as [37]:

$$L = \frac{F}{f} \qquad (4)$$

where f is the f-number of a given lens system. Substituting Eqs. (3) and (4) into Eq. (2) gives:

$$R = \frac{F\,(D_{LS} - D_{LF})}{2 f D_{LF}} \qquad (5)$$

Then, using Eq. (1), DLF is eliminated from Eq. (5) to give:

$$R = \frac{D_{OL}(D_{LS} - F) - F D_{LS}}{2 f D_{OL}} \qquad (6)$$

Figure 2 shows the theoretical blur circle radius R versus the distance DOL of an object for an f/2.8, 50mm (F) lens with the camera focused on an object located 1m in front of the lens [for a focal plane distance value of 52.63 mm].

By solving Eq. (6) for DOL, the following equation is obtained:

$$D_{OL} = \frac{F D_{LS}}{D_{LS} - F - 2 f R} \qquad (7)$$
Fig. 2. Plot of theoretical blur circle radius versus depth for an f/2.8, 50mm lens [camera focused on an object 1m away from the lens].

When the object is in front of the plane of best focus, Eqs. (3) and (7) become:

$$\delta = D_{LF} - D_{LS} \qquad (8)$$

$$D_{OL} = \frac{F D_{LS}}{D_{LS} - F + 2 f R} \qquad (9)$$

Equations (7) and (9) relate the object distance DOL to the radius of the blur circle R.

Using Eqs. (7) and (9), the object distance can be calculated in two ways. First, it can be computed by estimating the radius of the blur circle R (as in Depth from Defocusing, DFD, techniques). Second, a sharp image of an object can be obtained by varying some, or all, of the camera parameters or the distance between the camera and the object to reduce R to zero. The above equations then reduce to the well-known Gaussian lens law:

$$D_{OL} = \frac{F D_{LS}}{D_{LS} - F} \qquad (10)$$

By employing the camera parameters, Eq. (10) can be used to compute the depth. Techniques that work in this way are known as Depth from Automatic Focusing (DFAF) techniques.
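To make these relations concrete, the following minimal sketch evaluates Eqs. (6), (7) and (9) numerically. It assumes a thin lens, lengths in millimetres and the variable names used in the text; the parameter values in the demo are illustrative only and are not taken from the paper.

```python
# Minimal numerical sketch of Eqs. (6), (7) and (9).  All lengths are in
# millimetres; variable names follow the text (D_OL, D_LS, F, f, R) and the
# demo values are illustrative only.

def blur_radius(D_OL, D_LS, F, f):
    """Eq. (6): theoretical blur-circle radius R for an object at distance D_OL."""
    return (D_OL * (D_LS - F) - F * D_LS) / (2.0 * f * D_OL)

def depth_behind_pbf(R, D_LS, F, f):
    """Eq. (7): object distance from blur radius R (object behind the PBF)."""
    return F * D_LS / (D_LS - F - 2.0 * f * R)

def depth_in_front_of_pbf(R, D_LS, F, f):
    """Eq. (9): object distance from blur radius R (object in front of the PBF)."""
    return F * D_LS / (D_LS - F + 2.0 * f * R)

if __name__ == "__main__":
    F, f = 50.0, 2.8                     # 50 mm lens at f/2.8
    D_LS = 1000.0 * F / (1000.0 - F)     # sensor distance when focused at 1 m (Eq. 10)
    R = blur_radius(2000.0, D_LS, F, f)  # object at 2 m, i.e. behind the PBF
    print(round(depth_behind_pbf(R, D_LS, F, f), 1))   # 2000.0: the depth is recovered
```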

2.2 Theory of DFAD

To derive the DFAD equations, consider an object placed behind the PBF as in Fig. 1. Also, let I1(x,y) and I2(x,y) be images taken using two different camera parameter settings: F1, f1, DLS1 and F2, f2, DLS2. The blur circle radii R1 and R2, corresponding to I1(x,y) and I2(x,y), respectively, are:

$$R_1 = \frac{D_{OL}(D_{LS1} - F_1) - F_1 D_{LS1}}{2 f_1 D_{OL}} \qquad (11)$$

$$R_2 = \frac{(D_{OL} + d)(D_{LS2} - F_2) - F_2 D_{LS2}}{2 f_2 (D_{OL} + d)} \qquad (12)$$

where d is the relative displacement of the camera and the object between the acquisition of images I1(x,y) and I2(x,y).

Fig. 3. Cross sections of three edges. (The step edge was placed at a distance of 200mm from the lens. Blurred edge 1 was obtained using camera parameters DLS1=75.0mm, F1=50.0mm and f1=1.4. The camera parameters used for the blurred edge 2 were DLS2=74.0mm, F2=47.49mm and f2=2.0.)

If the measured sharpness values are the same for both images, the blur circle radii R1 and R2 should be equal. (In other words, exactly the same images of an object can be obtained using different camera settings.) Figure 3 shows the blurred versions of a step edge obtained using different camera parameters. In this figure, the blurred edges match each other exactly.

By equating R1 and R2 in Eqs. (11) and (12) and solving for DOL, the following equation is obtained:

$$D_{OL}^2 + \left[d - \frac{D_{LS1} F_1 f_2 - D_{LS2} F_2 f_1}{(D_{LS1} - F_1) f_2 - (D_{LS2} - F_2) f_1}\right] D_{OL} - \frac{D_{LS1} F_1 f_2\, d}{(D_{LS1} - F_1) f_2 - (D_{LS2} - F_2) f_1} = 0 \qquad (13)$$

Equation (13) is valid for an object behind or in front of the PBF provided that the position of the object relative to that plane remains the same after the camera parameters are changed (that is, an object initially in front of the PBF stays in front of it after the change of parameters). However, two identical blurred images of an object, which give the same sharpness values, can be obtained when the object is placed in front of or behind the PBF. Therefore, if one of the sharpness values is measured when the object is located on one side of the PBF and the other is obtained when the object is on the other side, Eq. (13) should be rewritten as:

$$D_{OL}^2 + \left[d - \frac{D_{LS1} F_1 f_2 + D_{LS2} F_2 f_1}{(D_{LS1} - F_1) f_2 + (D_{LS2} - F_2) f_1}\right] D_{OL} - \frac{D_{LS1} F_1 f_2\, d}{(D_{LS1} - F_1) f_2 + (D_{LS2} - F_2) f_1} = 0 \qquad (14)$$
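As an illustration of how Eq. (13) is used, the sketch below forms the quadratic coefficients from the two camera settings and the relative displacement d and returns its positive root. It is a hedged sketch using the variable names of the text, not the authors' code; the consistency check at the end uses the camera settings of Fig. 3 and assumes no relative displacement (d = 0), which the caption does not state explicitly.

```python
# Sketch of the depth computation implied by Eq. (13): once the second camera
# setting reproduces the recorded sharpness (i.e. the same blur), the object
# distance D_OL is the positive root of a quadratic in the known parameters.
import math

def dfad_depth(D_LS1, F1, f1, D_LS2, F2, f2, d=0.0):
    """Positive root of Eq. (13); d is the signed displacement of the object
    away from the camera between the two images, as in Eq. (12)."""
    denom = (D_LS1 - F1) * f2 - (D_LS2 - F2) * f1
    b = d - (D_LS1 * F1 * f2 - D_LS2 * F2 * f1) / denom
    c = -(D_LS1 * F1 * f2 * d) / denom
    disc = math.sqrt(b * b - 4.0 * c)
    return max((-b + disc) / 2.0, (-b - disc) / 2.0)

# Consistency check with the settings of Fig. 3 (step edge 200 mm from the
# lens), assuming d = 0:
# dfad_depth(75.0, 50.0, 1.4, 74.0, 47.49, 2.0)  ->  approximately 200 mm
```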

3. Selection of camera parameters, criterion function and evaluation window

3.1 Selection of the camera parameters

Equations (13) and (14) suggest that it is possible to vary more than one camera parameter simultaneously. However, this should not be done in a random manner because the effect of one camera parameter may be cancelled by varying another camera parameter. Therefore, it is better to change just one camera parameter after measuring the sharpness of the first image. The same sharpness value is then sought again by altering a different camera parameter.

Fig. 4. Different camera parameters giving the same sharpness value. B is the point of best focus

Without knowing whether the object is behind or in front of the PBF, there is a problem in deciding which equation to employ. This problem can be solved by focusing the camera on the limit of the viewing distance by adjusting the camera parameters. Having obtained the sharpness value at that distance, the first changes in one of the previously unaltered camera parameters should make the image more defocused. Another solution is to change the f-number (f) of the camera by a small amount after having computed and recorded the sharpness value of the first image. The recorded sharpness value is then searched for by changing one of the other camera parameters (DLS, F, d). These techniques allow the object to remain on one side of the PBF and Eq. (13) to be employed.

If the object is at the PBF, images taken with different f-numbers will have the same sharpness values. Changing the values of the other camera parameters causes images to become more defocused, and therefore the developed technique does not allow this to be carried out. Hence, the camera parameters stay the same except for the f-numbers (DLS1 = DLS2, F1 = F2, d = 0, f1 ≠ f2). Equations (13) and (14) then become the well-known Gaussian lens law (Eq. 10). These observations also confirm the theoretical soundness of the derived equations. The distance of the object can be calculated using either of them.

For the sake of theoretical completeness, consider how ambiguity can arise and be resolved if the above techniques are not adopted. Assume that the object is located at point B behind the PBF, which is at point C. After having computed the first sharpness value (S) at point C, one of the camera parameters, for example DLS, is altered. Changes in DLS can make the camera focus in one of four different regions (see regions I to IV in Fig. 5). Assume that S is searched for by moving the camera with respect to the object. If the camera is focused at I, the sharpness value obtained from that distance is less than S. Therefore, the camera should be moved towards the object to obtain S. If the camera is focused at II, the sharpness value obtained from that distance is larger than S. Therefore, the camera should be moved away from the object.

Fig. 5. Possible focusing positions for an object placed in front of the camera. B corresponds to the object location. (Arrows show direction of camera movements)

Table 1 shows the parameter adjustments for objects behind and in front of the PBF. The first column gives the change in parameter DLS after the first sharpness value (S) is recorded. The second column shows how the sharpness value obtained using the new DLS compares with S. The third column gives the relative change needed in the camera parameter (d or F) to restore S. The last column indicates which equation is to be used for depth computation. As can be observed from the table, ambiguity arises in some cases. For example, the parameter and sharpness changes are identical in the first and last rows of the table, but different equations are required. The same problem also exists between the fourth and fifth rows of the table.

The same ambiguity also exists when F or d is changed first and searching is performed with one of the other camera parameters. There are many ways to solve this problem. For example, when an ambiguous situation is encountered, one of the equations is used to compute the object distance. The camera is focused at that distance and an image is obtained. If the sharpness value of the image is greater than the first sharpness value, the equation used for depth computation was the correct equation. Otherwise, the wrong equation was chosen.

Table 1. Parameter adjustments and depth computation. “+” and “-” indicate that this camera parameter needs to be increased or decreased, respectively


Another technique to avoid ambiguity is to focus the camera slightly further away than the initially focused distance after computing the first sharpness value. If the object is behind the PBF, the image acquired from the new position will be sharper and consequently its sharpness value will be higher than the previously obtained value. If the object is in front of the PBF, the image will be more blurred and its sharpness value will be less than the previously obtained value. From this information, the object position can be estimated. Having estimated the position of the object, it is straightforward to know which equation to employ. However, if searching is done by changing the f-number (f), an extra image is always needed to determine which equation to employ.
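The decision logic of this disambiguation test can be sketched as follows. The camera-control helpers (refocus_slightly_further, acquire_image) and the criterion argument are hypothetical stand-ins for the hardware interface and the criterion function of Section 3.2; only the comparison itself comes from the text.

```python
# Hedged sketch of the disambiguation test described above.

def side_of_pbf(camera, s_first, criterion):
    """Refocus slightly further away, then compare the new sharpness with the
    first recorded value to decide on which side of the PBF the object lies."""
    camera.refocus_slightly_further()        # hypothetical camera-control helper
    s_probe = criterion(camera.acquire_image())
    if s_probe > s_first:
        return "behind PBF"                  # image became sharper
    return "in front of PBF"                 # image became more blurred
```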

3.2 Selection of the criterion function

A criterion function is needed to measure the sharpness of an image within an evaluation window. In this work, the Tenengrad function was employed because of the following properties:

1. It can be used with different window sizes and for a wide variety of scenes.
2. It is relatively insensitive to noise.
3. It is straightforward to compute, and the computation can be implemented in parallel and in hardware.

The Tenengrad function estimates the gradient at each image point I(x,y) in an evaluation window by summing all magnitudes greater than a pre-defined threshold value. To enhance the effect of the larger values (i.e. the edges), the gradients are squared. The criterion function is defined as:

$$\max \sum_{x \in N} \sum_{y \in N} Z(x,y)^2 \quad \text{for } Z(x,y)^2 > T \qquad (15)$$

where $Z(x,y) = \sqrt{G_x(x,y)^2 + G_y(x,y)^2}$ is the gradient magnitude, N is the evaluation window and T is the threshold.

There are many discrete operators which can approximate the values of the gradient components Gx(x,y) and Gy(x,y). The Tenengrad function uses the Sobel convolution operator. The masks required to implement the Sobel operator in the horizontal and vertical directions are given below:

$$G_x: \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix} \qquad G_y: \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}$$

It has been argued that it is unnecessary to use a threshold value in the Tenengrad function [43]. Krotkov [3] also disregarded the threshold in his implementation of the Tenengrad function because threshold selection requires heuristic choices. Therefore, in this work, no threshold value was chosen.
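As an illustration (not the authors' implementation), the sketch below computes the Tenengrad value of an evaluation window with the Sobel masks given above and, following the choice made in this work, applies no threshold. It assumes the window is a greyscale NumPy array and uses SciPy's 2-D convolution.

```python
import numpy as np
from scipy.signal import convolve2d

# Sobel masks for the horizontal (G_x) and vertical (G_y) gradient components.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=np.float64)

def tenengrad(window):
    """Tenengrad sharpness: sum of squared gradient magnitudes Z(x,y)^2 over
    the evaluation window, with no threshold."""
    win = window.astype(np.float64)
    gx = convolve2d(win, SOBEL_X, mode="same", boundary="symm")
    gy = convolve2d(win, SOBEL_Y, mode="same", boundary="symm")
    return float(np.sum(gx**2 + gy**2))
```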

3.3 Selection of the evaluation window size

Criterion values remain the same for evaluation windows situated in a homogeneous region of an image regardless of the amount of defocus. Therefore, criterion functions must be evaluated in windows that contain some kind of visual variation, such as edges, lines or textures. Also, a window must contain the projection of object points that lie at the same distance from the lens. Otherwise, the criterion function will, in general, give multiple solutions, which causes a miscalculation of the object depth.

There is a trade-off associated with choosing the size of the window. Larger windows increase the robustness of the criterion function. However, they reduce the spatial resolution of the resulting depth array and increase the computation time. Smaller windows increase the spatial resolution and decrease the computation time but are more affected by noise.

When computing object distance using an automatic focusing technique, if a large alteration in the camera parameters is required to obtain a sharp image of an object, the magnification of the lens and the image brightness change. The larger the change in camera settings, the greater the effect on the images. Consequently, many pixels will move into or out of the evaluation window. These effects can be compensated for either optically or computationally. The former requires extra equipment such as another camera or a neutral density filter. The latter increases the execution time of the techniques. These effects are reduced in DFAD since it does not require a large alteration of the camera parameters to compute the distance of an object.

3.4 Noise reduction

The quality of an image is often degraded by noise from the digitisation process of the particular equipment employed, interference from nearby computers and connecting cables, and ambient light that varies from moment to moment. This can cause a miscalculation of the sought image position of an object and consequently of its distance. Therefore, the amount of noise in the image should be reduced as much as possible. There are several methods in computer vision for minimising image noise. One of the most common techniques is spatial averaging [45]. However, this blurs images. Another technique is to threshold the criterion values. As previously mentioned, threshold selection inevitably requires heuristic choices. Therefore, focusing programs should avoid employing a threshold for noise reduction.

In this work, temporal averaging was employed. At each stage of the searching process, temporal averaging is performed by taking more than one image of the same scene at different times. The images are acquired by employing the same camera parameters for each object position. In the resulting image, the grey level of each pixel is the average intensity of the same pixel location in all the acquired images. That is:

$$I(x,y) = \frac{1}{n} \sum_{i=1}^{n} I_i(x,y) \qquad (16)$$

where Ii(x,y) is the pixel grey level value at point (x,y) for image i and n is the number of images used for averaging. The larger the value of n, the greater the reduction in noise becomes. However, using a larger n increases the amount of computation time.
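A small sketch of this averaging step is given below; it assumes the n frames are equally sized greyscale NumPy arrays acquired with identical camera parameters.

```python
import numpy as np

def temporal_average(frames):
    """Pixel-wise mean of n images of the same scene, as in Eq. (16)."""
    stack = np.stack([f.astype(np.float64) for f in frames], axis=0)
    return stack.mean(axis=0)        # I(x,y) = (1/n) * sum_i I_i(x,y)
```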

3.5 Edge bleeding effects

Focus-based methods divide images into subimages to compute the depth. This causes an error in depth computation because the intensity outside a window “bleeds” into the window. The effect is larger when the blur increases. To illustrate this, consider the effect on two points A and B which are 16 pixels apart (see Fig. 6(a)) and placed at 200mm and 150mm respectively from the camera. The criterion function will be employed within a window “W” to compute the sharpness value of point A. If the camera is focused at point B (DOL = 150mm, DLF = 75mm, F = 50mm, f = 1.4), the image of point A will be blurred (Fig. 6(b)). If a DFAF technique is used to compute the distance of point A by moving the camera, a 50mm camera movement is needed to obtain the sharp image of point A. As can be seen from Fig. 6(c), the image of point B will be blurred and will bleed into W. This causes miscalculation of the sharp image position of point A and consequently its distance.

With DFAD, having recorded the sharpness value of A at 200mm with the camera focused at B, DLS is changed from 75mm to 74mm. This causes the sharpness of point A to vary slightly. To obtain the recorded sharpness value, the camera is moved. In this case, a camera movement of 8.45mm is required to restore the sharpness of A. Comparing Fig. 6(c) and Fig. 6(d) shows that the bleeding effect is much less for DFAD than for DFAF.
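As a consistency check, the illustrative dfad_depth sketch given after Eq. (14) reproduces this example when the 8.45mm movement is taken as an increase of the camera-object distance (d = +8.45mm), an assumed sign since the text does not state the direction of the movement:

```python
# DLS changed from 75 mm to 74 mm; F = 50 mm and f = 1.4 unchanged;
# d = +8.45 mm assumed (camera moved away from point A).
dfad_depth(75.0, 50.0, 1.4, 74.0, 50.0, 1.4, d=8.45)   # ~200 mm, the distance of A
```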

4. Results

Since a computer-controlled system was not available, experiments to test the proposed DFAD method were carried out manually. The tests were conducted with two different lenses having fixed focal lengths of 50mm and 90mm. The method was evaluated on three objects located within 1000mm of the camera. The images of the objects used in the experiments are shown in Fig. 7. The objects were placed at 20 different known positions. At each position, temporal averaging was performed with n = 20. The size of the evaluation window was 80×80 pixels. The window was chosen to be large enough so that the object stayed within it regardless of variations in the camera parameters. The depth computation process was as follows. The f-number (f) of the lens was first set to 4 and an image was acquired. The sharpness value of the image was measured and recorded. Then, the f-number was changed from 4 to 2.8 and another image was acquired. The second image was rescaled to have the same mean grey level as the first image. The sharpness value of the second image was computed and the difference between this and the previously recorded sharpness value was calculated. The object was moved with respect to the camera until the minimum difference was obtained. Movements were made using a slide with an accuracy of 0.1mm.

Fig. 6. (a) Cross section at points A and B (assuming a pin-hole camera with infinite depth of field) (b) the camera is focused at point B (c) the camera is focused at point A (d) after the movement required for depth computation by DFAD
Fig. 7. Images of the objects used in the experiments

The direction of the object movement was determined by the difference between the sharpness values. When the difference increased, it was known that the object was being moved in the wrong direction. Otherwise, it was being moved in the right direction. After having obtained the minimum difference, Eq. (13) was chosen to compute the depth since the objects always stayed behind the PBF. Eq. (13) can be rewritten as:

$$D_{OL}^2 + \left[d - D_0\right] D_{OL} - \frac{D_0 f_2\, d}{f_2 - f_1} = 0 \qquad (17)$$

where D0 is the distance from the lens to the PBF at the beginning of the experiment. D0 is given by the following lens law:

$$D_0 = \frac{D_{LS} F}{D_{LS} - F} \qquad (18)$$

Solving Eq. (17) gives two DOL values, one of which is positive and the other negative. The positive value yields the distance of the object. The results are plotted in Fig. 8. The percentage error in distance was found to be approximately 0.15%.
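The final step can be sketched as below, under the same assumptions as the earlier sketches (millimetre units, variable names from the text); the values in the usage comment are hypothetical and are not taken from the experiments.

```python
import math

def depth_from_dfad(D_LS, F, d, f1=4.0, f2=2.8):
    """Object distance from Eq. (17): D_0 from Eq. (18), then the positive
    root of the quadratic.  d is the signed object displacement (positive
    away from the camera) measured when the recorded sharpness is restored."""
    D0 = D_LS * F / (D_LS - F)            # Eq. (18): initial distance to the PBF
    b = d - D0                            # coefficient of D_OL in Eq. (17)
    c = -D0 * f2 * d / (f2 - f1)          # constant term of Eq. (17)
    disc = math.sqrt(b * b - 4.0 * c)
    return max((-b + disc) / 2.0, (-b - disc) / 2.0)   # positive root

# Hypothetical example: depth_from_dfad(D_LS=51.0, F=50.0, d=-15.2) -> ~2600 mm
```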

Instead of computing sharpness values of images, experiments were also carried out by simply subtracting the images. However, the results obtained were not as accurate as those obtained by employing a criterion function. If fast but less accurate results are acceptable in a specific application, then image subtraction can be used. It is also possible to employ correlation values between the images.

Fig. 8. (a) Estimated depth vs. real depth (b) Errors for different depths

5. Conclusion

In this paper, a depth calculation method called Depth from Automatic Defocusing has been presented. The method computes depth in a similar way to DFAF techniques. However, it does not require sharp images of an object to determine its distance. The technique uses blur information and does not need to model or evaluate the point-spread function.

The method was implemented to compute the depth of general scenes using their defocused images. Experimental results have shown that the average depth estimation error was approximately 0.15%. Thus, DFAD is an accurate technique for depth computation. Having determined the distance of an object from the camera, it is straightforward to obtain the sharp image of the object. Therefore, the method can also be used for automatic focusing.

Changes in the f-number (f) of the lens alter the mean image intensity. Therefore, the images were rescaled to have the same mean intensity value. However, rescaling causes errors in depth computation. To prevent this, DFAD can be performed by varying camera parameters other than the f-number of the lens.

With DFAF techniques, there is no information on the sharpness value that a sharp image will have. Therefore, local optima can cause miscalculation of the position of the sharp image. However, with DFAD, the sharpness value to be found is known. Hence, DFAD results are more reliable than those for DFAF.

As with DFD, DFAF and stereo techniques, one of the sources of error in DFAD is the limited spatial resolution of the detector array inherent in any imaging system. The size of the pixels plays an important role in both image sampling and depth of field, which in turn affect the computation of the object distance. Therefore, the higher the resolution of the detector array (the smaller the pixel size), the more accurate the results will be.

The proposed DFAD method currently has two main drawbacks. As with DFAF, a weakness of DFAD is that it requires the processing of more images than DFD techniques, which need only a few images to determine the depth of objects. Also, DFAD cannot compute the distance of plain objects. This is a common problem with passive depth computation techniques. However, if a random illumination pattern can be projected onto such objects, then DFAD can be applied.

References and links

1. S. F. El-Hakim, J.-A. Beraldin, and F. Blais, "A Comparative Evaluation of the Performance of Passive and Active 3D Vision Systems," in Digital Photogrammetry and Remote Sensing, Eugeny A. Fedosov, ed., Proc. SPIE 2646, 14-25 (1995). [CrossRef]
2. M. Hebert, "Active and passive range sensing for robotics," in Proceedings of IEEE Conference on Robotics and Automation (Institute of Electrical and Electronics Engineers, San Francisco, CA, 2000), pp. 102-110.
3. E. P. Krotkov, "Focusing," Int. J. Comput. Vision 1, 223-237 (1987). [CrossRef]
4. T. Darell and K. Wohn, "Depth from Focus Using a Pyramid Architecture," Pattern Recogn. Lett. 11, 787-796 (1990). [CrossRef]
5. S. K. Nayar and Y. Nakagawa, "Shape from Focus: An Effective Approach for Rough Surfaces," in Proceedings of IEEE Conference on Robotics and Automation (Institute of Electrical and Electronics Engineers, Cincinnati, Ohio, 1990), pp. 218-225.
6. H. N. Nair and C. V. Stewart, "Robust Focus Ranging," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, Illinois, 1992), pp. 309-314.
7. D. T. Pham and V. Aslantas, "Automatic Focusing," in Birinci Turk Yapay Zeka ve Yapay Sinir Aglari Sempozyumu (Bilkent Universitesi, Ankara, 1992), pp. 295-303.
8. M. Subbarao and T. Wei, "Depth from Defocus and Rapid Autofocusing: A Practical Approach," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, Champaign, Illinois, 1992), pp. 773-776.
9. M. Subbarao and T. Choi, "Accurate Recovery of Three Dimensional Shape from Focus," IEEE Trans. Pattern Anal. Mach. Intell. 17, 266-274 (1995). [CrossRef]
10. M. Subbarao and J. K. Tyan, "Selecting the optimal focus measure for autofocusing and depth-from-focus," IEEE Trans. Pattern Anal. Mach. Intell. 20, 864-870 (1998). [CrossRef]
11. N. Asada, H. Fujiwara, and T. Matsuyama, "Edge and depth from focus," Int. J. Comput. Vision 26, 153-163 (1998). [CrossRef]
12. B. Ahmad and T.-S. Choi, "A heuristic approach for finding best focused shape," IEEE Trans. Circuits Syst. 15, 566-574 (2005).
13. P. Grossmann, "Depth from Focus," Pattern Recogn. Lett. 5, 63-69 (1987). [CrossRef]
14. A. P. Pentland, "A New Sense for Depth of Field," IEEE Trans. Pattern Anal. Mach. Intell. 9, 523-531 (1987). [CrossRef] [PubMed]
15. M. Subbarao and N. Gurumoorthy, "Depth Recovery from Blurred Edges," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, Ann Arbor, MI, 1988), pp. 498-503.
16. M. Subbarao, "Efficient Depth Recovery Through Inverse Optics," in Machine Vision Inspection and Measurement, H. Freeman, ed. (Academic, Boston, 1989).
17. C. Cardillo and M. A. Sid-Ahmed, "3-D Position Sensing Using Passive Monocular Vision System," IEEE Trans. Pattern Anal. Mach. Intell. 13, 809-813 (1991). [CrossRef]
18. R. V. Dantu, N. J. Dimopoulos, R. V. Patel, and A. J. Al-Khalili, "Depth Perception Using Blurring and its Application in VLSI Wafer Probing," Mach. Vision Appl. 5, 35-45 (1992). [CrossRef]
19. S. H. Lai, C. W. Fu, and S. Chang, "A Generalised Depth Estimation Algorithm with a Single Image," IEEE Trans. Pattern Anal. Mach. Intell. 14, 405-411 (1992). [CrossRef]
20. J. Ens and P. Lawrence, "Investigation of Methods for Determining Depth from Focus," IEEE Trans. Pattern Anal. Mach. Intell. 15, 97-108 (1993). [CrossRef]
21. L. F. Holeva, "Range Estimation from Camera Blur by Regularised Adaptive Identification," Int. J. Pattern Recogn. Artif. Intell. 8, 1273-1300 (1994). [CrossRef]
22. A. P. Pentland, S. Scherock, T. Darrell, and B. Girod, "Simple Range Cameras Based on Focal Error," J. Opt. Soc. Am. A 11, 2925-2934 (1994). [CrossRef]
23. M. Subbarao and G. Surya, "Depth from Defocus: A Spatial Domain Approach," Int. J. Comput. Vision 13, 271-294 (1994). [CrossRef]
24. S. Xu, D. W. Capson, and T. M. Caelli, "Range Measurement from Defocus Gradient," Mach. Vision Appl. 8, 179-186 (1995). [CrossRef]
25. M. Watanabe and S. K. Nayar, "Rational filters for passive depth from defocus," Int. J. Comput. Vision 27, 203-225 (1998). [CrossRef]
26. N. Asada, H. Fujiwara, and T. Matsuyama, "Particle depth measurement based on depth-from-defocus," Opt. Laser Technol. 31, 95-102 (1999). [CrossRef]
27. S. Chaudhuri and A. N. Rajagopalan, Depth from Defocus: A Real Aperture Imaging Approach (Springer-Verlag, New York, 1999).
28. D. T. Pham and V. Aslantas, "Depth from Defocusing Using a Neural Network," Pattern Recogn. 32, 715-727 (1999). [CrossRef]
29. M. Asif and T. S. Choi, "Shape from focus using multilayer feedforward neural networks," IEEE Trans. Image Process. 10, 1670-1675 (2001). [CrossRef]
30. J. Rayala, S. Gupta, and S. K. Mullick, "Estimation of depth from defocus as polynomial system identification," IEE Proc. Vision Image Signal Process. 148, 356-362 (2001). [CrossRef]
31. P. Favaro, A. Mennucci, and S. Soatto, "Observing Shape from Defocused Images," Int. J. Comput. Vision 52, 25-43 (2003). [CrossRef]
32. D. Ziou and F. Deschenes, "Depth from Defocus Estimation in Spatial Domain," Computer Vision and Image Understanding 81, 143-165 (2001). [CrossRef]
33. P. Favaro and S. Soatto, "Learning Shape from Defocus," in European Conference on Computer Vision (Copenhagen, Denmark, 2002), pp. 735-745.
34. V. Aslantas and M. Tunckanat, "Depth from Image Sharpness Using a Neural Network," in International Conference on Signal Processing (Canakkale, Turkey, 2003), pp. 260-265.
35. V. Aslantas, "Estimation of Depth from Defocusing Using a Neural Network," in International Conference on Signal Processing (Canakkale, Turkey, 2003), pp. 305-309.
36. V. Aslantas and M. Tunckanat, "Depth of General Scenes from Defocused Images Using Multilayer Feedforward Network," LNCS 3949, 41-48 (2006).
37. B. K. P. Horn, Robot Vision (McGraw-Hill, New York, 1986).
38. R. A. Jarvis, "Focus Optimisation Criteria for Computer Image Processing," Microscope 24, 163-180 (1976).
39. J. F. Schlag, A. C. Sanderson, C. P. Neuman, and F. C. Wimberly, "Implementation of Automatic Focusing Algorithms for a Computer Vision System with Camera Control," CMU-RI-TR-83-14 (Robotics Institute, Carnegie Mellon University, 1983).
40. F. C. A. Groen, I. T. Young, and G. Ligthart, "A Comparison of Different Focus Functions for Use in Autofocus Algorithms," Cytometry 6, 81-91 (1985). [CrossRef] [PubMed]
41. L. Firestone, K. Cook, K. Culp, N. Talsania, and K. Preston, Jr., "Comparison of Autofocus Methods for Automated Microscopy," Cytometry 12, 195-206 (1991). [CrossRef] [PubMed]
42. M. Subbarao, T. Choi, and A. Nikzat, "Focusing Techniques," Opt. Eng. 32, 2824-2836 (1993). [CrossRef]
43. T. T. E. Yeo, S. H. Ong, Jayasooriah, and R. Sinniah, "Autofocusing for Tissue Microscopy," Image and Vision Computing 11, 629-639 (1993). [CrossRef]
44. V. Aslantas, "Criterion functions for automatic focusing," in 10th Turkish Symposium on Artificial Intelligence and Neural Networks (Gazimagusa, Turkish Republic of Northern Cyprus, 2001), pp. 301-311.
45. R. C. Gonzalez and R. E. Woods, Digital Image Processing (Addison-Wesley, Reading, MA, 1992).

OCIS Codes
(100.2000) Image processing : Digital image processing
(150.5670) Machine vision : Range finding
(150.6910) Machine vision : Three-dimensional sensing

ToC Category:
Machine Vision

History
Original Manuscript: October 27, 2006
Revised Manuscript: January 3, 2007
Manuscript Accepted: January 8, 2007
Published: February 5, 2007

Citation
V. Aslantas and D. T. Pham, "Depth from automatic defocusing," Opt. Express 15, 1011-1023 (2007)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-15-3-1011


