## Depth from automatic defocusing

Optics Express, Vol. 15, Issue 3, pp. 1011-1023 (2007)

http://dx.doi.org/10.1364/OE.15.001011

### Abstract

This paper presents a depth recovery method that gives the depth of any scene from its defocused images. The method combines depth from defocusing and depth from automatic focusing techniques. Blur information in defocused images is utilised to measure depth in a way similar to determining depth from automatic focusing but without searching for sharp images of objects. The proposed method does not need special scene illumination and involves only a single camera. Therefore, there are no correspondence, occlusion and intrusive emissions problems. The paper gives experimental results which demonstrate the accuracy of the method.

© 2007 Optical Society of America

## 1. Introduction


## 2. Theory of depth from automatic defocusing

### 2.1 Basics of DFD and DFAF

When a scene is imaged, each point in the scene is projected onto a single point on the focal plane, causing a focused image to be formed on it. For a camera with a thin convex lens of focal length *F*, the relation between the distance *D*<sub>OL</sub> from a point in a scene to the lens and the distance *D*<sub>LF</sub> from its focused image to the lens is given by the Gaussian lens law:

$$\frac{1}{D_{OL}}+\frac{1}{D_{LF}}=\frac{1}{F}$$

If the sensor plane does not coincide with the focal plane, the image of a point formed on the sensor plane will be a circular disk known as a "*circle of confusion*" or "*blur circle*" with diameter 2*R*, provided that the aperture of the lens is also circular. By using similar triangles, a formula can be derived to establish the relationship between the radius of the blur circle *R* and the displacement δ of the sensor plane from the focal plane:

$$R=\frac{L\delta}{2D_{LF}}$$

where *L* is the diameter of the aperture of the lens. From Fig. 1, which shows the object behind the plane of best focus (PBF), an equation for *δ* can be derived as:

$$\delta=D_{LS}-D_{LF}$$

where *D*<sub>LS</sub> is the distance between the lens and the sensor plane. The quantities *D*<sub>LS</sub>, *L* and *F* together are referred to as the *camera parameters*. The aperture diameter *L* of a lens is often given as [37]:

$$L=\frac{F}{f}$$

where *f* is the *f*-number of the lens. Figure 2 plots the blur circle radius *R* against the distance *D*<sub>OL</sub> of an object for an *f*/2.8, 50 mm (*F*) lens with the camera focused on an object located 1 m in front of the lens (a focal plane distance of 52.63 mm).

Combining the above equations to eliminate *δ*, *L* and *D*<sub>LF</sub>, an expression relating *R* to the object distance *D*<sub>OL</sub> and the camera parameters *D*<sub>LS</sub>, *F* and *f* is obtained:

$$R=\frac{F}{2f}\left[D_{LS}\left(\frac{1}{F}-\frac{1}{D_{OL}}\right)-1\right]$$

Thus, the distance of an object can be measured in two ways. First, the blur circle radius *R* can be determined from defocused images (as in Depth from Defocusing, DFD, techniques). Second, a sharp image of the object can be obtained by varying some, or all, of the camera parameters, or the distance between the camera and the object, to reduce *R* to zero (as in Depth from Automatic Focusing, DFAF, techniques). With *R* = 0, the above equations become the well known Gaussian lens law:

$$D_{OL}=\frac{FD_{LS}}{D_{LS}-F}$$
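As a numerical sanity check, the relations above can be exercised in a short sketch (assumed values: the *f*/2.8, 50 mm lens of Fig. 2 focused at 1 m, and an illustrative object at 2 m):

```python
# Sketch (assumed values): blur radius of a defocused object and the
# inverse DFD relation recovering its distance, per Section 2.1.

F = 50.0            # focal length (mm)
fnum = 2.8          # f-number
D_LS = 1000.0 * F / (1000.0 - F)   # sensor distance focusing a 1 m object (lens law)

def blur_radius(D_OL):
    """Blur circle radius R for an object at distance D_OL (mm)."""
    return (F / (2.0 * fnum)) * (D_LS * (1.0 / F - 1.0 / D_OL) - 1.0)

def depth_from_blur(R):
    """Invert the blur relation: object distance from a measured R (DFD)."""
    return F * D_LS / (D_LS - F - 2.0 * R * fnum)

R = blur_radius(2000.0)   # assumed object at 2 m, behind the PBF
D = depth_from_blur(R)    # recovers 2000.0 (up to floating-point error)
assert abs(D - 2000.0) < 1e-6
assert abs(depth_from_blur(0.0) - 1000.0) < 1e-6  # R = 0 gives the lens law
```

Setting *R* = 0 in `depth_from_blur` reproduces the Gaussian lens law case, as in the text.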

### 2.2 Theory of DFAD

Let *I*<sub>1</sub>(*x*, *y*) and *I*<sub>2</sub>(*x*, *y*) be images taken using two different camera parameter settings: *F*<sub>1</sub>, *f*<sub>1</sub>, *D*<sub>LS1</sub> and *F*<sub>2</sub>, *f*<sub>2</sub>, *D*<sub>LS2</sub>. The blur circle radii *R*<sub>1</sub> and *R*<sub>2</sub>, corresponding to *I*<sub>1</sub>(*x*, *y*) and *I*<sub>2</sub>(*x*, *y*), respectively, are:

$$R_{1}=\frac{F_{1}}{2f_{1}}\left[D_{LS1}\left(\frac{1}{F_{1}}-\frac{1}{D_{OL}}\right)-1\right] \tag{11}$$

$$R_{2}=\frac{F_{2}}{2f_{2}}\left[D_{LS2}\left(\frac{1}{F_{2}}-\frac{1}{D_{OL}+d}\right)-1\right] \tag{12}$$

where *d* is the displacement of the camera and the object away from each other between the taking of images *I*<sub>1</sub>(*x*, *y*) and *I*<sub>2</sub>(*x*, *y*).

For the two images to be equally blurred, *R*<sub>1</sub> and *R*<sub>2</sub> should be equal. (In other words, exactly the same images of an object can be obtained using different camera settings.) Figure 3 shows the blurred versions of a step edge obtained using different camera parameters. In this figure, the blurred edges match each other exactly.

By equating *R*<sub>1</sub> and *R*<sub>2</sub> in Eqs. (11) and (12) and solving the resulting quadratic for *D*<sub>OL</sub>, the following condition is obtained:

$$\frac{F_{1}}{2f_{1}}\left[D_{LS1}\left(\frac{1}{F_{1}}-\frac{1}{D_{OL}}\right)-1\right]=\frac{F_{2}}{2f_{2}}\left[D_{LS2}\left(\frac{1}{F_{2}}-\frac{1}{D_{OL}+d}\right)-1\right] \tag{13}$$

A corresponding equation, Eq. (14), applies when the object lies on the other side of the PBF.
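As a sketch (with hypothetical parameter values, not those used in the paper), the depth defined by equating Eqs. (11) and (12) can also be found numerically, e.g. by bisection:

```python
# Hypothetical sketch: recovering D_OL from Eqs. (11) and (12) by bisection.
# Both images share F and D_LS; the f-number changes and the camera moves
# d mm away from the object between exposures. All numbers are assumed.
F = 50.0                      # focal length (mm)
D_LS = 52.63                  # lens-to-sensor distance (mm)
f1, f2 = 2.8, 4.0             # f-numbers of the two exposures
d = 10.0                      # camera/object displacement (mm)

def blur(D_OL, fnum):
    """Blur circle radius of Eqs. (11)/(12) for an object at D_OL."""
    return (F / (2.0 * fnum)) * (D_LS * (1.0 / F - 1.0 / D_OL) - 1.0)

def solve_depth(lo=100.0, hi=10000.0, tol=1e-9):
    """Bisect g(D) = R1(D) - R2(D + d); its root is the object distance."""
    g = lambda D: blur(D, f1) - blur(D + d, f2)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

D = solve_depth()
assert abs(blur(D, f1) - blur(D + d, f2)) < 1e-6  # radii match at the root
```

With *d* = 0 and identical *F* and *D*<sub>LS</sub>, the same condition forces both radii to zero and the root is the in-focus distance of the Gaussian lens law, consistent with Section 3.1.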

## 3. Selection of camera parameters, criterion function and evaluation window

### 3.1 Selection of the camera parameters

One technique changes the *f*-number (*f*) of the camera by a small amount after the sharpness value of the first image has been computed and recorded. The recorded sharpness value is then searched for by changing one of the other camera parameters (*D*<sub>LS</sub>, *F*, *d*). These techniques allow the object to remain on one side of the PBF, so that Eq. (13) can be employed.

Only when an object lies on the PBF will images of it taken with different *f*-numbers have the same sharpness values. Changing the values of the other camera parameters causes the images to become more defocused, and the developed technique therefore does not allow this to be carried out. Hence, the camera parameters stay the same except for the *f*-numbers (*D*<sub>LS1</sub> = *D*<sub>LS2</sub>, *F*<sub>1</sub> = *F*<sub>2</sub>, *d* = 0, *f*<sub>1</sub> ≠ *f*<sub>2</sub>). Equations (13) and (14) then become the well known Gaussian lens law (Eq. 10). These observations also prove the theoretical soundness of the derived equations. The distance of the object can be calculated using either of them.

Consider the technique in which the lens-to-sensor distance, *D*<sub>LS</sub>, is altered. Changes in *D*<sub>LS</sub> can make the camera focus in one of four different regions (regions I to IV in Fig. 5). Assume that the recorded sharpness value S is searched for by moving the camera with respect to the object. If the camera is focused in region I, the sharpness value obtained at that distance is less than S, so the camera should be moved towards the object to obtain S. If the camera is focused in region II, the sharpness value is larger than S, so the camera should be moved away from the object.

The first column of Table 1 shows the changes made to *D*<sub>LS</sub> after the first sharpness value (S) is recorded. The second column shows how the sharpness value obtained using the new *D*<sub>LS</sub> compares with S. The third column gives the relative changes needed in the camera parameter (*d* or *F*) to restore S. The last column indicates which equation is to be used for depth computation. As can be observed from the table, ambiguity arises in some cases. For example, the parameter and sharpness changes are identical between the first and last rows of the table, but different equations are required. The same problem also exists between the fourth and fifth rows.

The same ambiguity arises when *F* or *d* is changed first and the search is performed with one of the other camera parameters. There are many ways to resolve it. For example, when an ambiguous situation is encountered, one of the candidate equations is used to compute the object distance, the camera is focused at that distance and an image is obtained. If the sharpness value of this image is greater than the first sharpness value, the equation used for depth computation was the correct one. Otherwise, the wrong equation was chosen.

When the search parameter is the *f*-number (*f*), an extra image is always needed to determine which equation to employ.

### 3.2 Selection of the criterion function

A good criterion function should possess the following properties [3]:

- 1. It can be used with different window sizes and for a wide variety of scenes.
- 2. It is relatively insensitive to noise.
- 3. It is straightforward to compute and the computation can be implemented in parallel and in hardware.

The Tenengrad criterion function estimates the gradient magnitude at each point of an image *I*(*x*, *y*) in an evaluation window, summing all magnitudes greater than a pre-defined threshold value *T*. To enhance the effect of the larger values (i.e. the edges), the gradients are squared. The criterion function is defined as:

$$\mathrm{Ten}=\sum_{x}\sum_{y}\left[G_{x}(x,y)^{2}+G_{y}(x,y)^{2}\right]\quad\text{for }\sqrt{G_{x}(x,y)^{2}+G_{y}(x,y)^{2}}>T$$

where the gradient magnitude is computed from the horizontal and vertical gradient components *G*<sub>x</sub>(*x*, *y*) and *G*<sub>y</sub>(*x*, *y*). The Tenengrad function uses the Sobel convolution operator to obtain these components. The masks implementing the Sobel operator in the horizontal and vertical directions are:

$$G_{x}:\ \begin{bmatrix}-1&0&1\\-2&0&2\\-1&0&1\end{bmatrix}\qquad G_{y}:\ \begin{bmatrix}-1&-2&-1\\0&0&0\\1&2&1\end{bmatrix}$$
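A minimal sketch of the Tenengrad function (window handling and the threshold default are assumptions, not the paper's exact implementation):

```python
# Sketch: Tenengrad criterion = sum of squared Sobel gradient magnitudes
# exceeding a threshold, over an evaluation window.
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

def convolve3x3(img, mask):
    """Valid-mode 3x3 convolution (flip the mask, then slide and sum)."""
    h, w = img.shape
    m = mask[::-1, ::-1]
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += m[i, j] * img[i:i + h - 2, j:j + w - 2]
    return out

def tenengrad(window, threshold=0.0):
    """Sum of squared Sobel gradient magnitudes above the threshold."""
    gx = convolve3x3(window, SOBEL_X)
    gy = convolve3x3(window, SOBEL_Y)
    g2 = gx ** 2 + gy ** 2
    return g2[g2 > threshold ** 2].sum()

# A sharper edge yields a larger criterion value:
x = np.linspace(-1.0, 1.0, 32)
sharp = np.tile((x > 0).astype(float), (32, 1))
blurred = np.tile(1.0 / (1.0 + np.exp(-4.0 * x)), (32, 1))
assert tenengrad(sharp) > tenengrad(blurred)
```

The squared magnitudes make strong edges dominate the sum, which is why the function peaks at best focus.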


### 3.3 Selection of the evaluation window size

### 3.4 Noise reduction

The effects of noise can be reduced by averaging several images of the same scene:

$$\bar{I}(x,y)=\frac{1}{n}\sum_{i=1}^{n}I_{i}(x,y)$$

where *I*<sub>i</sub>(*x*, *y*) is the pixel grey level value at point (*x*, *y*) of image *i* and *n* is the number of images used for averaging. The larger the value of *n*, the greater the reduction in noise. However, a larger *n* increases the computation time.
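The trade-off can be seen in a small synthetic sketch (the scene and noise model are assumed stand-ins; *n* = 20 and the 80×80 window mirror Section 4):

```python
# Sketch: averaging n noisy frames of a static scene reduces zero-mean
# noise of std sigma towards roughly sigma / sqrt(n).
import numpy as np

rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 255.0, size=(80, 80))   # assumed static scene

def average_frames(n, sigma=10.0):
    """Average n noisy captures of the same scene."""
    frames = [scene + rng.normal(0.0, sigma, scene.shape) for _ in range(n)]
    return sum(frames) / n

err1 = np.std(average_frames(1) - scene)    # ~ sigma
err20 = np.std(average_frames(20) - scene)  # ~ sigma / sqrt(20)
assert err20 < err1 / 3
```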

### 3.5 Edge bleeding effects

Consider two scene points A and B at different distances from the camera, with an evaluation window W placed around the image of A (Fig. 6(a)). With the camera parameters employed (*D*<sub>OL</sub> = 150 mm, *D*<sub>LF</sub> = 75 mm, *F* = 50 mm, *f* = 1.4), the image of point A will be blurred (Fig. 6(b)). If a DFAF technique is used to compute the distance of point A by moving the camera, a 50 mm camera movement is needed to obtain the sharp image of point A. As can be seen from Fig. 6(c), the image of point B will then be blurred and will bleed into W. This causes miscalculation of the sharp image position of point A and, consequently, of its distance.

In DFAD, by contrast, *D*<sub>LF</sub> is changed from 75 mm to 74 mm. This causes the sharpness of point A to vary slightly. To restore the recorded sharpness value, the camera is moved; in this case, a camera movement of only 8.45 mm is required. Comparing Fig. 6(c) and Fig. 6(d) shows that the bleeding effect is much less for DFAD than for DFAF.

## 4. Results

To reduce noise, each image used was the average of *n* = 20 frames (Section 3.4). The size of the evaluation window was 80×80 pixels; the window was chosen large enough that the object stayed within it regardless of variations in the camera parameters. The depth computation process was as follows. The *f*-number (*f*) of the lens was first set to 4 and the camera was directed to obtain an image. The sharpness value of the image was measured and recorded. Then, the *f*-number was changed from 4 to 2.8 and the camera was redirected to acquire another image. The second image was rescaled to have the same mean grey level as the first image. The sharpness value of the second image was computed and the difference between this and the previously recorded sharpness value was calculated. The object was moved with respect to the camera until the minimum difference was obtained. Movements were made using a slide with an accuracy of 0.1 mm.
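This procedure can be illustrated with a small simulation (all numbers are assumed for illustration, not the experimental values; the quadratic below is Eq. (13) specialised to the case *D*<sub>LS1</sub> = *D*<sub>LS2</sub>, *F*<sub>1</sub> = *F*<sub>2</sub> used here):

```python
# Simulated sketch of the Section 4 procedure: change f from 4 to 2.8,
# "move" the object in 0.1 mm slide steps until the two blur radii match,
# then solve the resulting quadratic (Eq. (13)) for the object distance.
import math

F, D_LS = 50.0, 52.63           # assumed focal length and sensor distance (mm)
f1, f2 = 4.0, 2.8               # f-numbers, as in the experiment
D_true = 1200.0                 # simulated ground-truth object distance (mm)

def blur(D, fnum):
    """Blur circle radius (Eqs. (11)/(12)) of an object at distance D."""
    return (F / (2.0 * fnum)) * (D_LS * (1.0 / F - 1.0 / D) - 1.0)

R1 = blur(D_true, f1)           # stands in for the recorded sharpness value

# Slide search: displacement d that makes the second image equally blurred.
d = min((k * 0.1 for k in range(-5000, 5001)),
        key=lambda s: abs(blur(D_true + s, f2) - R1))

# Equating Eqs. (11) and (12) with F1 = F2, D_LS1 = D_LS2 gives the
# quadratic C*D^2 + (C*d - (a1 - a2))*D - a1*d = 0 in D = D_OL.
a1, a2 = F / (2.0 * f1), F / (2.0 * f2)
C = (D_LS / F - 1.0) * (a1 - a2) / D_LS
b, c = C * d - (a1 - a2), -a1 * d
disc = math.sqrt(b * b - 4.0 * C * c)
roots = [(-b + disc) / (2.0 * C), (-b - disc) / (2.0 * C)]
D_est = max(roots)              # one root is positive, one negative
```

In this simulation the positive root recovers the ground-truth distance to within the slide quantisation, and the negative root is discarded, as described below for Eq. (13).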

Here *D*<sub>0</sub> is the distance from the lens to the PBF at the beginning of the experiment, given by the lens law:

$$D_{0}=\frac{FD_{LS}}{D_{LS}-F}$$

Equation (13) gives two *D*<sub>OL</sub> values, one positive and one negative. The positive value yields the distance of the object. The results are plotted in Fig. 8. The percentage error in distance was found to be approximately 0.15%.

## 5. Conclusion

Changes in the *f*-number (*f*) of the lens alter the mean image intensity. Therefore, the images were rescaled to have the same mean intensity value. However, rescaling introduces errors into the depth computation. To prevent this, DFAD can be performed by varying camera parameters other than the *f*-number of the lens.


**OCIS Codes**

(100.2000) Image processing : Digital image processing

(150.5670) Machine vision : Range finding

(150.6910) Machine vision : Three-dimensional sensing

**ToC Category:**

Machine Vision

**History**

Original Manuscript: October 27, 2006

Revised Manuscript: January 3, 2007

Manuscript Accepted: January 8, 2007

Published: February 5, 2007

**Citation**

V. Aslantas and D. T. Pham, "Depth from automatic defocusing," Opt. Express **15**, 1011-1023 (2007)

http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-15-3-1011


### References

- S. F. El-Hakim, J.-A. Beraldin, and F. Blais, "A Comparative Evaluation of the Performance of Passive and Active 3D Vision Systems," in Digital Photogrammetry and Remote Sensing, Eugeny A. Fedosov, Ed., Proc. SPIE 2646, 14-25 (1995). [CrossRef]
- M. Hebert, "Active and passive range sensing for robotics," in Proceedings of IEEE Conference on Robotics and Automation, (Institute of Electrical and Electronics Engineers, San Francisco, CA, 2000), pp. 102-110.
- E. P. Krotkov, "Focusing," Int. J. Compt. Vision 1, 223-237 (1987). [CrossRef]
- T. Darell and K. Wohn, "Depth from Focus using a Pyramid Architecture," Pattern Recogn. Lett. 11, 787-796 (1990). [CrossRef]
- S. K. Nayar and Y. Nakagawa, "Shape from Focus: An Effective Approach for Rough Surfaces," in Proceedings of IEEE Conference on Robotics and Automation, (Institute of Electrical and Electronics Engineers, Cincinnati, Ohio, 1990), pp. 218-225.
- H. N. Nair and C. V. Stewart, "Robust Focus Ranging," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (Institute of Electrical and Electronics Engineers, Illinois, 1992), pp. 309-314.
- D. T. Pham and V. Aslantas, "Automatic Focusing," in Birinci Turk Yapay Zeka ve Yapay Sinir Aglari Sempozyumu, (Bilkent Universitesi, Ankara, 1992), pp. 295-303.
- M. Subbarao and T. Wei, "Depth from Defocus and Rapid Autofocusing: A Practical Approach," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (Institute of Electrical and Electronics Engineers, Champaign, Illinois, 1992), pp. 773-776.
- M. Subbarao and T. Choi, "Accurate Recovery of Three Dimensional Shape from Focus," IEEE Trans. Pattern Anal. Mach. Intell. 17, 266-274 (1995). [CrossRef]
- M. Subbarao and J. K. Tyan, "Selecting the optimal focus measure for autofocusing and depth-from-focus," IEEE Trans. Pattern Anal. Mach. Intell. 20, 864-870 (1998). [CrossRef]
- N. Asada, H. Fujiwara and T. Matsuyama, "Edge and depth from focus," Int. J. Comput. Vision 26, 153-163 (1998). [CrossRef]
- B. Ahmad and T.-S. Choi, "A heuristic approach for finding best focused shape," IEEE Trans. Circuits Syst. 15, 566-574 (2005).
- P. Grossmann, "Depth from Focus," Pattern Recogn. Lett. 5, 63-69 (1987). [CrossRef]
- A. P. Pentland, "A New Sense for Depth of Field," IEEE Trans. Pattern Anal. Mach. Intell. 9, 523-531 (1987). [CrossRef] [PubMed]
- M. Subbarao and N. Gurumoorthy, "Depth Recovery from Blurred Edges," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (Institute of Electrical and Electronics Engineers, Ann Arbor, MI, 1988), pp.498-503.
- M. Subbarao, "Efficient Depth Recovery through Inverse Optics," Machine Vision Inspection and Measurement, H. Freeman ed., (Academic, Boston, 1989).
- C. Cardillo and M. A. Sid-Ahmed, "3-D Position Sensing using Passive Monocular Vision System," IEEE Trans. Pattern Anal. Mach. Intell. 13, 809-813 (1991). [CrossRef]
- R. V. Dantu, N. J. Dimopoulos, R. V. Patel, and A. J. Al-Khalili, "Depth Perception using Blurring and its Application in VLSI Wafer Probing," Mach. Vision Appl. 5, 35-45 (1992). [CrossRef]
- S. H. Lai, C. W. Fu, and S. Chang, "A Generalised Depth Estimation Algorithm with a Single Image," IEEE Trans. Pattern Anal. Mach. Intell. 14, 405-411 (1992). [CrossRef]
- J. Ens and P. Lawrence, "Investigation of Methods for Determining Depth from Focus," IEEE Trans. Pattern Anal. Mach. Intell. 15, 97-108 (1993). [CrossRef]
- L. F. Holeva, "Range Estimation from Camera Blur by Regularised Adaptive Identification," Int. J. Pattern Recogn. Artif. Intell. 8, 1273-1300 (1994). [CrossRef]
- A. P. Pentland, S. Scherock, T. Darrell, and B. Girod, "Simple Range Cameras based on Focal Error," J. Opt. Soc. Am. A 11, 2925-2934 (1994). [CrossRef]
- M. Subbarao and G. Surya, "Depth from Defocus: A Spatial Domain Approach," Int. J. Comput. Vision 13, 271-294 (1994). [CrossRef]
- S. Xu, D. W. Capson, and T. M. Caelli, "Range Measurement from Defocus Gradient," Mach. Vision Appl. 8, 179-186 (1995). [CrossRef]
- M. Watanabe and S. K. Nayar, "Rational filters for passive depth from defocus," Int. J. Comput. Vision 27, 203-225 (1998). [CrossRef]
- N. Asada, H. Fujiwara, and T. Matsuyama, "Particle depth measurement based on depth-from-defocus," Opt. Laser Technol. 31, 95-102 (1999). [CrossRef]
- S. Chaudhuri and A. N. Rajagopalan, "Depth from Defocus: A Real Aperture Imaging Approach," (Springer-Verlag New York, Inc. 1999).
- D. T. Pham and V. Aslantas, "Depth from Defocusing using a Neural Network," J. Pattern Recogn. 32, 715-727 (1999). [CrossRef]
- M. Asif and T. S. Choi, "Shape from focus using multilayer feedforward neural networks," IEEE Trans. Image Process. 10, 1670-1675 (2001). [CrossRef]
- J. Rayala, S. Gupta, and S. K. Mullick, "Estimation of depth from defocus as polynomial system identification," IEE Proceedings, Vision, Image and Signal Processing 148, 356-362 (2001). [CrossRef]
- P. Favaro, A. Mennucci, and S. Soatto, "Observing Shape from Defocused Images," Int. J. Comput. Vision 52, 25-43 (2003). [CrossRef]
- D. Ziou and F. Deschenes, "Depth from Defocus Estimation in Spatial Domain," Computer Vision and Image Understanding 81, 143-165 (2001). [CrossRef]
- P. Favaro and S. Soatto, "Learning Shape from Defocus," in European Conference on Computer Vision, (Copenhagen, Denmark, 2002), pp.735-45.
- V. Aslantas and M. Tunckanat, "Depth from Image Sharpness using a Neural Network," in International Conference on Signal Processing, (Canakkale, Turkey, 2003), pp. 260-265.
- V. Aslantas, "Estimation of Depth From Defocusing using a Neural Network," in International Conference on Signal Processing, (Canakkale, Turkey, 2003), pp. 305-309.
- V. Aslantas and M. Tunçkanat, "Depth of General Scenes from Defocused Images Using Multilayer Feedforward Network," LNCS 3949, 41-48 (2006).
- B. K. P. Horn, Robot Vision, (McGraw-Hill, New York, 1986).
- R. A. Jarvis, "Focus Optimisation Criteria for Computer Image Processing," Microscope, 24, 163-180 (1976).
- J. F. Schlag, A. C. Sanderson, C. P. Neuman, and F. C. Wimberly, "Implementation of Automatic Focusing Algorithms for a Computer Vision System with Camera Control," CMU-RI-TR-83-14, (Robotics Institution, Carnegie Mellon University, 1983).
- F. C. A. Groen, I. T. Young, and G. Ligthart, "A Comparison of Different Focus Functions for use in Autofocus Algorithms," Cytometry, 6, 81-91 (1985). [CrossRef] [PubMed]
- L. Firestone, K. Cook, K. Culp, N. Talsania, and K. Preston, Jr., "Comparison of Autofocus Methods for Automated Microscopy," Cytometry, 12, 195-206 (1991). [CrossRef] [PubMed]
- M. Subbarao, T. Choi, and A. Nikzat, "Focusing Techniques," Optical Engineering, 32, 2824-2836 (1993). [CrossRef]
- T. T. E. Yeo, S. H. Ong, Jayasooriah, and R. Sinniah, "Autofocusing for Tissue Microscopy," J. Image and Vision Computing, 11, 629-639 (1993). [CrossRef]
- V. Aslantas, "Criterion functions for automatic focusing," in 10. Turkish Symposium on Artificial Intelligence and Neural Networks, (Gazimagusa, Turkish Republic of Northern Cyprus 2001), pp.301-311.
- R. C. Gonzalez and R. E. Woods, "Digital Image Processing," (Addison-Wesley, Reading, MA 1992).
