Journal of the Optical Society of America A

  • Editor: Franco Gori
  • Vol. 29, Iss. 8 — Aug. 1, 2012
  • pp: 1694–1706

Defocus blur parameters identification by histogram matching

Huei-Yung Lin and Xin-Han Chou

A defocus blur identification technique based on histogram analysis of a real edge image is presented. The image defocus process of a camera is formulated by incorporating the nonlinear camera response and an intensity-dependent noise model. Histogram matching between the synthesized and real defocused regions is then carried out with intensity-dependent filtering. By iteratively varying the point-spread function parameters, the best blur extent is identified from the histogram comparison. Experiments were performed on both synthetic and real edge images, and the results demonstrate the robustness and feasibility of the proposed technique.
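The core idea of the abstract can be illustrated with a minimal sketch: synthesize blurred versions of a known sharp edge over a range of point-spread function (PSF) parameters, and pick the parameter whose intensity histogram best matches that of the observed defocused region. The sketch below is a simplified illustration, not the paper's method: it assumes a Gaussian PSF parameterized by `sigma`, uses an L1 histogram distance, and omits the nonlinear camera response and intensity-dependent noise modeling that the paper incorporates. The function and variable names are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_blur_sigma(sharp_edge, observed, sigmas, bins=64):
    """Grid-search the Gaussian PSF sigma whose synthetic blur best
    matches the observed region's intensity histogram (L1 distance)."""
    lo = min(sharp_edge.min(), observed.min())
    hi = max(sharp_edge.max(), observed.max())
    h_obs, _ = np.histogram(observed, bins=bins, range=(lo, hi), density=True)
    best_sigma, best_dist = None, np.inf
    for s in sigmas:
        # Synthesize a defocused region for this candidate PSF parameter.
        synth = gaussian_filter(sharp_edge, sigma=s)
        h_syn, _ = np.histogram(synth, bins=bins, range=(lo, hi), density=True)
        d = np.abs(h_obs - h_syn).sum()
        if d < best_dist:
            best_sigma, best_dist = s, d
    return best_sigma

# Synthetic demo: a step edge blurred with a known sigma.
edge = np.zeros((64, 64))
edge[:, 32:] = 1.0
observed = gaussian_filter(edge, sigma=2.0)
est = estimate_blur_sigma(edge, observed, sigmas=np.arange(0.5, 4.01, 0.25))
```

On this noise-free synthetic example the grid search recovers the true sigma; on real images, the paper's intensity-dependent filtering and camera-response modeling are what make the histogram comparison robust.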

© 2012 Optical Society of America

OCIS Codes
(150.1135) Machine vision : Algorithms
(100.4995) Image processing : Pattern recognition, metrics

ToC Category:
Image Processing

Original Manuscript: October 17, 2011
Revised Manuscript: May 22, 2012
Manuscript Accepted: June 19, 2012
Published: July 27, 2012

Huei-Yung Lin and Xin-Han Chou, "Defocus blur parameters identification by histogram matching," J. Opt. Soc. Am. A 29, 1694-1706 (2012)


