Applied Optics

  • Editor: Joseph N. Mait
  • Vol. 51, Iss. 31 — Nov. 1, 2012
  • pp: 7529–7536

Target recognition of ladar range images using even-order Zernike moments

Zheng-Jun Liu, Qi Li, Zhi-Wei Xia, and Qi Wang


Applied Optics, Vol. 51, Issue 31, pp. 7529-7536 (2012)
http://dx.doi.org/10.1364/AO.51.007529


Abstract

Ladar range images have attracted considerable attention in automatic target recognition. In this paper, Zernike moments (ZMs) are applied to classify targets in range images acquired from arbitrary azimuth angles. However, ZMs suffer from high computational costs. To improve target recognition performance with small training samples, even-order ZMs combined with serial-parallel backpropagation neural networks (BPNNs) are used to recognize targets in range images. Both the rotation invariance and the classification performance of the even-order ZMs are found to be better than those of odd-order moments and of moments compressed by principal component analysis. The experimental results demonstrate that combining even-order ZMs with serial-parallel BPNNs significantly improves the recognition rate for small samples.

© 2012 Optical Society of America
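
The abstract's central idea, keeping only the magnitudes of even-order Zernike moments as rotation-invariant features, can be sketched in a few lines. The NumPy code below is a minimal illustration under assumed choices (the maximum order n_max and a synthetic 64x64 test image are illustrative), not the authors' implementation; the serial-parallel BPNN classifier stage is omitted.

# Minimal sketch (not the paper's implementation): rotation-invariant
# features from a ladar range image via magnitudes of even-order
# Zernike moments. The maximum order and the synthetic test image
# below are illustrative assumptions.
import numpy as np
from math import factorial

def radial_poly(rho, n, m):
    """Zernike radial polynomial R_{n,|m|}(rho)."""
    m = abs(m)
    R = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s)
                * factorial((n + m) // 2 - s)
                * factorial((n - m) // 2 - s)))
        R += c * rho ** (n - 2 * s)
    return R

def even_order_zernike_features(img, n_max=8):
    """Return |Z_nm| for even n <= n_max and 0 <= m <= n with n - m even."""
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    # map pixel coordinates onto the unit disk
    xn = (2 * x - (w - 1)) / (w - 1)
    yn = (2 * y - (h - 1)) / (h - 1)
    rho = np.hypot(xn, yn)
    theta = np.arctan2(yn, xn)
    mask = rho <= 1.0                      # ignore pixels outside the disk
    f = img.astype(float) * mask
    feats = []
    for n in range(0, n_max + 1, 2):       # even orders only
        for m in range(0, n + 1, 2):       # n - m must be even
            # Z_nm = (n+1)/pi * sum f(x,y) * R_nm(rho) * exp(-j*m*theta)
            # (discrete pixel-area factor omitted; absolute scale does not
            # affect classification)
            V_conj = radial_poly(rho, n, m) * np.exp(-1j * m * theta)
            Z = (n + 1) / np.pi * np.sum(f * V_conj)
            feats.append(abs(Z))           # magnitude is rotation invariant
    return np.array(feats)

# usage on a synthetic 64x64 "range image"
rng = np.random.default_rng(0)
print(even_order_zernike_features(rng.random((64, 64))).shape)

Because the moment magnitudes do not depend on in-plane rotation, a target viewed at an arbitrary azimuth angle maps to (nearly) the same feature vector, which is what makes them suitable inputs for the neural-network classifier described in the paper.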

OCIS Codes
(100.5760) Image processing : Rotation-invariant pattern recognition
(280.5600) Remote sensing and sensors : Radar
(100.4996) Image processing : Pattern recognition, neural networks

ToC Category:
Image Processing

History
Original Manuscript: April 18, 2012
Revised Manuscript: August 30, 2012
Manuscript Accepted: September 14, 2012
Published: October 24, 2012

Citation
Zheng-Jun Liu, Qi Li, Zhi-Wei Xia, and Qi Wang, "Target recognition of ladar range images using even-order Zernike moments," Appl. Opt. 51, 7529-7536 (2012)
http://www.opticsinfobase.org/ao/abstract.cfm?URI=ao-51-31-7529

