Journal of the Optical Society of America A

  • Editor: Franco Gori
  • Vol. 31, Iss. 4 — Apr. 1, 2014
  • pp: 734–744

Biologically inspired multilevel approach for multiple moving targets detection from airborne forward-looking infrared sequences

Yansheng Li, Yihua Tan, Hang Li, Tao Li, and Jinwen Tian

In this paper, a biologically inspired multilevel approach is proposed for simultaneously detecting multiple independently moving targets in airborne forward-looking infrared (FLIR) sequences. Owing to the moving platform, the low contrast of infrared images, and the nonrepeatability of the target signature, moving-target detection in FLIR sequences remains an open problem. Existing approaches cope with the moving infrared camera by estimating a six-parameter affine or eight-parameter planar projective transformation between adjacent frames, and this estimation has become the bottleneck to further improving detection performance. The proposed approach avoids it entirely and instead comprises three sequential modules: motion perception, which efficiently extracts motion cues; attended-motion-view extraction, which coarsely localizes moving targets; and appearance perception within the local attended motion views, which accurately detects them. Experimental results demonstrate that the proposed approach is efficient and outperforms the compared state-of-the-art approaches.
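The three-module structure described above can be sketched as a simple pipeline. This is only an illustrative skeleton under loose assumptions, not the paper's method: all function names are hypothetical, the motion-perception stage is stood in for by plain frame differencing, and the attended-view and appearance stages use trivial thresholding, whereas the paper's modules are biologically inspired and far more elaborate.

```python
import numpy as np

def motion_perception(prev_frame, curr_frame):
    # Stand-in motion cue: absolute frame difference.
    # (Hypothetical placeholder for the paper's motion-perception module.)
    return np.abs(curr_frame.astype(float) - prev_frame.astype(float))

def extract_attended_views(motion_map, thresh=30.0, pad=2):
    # Coarse localization: threshold the motion map and return a padded
    # bounding box around the active region (single-region toy version).
    mask = motion_map > thresh
    if not mask.any():
        return []
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    h, w = motion_map.shape
    return [(max(r0 - pad, 0), min(r1 + pad, h - 1),
             max(c0 - pad, 0), min(c1 + pad, w - 1))]

def appearance_perception(frame, view, contrast=20.0):
    # Accurate stage stand-in: accept an attended view only if its local
    # intensity contrast is high enough to plausibly contain a target.
    r0, r1, c0, c1 = view
    patch = frame[r0:r1 + 1, c0:c1 + 1].astype(float)
    return (patch.max() - patch.min()) > contrast

def detect_moving_targets(prev_frame, curr_frame):
    # Module 1: motion cues -> Module 2: attended views -> Module 3: appearance.
    motion_map = motion_perception(prev_frame, curr_frame)
    views = extract_attended_views(motion_map)
    return [v for v in views if appearance_perception(curr_frame, v)]
```

Note that this sketch never estimates an inter-frame affine or projective transform; each stage only refines the output of the previous one on local image evidence, which mirrors the coarse-to-fine flow the abstract describes.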

© 2014 Optical Society of America

OCIS Codes
(040.2480) Detectors : FLIR, forward-looking infrared
(100.2960) Image processing : Image analysis
(330.4150) Vision, color, and visual optics : Motion detection
(330.4270) Vision, color, and visual optics : Vision system neurophysiology

Original Manuscript: October 15, 2013
Revised Manuscript: January 23, 2014
Manuscript Accepted: February 7, 2014
Published: March 17, 2014

Virtual Issues
Vol. 9, Iss. 6 Virtual Journal for Biomedical Optics

Yansheng Li, Yihua Tan, Hang Li, Tao Li, and Jinwen Tian, "Biologically inspired multilevel approach for multiple moving targets detection from airborne forward-looking infrared sequences," J. Opt. Soc. Am. A 31, 734-744 (2014)
