OSA's Digital Library

Applied Optics

APPLICATIONS-CENTERED RESEARCH IN OPTICS

  • Editor: Joseph N. Mait
  • Vol. 50, Iss. 33 — Nov. 20, 2011
  • pp: 6302–6312

Detection and tracking of sea-surface targets in infrared and visual band videos using the bag-of-features technique with scale-invariant feature transform

Tolga Can, A. Onur Karalı, and Tayfun Aytaç  »View Author Affiliations


Applied Optics, Vol. 50, Issue 33, pp. 6302-6312 (2011)
http://dx.doi.org/10.1364/AO.50.006302








Abstract

Sea-surface targets are automatically detected and tracked using the bag-of-features (BOF) technique with the scale-invariant feature transform (SIFT) in infrared (IR) and visual (VIS) band videos. Features corresponding to the sea-surface targets and the background are first clustered offline using a training set, and these clusters are then used for online target detection with the BOF technique. For tracking, the features belonging to a target are matched to those in the subsequent frame using a set of heuristic rules. Tracking performance is compared against an optical-flow-based method with respect to ground-truth target positions for several real IR and VIS band videos and synthetic IR videos. The scenarios comprise videos recorded or generated at different times of day, containing single and multiple targets at different ranges and orientations. The experimental results show that sea-surface targets can be detected and tracked with reasonable accuracy by using the BOF technique with SIFT in both IR and VIS band videos.
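The offline-clustering and online-histogram steps of the BOF pipeline described in the abstract can be sketched as follows. This is an illustrative outline only, not the authors' implementation: the paper uses 128-dimensional SIFT descriptors extracted from IR/VIS frames, whereas here plain NumPy vectors stand in for them, and the codebook is built with a minimal k-means written out explicitly.

```python
import numpy as np

def build_codebook(descriptors, k, iters=20, seed=0):
    """Offline step: cluster training descriptors into k visual words.

    descriptors: (n, d) array of feature vectors (e.g., SIFT descriptors
    pooled from target and background training regions).
    Returns the (k, d) cluster centers (the BOF codebook).
    """
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), size=k, replace=False)]
    for _ in range(iters):
        # Assign each descriptor to its nearest center (Euclidean distance).
        dists = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each center as the mean of its assigned descriptors.
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def bof_histogram(descriptors, centers):
    """Online step: quantize a frame's descriptors against the codebook
    and return a normalized visual-word histogram, which can then be
    compared to target/background histograms for detection."""
    dists = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()
```

In the paper's setting, a detection decision would come from comparing such a histogram (computed over a candidate image region) with the histograms learned for target and background classes; the frame-to-frame feature matching used for tracking is a separate step not shown here.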

© 2011 Optical Society of America

OCIS Codes
(070.5010) Fourier optics and signal processing : Pattern recognition
(100.2000) Image processing : Digital image processing
(100.5010) Image processing : Pattern recognition
(110.3080) Imaging systems : Infrared imaging
(330.1880) Vision, color, and visual optics : Detection
(100.4999) Image processing : Pattern recognition, target tracking

ToC Category:
Imaging Systems

History
Original Manuscript: February 3, 2011
Revised Manuscript: September 19, 2011
Manuscript Accepted: September 30, 2011
Published: November 18, 2011

Citation
Tolga Can, A. Onur Karalı, and Tayfun Aytaç, "Detection and tracking of sea-surface targets in infrared and visual band videos using the bag-of-features technique with scale-invariant feature transform," Appl. Opt. 50, 6302-6312 (2011)
http://www.opticsinfobase.org/ao/abstract.cfm?URI=ao-50-33-6302



