Optics Express

  • Editor: Andrew M. Weiner
  • Vol. 22, Iss. 18 — Sep. 8, 2014
  • pp: 21577–21588

Arbitrary cylinder color model for the codebook based background subtraction

Zhi Zeng and Jianyuan Jia

Optics Express, Vol. 22, Issue 18, pp. 21577-21588 (2014)

The codebook background subtraction approach is widely used in computer vision applications. One of its distinguishing features is the cylinder color model used to cope with illumination changes, and the performance of the approach depends strongly on this color model. However, we have found that the color model is valid only if the spectral components of the light source change in the same proportion, which does not hold in many practical cases; when it fails, the performance of the approach degrades significantly. To tackle this problem, we propose an arbitrary cylinder color model with a highly efficient updating strategy. This model uses cylinders whose axes need not pass through the origin, extending the cylinder color model to much more general cases. Experimental results show that, with no loss of real-time performance, the proposed model reduces the misclassification rate of the cylinder color model by more than fifty percent.
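The classical cylinder color model measures color distortion as the perpendicular distance of a pixel's RGB value from an axis that passes through the origin along the codeword's color; the abstract's generalization allows the axis to pass through an arbitrary point instead. A minimal numerical sketch of the two distance computations (function names and the point/direction parameterization are illustrative, not the paper's notation):

```python
import numpy as np

def color_distortion(x, v):
    # Classical cylinder model: the axis runs through the origin along
    # the codeword color v. Distortion is the perpendicular distance of
    # pixel color x from that axis.
    x = np.asarray(x, dtype=float)
    v = np.asarray(v, dtype=float)
    proj = np.dot(x, v) / np.dot(v, v)  # scalar projection coefficient
    return np.linalg.norm(x - proj * v)

def arbitrary_color_distortion(x, p0, d):
    # Arbitrary cylinder (sketch): the axis is a line through point p0
    # with direction d, no longer constrained to pass through the origin.
    x = np.asarray(x, dtype=float)
    p0 = np.asarray(p0, dtype=float)
    d = np.asarray(d, dtype=float)
    w = x - p0                           # shift so the axis passes through 0
    proj = np.dot(w, d) / np.dot(d, d)
    return np.linalg.norm(w - proj * d)
```

In either case, a pixel is matched to a codeword when its distortion falls below a threshold (together with a brightness-range test in the codebook scheme); the arbitrary-axis form covers illumination changes in which the spectral components do not scale proportionally.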

© 2014 Optical Society of America

OCIS Codes
(110.2960) Imaging systems : Image analysis
(330.4150) Vision, color, and visual optics : Motion detection

ToC Category:
Image Processing

Original Manuscript: May 8, 2014
Revised Manuscript: July 28, 2014
Manuscript Accepted: August 22, 2014
Published: August 29, 2014

Zhi Zeng and Jianyuan Jia, "Arbitrary cylinder color model for the codebook based background subtraction," Opt. Express 22, 21577-21588 (2014)


