Virtual Journal for Biomedical Optics

| EXPLORING THE INTERFACE OF LIGHT AND BIOMEDICINE

  • Editors: Andrew Dunn and Anthony Durkin
  • Vol. 9, Iss. 5 — Apr. 29, 2014

Robust metric for the evaluation of visual saliency algorithms

Ali Alsam and Puneet Sharma  »View Author Affiliations


JOSA A, Vol. 31, Issue 3, pp. 532-540 (2014)
http://dx.doi.org/10.1364/JOSAA.31.000532


Abstract

In this paper, we analyzed eye-fixation data obtained from 15 observers viewing 1003 images. Studying the eigen-decomposition of the correlation matrix constructed from the fixation data of one observer viewing all images, we observed that a single eigenvector accounts for 23% of the data. This finding implies a repeated viewing pattern that is independent of image content. Examination of this pattern revealed that it is highly correlated with the center region of the image. The presence of a repeated viewing pattern raised the following question: can the statistical information contained in the first eigenvector be used to separate the fixations that belong to this pattern from those that are driven by image features? To answer this question, we designed a robust AUC metric that uses this statistical analysis to better judge the relative goodness of different saliency algorithms.
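The pipeline the abstract describes can be sketched in a few lines of NumPy: stack vectorized fixation maps, take the eigen-decomposition (via SVD) to find the dominant image-independent pattern and its variance share, project it out, and score a saliency map with a rank-based AUC. This is a minimal illustration on synthetic data; the array shapes, variable names, toy fixation maps, and the Mann-Whitney form of the AUC are assumptions for demonstration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for one observer's fixation maps: n_images vectorized
# fixation-density maps of size h*w (the paper uses 1003 real images).
n_images, h, w = 50, 16, 16
X = rng.random((n_images, h * w))

# Center each map and take the SVD; the right singular vectors Vt are the
# eigenvectors of the correlation (scatter) matrix of the fixation data.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Fraction of the data accounted for by the first eigenvector
# (the paper reports about 23% for real fixation data).
var_first = s[0] ** 2 / np.sum(s ** 2)

# Filter out the repeated, image-independent pattern by projecting each
# fixation map onto the orthogonal complement of the first eigenvector.
v1 = Vt[0]
X_filtered = Xc - np.outer(Xc @ v1, v1)

def auc_score(saliency, fixated_mask):
    """Rank-based AUC: how well saliency values separate fixated pixels
    from non-fixated ones (Mann-Whitney U scaled to [0, 1])."""
    pos = saliency[fixated_mask]
    neg = saliency[~fixated_mask]
    ranks = np.argsort(np.argsort(np.concatenate([pos, neg])))
    u = ranks[: len(pos)].sum() - len(pos) * (len(pos) - 1) / 2
    return u / (len(pos) * len(neg))

# Hypothetical saliency map scored against a correlated fixation mask.
saliency = rng.random(h * w)
fixations = saliency + 0.5 * rng.random(h * w) > 1.0
print(var_first, auc_score(saliency, fixations))
```

Filtering before scoring is the key idea: once the center-biased component is removed, the AUC rewards a saliency algorithm only for predicting the fixations that depend on image features.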

© 2014 Optical Society of America

OCIS Codes
(070.0070) Fourier optics and signal processing : Fourier optics and signal processing
(100.2960) Image processing : Image analysis
(100.5010) Image processing : Pattern recognition
(330.4060) Vision, color, and visual optics : Vision modeling

ToC Category:
Image Processing

History
Original Manuscript: May 28, 2013
Revised Manuscript: October 13, 2013
Manuscript Accepted: November 22, 2013
Published: February 13, 2014

Virtual Issues
Vol. 9, Iss. 5 Virtual Journal for Biomedical Optics

Citation
Ali Alsam and Puneet Sharma, "Robust metric for the evaluation of visual saliency algorithms," J. Opt. Soc. Am. A 31, 532-540 (2014)
http://www.opticsinfobase.org/vjbo/abstract.cfm?URI=josaa-31-3-532



