OSA's Digital Library

Virtual Journal for Biomedical Optics

EXPLORING THE INTERFACE OF LIGHT AND BIOMEDICINE

  • Editors: Andrew Dunn and Anthony Durkin
  • Vol. 6, Iss. 3 — Mar. 18, 2011

Estimating the usefulness of distorted natural images using an image contour degradation measure

David M. Rouse, Sheila S. Hemami, Romuald Pépion, and Patrick Le Callet


JOSA A, Vol. 28, Issue 2, pp. 157-188 (2011)
http://dx.doi.org/10.1364/JOSAA.28.000157


Abstract

Quality estimators aspire to quantify the perceptual resemblance, but not the usefulness, of a distorted image when compared to a reference natural image. However, humans can successfully accomplish tasks (e.g., object identification) using visibly distorted images that are not necessarily of high quality. A suite of novel subjective experiments reveals that quality does not accurately predict utility (i.e., usefulness). Thus, even accurate quality estimators cannot accurately estimate utility. In the absence of utility estimators, leading quality estimators are assessed as both quality and utility estimators and dismantled to understand those image characteristics that distinguish utility from quality. A newly proposed utility estimator demonstrates that a measure of contour degradation is sufficient to accurately estimate utility and is argued to be compatible with shape-based theories of object perception.

© 2011 Optical Society of America
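
To make the abstract's central idea concrete, the sketch below computes a simple contour degradation score between a reference image and a distorted version of it. This is a minimal toy measure, not the estimator proposed in the paper: the Canny detector, the Gaussian width sigma, the dilation tolerance, and the normalized Hamming-style disagreement count are all illustrative assumptions, and scikit-image and SciPy are assumed available.

    import numpy as np
    from scipy import ndimage
    from skimage import feature

    def contour_degradation(reference, distorted, sigma=1.0, tolerance=2):
        """Toy contour degradation score for two grayscale images.

        Returns the number of pixels at which the (dilated) reference
        and distorted contour maps disagree, normalized by the number
        of reference contour pixels. Higher values mean the distorted
        image's contours deviate more from the reference's.
        """
        # Extract binary contour maps. The Canny detector and sigma are
        # illustrative choices, not the paper's configuration.
        ref_edges = feature.canny(reference, sigma=sigma)
        dst_edges = feature.canny(distorted, sigma=sigma)

        # Dilate both maps so small spatial shifts of a contour are not
        # counted as degradation (tolerance is an assumed parameter).
        ref_d = ndimage.binary_dilation(ref_edges, iterations=tolerance)
        dst_d = ndimage.binary_dilation(dst_edges, iterations=tolerance)

        # Hamming-style disagreement count between the two binary maps.
        disagreement = np.logical_xor(ref_d, dst_d).sum()
        return disagreement / max(ref_d.sum(), 1)

Under this toy measure, blur that erases fine contours and noise that introduces spurious ones both raise the score, while distortions that leave the contour map intact score near zero, which is the behavior a utility estimate, as opposed to a quality estimate, should reward.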

OCIS Codes
(110.2960) Imaging systems : Image analysis
(110.3000) Imaging systems : Image quality assessment
(110.3925) Imaging systems : Metrics

ToC Category:
Imaging Systems

History
Original Manuscript: July 20, 2010
Revised Manuscript: November 5, 2010
Manuscript Accepted: November 8, 2010
Published: January 24, 2011

Virtual Issues
Vol. 6, Iss. 3 Virtual Journal for Biomedical Optics

Citation
David M. Rouse, Sheila S. Hemami, Romuald Pépion, and Patrick Le Callet, "Estimating the usefulness of distorted natural images using an image contour degradation measure," J. Opt. Soc. Am. A 28, 157-188 (2011)
http://www.opticsinfobase.org/vjbo/abstract.cfm?URI=josaa-28-2-157


