Journal of the Optical Society of America A

Optics, Image Science, and Vision

  • Editor: Franco Gori
  • Vol. 29, Iss. 8 — Aug. 1, 2012
  • pp: 1516–1528

Resolution loss without imaging blur

Tali Treibitz and Yoav Y. Schechner


JOSA A, Vol. 29, Issue 8, pp. 1516-1528 (2012)
http://dx.doi.org/10.1364/JOSAA.29.001516


Abstract

Image recovery under noise is widely studied. However, there is little emphasis on performance as a function of object size. In this work we analyze the probability of recovery as a function of object spatial frequency. The analysis uses a physical model for the acquired signal and noise, and also accounts for potential postacquisition noise filtering. Linear-systems analysis yields an effective cutoff frequency, which is induced by noise, despite having no optical blur in the imaging model. This means that a low signal-to-noise ratio (SNR) in images causes resolution loss, similar to image blur. We further consider the effect on SNR of pointwise image formation models, such as added specular or indirect reflections, additive scattering, radiance attenuation in haze, and flash photography. The result is a tool that assesses the ability to recover (within a desirable success rate) an object or feature having a certain size, distance from the camera, and radiance difference from its nearby background, per attenuation coefficient of the medium. The bounds rely on the camera specifications.

© 2012 Optical Society of America
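
As a rough, hypothetical illustration of the abstract's central claim, the sketch below estimates a noise-induced effective cutoff frequency. It is not the paper's derivation: it assumes a standard affine sensor-noise model (photon shot noise plus read noise), a single-scattering haze model I = L·exp(-βz) + A·(1 − exp(-βz)), and a simple k-sigma detection rule with noise pooling over the feature's area. All parameter names and values (full_well, read_noise, beta, airlight, k) are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only -- not the paper's derivation. It combines a standard
# affine sensor-noise model (shot noise + read noise), a single-scattering haze
# model I = L*exp(-beta*z) + A*(1 - exp(-beta*z)), and a k-sigma detection rule
# to estimate a noise-induced "effective cutoff" spatial frequency.

def effective_cutoff(delta_L, background, z, beta, airlight,
                     full_well=20_000, read_noise=8.0, k=3.0,
                     freqs=np.linspace(1e-3, 0.5, 500)):
    """Highest spatial frequency (cycles/pixel) judged recoverable.

    delta_L    : object/background radiance difference (normalized 0..1)
    background : background radiance (normalized 0..1)
    z, beta    : object distance and medium attenuation coefficient
    airlight   : ambient airlight radiance (normalized 0..1)
    full_well  : sensor electrons at saturation (sets the shot-noise scale)
    read_noise : read noise in electrons
    k          : required detection margin, in noise standard deviations
    """
    t = np.exp(-beta * z)                        # medium transmission
    signal = delta_L * t * full_well             # attenuated contrast, electrons
    mean_level = (background * t + airlight * (1.0 - t)) * full_well
    sigma = np.sqrt(mean_level + read_noise**2)  # shot noise + read noise

    # A feature of period 1/f can be averaged over roughly (1/(2f))^2 pixels,
    # so the pooled noise scales like sigma * 2f: small features pool less.
    pooled_sigma = sigma * 2.0 * freqs
    recoverable = signal >= k * pooled_sigma
    return freqs[recoverable].max() if recoverable.any() else 0.0

# The estimated cutoff drops as haze thickens: resolution loss with no blur.
for beta in (0.0, 0.05, 0.1, 0.2):
    fc = effective_cutoff(delta_L=0.1, background=0.3, z=20.0,
                          beta=beta, airlight=0.8)
    print(f"beta = {beta:.2f}   effective cutoff ~ {fc:.3f} cycles/pixel")
```

Under these assumed parameters, the printed cutoff falls as the attenuation coefficient grows, mirroring the abstract's point that low SNR alone, with no optical blur in the model, limits the recoverable spatial frequencies.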

OCIS Codes
(030.4280) Coherence and statistical optics : Noise in imaging systems
(110.0110) Imaging systems : Imaging systems

ToC Category:
Imaging Systems

History
Original Manuscript: February 15, 2012
Revised Manuscript: June 5, 2012
Manuscript Accepted: June 5, 2012
Published: July 11, 2012

Citation
Tali Treibitz and Yoav Y. Schechner, "Resolution loss without imaging blur," J. Opt. Soc. Am. A 29, 1516-1528 (2012)
http://www.opticsinfobase.org/josaa/abstract.cfm?URI=josaa-29-8-1516

