Journal of the Optical Society of America A

OPTICS, IMAGE SCIENCE, AND VISION

  • Vol. 22, Iss. 5, May 1, 2005
  • pp: 839–848

Radiometric framework for image mosaicking

Anatoly Litvinov and Yoav Y. Schechner


JOSA A, Vol. 22, Issue 5, pp. 839-848 (2005)
http://dx.doi.org/10.1364/JOSAA.22.000839








Abstract

Nonuniform exposures often affect imaging systems, e.g., owing to vignetting. Moreover, the sensor’s radiometric response may be nonlinear. These characteristics hinder photometric measurements and are particularly problematic in image mosaicking, in which images are stitched together to enhance the field of view. Mosaics suffer from seams stemming from radiometric inconsistencies between the raw images. Prior methods feathered the seams but did not address their root cause. We handle these problems in a unified framework. We propose a method for simultaneously estimating the radiometric response and the camera nonuniformity, based on a frame sequence acquired during camera motion. The estimated functions are then compensated for, permitting image mosaicking in which no seams are apparent and no dedicated seam-feathering methods are needed. Fundamental ambiguities associated with this estimation problem are stated.
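The compensation step the abstract describes can be sketched numerically. This is a minimal illustration, not the paper's method: the gamma response `g` and the radially symmetric vignetting field `M` below are hypothetical stand-ins for the functions the authors estimate from the frame sequence.

```python
import numpy as np

# Hypothetical camera characteristics (assumptions for illustration only):
#  - g: a nonlinear radiometric response, here a simple gamma curve
#  - vignetting: spatial nonuniformity, here a quadratic radial falloff
def g(E, gamma=2.2):
    """Map scene irradiance E in [0, 1] to a pixel value (nonlinear response)."""
    return E ** (1.0 / gamma)

def g_inv(I, gamma=2.2):
    """Invert the response: pixel value back to (relative) irradiance."""
    return I ** gamma

def vignetting(h, w, falloff=0.5):
    """Attenuation field M, strongest at the image corners."""
    y, x = np.mgrid[0:h, 0:w]
    r2 = ((x - w / 2) / (w / 2)) ** 2 + ((y - h / 2) / (h / 2)) ** 2
    return 1.0 - falloff * r2 / 2.0

# Simulate a raw frame: flat scene radiance, attenuated by M, then
# passed through the response g -- the two effects that cause seams.
h, w = 64, 64
E_true = np.full((h, w), 0.5)     # true (flat) scene radiance
M = vignetting(h, w)
I_raw = g(M * E_true)             # what the camera records

# Radiometric compensation as the abstract describes: invert the
# estimated response, then divide out the estimated nonuniformity.
E_hat = g_inv(I_raw) / M

print(np.allclose(E_hat, E_true))
```

With the true `g` and `M`, the flat radiance is recovered exactly; with estimated ones it is recovered only up to the fundamental ambiguities the paper states.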

© 2005 Optical Society of America

OCIS Codes
(040.1490) Detectors : Cameras
(100.3020) Image processing : Image reconstruction-restoration
(100.3190) Image processing : Inverse problems
(110.0110) Imaging systems : Imaging systems
(150.0150) Machine vision : Machine vision
(350.2660) Other areas of optics : Fusion

History
Original Manuscript: June 14, 2004
Manuscript Accepted: October 5, 2004
Published: May 1, 2005

Citation
Anatoly Litvinov and Yoav Y. Schechner, "Radiometric framework for image mosaicking," J. Opt. Soc. Am. A 22, 839-848 (2005)
http://www.opticsinfobase.org/josaa/abstract.cfm?URI=josaa-22-5-839



References

  1. S. B. Kang, R. Weiss, “Can we calibrate a camera using an image of a flat, textureless Lambertian surface?” in Proceedings of European Conference on Computer Vision, (Springer, New York, 2000), Part 2, pp. 640–653.
  2. I. C. Khoo, M. V. Wood, M. Y. Shih, P. H. Chen, “Extremely nonlinear photosensitive liquid crystals for image sensing and sensor protection,” Opt. Express 4, 432–442 (1999). [CrossRef] [PubMed]
  3. N. Tabiryan, S. Nersisyan, “Liquid-crystal film eclipses the sun artificially,” Laser Focus World 38, 105–108 (2002).
  4. In different communities the terms mosaicing [5,6] and mosaicking [7–10] are used.
  5. D. Capel, A. Zisserman, “Automated mosaicing with super-resolution zoom,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE Press, Piscataway, N.J., 1998), pp. 885–891.
  6. S. Peleg, M. Ben-Ezra, Y. Pritch, “Omnistereo: panoramic stereo imaging,” IEEE Trans. Pattern Anal. Mach. Intell. 23, 279–290 (2001). [CrossRef]
  7. R. Kwok, J. C. Curlander, S. Pang, “An automated system for mosaicking spaceborne SAR imagery,” Int. J. Remote Sens. 11, 209–223 (1990). [CrossRef]
  8. R. Eustice, O. Pizarro, H. Singh, J. Howland, “UWIT: Underwater Image Toolbox for optical image processing and mosaicking in MATLAB,” in Proceedings of IEEE International Symposium on Underwater Technology (IEEE Press, Piscataway, N.J., 2002), pp. 141–145.
  9. R. Garcia, J. Batlle, X. Cufi, J. Amat, “Positioning an underwater vehicle through image mosaicking,” in Proceedings of IEEE International Conference on Robotics and Automation (IEEE Press, Piscataway, N.J., 2001), Part 3, pp. 2779–2784.
  10. M. L. Duplaquet, “Building large image mosaics with invisible seam lines,” in Visual Information Processing VII, S. K. Park and R. D. Juday, eds., Proc. SPIE 3387, 369–377 (1998).
  11. C. J. Lada, D. L. DePoy, K. M. Merrill, I. Gatley, “Infrared images of M17,” Astrophys. J. 374, 533–539 (1991). [CrossRef]
  12. L. A. Soderblom, K. Edwards, E. M. Eliason, E. M. Sanchez, M. P. Charette, “Global color variations on the Martian surface,” Icarus 34, 446–464 (1978). [CrossRef]
  13. J. M. Uson, S. P. Boughn, J. R. Kuhn, “The central galaxy in Abell 2029: an old supergiant,” Science 250, 539–540 (1990). [CrossRef] [PubMed]
  14. A. R. Vasavada, A. P. Ingersoll, D. Banfield, M. Bell, P. J. Gierasch, M. J. S. Belton, “Galileo imaging of Jupiter’s atmosphere: the great red spot, equatorial region, and white ovals,” Icarus 135, 265–275 (1998). [CrossRef]
  15. S. Negahdaripour, X. Xu, A. Khemene, Z. Awan, “3-D motion and depth estimation from sea-floor images for mosaic-based station-keeping and navigation of ROV’s/AUV’s and high-resolution sea-floor mapping,” in Proceedings of IEEE Workshop on Autonomous Underwater Vehicles (IEEE Press, Piscataway, N.J., 1998), pp. 191–200.
  16. M. Hansen, P. Anandan, K. Dana, G. van der Wal, P. Burt, “Real-time scene stabilization and mosaic construction,” in Proceedings of IEEE Workshop on Applications of Computer Vision (IEEE Press, Piscataway, N.J., 1994), pp. 54–62.
  17. E. M. Reynoso, G. M. Dubner, W. M. Goss, E. M. Arnal, “VLA observations of neutral hydrogen in the direction of Puppis A,” Astron. J. 110, 318–324 (1995). [CrossRef]
  18. P. J. Burt, E. H. Adelson, “A multiresolution spline with application to image mosaics,” ACM Trans. Graphics 2, 217–236 (1983). [CrossRef]
  19. A. Levin, A. Zomet, S. Peleg, Y. Weiss, “Seamless image stitching in the gradient domain,” in Proceedings of European Conference in Computer Vision (Springer, New York, 2004), Part IV, pp. 377–390.
  20. H. Y. Shum, R. Szeliski, “Systems and experiment paper: construction of panoramic image mosaics with global and local alignment,” Int. J. Comput. Vision 36, 101–130 (2000). [CrossRef]
  21. M. Aggarwal, N. Ahuja, “High dynamic range panoramic imaging,” in Proceedings of IEEE International Conference on Computer Vision (IEEE Press, Piscataway, N.J., 2001), Vol. I, pp. 2–9.
  22. Y. Y. Schechner, S. K. Nayar, “Generalized mosaicing,” in Proceedings of IEEE International Conference on Computer Vision (IEEE Press, Piscataway, N.J., 2001), Vol. I, pp. 17–24.
  23. Y. Y. Schechner, S. K. Nayar, “Generalized mosaicing: high dynamic range in a wide field of view,” Int. J. Comput. Vision 53, 245–267 (2003). [CrossRef]
  24. P. E. Debevec, J. Malik, “Recovering high dynamic range radiance maps from photographs,” in Proceedings of SIGGRAPH 97 (Association for Computing Machinery, New York, 1997), pp. 369–378.
  25. S. Mann, R. W. Picard, “On being ‘undigital’ with digital cameras: extending dynamic range by combining differently exposed pictures,” in Proceedings of IS&T 48th Annual Conference (Society for Imaging Science and Technology, Springfield, Va., 1995), pp. 422–428.
  26. S. Mann, “Comparametric equations with practical applications in quantigraphic image processing,” IEEE Trans. Image Process. 9, 1389–1406 (2000). [CrossRef]
  27. T. Mitsunaga, S. K. Nayar, “Radiometric self calibration,” Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE Press, Piscataway, N.J., 1999), Vol. I, pp. 374–380.
  28. S. J. Kim, M. Pollefeys, “Radiometric alignment of image sequences,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE Press, Piscataway, N.J., 2004), Vol. I, pp. 645–652.
  29. J. Jia, C. K. Tang, “Image registration with global and local luminance alignment,” in Proceedings of IEEE Conference on Computer Vision (IEEE Press, Piscataway, N.J., 2003), Vol. I, pp. 156–163.
  30. F. M. Candocia, “Jointly registering images in domain and range by piecewise linear comparametric analysis,” IEEE Trans. Image Process. 12, 409–419 (2003). [CrossRef]
  31. S. Mann, R. Mann, “Quantigraphic imaging: estimating the camera response and exposures from differently exposed images,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE Press, Piscataway, N.J., 2001), Vol. 1, pp. 842–849.
  32. M. D. Grossberg, S. K. Nayar, “Determining the camera response from images: what is knowable?,” IEEE Trans. Pattern Anal. Mach. Intell. 25, 1455–1467 (2003). [CrossRef]
  33. P. Törle, “Scene-based correction of image sensor deficiencies,” M.Sc. thesis (Linköping Institute of Technology, Linköping, Sweden, 2003).
  34. R. C. Hardie, M. M. Hayat, E. Armstrong, B. Yasuda, “Scene-based nonuniformity correction with video sequences and registration,” Appl. Opt. 39, 1241–1250 (2000). [CrossRef]
  35. B. M. Ratliff, M. M. Hayat, J. S. Tyo, “Radiometrically accurate scene-based nonuniformity correction for array sensors,” J. Opt. Soc. Am. A 20, 1890–1899 (2003). [CrossRef]
  36. S. N. Torres, J. E. Pezoa, M. Hayat, “Scene-based nonuniformity correction for focal plane arrays by the method of the inverse covariance form,” Appl. Opt. 42, 5872–5881 (2003). [CrossRef] [PubMed]
  37. H. Farid, “Blind inverse gamma correction,” IEEE Trans. Image Process. 10, 1428–1433 (2001). [CrossRef]
  38. S. Lin, J. Gu, S. Yamazaki, H. Shum, “Radiometric calibration from a single image,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE Press, Piscataway, N.J., 2004), Vol. II, pp. 938–946.
  39. S. Inoue, Video Microscopy (Plenum, New York, 1986), pp. 209–214.
  40. The radiometric response function is usually monotonically increasing. It monotonically decreases in negative films and in some camera modes.
  41. S. Hsu, H. S. Sawhney, R. Kumar, “Automated mosaics via topology inference,” IEEE Comput. Graphics Appl. 22, 44–54 (2002). [CrossRef]
  42. M. Irani, P. Anandan, J. Bergen, R. Kumar, S. Hsu, “Efficient representations of video sequences and their applications,” Signal Process. Image Commun. 8, 327–351 (1996).
  43. R. K. Sharma, M. Pavel, “Multisensor image registration,” in Proceedings of the Society for Information Display (Society for Information Display, Playa del Ray, Calif., 1997), Vol. XXVIII, pp. 951–954.
  44. P. Thevenaz, M. Unser, “Optimization of mutual information for multiresolution image registration,” IEEE Trans. Image Process. 9, 2083–2099 (2000). [CrossRef]
  45. P. Viola, W. M. Wells, “Alignment by maximization of mutual information,” Int. J. Comput. Vision 24, 137–154 (1997). [CrossRef]
  46. We may avoid the appearance of the trivial solution by expressing Eq. (15) in a matrix formulation. This is only one possible way to enforce a nontrivial g. Another possibility is to fix the boundary values of the range of g.
  47. We placed the filter a few centimeters ahead of the lens. If the filter is placed right next to the lens, it affects the aperture properties[48] without producing spatially varying effects in the image.
  48. H. Farid, E. P. Simoncelli, “Range estimation by optical differentiation,” J. Opt. Soc. Am. A 15, 1777–1786 (1998). [CrossRef]
