OSA's Digital Library

Virtual Journal for Biomedical Optics


EXPLORING THE INTERFACE OF LIGHT AND BIOMEDICINE

  • Editors: Andrew Dunn and Anthony Durkin
  • Vol. 8, Iss. 9 — Oct. 2, 2013

Unified multiframe super-resolution of matte, foreground, and background

Sahana M. Prabhu and A. N. Rajagopalan


JOSA A, Vol. 30, Issue 8, pp. 1524-1534 (2013)
http://dx.doi.org/10.1364/JOSAA.30.001524




Abstract

Reconstruction of a super-resolved image from multiple frames and extraction of the matte are two well-studied problems that have traditionally been solved independently. In this paper, we advocate a unified framework that assimilates matting within the super-resolution model. We show that joint estimation is advantageous: super-resolved edge information helps in obtaining a sharp matte, while the matte in turn aids in resolving fine details. We propose a multiframe approach to increase the spatial resolution of the matte, foreground, and background, and validate it extensively on examples from standard matting datasets.
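To make the coupling between matting and super-resolution concrete, the Python/NumPy sketch below models each low-resolution frame as a blurred, decimated version of the high-resolution composite alpha*F + (1 - alpha)*B, so that a single data term over all frames constrains the matte, foreground, and background simultaneously. This is only a minimal illustration under generic assumptions (a box-filter stand-in for the point spread function, no inter-frame motion or regularization); it is not the authors' actual model or code.

    import numpy as np

    def composite(alpha, F, B):
        # Standard matting (compositing) equation: I = alpha * F + (1 - alpha) * B
        return alpha[..., None] * F + (1.0 - alpha[..., None]) * B

    def box_downsample(img, factor):
        # Crude blur-plus-decimation stand-in for the camera's point spread function
        h, w, c = img.shape
        return img.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

    def data_cost(alpha_hr, F_hr, B_hr, lr_frames, factor=2):
        # Sum of squared residuals between each observed low-resolution frame and the
        # downsampled high-resolution composite (frame-to-frame motion is ignored here)
        pred = box_downsample(composite(alpha_hr, F_hr, B_hr), factor)
        return sum(np.sum((frame - pred) ** 2) for frame in lr_frames)

    # Toy usage: noiseless, perfectly aligned synthetic frames give (near-)zero cost
    rng = np.random.default_rng(0)
    H, W, r = 64, 64, 2
    alpha = rng.random((H, W))
    F, B = rng.random((H, W, 3)), rng.random((H, W, 3))
    frames = [box_downsample(composite(alpha, F, B), r) for _ in range(4)]
    print(data_cost(alpha, F, B, frames, r))

In a joint estimation scheme of this kind, minimizing such a data term (plus suitable priors) with respect to the high-resolution matte, foreground, and background together is what lets sharp super-resolved edges and an accurate matte reinforce each other; the paper's specific warping model, priors, and optimizer are not reproduced in this sketch.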

© 2013 Optical Society of America

OCIS Codes
(100.0100) Image processing : Image processing
(100.3190) Image processing : Inverse problems
(100.6640) Image processing : Superresolution

ToC Category:
Image Processing

History
Original Manuscript: January 15, 2013
Revised Manuscript: April 18, 2013
Manuscript Accepted: May 13, 2013
Published: July 15, 2013

Virtual Issues
Vol. 8, Iss. 9 Virtual Journal for Biomedical Optics

Citation
Sahana M. Prabhu and A. N. Rajagopalan, "Unified multiframe super-resolution of matte, foreground, and background," J. Opt. Soc. Am. A 30, 1524-1534 (2013)
http://www.opticsinfobase.org/vjbo/abstract.cfm?URI=josaa-30-8-1524



