Optics Express

Editor: Andrew M. Weiner
Vol. 21, Iss. 9 — May 6, 2013, pp. 11181–11186

Reflectance field display

Ryoichi Horisaki and Jun Tanida

Optics Express, Vol. 21, Issue 9, pp. 11181–11186 (2013)
http://dx.doi.org/10.1364/OE.21.011181

Abstract

We propose a display for stereoscopically representing an arbitrary object that is responsive to an arbitrary physical illumination source in the display environment. Our scheme is based on the eight-dimensional reflectance field, which contains angular and spatial information of incoming and outgoing light rays of an object, and is also known as the bidirectional scattering surface reflectance distribution function (BSSRDF). This system is composed of an integral photography unit, an integral display unit, and a processor connecting these units. The concept was demonstrated experimentally. In the demonstrations, a stereoscopically represented object responded to changes in physical illumination coming toward the display.

© 2013 OSA

1. Introduction

Fig. 1 Incoming and outgoing light field of an object.

To display the outgoing light field (OLF) of an object, various systems have been proposed based on the concept of integral display (ID), in which a lens array, a lenticular lens, or a projector array is used to reproduce the light rays [1, 2, 7, 8]. These ID systems represent the object stereoscopically by reproducing the OLF with these optics.

Some displays that can respond to changes in the physical illumination (illumination-responsive displays) have been proposed [9, 10]. In these displays, a camera measures the two-dimensional spatial position of an illumination source in the display environment, and an output image is calculated computationally from the measured position. Displays that change the displayed object in response to the light rays incoming to the object have also been proposed [11, 12]. The incoming light field (ILF) generated by a physical illumination source is ℒin(u′, v′, s′, t′) in Fig. 1, where u′, v′ and s′, t′ are the angular and spatial coordinates of the ILF, respectively. These ILF-responsive displays use passive optics, such as a multi-layered lens array or a liquid surface, to change the represented object image in response to the ILF from a physical illumination source in the display environment. However, a display that reproduces the OLF in response to the ILF toward the display has not yet been realized.

Integral photography (IP) and ID are well-known methods of observing and reproducing the light field [13]. They have been combined to realize display systems that capture and reproduce the OLF [14–16]. In these systems, an object is illuminated by a fixed light source, and only the OLF of the object, not the ILF, is captured by an IP unit and reproduced by an ID unit. Furthermore, ID systems using computer graphics to generate an input integral image have been demonstrated for high-quality displays [17–19]. In these systems, the IP unit is implemented virtually by computer graphics: in the virtual world, the IP unit observes the OLF of a virtually illuminated object, and the ID unit physically reproduces the OLF. In all of the systems mentioned in this paragraph, the represented object is unresponsive to the physical illumination in the display environment.

Here, we present a display that stereoscopically represents an arbitrary object responsive to an arbitrary physical illumination source in the display environment; that is, our display reproduces an OLF that changes in response to the ILF of the object. The display system is based on the reflectance field, which contains both the OLF and the ILF [20]. The eight-dimensional reflectance field function ℛ(u, v, s, t, u′, v′, s′, t′) is written as

ℛ(u, v, s, t, u′, v′, s′, t′) = dℒout(u, v, s, t) / dℒin(u′, v′, s′, t′),
(1)

where d denotes an infinitesimal quantity. This function is also known as the bidirectional scattering surface reflectance distribution function (BSSRDF), which has been used to express translucent materials in computer renderings [21, 22]. The reflectance field ℛ can be interpreted as the response OLF ℒout to an impulse ILF ℒin. The reflectance field enables image-based rendering of an object under an arbitrary camera and an arbitrary illumination. Capturing the eight-dimensional reflectance field generally requires a long observation time, but some fast methods have been proposed [23, 24]. In this paper, we experimentally demonstrate the proposed concept, in which a represented object is stereoscopic and its appearance changes in response to changes of the physical light rays coming toward the display, using a computer-generated reflectance field.
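The impulse interpretation of Eq. (1) can be checked on a small discretized example: when the ILF contains a single unit ray, the resulting OLF is exactly the matching slice of ℛ. The sizes and random values below are toy assumptions for illustration, not the paper's data.

```python
import numpy as np

# Hypothetical discretized reflectance field R with axes
# (u, v, s, t, u', v', s', t'); toy sizes, assumed for this sketch.
rng = np.random.default_rng(0)
R = rng.random((2, 1, 4, 4, 2, 1, 3, 3))

# An impulse ILF: a single incoming ray at angular index (1, 0)
# and spatial index (2, 1).
L_in = np.zeros((2, 1, 3, 3))
L_in[1, 0, 2, 1] = 1.0

# Linear light transport: the OLF is R contracted with the ILF
# over the incoming-ray axes (u', v', s', t').
L_out = np.tensordot(R, L_in, axes=([4, 5, 6, 7], [0, 1, 2, 3]))

# For an impulse ILF, the response OLF equals one slice of R,
# i.e. R acts as the impulse response of the object.
assert np.allclose(L_out, R[..., 1, 0, 2, 1])
```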

2. Proposed display system

As shown in Fig. 2, our proposed system is composed of an IP unit for observing the ILF, an ID unit for reproducing the OLF, and a processor for calculating the OLF from the ILF. The IP unit consists of a camera and a lens array, and the ID unit has a projector and a lens array. The ID lens array can also serve as the IP lens array, as shown in Fig. 2. The camera and the projector focus on the focal plane of the lens array. The processor connects these IP and ID units.

Fig. 2 Schematic diagram of the computational reflectance field display system, where ILF is the incoming light field observed by the integral photography unit, and OLF is the outgoing light field reproduced by the integral display unit.

Before operating the display system, the reflectance field ℛ(u, v, s, t, u′, v′, s′, t′) of an object is captured by a reflectance field observation system or is generated computationally, and this is stored in the processor. The angular and spatial coordinates u, v, s, t of the OLF are on the focal plane of the lens array, as shown in Fig. 2, and the angular and spatial coordinates u′, v′, s′, t′ of the ILF are also on this focal plane.

The proposed display is a time-division system composed of three steps, as shown in Fig. 3. First, the ILF ℒin from the physical illumination source in the display environment is observed by the IP unit, as shown in Fig. 3(a). Pixels of the image captured by the IP unit are rearranged directly to generate the pixels of the ILF ℒin, as shown in Fig. 2 [13]. Next, the observed ILF ℒin is sent to the processor, which calculates the OLF ℒout from the ILF ℒin and the stored reflectance field ℛ, as shown in Fig. 3(b). The computational process is simply written as

ℒout(u, v, s, t) = Σu′ Σv′ Σs′ Σt′ ℛ(u, v, s, t, u′, v′, s′, t′) × ℒin(u′, v′, s′, t′).
(2)

The angle and the spatial position of the physical illumination source are not calculated explicitly in this system; therefore, the scheme is robust against variations in the illumination and the object. The calculated OLF ℒout is sent to the ID unit. Finally, the OLF ℒout is physically reproduced in the display environment by the ID unit, as shown in Fig. 3(c). Pixels of the OLF ℒout are likewise rearranged directly to generate the pixels of the image projected by the ID unit. Eventually, these three steps will be executed in real time, but in the following experimental demonstration they were executed separately as a proof of concept. This computational reflectance field display can reproduce an OLF ℒout that changes in response to the physical ILF ℒin in the display environment. Our scheme is image-based and is useful for photorealistic expression [3].
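Equation (2) is a linear contraction of the stored reflectance field with the observed ILF over the primed (incoming-ray) coordinates. A minimal sketch, using toy sizes and random stand-in data rather than the experimental values, writes out the sums literally and checks them against the equivalent tensor contraction:

```python
import numpy as np

# Eq. (2) as a direct sum over the incoming-ray coordinates.
# Sizes are toy values (assumed), not the experimental resolution.
rng = np.random.default_rng(1)
U, V, S, T = 2, 1, 3, 3      # outgoing axes u, v, s, t
Up, Vp, Sp, Tp = 2, 1, 2, 2  # incoming axes u', v', s', t'
R = rng.random((U, V, S, T, Up, Vp, Sp, Tp))
L_in = rng.random((Up, Vp, Sp, Tp))

# Literal nested sums of Eq. (2) ...
L_out = np.zeros((U, V, S, T))
for up in range(Up):
    for vp in range(Vp):
        for sp in range(Sp):
            for tp in range(Tp):
                L_out += R[..., up, vp, sp, tp] * L_in[up, vp, sp, tp]

# ... which is simply a tensor contraction over the primed axes.
assert np.allclose(L_out, np.einsum('uvstabcd,abcd->uvst', R, L_in))
```

Because the contraction is linear, no explicit estimate of the source's angle or position is needed; the ILF itself drives the output, which is the robustness property noted above.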

Fig. 3 Flow of the computational reflectance field display system. (a) ILF observation, (b) computational process, and (c) OLF reproduction.

3. Experimental verification

The proposed concept was demonstrated experimentally with a computer-generated reflectance field. A camera (Digital SLR: D200 manufactured by Nikon) and a projector (3-LCD projector: EH-TW400 manufactured by Epson) were arranged as shown in Fig. 2. A lenticular lens (pitch: 20 LPI, focal length: 3 mm, material: acrylic) was used instead of a lens array for simplicity. A diffuser was placed on the focal plane of the lenticular lens to increase the incoming and outgoing angles of the rays through the lenticular lens.

A teapot, shown in Fig. 4(a), was used as the object. Its surface was assumed to be a diffuse and specular material. The reflectance field ℛ of the teapot was generated computationally with OpenGL, by scanning the lateral position of the camera capturing the response and the angle and lateral position of a spotlight providing the impulse, based on Eq. (1). The size of the reflectance field ℛ was 4 × 1 × 128 × 128 × 4 × 1 × 32 × 32 pixels, where the variables are u, v, s, t, u′, v′, s′, and t′, respectively, as in Eq. (1). The angular resolutions (along the u, v, u′, and v′ axes) were calculated from the lens pitch of the lenticular lens and the projector's resolution on the focal plane of the lenticular lens. The spatial resolutions (along the s, t, s′, and t′ axes) were determined by the number of lenses in the lenticular lens. The spatial resolution of the ILF was assumed to be lower than that of the OLF because the spot produced by the illumination source was larger than a single pixel of the object. The center of the teapot was assumed to be located at the center of the lenticular lens, as shown in Fig. 4(b).
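The direct pixel rearrangement between an integral image and light-field coordinates, used for both the captured ILF and the projected OLF, can be sketched as follows. The sizes are toy assumptions; the idea is that each lenslet's patch of U × V pixels holds the angular samples at one spatial position:

```python
import numpy as np

# Hypothetical rearrangement from an integral image to a light field:
# the patch of U x V pixels under lenslet (s, t) encodes the angular
# samples (u, v) at that spatial position. Toy sizes, assumed.
U, V, S, T = 4, 1, 3, 3
integral_image = np.arange(U * V * S * T, dtype=float).reshape(S * U, T * V)

# Split each image axis into (spatial index, intra-lenslet angular
# index), then move the angular indices to the front: (u, v, s, t).
lf = (integral_image
      .reshape(S, U, T, V)
      .transpose(1, 3, 0, 2))

assert lf.shape == (U, V, S, T)
# The pixel under lenslet (s, t) at intra-lenslet offset (u, v)
# becomes light-field sample (u, v, s, t).
assert lf[2, 0, 1, 1] == integral_image[1 * U + 2, 1 * V + 0]
```

This mirrors the resolution bookkeeping above: the angular sampling comes from the pixels behind each lens, and the spatial sampling from the number of lenses.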

Fig. 4 Object and illumination. (a) Front and (b) top views.

A laser pointer was used as the physical illumination source in the display environment, and it illuminated a white square area on the lenticular lens, as indicated in Fig. 4(a), at two different incoming angles, from the right and from the left of the observer. These incoming light beams and the actually illuminated location on the lenticular lens are shown by arrows in Fig. 4(b). Two images, one for each incoming angle, were captured by the camera. The captured images were resized and reshaped to 4 × 1 × 32 × 32 pixels to generate the ILFs ℒin. The OLFs ℒout were calculated from the ILFs ℒin and the reflectance field ℛ based on Eq. (2). Finally, the two calculated OLFs ℒout, each of size 4 × 1 × 128 × 128 pixels, were projected individually onto the lenticular lens by the projector.
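The shapes in this pipeline can be traced end to end. The paper's reflectance field is 4 × 1 × 128 × 128 × 4 × 1 × 32 × 32 (about 2.7 × 10⁸ entries); the sketch below uses a downscaled random stand-in, so the values and reduced sizes are assumptions, only the axis bookkeeping is from the text:

```python
import numpy as np

# Downscaled stand-in for the computer-generated reflectance field
# R(u, v, s, t, u', v', s', t'); random values, assumed.
rng = np.random.default_rng(2)
U, V, S, T = 4, 1, 16, 16        # OLF axes (downscaled from 128)
Up, Vp, Sp, Tp = 4, 1, 8, 8      # ILF axes (downscaled from 32)
R = rng.random((U, V, S, T, Up, Vp, Sp, Tp), dtype=np.float32)

# One captured illumination condition, resized/reshaped to ILF size.
L_in = rng.random((Up, Vp, Sp, Tp), dtype=np.float32)

# Eq. (2): contract over the incoming-ray axes to get the OLF,
# which is then rearranged into the image sent to the projector.
L_out = np.tensordot(R, L_in, axes=([4, 5, 6, 7], [0, 1, 2, 3]))
assert L_out.shape == (U, V, S, T)
```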

The results are shown in Fig. 5. The reproduced object under physical illumination from the right is shown in Figs. 5(a) and 5(b), which are views from the center and from the left, respectively. The parallax between them was well reproduced. The reproduced object under physical illumination from the left is shown in Figs. 5(c) and 5(d), again views from the center and from the left, respectively. The response to the physical ILF in the display environment, namely the shifting bright area as the incoming angle of the laser pointer's light changed, was demonstrated successfully, as can be seen by comparing Figs. 5(a) and 5(c), and Figs. 5(b) and 5(d).

Fig. 5 Reproduced object. Views from (a) the center and (b) the left under illumination from the right of the observer. Views from (c) the center and (d) the left under illumination from the left of the observer.

4. Conclusions

In this paper, we proposed a computational eight-dimensional reflectance field display system. It observes the four-dimensional physical ILF in the display environment and reproduces, in that environment, a four-dimensional OLF that changes in response to the observed ILF. The system is composed of an IP unit for observing the ILF, an ID unit for reproducing the OLF, and a processor for calculating the OLF from the ILF. The concept was experimentally verified with a computer-generated reflectance field. In the experiments, a lenticular lens was used instead of the lens array in Fig. 2 for simplicity. This simplification eliminated the vertical parallax of the proposed display, but that parallax can be readily implemented by using a lens array. A display representing the reflectance field, in other words the BSSRDF, of an object was demonstrated successfully. In the experiments, an object was represented stereoscopically, and it was responsive to the angles and spatial positions of light rays incoming toward the display from a physical illumination source.

References and links

1. A. Stern and B. Javidi, “Three-dimensional image sensing, visualization, and processing using integral imaging,” Proc. IEEE 94, 591–607 (2006).
2. X. Xiao, B. Javidi, M. Martinez-Corral, and A. Stern, “Advances in three-dimensional integral imaging: sensing, display, and applications,” Appl. Opt. 52, 546–560 (2013).
3. M. Levoy and P. Hanrahan, “Light field rendering,” in Proc. ACM SIGGRAPH (ACM Press, 1996), pp. 43–54.
4. G. M. Lippmann, “La photographie integrale,” Comptes-Rendus Academie des Sciences 146, 446–451 (1908).
5. R. Ng, “Fourier slice photography,” ACM Trans. Graph. 24, 735–744 (2005).
6. M. Levoy, “Light fields and computational imaging,” IEEE Computer 39, 46–55 (2006).
7. J. Arai, H. Kawai, and F. Okano, “Microlens arrays for integral imaging system,” Appl. Opt. 45, 9066–9078 (2006).
8. Y. Takaki and N. Nago, “Multi-projection of lenticular displays to construct a 256-view super multi-view display,” Opt. Express 18, 8824–8835 (2010).
9. S. K. Nayar, P. N. Belhumeur, and T. E. Boult, “Lighting sensitive display,” ACM Trans. Graph. 23, 963–979 (2004).
10. T. Koike and T. Naemura, “BRDF displays,” in Proc. SIGGRAPH ’07 poster presentation (2007), pp. 1–4.
11. M. Fuchs, R. Raskar, H.-P. Seidel, and H. P. A. Lensch, “Towards passive 6D reflectance field displays,” ACM Trans. Graph. 27, 58:1–58:8 (2008).
12. M. B. Hullin, H. P. A. Lensch, R. Raskar, H.-P. Seidel, and I. Ihrke, “Dynamic display of BRDFs,” in Proc. EUROGRAPHICS (2011), pp. 475–483.
13. A. Isaksen, L. McMillan, and S. J. Gortler, “Dynamically reparameterized light fields,” in Proc. SIGGRAPH ’00 (2000), pp. 297–306.
14. X. Sang, F. C. Fan, C. C. Jiang, S. Choi, W. Dou, C. Yu, and D. Xu, “Demonstration of a large-size real-time full-color three-dimensional display,” Opt. Lett. 34, 3803–3805 (2009).
15. Y. Taguchi, T. Koike, K. Takahashi, and T. Naemura, “TransCAIP: A live 3D TV system using a camera array and an integral photography display with interactive control of viewing parameters,” IEEE Trans. Vis. Comput. Graphics 15, 841–852 (2009).
16. X. Jiao, X. Zhao, Y. Yang, Z. Fang, and X. Yuan, “Dual-camera enabled real-time three-dimensional integral imaging pick-up and display,” Opt. Express 20, 27304–27311 (2012).
17. Y. Igarashi, H. Murata, and M. Ueda, “3-D display system using a computer generated integral photograph,” Jpn. J. Appl. Phys. 17, 1683 (1978).
18. S.-W. Min, S. Jung, J.-H. Park, and B. Lee, “Three-dimensional display system based on computer-generated integral photography,” Proc. SPIE 4297, 187–195 (2001).
19. H. Liao, M. Iwahara, N. Hata, and T. Dohi, “High-quality integral videography using a multiprojector,” Opt. Express 12, 1067–1076 (2004).
20. P. Debevec, T. Hawkins, C. Tchou, H.-P. Duiker, W. Sarokin, and M. Sagar, “Acquiring the reflectance field of a human face,” in Proc. SIGGRAPH ’00 (2000), pp. 145–156.
21. F. E. Nicodemus, J. C. Richmond, J. J. Hsia, I. W. Ginsberg, and T. Limperis, Geometrical Considerations and Nomenclature for Reflectance, vol. 160 of Monograph (National Bureau of Standards, US, 1977).
22. H. W. Jensen, S. R. Marschner, M. Levoy, and P. Hanrahan, “A practical model for subsurface light transport,” in Proc. SIGGRAPH ’01 (ACM, 2001), pp. 511–518.
23. R. Horisaki, Y. Tampa, and J. Tanida, “Compressive reflectance field acquisition using confocal imaging with variable coded apertures,” in Computational Optical Sensing and Imaging (2012), p. CTu3B.4.
24. S. Tagawa, Y. Mukaigawa, and Y. Yagi, “8-D reflectance field for computational photography,” in Proc. ICPR 2012 (2012), pp. 2181–2185.

OCIS Codes
(120.2040) Instrumentation, measurement, and metrology : Displays
(110.1758) Imaging systems : Computational imaging

ToC Category:
Imaging Systems

History
Original Manuscript: January 30, 2013
Revised Manuscript: April 24, 2013
Manuscript Accepted: April 25, 2013
Published: April 30, 2013

Citation
Ryoichi Horisaki and Jun Tanida, "Reflectance field display," Opt. Express 21, 11181-11186 (2013)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-21-9-11181


