## Reflectance field display

Optics Express, Vol. 21, Issue 9, pp. 11181-11186 (2013)

http://dx.doi.org/10.1364/OE.21.011181


### Abstract

We propose a display for stereoscopically representing an arbitrary object that is responsive to an arbitrary physical illumination source in the display environment. Our scheme is based on the eight-dimensional reflectance field, which contains angular and spatial information of incoming and outgoing light rays of an object, and is also known as the bidirectional scattering surface reflectance distribution function (BSSRDF). This system is composed of an integral photography unit, an integral display unit, and a processor connecting these units. The concept was demonstrated experimentally. In the demonstrations, a stereoscopically represented object responded to changes in physical illumination coming toward the display.

© 2013 OSA

## 1. Introduction


The outgoing light field (OLF) ℒ_out(*u*, *v*, *s*, *t*) is typically determined by using two parallel planes indicating the angle and spatial position of a ray, as shown in Fig. 1, where *u*, *v* and *s*, *t* are called the angular and spatial coordinates in this paper, respectively. The light field was originally applied to an information acquisition technique called integral photography (IP). IP uses a camera array or a lens array to observe the angles and the spatial positions of the rays [1, 2].


Displays responsive to the incoming light field (ILF) ℒ_in(*u*′, *v*′, *s*′, *t*′) in Fig. 1 have been proposed [9, 11], where *u*′, *v*′ and *s*′, *t*′ are the angular and spatial coordinates of the ILF, respectively. These ILF-responsive displays use passive optics, such as a multi-layered lens array or a liquid surface, to change the represented object image in response to the ILF from a physical illumination source in the display environment. However, displays that reproduce the OLF in response to the ILF toward the display have not yet been realized.


The reflectance field ℛ(*u*, *v*, *s*, *t*, *u*′, *v*′, *s*′, *t*′) is written as

ℛ(*u*, *v*, *s*, *t*, *u*′, *v*′, *s*′, *t*′) = dℒ_out(*u*, *v*, *s*, *t*) / dℒ_in(*u*′, *v*′, *s*′, *t*′),  (1)

where d denotes an infinitesimal quantity. This function is also known as the bidirectional scattering surface reflectance distribution function (BSSRDF), which has been used to express translucent materials in computer renderings [21, 22].


The reflectance field describes the *response* OLF ℒ_out to an *impulse* ILF ℒ_in, and it enables image-based rendering of an object under an arbitrary camera and illumination. Capturing the eight-dimensional reflectance field generally requires a long observation time, but fast acquisition methods have been proposed [23, 24]. In this paper, we experimentally demonstrate the proposed concept, in which the represented object is stereoscopic and its appearance changes in response to changes of the physical light rays coming toward the display, using a computer-generated reflectance field.
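This impulse-response view can be sketched numerically. Under the linearity of light transport, the discrete reflectance field is a matrix whose columns are the OLFs measured for impulse ILFs, so any ILF can be relit by superposition of those columns. The array sizes and the random "measurement" below are illustrative stand-ins, not values from the paper:

```python
import numpy as np

# Hypothetical sizes (illustration only; not the paper's values).
N_OUT = 16   # number of OLF rays, flattened over (u, v, s, t)
N_IN = 4     # number of ILF rays, flattened over (u', v', s', t')

rng = np.random.default_rng(0)

# The discrete reflectance field as a matrix R: column j is the
# *response* OLF to an *impulse* ILF lighting only ray j.
R = rng.random((N_OUT, N_IN))

def render(L_in):
    """Stand-in for a physical measurement: the OLF produced by an ILF."""
    return R @ L_in

# Acquire R column by column with impulse (one-hot) ILFs.
R_measured = np.stack(
    [render(np.eye(N_IN)[:, j]) for j in range(N_IN)], axis=1
)
assert np.allclose(R_measured, R)

# Linearity: any ILF is a weighted sum of impulses, so its OLF is the
# same weighted sum of the measured impulse responses.
L_in = rng.random(N_IN)
assert np.allclose(render(L_in), R_measured @ L_in)
```

The long observation time mentioned above comes from this column-by-column sweep: the number of required impulse measurements grows with the four-dimensional ILF resolution.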

## 2. Proposed display system

The reflectance field ℛ(*u*, *v*, *s*, *t*, *u*′, *v*′, *s*′, *t*′) of an object is captured by a reflectance field observation system or is generated computationally, and it is stored in the processor. The angular and spatial coordinates *u*, *v*, *s*, *t* of the OLF are on the focal plane of the lens array, as shown in Fig. 2, and the angular and spatial coordinates *u*′, *v*′, *s*′, *t*′ of the ILF are also on this focal plane.

First, the ILF ℒ_in from the physical illumination source in the display environment is observed by the IP unit, as shown in Fig. 3(a). Pixels of the image captured by the IP unit are rearranged directly to generate pixels of the ILF ℒ_in, as shown in Fig. 2 [13].
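The direct pixel rearrangement between a lenslet-array image and a four-dimensional light field can be sketched as an array reshape. The geometry below (pixels per lenslet, lenslet counts) is hypothetical; it only assumes that the pixels behind each lenslet sample angle and that the lenslet index samples position:

```python
import numpy as np

# Hypothetical lenslet-array geometry (illustrative, not the paper's).
U, V = 4, 4      # angular samples per lenslet (pixels behind one lens)
S, T = 8, 8      # spatial samples (number of lenslets)

rng = np.random.default_rng(1)
sensor = rng.random((S * U, T * V))   # raw image from the IP unit

# Direct pixel rearrangement: sensor pixel (s*U + u, t*V + v) is
# ray (u, v, s, t) of the ILF.
ilf = (sensor
       .reshape(S, U, T, V)          # split rows/cols into (lens, pixel)
       .transpose(1, 3, 0, 2))       # reorder axes to (u, v, s, t)

assert ilf.shape == (U, V, S, T)
assert ilf[2, 1, 5, 3] == sensor[5 * U + 2, 3 * V + 1]
```

No geometric reconstruction is involved; the mapping is a fixed permutation of pixels, which is what makes the pick-up step fast.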

Next, the ILF ℒ_in is sent to the processor, which calculates the OLF ℒ_out from the ILF ℒ_in and the stored reflectance field ℛ, as shown in Fig. 3(b). The computational process is simply written as

ℒ_out(*u*, *v*, *s*, *t*) = Σ_{*u*′, *v*′, *s*′, *t*′} ℛ(*u*, *v*, *s*, *t*, *u*′, *v*′, *s*′, *t*′) ℒ_in(*u*′, *v*′, *s*′, *t*′).  (2)

The angle and the spatial position of the physical illumination source are not calculated explicitly in this system; therefore, the scheme is robust against variations in the illumination and the object. The calculated OLF ℒ_out is sent to the ID unit. Finally, the OLF ℒ_out is physically reproduced in the display environment by the ID unit, as shown in Fig. 3(c). Pixels of the OLF ℒ_out are also rearranged directly to generate pixels of the projected image in the ID unit. Eventually, these three steps will be executed in real time, but in the following experimental demonstration they were executed separately as a proof of concept. This computational reflectance field display can reproduce an OLF ℒ_out that changes in response to the physical ILF ℒ_in in the display environment. Our scheme is image-based and is useful for photorealistic expression [3].
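The relighting step above is a linear contraction of the eight-dimensional reflectance field with the four-dimensional ILF over the primed coordinates. A minimal NumPy sketch, with illustrative array sizes:

```python
import numpy as np

# Illustrative sizes only; the coordinate layout follows the
# summation over (u', v', s', t') in the processing step.
U, V, S, T = 4, 1, 8, 8          # OLF coordinates (u, v, s, t)
Up, Vp, Sp, Tp = 2, 1, 4, 4      # ILF coordinates (u', v', s', t')

rng = np.random.default_rng(2)
R = rng.random((U, V, S, T, Up, Vp, Sp, Tp))   # reflectance field
L_in = rng.random((Up, Vp, Sp, Tp))            # observed ILF

# Contract the last four axes of R with the four axes of L_in:
# L_out(u,v,s,t) = sum over (u',v',s',t') of R * L_in.
L_out = np.tensordot(R, L_in, axes=4)

assert L_out.shape == (U, V, S, T)
# Spot-check one output ray against the explicit sum.
assert np.isclose(L_out[1, 0, 3, 2], np.sum(R[1, 0, 3, 2] * L_in))
```

Because this is a single fixed linear map, no illumination geometry needs to be estimated at display time, which matches the robustness claim above.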

## 3. Experimental verification

The reflectance field ℛ was sampled along the *u*, *v*, *s*, *t*, *u*′, *v*′, *s*′, and *t*′ axes in Eq. (1). The angular resolutions (along the *u*, *v*, *u*′, and *v*′ axes) were calculated from the lens pitch of the lenticular lens and the projector's resolution on the focal plane of the lenticular lens. The spatial resolutions (along the *s*, *t*, *s*′, and *t*′ axes) were determined by the number of lenses of the lenticular lens. The spatial resolution of the ILF was assumed to be lower than that of the OLF because the spot produced by the illumination source was larger than a single pixel of the object. The center of the teapot was assumed to be located at the center of the lenticular lens, as shown in Fig. 4(b).

Two ILFs ℒ_in were captured. The OLFs ℒ_out were calculated from the ILFs ℒ_in and the reflectance field ℛ based on Eq. (2). Finally, the two calculated OLFs ℒ_out, whose sizes were 4 × 1 × 128 × 128 pixels, were projected individually onto the lenticular lens by the projector.
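The projection step interleaves each 4 × 1 × 128 × 128 OLF into a single projector frame, the inverse of the pick-up rearrangement. A sketch under the assumed pixel-to-lenslet mapping (the mapping itself is an assumption of this example, not stated in the paper):

```python
import numpy as np

U, V, S, T = 4, 1, 128, 128      # OLF size from the experiment
rng = np.random.default_rng(3)
olf = rng.random((U, V, S, T))   # stand-in for a calculated OLF

# Inverse rearrangement: ray (u, v, s, t) becomes projector pixel
# (s*U + u, t*V + v), i.e. pixel (u, v) behind lenslet (s, t).
frame = olf.transpose(2, 0, 3, 1).reshape(S * U, T * V)

assert frame.shape == (512, 128)
assert frame[10 * U + 3, 7] == olf[3, 0, 10, 7]
```

Each lenslet of the lenticular sheet then emits its U × V pixels into distinct directions, physically reproducing the angular structure of ℒ_out.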

## 4. Conclusions

## References and links

1. A. Stern and B. Javidi, “Three-dimensional image sensing, visualization, and processing using integral imaging,” Proc. IEEE **94**, 591–607 (2006). [CrossRef]

2. X. Xiao, B. Javidi, M. Martinez-Corral, and A. Stern, “Advances in three-dimensional integral imaging: sensing, display, and applications,” Appl. Opt. **52**, 546–560 (2013). [CrossRef] [PubMed]

3. M. Levoy and P. Hanrahan, “Light field rendering,” in Proc. ACM SIGGRAPH (ACM Press, 1996), pp. 43–54.

4. G. M. Lippmann, “La photographie integrale,” Comptes-Rendus Academie des Sciences **146**, 446–451 (1908).

5. R. Ng, “Fourier slice photography,” ACM Trans. Graph. **24**, 735–744 (2005). [CrossRef]

6. M. Levoy, “Light fields and computational imaging,” IEEE Computer **39**, 46–55 (2006). [CrossRef]

7. J. Arai, H. Kawai, and F. Okano, “Microlens arrays for integral imaging system,” Appl. Opt. **45**, 9066–9078 (2006). [CrossRef] [PubMed]

8. Y. Takaki and N. Nago, “Multi-projection of lenticular displays to construct a 256-view super multi-view display,” Opt. Express **18**, 8824–8835 (2010). [CrossRef] [PubMed]

9. S. K. Nayar, P. N. Belhumeur, and T. E. Boult, “Lighting sensitive display,” ACM Trans. Graph. **23**, 963–979 (2004). [CrossRef]

10. T. Koike and T. Naemura, “BRDF displays,” in Proc. SIGGRAPH ’07 poster presentation (2007), pp. 1–4.

11. M. Fuchs, R. Raskar, H.-P. Seidel, and H. P. A. Lensch, “Towards passive 6D reflectance field displays,” ACM Trans. Graph. **27**, 58:1–58:8 (2008). [CrossRef]

12. M. B. Hullin, H. P. A. Lensch, R. Raskar, H.-P. Seidel, and I. Ihrke, “Dynamic display of BRDFs,” in Proc. EUROGRAPHICS (2011), pp. 475–483.

13. A. Isaksen, L. McMillan, and S. J. Gortler, “Dynamically reparameterized light fields,” in Proc. SIGGRAPH ’00 (2000), pp. 297–306. [CrossRef]

14. X. Sang, F. C. Fan, C. C. Jiang, S. Choi, W. Dou, C. Yu, and D. Xu, “Demonstration of a large-size real-time full-color three-dimensional display,” Opt. Lett. **34**, 3803–3805 (2009). [CrossRef] [PubMed]

15. Y. Taguchi, T. Koike, K. Takahashi, and T. Naemura, “TransCAIP: A live 3D TV system using a camera array and an integral photography display with interactive control of viewing parameters,” IEEE Trans. Vis. Comput. Graphics **15**, 841–852 (2009). [CrossRef]

16. X. Jiao, X. Zhao, Y. Yang, Z. Fang, and X. Yuan, “Dual-camera enabled real-time three-dimensional integral imaging pick-up and display,” Opt. Express **20**, 27304–27311 (2012). [CrossRef] [PubMed]

17. Y. Igarashi, H. Murata, and M. Ueda, “3-D display system using a computer generated integral photograph,” Jpn. J. Appl. Phys. **17**, 1683 (1978). [CrossRef]

18. S.-W. Min, S. Jung, J.-H. Park, and B. Lee, “Three-dimensional display system based on computer-generated integral photography,” in Proc. SPIE (2001), 4297, pp. 187–195.

19. H. Liao, M. Iwahara, N. Hata, and T. Dohi, “High-quality integral videography using a multiprojector,” Opt. Express **12**, 1067–1076 (2004). [CrossRef] [PubMed]

20. P. Debevec, T. Hawkins, C. Tchou, H.-P. Duiker, W. Sarokin, and M. Sagar, “Acquiring the reflectance field of a human face,” in Proc. SIGGRAPH ’00 (2000), pp. 145–156. [CrossRef]

21. F. E. Nicodemus, J. C. Richmond, J. J. Hsia, I. W. Ginsberg, and T. Limperis, *Geometrical Considerations and Nomenclature for Reflectance*, vol. 160 of Monograph (National Bureau of Standards, US, 1977).

22. H. W. Jensen, S. R. Marschner, M. Levoy, and P. Hanrahan, “A practical model for subsurface light transport,” in Proc. SIGGRAPH ’01 (ACM, New York, NY, USA, 2001), pp. 511–518. [CrossRef]

23. R. Horisaki, Y. Tampa, and J. Tanida, “Compressive reflectance field acquisition using confocal imaging with variable coded apertures,” in Computational Optical Sensing and Imaging (2012), p. CTu3B.4.

24. S. Tagawa, Y. Mukaigawa, and Y. Yagi, “8-D reflectance field for computational photography,” in Proc. ICPR 2012 (2012), pp. 2181–2185.

**OCIS Codes**

(120.2040) Instrumentation, measurement, and metrology : Displays

(110.1758) Imaging systems : Computational imaging

**ToC Category:**

Imaging Systems

**History**

Original Manuscript: January 30, 2013

Revised Manuscript: April 24, 2013

Manuscript Accepted: April 25, 2013

Published: April 30, 2013

**Citation**

Ryoichi Horisaki and Jun Tanida, "Reflectance field display," Opt. Express **21**, 11181-11186 (2013)

http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-21-9-11181

