## Homogeneous light field model for interactive control of viewing parameters of integral imaging displays

Optics Express, Vol. 20, Issue 13, pp. 14137-14151 (2012)

http://dx.doi.org/10.1364/OE.20.014137


### Abstract

A novel model for three-dimensional (3D) interactive control of the viewing parameters of integral imaging systems is established in this paper. Specifically, transformation matrices are derived in an extended homogeneous light field coordinate space based on the interactive control requirements of integral imaging displays. In this model, new elemental images can be synthesized directly from the ones captured in the record process to display 3D images with the expected viewing parameters, and no extra geometrical information about the 3D scene is required in the synthesis process. Computer simulation and optical experimental results show that reconstructed 3D scenes with depth control, lateral translation, and rotation can be achieved.

© 2012 OSA

## 1. Introduction


## 2. EIAs synthesis in interactive controllable II systems

### 2.1 Principle of the conventional two-step pickup model

… the distance *d*_s from the reference plane to the synthetic lenslet array. The depth information of the 3D scene, i.e., the position of the viewpoints [16–19], …

### 2.2 Proposed HLFT model

A ray propagating along the line (*x* − *x*_0)/*u*_0 = (*y* − *y*_0)/*v*_0 = *z*/*g*, where *g* is the gap between the lenslet array and the EIA plane, has two different intersections: (*x*_0, *y*_0) on the XOY plane and (*u*_0, *v*_0) in the corresponding UV coordinate system. Thus it can be expressed as (*x*_0, *y*_0, *u*_0, *v*_0) in the conventional light field coordinate space. Since a homogeneous coordinate is able to represent the transformations of both translation and rotation more concisely by a matrix, the ray is expressed by a coordinate vector **t** = (*x*_0, *y*_0, *u*_0, *v*_0, 1)′ in the homogeneous light field coordinate space.

The original lenslet array and its EIA used in the pickup process determine a homogeneous light field coordinate space *S*_o. The synthetic lenslet array and its EIA, which have the same parameters and are placed at the same position as the II display, determine another homogeneous light field coordinate space *S*_s. The projection transformation relationship of the light rays between these two coordinate spaces can be represented by a matrix **C**, and the generation of the interactively controlled EIAs can be implemented by a pixel mapping with a coordinate vector transformation. Specifically, assume a ray emitted from the 3D scene is recorded in the pickup EIA with a coordinate vector **t**_o = (*x*_o, *y*_o, *u*_o, *v*_o, 1)′ in coordinate space *S*_o, and it is denoted as **t**_s = (*x*_s, *y*_s, *u*_s, *v*_s, 1)′ in coordinate space *S*_s. The transformation relationship between these two vectors can be written as

**t**_o = **C** **t**_s,   (1)

where **C** is a 5 × 5 transformation matrix. The synthetic EIA can be obtained directly by mapping from the corresponding pixels in the original EIA using Eq. (1).
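In code, this parameterization is a direct translation of the line equation above. The following sketch (the function name and the numpy dependency are my own choices, not from the paper) builds the homogeneous light-field vector of the ray from a scene point through a given lenslet center:

```python
import numpy as np

def ray_through(point, lenslet_xy, g):
    """Homogeneous light-field vector (x0, y0, u0, v0, 1)' of the ray from
    the 3D scene point (x, y, z) through the lenslet centered at (x0, y0).
    From (x - x0)/u0 = (y - y0)/v0 = z/g it follows that
    u0 = g*(x - x0)/z and v0 = g*(y - y0)/z."""
    x, y, z = point
    x0, y0 = lenslet_xy
    return np.array([x0, y0, g * (x - x0) / z, g * (y - y0) / z, 1.0])

t = ray_through((12.0, 8.0, 120.0), (2.0, 3.0), g=10.4)
```

Any viewing-parameter change then acts on such vectors through a 5 × 5 matrix, as in Eq. (1).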

## 3. Calculation of the EIAs with the HLFT model

The synthesis of a controlled EIA consists of two processes: the first is the calculation of the transformation matrix **C** with the parameters set by users, and the second process is pixel mapping.

### 3.1 Translation and depth control

The relationship between **t**_o and **t**_s is represented as **t**_o = **C**_t **t**_s, where **C**_t is defined as the translation matrix. When the synthetic spatial coordinate system XYZ_s is translated to the position (*x*, *y*, *z*) in the spatial coordinate system XYZ_o defined by the original lenslet array, the coordinate transformation of the two spatial coordinate systems is *x*_o = *x*_s + *x*, *y*_o = *y*_s + *y*, *z*_o = *z*_s + *z*, where the subscript *o* indicates the original coordinate system, and the subscript *s* indicates the synthetic coordinate system. According to the functions of the light field rays defined in the two spatial coordinate systems and the corresponding coordinate vectors in the homogeneous light field coordinate spaces, the translation matrix is derived as Eq. (4), where *g*_o represents the gap between the original lenslet array and its EIA plane, and *g*_s represents the gap between the synthetic lenslet array and its EIA plane.

When *x* = *y* = 0, the matrix in Eq. (4) becomes a depth control matrix used to synthesize a new EIA that displays 3D images with the depth parameter set by users. It is easily seen in Fig. 3 that when the signs of the two parameters *g*_o and *g*_s are opposite, the depth of the reconstructed 3D image is reversed; in particular, when *g*_o < 0 and *g*_s > 0, a PO conversion is performed.

### 3.2 Rotation

Similarly, the coordinate vector **t**_o of a ray in the original light field coordinate system can be represented by the product of a rotation matrix **C**_r and the coordinate vector **t**_s in the rotated homogeneous light field coordinate system: **t**_o = **C**_r **t**_s.

When the synthetic spatial coordinate system is rotated around the X-axis by an angle *α*, the two spatial coordinate systems are related by the corresponding rotation of the (*y*, *z*) coordinates, and the rotation matrix **C**_x is calculated from that relation. When it is rotated around the Y-axis by an angle *β*, the conversion relation is the rotation of the (*z*, *x*) coordinates, from which the rotation matrix **C**_y follows. When it is rotated around the Z-axis by an angle *γ*, the conversion relationship is the rotation of the (*x*, *y*) coordinates, and the rotation matrix **C**_z follows.
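A rotation about the Z-axis maps the lenslet plane to itself, so its light-field matrix is particularly simple: the positional pair (*x*, *y*) and the directional pair (*u*, *v*) rotate by the same angle. As an illustration, here is a reconstruction under a right-handed sign convention that the paper may or may not share:

```latex
% Reconstructed rotation matrix about the Z-axis (cf. C_z):
\mathbf{C}_z =
\begin{pmatrix}
\cos\gamma & -\sin\gamma & 0 & 0 & 0 \\
\sin\gamma &  \cos\gamma & 0 & 0 & 0 \\
0 & 0 & \cos\gamma & -\sin\gamma & 0 \\
0 & 0 & \sin\gamma &  \cos\gamma & 0 \\
0 & 0 & 0 & 0 & 1
\end{pmatrix}
```

Rotations about the X- or Y-axis tilt the lenslet plane, so their matrices also involve the gap and the homogeneous fifth component; they are not reconstructed here.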

When the displayed 3D image is, in the first step, translated to the position (*x*, *y*, *z*) in the original spatial coordinate system (expressed as a translation vector **v**_t = (*x*, *y*, *z*, 1)), and then rotated in turn around the X, Y, and Z axes by the angles *α*, *β*, and *γ*, respectively (expressed as a rotation vector **v**_r = (*α*, *β*, *γ*, 1)), the transformation matrix **C** is written as the product of the corresponding translation and rotation matrices.
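The two matrices that can be reconstructed unambiguously from this section (translation/depth control, and rotation about Z) compose as described: build each factor from the user-set vectors and multiply. The function names, the composition order, and the sign conventions below are assumptions for illustration, not the paper's definitions:

```python
import numpy as np

def translation_matrix(x, y, z, g_o, g_s):
    # Reconstructed translation / depth-control matrix (cf. Eq. (4)),
    # assuming x_o = x_s + x - (z/g_s)*u_s and u_o = (g_o/g_s)*u_s.
    return np.array([
        [1.0, 0.0, -z / g_s, 0.0,       x],
        [0.0, 1.0, 0.0,      -z / g_s,  y],
        [0.0, 0.0, g_o / g_s, 0.0,      0.0],
        [0.0, 0.0, 0.0,      g_o / g_s, 0.0],
        [0.0, 0.0, 0.0,      0.0,       1.0],
    ])

def rotation_z_matrix(gamma):
    # Rotation about Z: (x, y) and (u, v) rotate by the same angle.
    c, s = np.cos(gamma), np.sin(gamma)
    return np.array([
        [c,   -s,   0.0, 0.0, 0.0],
        [s,    c,   0.0, 0.0, 0.0],
        [0.0, 0.0,  c,   -s,  0.0],
        [0.0, 0.0,  s,    c,  0.0],
        [0.0, 0.0, 0.0,  0.0, 1.0],
    ])

# Translate first, then rotate about Z (the ordering is an assumption).
C = translation_matrix(30.0, 40.0, 0.0, g_o=10.4, g_s=10.4) @ rotation_z_matrix(np.radians(30.0))
t_s = np.array([1.0, 2.0, 0.1, 0.2, 1.0])
t_o = C @ t_s  # the ray re-expressed in the original coordinate space
```

With zero translation and equal gaps the translation matrix reduces to the identity, and with *g*_o = −*g*_s it simply flips (*u*, *v*), reproducing the depth reversal of Section 3.1.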

### 3.3 Pixel mapping

Let *I*^s_ijkl denote the value of a pixel in the synthetic EIA that lies in the elemental image of the *i*-th row and the *j*-th column and has indexes *k*, *l* within that elemental image. If the pixel is at the center of the central elemental image, all four subscripts *i*, *j*, *k*, *l* are equal to zero. The corresponding coordinate vector of the pixel *I*^s_ijkl is represented as **t**_s = (*x*_s, *y*_s, *u*_s, *v*_s, 1)′ in the synthetic homogeneous light field coordinate space by Eq. (12), where *p*_s is the pitch of the synthetic lenslet array and *pxl*_s represents the pixel size of the synthetic EIA.

The coordinate vector **t**_o of this ray in the coordinate space defined by the original lenslet array and its EIA is calculated by Eqs. (1) and (12). The corresponding pixel (defined as *I*^o_qrst) recorded in the original EIA is then written in terms of the original pitch *p*_o and the pixel size *pxl*_o of the original EIA.

_{o}## 4. Experimental results

### 4.1 Experimental setup and parameters


… mm, the aperture is 36 mm, and the field of view (FOV) is 100.4°. The apple, the pear, and the center of the one and a half oranges are placed at distances of 120 mm, 180 mm, and 130 mm from the camera, respectively. A set of 31 (H) × 31 (V) images with a pitch of *p* = 10 mm is obtained. The size of each elemental image is 36 × 36 mm with a resolution of 320 × 320 pixels. Figure 4(b) shows a portion of the elemental images.

A lenslet array with focal length *f* = 10.4 mm and pitch *p* = 7.55 mm is used for display, and the size of each elemental image is 29 × 29 pixels. The synthetic EIAs are placed at a distance of *g* = *f* = 10.4 mm behind the lenslet array to form an II display with large depth, an arrangement first proposed in [21].


… m from the lenslet array to capture the reconstructed 3D images. The left and the right perspectives are taken 60 mm to either side of the center view (approx. 2.3°).

### 4.2 Depth control

… **v**_t = (0, 0, 0, 1)′. The image quality metrics of mean squared error (MSE) and peak signal-to-noise ratio (PSNR) are calculated to evaluate the synthetic EIA. The MSE between the synthetic EIA in Fig. 6 and the EIA picked up directly with the same parameters is 26.9, which shows that the squared error between these two EIAs is small. The PSNR is 33.8 dB, greater than the commonly accepted value of 30 dB, which means the differences between these two EIAs are less than one thousandth of the scene information, as analyzed in [25]. The values of these two metrics indicate that the EIA synthesized by the proposed model is very similar to the EIA directly picked up by the lenslet array. Thus the proposed model can be applied to synthesize EIAs when the parameters of the II display differ from those of the record system.
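The quoted figures follow from the standard definitions of the two metrics; a minimal implementation (the 255 peak value is an assumption about the EIA bit depth):

```python
import numpy as np

def mse_psnr(a, b, peak=255.0):
    """Mean squared error and peak signal-to-noise ratio (dB)
    between two images of identical shape."""
    mse = float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))
    psnr = 10.0 * np.log10(peak ** 2 / mse) if mse > 0 else float("inf")
    return mse, psnr
```

Plugging the reported MSE of 26.9 into the PSNR formula indeed gives 10·log10(255²/26.9) ≈ 33.8 dB, consistent with the values in the text.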

… mm; that is, the displayed 3D image is shifted 180 mm towards the lenslet array, as shown in Fig. 8.

### 4.3 Translation and rotation

… A translation vector of (20 mm, 30 mm, 150 mm, 1)′ is used to keep the fruits at the lateral center of the image with an appropriate display depth.

… mm, 0 mm, 150 mm, 1)′ and a rotation vector of (0°, −35°, 0°, 1)′ is shown in Fig. 12.

… a translation vector **v**_t = (30 mm, 40 mm, 0 mm, 1)′ and a rotation vector **v**_r = (0°, 0°, 30°, 1)′, respectively. In the reconstructed 3D image with translation, the horizontal and vertical offsets of the target measured in the optical reconstruction experiment are 29.0 mm and 40.5 mm, respectively. For the rotated EIA, the rotation angle of the reconstructed target around the Z-axis measured in the optical reconstruction experiment is 28°. The measured results show good agreement with the viewing parameters pre-set in the model on the computer. The slight differences might be caused by equipment precision, operation errors, the low resolution of the reconstructed target, and so on.

### 4.4 Discussion


## 5. Conclusions

## Acknowledgments

## References and links


**OCIS Codes**

(100.6890) Image processing : Three-dimensional image processing

(110.4190) Imaging systems : Multiple imaging

(110.6880) Imaging systems : Three-dimensional image acquisition

(120.2040) Instrumentation, measurement, and metrology : Displays

**ToC Category:**

Imaging Systems

**History**

Original Manuscript: April 11, 2012

Revised Manuscript: May 24, 2012

Manuscript Accepted: May 29, 2012

Published: June 11, 2012

**Citation**

Yin Xu, XiaoRui Wang, Yan Sun, and JianQi Zhang, "Homogeneous light field model for interactive control of viewing parameters of integral imaging displays," Opt. Express **20**, 14137-14151 (2012)

http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-20-13-14137


### References

- G. Lippmann, “Epreuves reversibles donnant la sensation du relief,” J. Phys. **7**, 821–825 (1908).
- O. Matoba, E. Tajahuerce, and B. Javidi, “Real-time three-dimensional object recognition with multiple perspectives imaging,” Appl. Opt. **40**(20), 3318–3325 (2001). [CrossRef] [PubMed]
- S. Yeom and B. Javidi, “Three-dimensional distortion-tolerant object recognition using integral imaging,” Opt. Express **12**(23), 5795–5809 (2004). [CrossRef] [PubMed]
- I. Moon and B. Javidi, “Three-dimensional recognition of photon-starved events using computational integral imaging and statistical sampling,” Opt. Lett. **34**(6), 731–733 (2009). [CrossRef] [PubMed]
- J.-H. Park and K.-M. Jeong, “Frequency domain depth filtering of integral imaging,” Opt. Express **19**(19), 18729–18741 (2011). [CrossRef] [PubMed]
- I. Chung, J.-H. Jung, J. Hong, K. Hong, and B. Lee, “Depth extraction with sub-pixel resolution in integral imaging based on genetic algorithm,” in *Digital Holography and Three-Dimensional Imaging*, OSA Technical Digest (CD) (Optical Society of America, 2010), paper JMA3.
- D.-C. Hwang, D.-H. Shin, S.-C. Kim, and E.-S. Kim, “Depth extraction of three-dimensional objects in space by the computational integral imaging reconstruction technique,” Appl. Opt. **47**(19), D128–D135 (2008). [CrossRef] [PubMed]
- J.-H. Jung, K. Hong, G. Park, I. Chung, J.-H. Park, and B. Lee, “Reconstruction of three-dimensional occluded object using optical flow and triangular mesh reconstruction in integral imaging,” Opt. Express **18**(25), 26373–26387 (2010). [CrossRef] [PubMed]
- D.-H. Shin, B.-G. Lee, and J.-J. Lee, “Occlusion removal method of partially occluded 3D object using sub-image block matching in computational integral imaging,” Opt. Express **16**(21), 16294–16304 (2008). [CrossRef] [PubMed]
- B. Javidi, R. Ponce-Díaz, and S.-H. Hong, “Three-dimensional recognition of occluded objects by using computational integral imaging,” Opt. Lett. **31**(8), 1106–1108 (2006). [CrossRef] [PubMed]
- W. Matusik and H. Pfister, “3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes,” ACM Trans. Graph. **23**, 814–824 (2004). [CrossRef]
- J. Arai, M. Okui, T. Yamashita, and F. Okano, “Integral three-dimensional television using a 2000-scanning-line video system,” Appl. Opt. **45**(8), 1704–1712 (2006). [CrossRef] [PubMed]
- Y. Taguchi, T. Koike, K. Takahashi, and T. Naemura, “TransCAIP: A live 3D TV system using a camera array and an integral photography display with interactive control of viewing parameters,” IEEE Trans. Vis. Comput. Graph. **15**(5), 841–852 (2009). [CrossRef] [PubMed]
- F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. **36**(7), 1598–1603 (1997). [CrossRef] [PubMed]
- H. E. Ives, “Optical properties of a Lippmann lenticulated sheet,” J. Opt. Soc. Am. A **21**(3), 171–176 (1931). [CrossRef]
- J. Arai, M. Kawakita, and F. Okano, “Effects of sampling on depth control in integral imaging,” Proc. SPIE **7237**, 723710 (2009). [CrossRef]
- J. Arai, H. Kawai, M. Kawakita, and F. Okano, “Depth-control method for integral imaging,” Opt. Lett. **33**(3), 279–281 (2008). [CrossRef] [PubMed]
- M. Martínez-Corral, B. Javidi, R. Martínez-Cuenca, and G. Saavedra, “Formation of real, orthoscopic integral images by smart pixel mapping,” Opt. Express **13**(23), 9175–9180 (2005). [CrossRef] [PubMed]
- H. Navarro, R. Martínez-Cuenca, G. Saavedra, M. Martínez-Corral, and B. Javidi, “3D integral imaging display by smart pseudoscopic-to-orthoscopic conversion (SPOC),” Opt. Express **18**(25), 25573–25583 (2010). [CrossRef] [PubMed]
- J.-S. Jang and B. Javidi, “Three-dimensional synthetic aperture integral imaging,” Opt. Lett. **27**(13), 1144–1146 (2002). [CrossRef] [PubMed]
- J.-S. Jang, F. Jin, and B. Javidi, “Three-dimensional integral imaging with large depth of focus by use of real and virtual image fields,” Opt. Lett. **28**(16), 1421–1423 (2003). [CrossRef] [PubMed]
- D.-H. Shin, M. Cho, and E.-S. Kim, “Computational implementation of asymmetric integral imaging by use of two crossed lenticular sheets,” ETRI J. **27**(3), 289–293 (2005). [CrossRef]
- H. Navarro, R. Martínez-Cuenca, A. Molina-Martín, M. Martínez-Corral, G. Saavedra, and B. Javidi, “Method to remedy image degradations due to facet braiding in 3D integral-imaging monitors,” J. Display Technol. **6**(10), 404–411 (2010). [CrossRef]
- H.-B. Xie, X. Zhao, Y. Yang, J. Bu, Z. L. Fang, and X. C. Yuan, “Cross-lenticular lens array for full parallax 3-D display with Crosstalk reduction,” Sci. China Technolog. Sci. **55**(3), 735–742 (2012). [CrossRef]
- R. Damasevicius and G. Ziberkas, “Energy consumption and quality of approximate image transformation,” Electron. Electr. Eng. **120**, 79–82 (2012).
- J. X. Chai, X. Tong, S. C. Chan, and H. Y. Shum, “Plenoptic sampling,” in *Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH ’00)* (ACM Press, 2000), pp. 307–318.
- C. Zhang and T. Chen, “Spectral analysis for sampling image-based rendering data,” IEEE Trans. Circ. Syst. Video Tech. **13**(11), 1038–1050 (2003). [CrossRef]
- M. N. Do, D. Marchand-Maillet, and M. Vetterli, “On the bandwidth of the plenoptic function,” IEEE Trans. Image Process. **21**(2), 708–717 (2012). [CrossRef] [PubMed]


### Figures

Fig. 1 – Fig. 13 (not reproduced).

### Supplementary Material

» Media 1: MOV (2690 KB)

» Media 2: MOV (2529 KB)

» Media 3: MOV (2667 KB)

» Media 4: MOV (3028 KB)
