Optics Express

  • Editor: C. Martijn de Sterke
  • Vol. 16, Iss. 18 — Sep. 1, 2008
  • pp: 13969–13978

Undistorted pickup method of both virtual and real objects for integral imaging

Joonku Hahn, Youngmin Kim, Eun-Hee Kim, and Byoungho Lee


Optics Express, Vol. 16, Issue 18, pp. 13969-13978 (2008)
http://dx.doi.org/10.1364/OE.16.013969



Abstract

An optically corrected pickup method for both virtual and real objects for integral imaging is proposed. The proposed pickup system has imbricate view volumes equivalent to those of the integral imaging display, so there is no distortion resulting from a mismatch between the directions of the elemental images in pickup and display. In this system, the view volumes are transformed by 4f optics, and the directions of view are defined by a lens array and a telecentric lens. Pickup of both real and virtual objects is confirmed experimentally, without any computational cost for compensation.

© 2008 Optical Society of America

1. Introduction

Integral imaging (InIm) is one of the most feasible technologies for displaying a three-dimensional (3D) object, since a full-parallax field is reconstructed without viewing aids such as glasses [1–3]. An InIm display uses a lens array to define the viewing directions of the corresponding elemental images. Each elemental image is a projection view of the 3D object, and the projection view volumes are positioned periodically according to the centers of the lenses in the lens array. When the gap between the lens array and the elemental images equals the focal length of the lenses, the InIm display can present both virtual and real objects. Therefore, a pickup method for both real and virtual objects is necessary. Various methods for computationally generating elemental images for InIm display have been proposed [4–6], but it remains a challenging issue to build an actual optical pickup system whose view volumes are equivalent to those of the InIm display.

Many optical pickup systems using lens arrays have been proposed for capturing actual objects. In general, pickup with a single lens array suffers from the pseudoscopic problem, since the view volume of an individual lens in the pickup system has mirror symmetry with respect to the corresponding view volume in the display [7, 8]. To solve the pseudoscopic problem, each elemental image is flipped, which converts the pseudoscopic real image into an orthoscopic virtual image. With these techniques, therefore, only virtual objects can be captured orthoscopically, and numerical compensation is necessary to improve the quality of the pickup images [9, 10]. Pickup systems with optically correct view volumes have also been proposed [11–13]. Although these optical solutions have the advantage of optically corrected pickup without unnecessary data processing, they too capture only virtual objects.

For capturing real objects, depth conversion optics has been studied intensively, since it makes it possible to capture real objects for InIm. Typically, depth conversion is executed in two sequential steps: in the first step, real objects are captured pseudoscopically, and in the second step, the pseudoscopic real images are converted into orthoscopic virtual images [14–16]. In the same sense, micro-convex-mirror arrays can be applied as depth conversion optics [17]. These processes can be understood simply as applying mirror symmetry to the real field twice. Therefore, by adjusting the distance between the two lens arrays in the second step, the depth of the objects is controllable [18]. With these techniques, however, only real objects can be captured.

Pickup techniques for both virtual and real objects have been proposed by several research groups. The main idea of these techniques is to apply a large converging lens in front of the lens array [19, 20]. In these optical systems, all view volumes defined by the lens array converge with the help of the large convex lens. The plane where each view volume is focused is a critical plane that separates the virtual and real fields during pickup [21]. However, the converging lens distorts the view volumes, and compensation is necessary for correct pixel matching. As a numerical compensation method, an undistorted pickup system with view volumes equivalent to those of the InIm display was proposed by Martinez-Corral et al. [22].

This paper is organized as follows. In Sec. 2, the distortion problem is described and the imbricate view volumes of the InIm display are expressed analytically. In Sec. 3, the undistorted pickup with imbricate view volumes is proposed. In Sec. 4, experimental results are presented and discussed. In Sec. 5, conclusions and perspectives are given.

2. Distortion problem between pickup and integral imaging

Conventional pickup of both virtual and real objects with a large convex lens results in distortion, as shown in Fig. 1. In this conventional pickup, there exists a critical plane that separates the virtual and real fields. The pickup directions (red arrows in Fig. 1(a)) of the elemental images are not parallel, whereas in a general InIm display the display directions (green arrows in Fig. 1(b)) of the elemental images are parallel to one another. Therefore, objects reconstructed in an InIm display from the conventional pickup are distorted, as shown in Fig. 1(b): the virtual object shrinks in comparison with the real object. These phenomena result from the convergence of the pickup directions of the elemental images, which relatively magnifies virtual objects during pickup. To capture elemental images correctly, a pickup method with view volumes equivalent to those of the InIm display is necessary.

Fig. 1. Distortion in conventional pickup of both virtual and real objects with a large convex lens: view volumes of (a) pickup and (b) InIm display.

The view volume of an InIm display is the set of sub-view volumes formed by every lens in the lens array. Figure 2 shows the sub-view volume of an individual lens, which is characterized by the viewing angle and the display direction of its elemental image. Each lens has an aperture stop defined by the interval between lenses, and the width of this aperture stop is usually the same as the width of an elemental image. Therefore, the viewing angle θ_display is given by

\theta_{\mathrm{display}} = 2\tan^{-1}\left(\frac{w}{2f}\right).
(1)

Here, w and f denote the width of an elemental image and the focal length of each lens, respectively. For the ith lens, the corresponding sub-view volume is given by

V_i = \left\{ (x,z) \;\middle|\; \left|\frac{x - x_{ci}}{z - z_{ci}}\right| < \frac{w}{2f} \right\}.
(2)

Here, (x_ci, z_ci) is the center position of the aperture stop of the ith lens. That is, the individual sub-view volume generated by a single lens is shaped as two equilateral triangles with their vertices in contact, and the direction of the elemental image is parallel to the optical axis.

Fig. 2. Sub-view volume of an individual lens in InIm display.
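Equations (1) and (2) are straightforward to evaluate numerically. The following Python sketch (function names are ours; the lens values match the experimental setup described later) computes the viewing angle and tests whether a point lies inside a sub-view volume:

```python
import math

def viewing_angle(w, f):
    """Viewing angle of a single lens, Eq. (1): theta = 2*atan(w / (2f))."""
    return 2.0 * math.atan(w / (2.0 * f))

def in_sub_view_volume(x, z, x_c, z_c, w, f):
    """Eq. (2): point (x, z) lies in the sub-view volume of a lens whose
    aperture stop is centered at (x_c, z_c) if |x - x_c| / |z - z_c| < w / (2f)."""
    if z == z_c:
        # On the aperture-stop plane the two triangles meet at a single point.
        return x == x_c
    return abs((x - x_c) / (z - z_c)) < w / (2.0 * f)

# With the lens array used later in this paper (1.0 mm pitch, 3.3 mm focal
# length), the viewing angle is about 17.2 degrees.
theta = math.degrees(viewing_angle(1.0, 3.3))
```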

The sub-view volumes defined by the lens array form imbricate view volumes, as shown in Fig. 3. Each sub-view volume overlaps with the others, and the display directions of the elemental images are parallel to one another and spaced periodically.

Fig. 3. Imbricate view volumes of InIm display.

3. Pickup method with imbricate view volumes

The pickup method with equivalent view volumes is realized with 4f optics and a telecentric lens. In the proposed system, the imbricate view volumes are transformed by the 4f optics, and the directions of view are defined by the telecentric lens to be normal to the lens array plane. Therefore, the pickup directions of the elemental images are parallel to one another. Figure 4 shows a schematic of the proposed pickup system with imbricate view volumes. Each sub-view volume is shaped as two equilateral triangles with their vertices in contact, and each sub-volume overlaps with the others. These view volumes are equivalent to those of the InIm display. In these imbricate view volumes, the virtual and real fields are divided by the critical plane, which is located at the focal plane of the front lens of the 4f optics.

Fig. 4. Schematic of pickup system with imbricate view volumes.

The imbricate view volumes are spatially separated after being transformed by the 4f optics, and the lens array is aligned at the position where every view volume just touches its adjacent neighbors. The distance between the back lens of the 4f optics and the lens array is the sum of the focal lengths of the two serial lenses. In Fig. 4, f_1 and f_2 represent the focal lengths of the front and back lenses of the 4f optics, respectively, and f_3 is the focal length of the lens array. In the proposed pickup, the viewing angle is defined by

\theta_{\mathrm{pickup}} = 2\tan^{-1}\left(\frac{w_3 f_2}{2 f_1 f_3}\right).
(3)

Here, w_3 denotes the width of the aperture stop of an individual lens in the lens array. Therefore, to keep the proper relation between pickup and InIm display, the two viewing angles θ_display and θ_pickup should agree with each other.
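The matching condition between Eqs. (1) and (3) is easy to verify numerically. A small Python sketch (function names are ours; in the implementation reported here, f_1 = f_2 = 152.4 mm, so the condition reduces to matching lens arrays):

```python
import math

def theta_display(w, f):
    """Eq. (1): viewing angle of the InIm display."""
    return 2.0 * math.atan(w / (2.0 * f))

def theta_pickup(w3, f1, f2, f3):
    """Eq. (3): viewing angle of the proposed pickup system."""
    return 2.0 * math.atan(w3 * f2 / (2.0 * f1 * f3))

# With f1 = f2 (two identical Fresnel lenses), the angles agree whenever
# w3 = w and f3 = f, i.e. the pickup lens array matches the display lens array.
```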

The CCD is relatively small in comparison with the objects to be captured, so magnification optics is necessary. In the proposed pickup system, a telecentric lens is applied for magnification, and it also filters out rays with undesirable directions; only rays normal to the lens array plane are captured by the CCD. This filtering by the telecentric lens is appropriate, since only the directions of view normal to the lens array plane should be captured.

One point to be considered in the proposed system is that every captured elemental image is flipped with respect to the desired elemental images of the InIm display. Figure 5 shows the relationship between the demanded elemental images and the pickup image. Since the pickup system and the InIm display have the same view volumes, the demanded elemental image for the InIm display is located in front of the critical plane. After the 4f optics, the rays passing through the demanded elemental images arrive at the pickup plane in a disordered arrangement. Therefore, a rotation process is necessary to organize the set of elemental images for the InIm display: each elemental image is rotated 180 degrees around its own center. This process simply reverses the coordinate order within each elemental image and requires negligible computational cost.

Fig. 5. Relationship between demanded elemental images and pickup image.
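The rotation process can be sketched in a few lines of NumPy (the function name is ours; we assume a grayscale raw image whose dimensions are exact multiples of the elemental-image width):

```python
import numpy as np

def rotate_elemental_images(raw, w):
    """Rotate every w-by-w elemental image in the raw pickup image by
    180 degrees about its own center, i.e., reverse the pixel order of
    each tile along both axes."""
    rows, cols = raw.shape[:2]
    out = raw.copy()
    for r in range(0, rows, w):
        for c in range(0, cols, w):
            out[r:r + w, c:c + w] = raw[r:r + w, c:c + w][::-1, ::-1]
    return out
```

Because the operation is pure index reversal per tile, its cost is negligible compared with image acquisition, which is what makes real-time pickup feasible.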

Figure 6 shows a photograph of the implemented pickup system with imbricate view volumes. The 4f optics is composed of two Fresnel lenses with focal lengths of 152.4 mm, and the lens array has a 1.0 mm period and a 3.3 mm focal length. A telecentric lens with 0.09× magnification, manufactured by Edmund Optics, is applied. The CCD is a Sony XCD-SX90CR with a 3.75 µm pixel size. Therefore, the resolution of each elemental image is 24×24 pixels.

Fig. 6. Photograph of embodied pickup system with imbricate view volumes.
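The 24×24-pixel figure follows directly from the listed specifications; a quick check (helper name ours):

```python
def elemental_image_resolution(pitch_mm, magnification, pixel_um):
    """Pixels spanned by one elemental image on the CCD: the lens pitch is
    demagnified onto the sensor and divided by the pixel size."""
    return round(pitch_mm * magnification * 1000.0 / pixel_um)

# 1.0 mm pitch at 0.09x magnification on 3.75-um pixels -> 24 pixels per side.
```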

4. Experimental results

The proposed pickup method provides elemental images of the virtual and real fields for InIm without distortion. In this section, we present two experimental results: one shows the simultaneous pickup of both virtual and real objects, and the other compares the proposed method with a conventional pickup.

Figure 7 shows the elemental images captured by the proposed system. Figure 7(a) shows the raw image on the pickup plane, and Fig. 7(b) shows the set of elemental images after the rotation process. Here, the red smiley face is a real object and the yellow smiley face is a virtual object. As expected, the outward normal directions at the boundary of the real object converge inward, while those of the virtual object spread outward. Figure 8 shows perspective views of the reconstructed image.

Fig. 7. (a) Raw pickup image and (b) set of flipped elemental images.
Fig. 8. Reconstructed image: (a) perspective views and (b) movie (Media 1).

To compare the proposed method with a conventional pickup, we configure four objects, three of which have fixed positions while one is movable, as shown in Fig. 9. The virtual and real objects are marked with the letters ‘V’ and ‘R’, respectively. The object on the critical plane, z = 0 mm, is marked with the letter ‘C’, and the object marked with the letter ‘M’ can move from the virtual-object position, z = -30 mm, to the real-object position, z = 30 mm.

In the conventional method, the pickup system has the same schematic as that shown in Fig. 1(a). The specifications of its optical components are the same as those of the proposed system, except that the 4f optics is replaced with a single Fresnel lens with a focal length of 152.4 mm. This Fresnel lens is the large convex lens that controls the position of the critical plane. Figure 10 shows pickup images obtained with the conventional method. In Figs. 10(a)-(c), the object marked with ‘M’ is positioned at z = -30 mm, z = 0 mm, and z = 30 mm, respectively. Figures 10(d) and 10(e) show the elemental images of the virtual and real objects, respectively, and the elemental images of the movable object are shown in Figs. 10(f) and 10(g), which are parts of Figs. 10(a) and 10(c), respectively. In this conventional method, the letters on the virtual and real objects are hard to recognize. This results from the perspective error of the large convex lens.

Fig. 9. Configuration of objects with different positions.
Fig. 10. Conventional pickup images with the movable object positioned at (a) z=-30mm, (b) z=0mm, and (c) z=30mm respectively. The elemental images of virtual and real objects are shown in (d) and (e) respectively. And the elemental images of the movable object are shown in (f) and (g) when it is positioned at z=-30mm and z=30mm respectively.
Fig. 11. Proposed pickup images with the movable object positioned at (a) z=-30mm, (b) z=0mm, and (c) z=30mm respectively. The elemental images of virtual and real objects are shown in (d) and (e) respectively. And the elemental images of the movable object are shown in (f) and (g) when it is positioned at z=-30mm and z=30mm respectively.

Figure 11 shows the pickup images obtained with the proposed method. Figures 11(a)-(c) show the pickup images when the movable object is located at the same positions as in Figs. 10(a)-(c), and Figs. 11(d)-(g) show the elemental images of the virtual, real, and movable objects, respectively. Transforming the virtual and real fields with the 4f system yields much smaller perspective errors in the elemental images than the conventional method does.

As discussed above, distortion appears in the conventional pickup. Figure 12 shows a movie of the images reconstructed by the conventional and proposed methods. In the conventional method, the virtual object appears smaller than the real object, and the movable object grows as its position changes from z = -30 mm to z = 30 mm. In the proposed method, by contrast, the reconstructed virtual and real objects have the same size, and the size of the movable object is unaffected by its position. This demonstrates that the distortion of the conventional pickup does not occur in the proposed method.

Fig. 12. (Media 2) Pickup and reconstructed images with the conventional pickup (left) and the proposed pickup (right) according to the position of the movable object.

Figure 13 shows perspective views of the reconstructed images for representative positions of the movable object. When the letter ‘M’ is positioned at z = -30 mm, it swings with a constant gap between itself and the letter ‘V’; when it is positioned at z = 30 mm, it swings with a constant gap between itself and the letter ‘R’. As expected, the proposed pickup method provides correct perspective views as the position of the letter ‘M’ changes.

Fig. 13. (Media 3) Perspective views of reconstructed images at z=-30mm, z=0mm, and z=30mm respectively.

5. Conclusion

An undistorted pickup method for both virtual and real objects for InIm has been proposed. The proposed pickup system is realized with 4f optics and a telecentric lens, and it has the same imbricate view volumes as the InIm display. In this system, there is no distortion resulting from a mismatch between the directions of the elemental images in pickup and display, so additional numerical compensation is unnecessary. Although a rotation process for each elemental image is required, this simple calculation needs negligible computational cost. Therefore, the proposed method makes real-time pickup feasible without apparent distortion, and it is expected to be one of the most promising real-time pickup techniques for InIm.

Acknowledgment

This work was supported by the Korea Science and Engineering Foundation and the Ministry of Education, Science and Engineering of Korea through the National Creative Research Initiative Program (# R16-2007-030-01001-0).

References and links

1. B. Javidi and F. Okano, eds., Three Dimensional Television, Video, and Display Technologies (Springer, 2002).
2. H. Liao, M. Iwahara, H. Nobuhiko, and T. Dohi, “High-quality integral videography using a multiprojector,” Opt. Express 12, 1067–1076 (2004).
3. B. Lee, J.-H. Park, and S.-W. Min, “Three-dimensional display and information processing based on integral imaging,” in Digital Holography and Three-Dimensional Display, T.-C. Poon, ed. (Springer, 2006), pp. 333–378.
4. D.-H. Shin, E.-S. Kim, and B. Lee, “Computational reconstruction of three-dimensional objects in integral imaging using lenslet array,” Jpn. J. Appl. Phys. 44, 8016–8018 (2005).
5. S.-W. Min, K.-S. Park, B. Lee, Y. Cho, and M. Hahn, “Enhanced image mapping algorithm for computer-generated integral imaging system,” Jpn. J. Appl. Phys. 45, L744–L747 (2006).
6. D.-H. Shin, B. Lee, and E.-S. Kim, “Parallax-controllable large-depth integral imaging scheme using lenslet array,” Jpn. J. Appl. Phys. 46, 5184–5186 (2007).
7. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. 36, 1598–1603 (1997).
8. J. Arai, F. Okano, H. Hoshino, and I. Yuyama, “Gradient-index lens-array method based on real-time integral photography for three-dimensional images,” Appl. Opt. 37, 2034–2045 (1998).
9. J.-H. Park, S. Jung, H. Choi, and B. Lee, “Viewing-angle-enhanced integral imaging by elemental image resizing and elemental lens switching,” Appl. Opt. 41, 6875–6883 (2002).
10. J.-H. Park, Y. Kim, J. Kim, S.-W. Min, and B. Lee, “Three-dimensional display scheme based on integral imaging with three-dimensional information processing,” Opt. Express 12, 6020–6032 (2004).
11. R. Martinez-Cuenca, A. Pons, G. Saavedra, M. Martinez-Corral, and B. Javidi, “Optically-corrected elemental images for undistorted integral image display,” Opt. Express 14, 9657–9663 (2006).
12. R. Martínez-Cuenca, H. Navarro, G. Saavedra, B. Javidi, and M. Martinez-Corral, “Enhanced viewing-angle integral imaging by multiple-axis telecentric relay system,” Opt. Express 15, 16255–16260 (2007).
13. J. Hahn, Y. Kim, E.-H. Kim, and B. Lee, “Camera with inverted perspective projection view volume array for integral imaging,” in The 6th International Conference on Optics-Photonics Design and Fabrication, Taipei, Taiwan, pp. 553–554 (2008).
14. S.-W. Min, J. Hong, and B. Lee, “Analysis of an optical depth converter used in a three-dimensional integral imaging system,” Appl. Opt. 43, 4539–4549 (2004).
15. F. Okano and J. Arai, “Optical shifter for a three-dimensional image by use of a gradient-index lens array,” Appl. Opt. 41, 4140–4147 (2002).
16. J.-S. Jang and B. Javidi, “Two-step integral imaging for orthoscopic three-dimensional imaging with improved viewing resolution,” Opt. Eng. 41, 2568–2571 (2002).
17. J.-S. Jang and B. Javidi, “Three-dimensional projection integral imaging using micro-convex-mirror arrays,” Opt. Express 12, 1077–1083 (2004).
18. J. Arai, H. Kawai, M. Kawakita, and F. Okano, “Depth-control method for integral imaging,” Opt. Lett. 33, 279–281 (2008).
19. J.-S. Jang and B. Javidi, “Formation of orthoscopic three-dimensional real images in direct pickup one-step integral imaging,” Opt. Eng. 42, 1869–1870 (2003).
20. J. Arai, H. Kawai, and F. Okano, “Microlens arrays for integral imaging system,” Appl. Opt. 45, 9066–9078 (2006).
21. F. Okano, “Applications of integral photography for real-time imaging,” in Digital Holography and Three-Dimensional Imaging, OSA Technical Digest (CD) (Optical Society of America, 2008), paper DTuA1.
22. M. Martinez-Corral, B. Javidi, R. Martinez-Cuenca, and G. Saavedra, “Formation of real, orthoscopic integral images by smart pixel mapping,” Opt. Express 13, 9175–9180 (2005).

OCIS Codes
(100.6890) Image processing : Three-dimensional image processing
(110.2990) Imaging systems : Image formation theory

ToC Category:
Imaging Systems

History
Original Manuscript: June 24, 2008
Revised Manuscript: July 29, 2008
Manuscript Accepted: August 22, 2008
Published: August 25, 2008

Citation
Joonku Hahn, Youngmin Kim, Eun-Hee Kim, and Byoungho Lee, "Undistorted pickup method of both virtual and real objects for integral imaging," Opt. Express 16, 13969-13978 (2008)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-16-18-13969



Supplementary Material


» Media 1: AVI (2255 KB)     
» Media 2: AVI (3848 KB)     
» Media 3: AVI (1696 KB)     
