Improved viewing resolution of integral videography by use of rotated prism sheets

Hongen Liao, Takeyoshi Dohi, and Makoto Iwahara


Optics Express, Vol. 15, Issue 8, pp. 4814-4822 (2007)
http://dx.doi.org/10.1364/OE.15.004814



Abstract

We demonstrated that placing a pair of prism sheets in front of a display and rotating them overcomes the upper resolution limit of integral photography (IP) / integral videography (IV) imposed by the Nyquist sampling theorem. A pair of prism sheets with the same pitch placed in front of an IP or IV display parallel-shifts the light rays in 3-D space. Rotating the pair shifts the light rays, causing them to appear to rotate around their original positions. Changing the gap between the sheets changes the diameter of the apparent rotation, and changing the speed at which the sheets are rotated changes the speed of the image movement. Experimental results showed that the quality of IP and IV images is improved by this technique. It is a simple and effective way to improve the viewing resolution of IP and IV images without reducing their 3-D aspects, such as image depth, and it eliminates the need to move the lenslet array.

© 2007 Optical Society of America

1. Introduction

Integral photography (IP) [1] is a method for displaying three-dimensional (3-D) autostereoscopic images that can be viewed from an arbitrary viewpoint without supplementary glasses or tracking devices. A number of elemental images with different perspectives of a given 3-D object are generated and recorded on film. When the film is placed at the same position relative to the lens array and is irradiated with a backlight, the light rays retrace their original routes, reproducing the image at the same position as that of the original object.

IP and related animated images have attracted much attention in a variety of 3-D imaging fields. A real-time pickup and 3-D display system based on IP has been developed for 3-D visualization [2]. A gradient-index lens array and a compatible high-definition TV camera have also been developed for IP image pickup to solve the pseudoscopic problem [3]. Igarishi et al. described a method that generates elemental images using a computational process instead of a physical one; this computer-generated integral photography method can be implemented in both real and virtual IP modes [4]. We previously described integral videography (IV) [5], an animated extension of IP. IV uses a fast image rendering algorithm to project an image of a computer-generated graphical object through a micro-convex lens array. Each point shown in 3-D space is reconstructed at the same position as that of the actual object by the convergence of rays from the pixels of the elemental images on the computer display after they pass through the lenslets of the lens array. Switching to “scene” mode enables IV to display animated objects.

IP/IV images have been shown to accurately reproduce the wavefronts that emanated from the original photographed or computer-generated objects. Recent enhancements include widening the viewing angle and increasing the depth of IP images [6–8]. Despite IP/IV’s many advantages, the viewing resolution of its spatial images (image quality) is still poor. In addition to aberration and lens deviation, the pixel pitch of the display and the lens pitch are the main factors limiting the IV image quality.

A number of researchers have worked on improving the viewing resolution of IP/IV images by using defocus and image diffusion to increase the resolution of a target area. For example, one study showed that a mismatch between the image position and the focusing position of the lens improves the spatial resolution of the required area [9]. The image depth can be enhanced by using time-division multiplexing to adjust the distance between the screen and the lens array for both real and virtual image fields [10]. Another study showed that placing a diffusion sheet in front of the display can smooth the formed image, resulting in higher-quality spatial autostereoscopic image formation [11]. Although the quality of real IP images was improved, the improvement was limited to the area around the diffusion sheet.

Several multiplexing methods have been proposed for increasing the quality of displayed IP images, including space, time, and spatiotemporal multiplexing [12]. However, the increase in quality is limited because the IV image resolution is theoretically bound to the pixel density of the display (such as an LCD). We proposed a fundamental principle of multi-projection for increasing both image resolution and pixel density [13] and developed a two-projector-based display system [14] for high-resolution IV image display. Furthermore, a relative image calibration and correction technique was developed for creating a seamless multi-projection IV image [11]. Spatial multiplexing using multiple displays can also enlarge the viewing zone [15], and multiple displays combined with masking can be used to increase the viewing angle [16].

With time-division multiplexing, the use of a non-stationary lens array can improve the viewing resolution of IP images [17–19]. The moving array lenslet technique (MALT) is used for synchronized image pickup and display in three-dimensional IP imaging. To increase the spatial sampling rate, the lens arrays for both pickup and display are rapidly vibrated synchronously in the lateral direction within the retention time of the after-image of the human eye. The image sensor and display are moved together with the lenslet arrays. The motion speed is typically set to cover at least one pitch within one exposure of the CCD; in this way, the effective aperture of a lenslet is increased synthetically. As a result, the object visual field is increased, and the resolution of diffraction-limited images is improved. When MALT is used, the lens array has a periodic structure, and the vibration range is less than the pitch of a lens. Computer-generated IP can be implemented using MALT; the elemental images are computationally reconstructed from the MALT-based optical pickup process [20, 21]. MALT can thus be used to improve both viewing resolution and image depth.

An open issue is that both the pickup and display devices must be fast enough to represent the moving elemental images. With MALT, a driving device is needed to vibrate the lenslet array of each IP pickup and display device. It is difficult to drive a large lenslet array while keeping it stable. The movement of the lenslet array must also be synchronized with that of the pickup so that their relative positions are maintained. Moreover, the vibration of the lens array affects the endurance and stability of the device.

We have developed a technique for improving the viewing resolution of the IP/IV image. A pair of prism sheets is placed in front of the display and rotated to overcome the upper resolution limit of IP/IV images imposed by the Nyquist sampling theorem. This technique eliminates the need to move the lenslet array.

2. Materials and Methods

2.1 Rhomboid prism and prism sheets for refracting light rays

Optical prisms are used to refract light; dispersive prisms are used to break light up into its constituent spectral colors because the refractive index depends on frequency. White light is a mixture of rays of different colors, and when it passes through an optical prism, the rays of each color are bent slightly differently. For example, blue rays are bent more than red ones because they are slowed down more.

A rhomboid prism is often used for redirecting the rays without affecting the image composition. Figure 1(a) shows how light rays from a display are parallel-shifted when a rhomboid prism is placed in front of the display. The amount of the shift depends on the incident angle of the rays and the thickness of the prism. If the angle of incidence of the rays on the prism face is smaller than that on the exit face, there is no internal reflection. In short, placing a rhomboid prism in front of an IV display parallel-shifts the light rays from the display, producing the same effect as moving the lens array down [Fig. 1(b)].

Fig. 1. (a). Light rays parallel-shifted using rhomboid prism; (b) effect of moving lens array down.
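
To make the geometry of Fig. 1(a) concrete, the parallel shift can be estimated with the standard plane-parallel-slab displacement formula, d = t sin(θi − θt)/cos(θt), with sin θt = sin θi / n. The sketch below is illustrative only: the refractive index (1.49, typical of acrylic) and the example thickness and angle are assumptions, not values taken from this paper.

    import math

    def parallel_shift(thickness_mm, incidence_deg, n=1.49):
        """Lateral displacement of a ray crossing a tilted plane-parallel slab.

        thickness_mm  -- slab (prism) thickness along its normal, in mm
        incidence_deg -- angle of incidence from the slab normal, in degrees
        n             -- refractive index of the prism material (1.49 assumed, ~acrylic)
        """
        theta_i = math.radians(incidence_deg)
        theta_t = math.asin(math.sin(theta_i) / n)   # Snell's law at the entrance face
        # Textbook slab displacement: d = t * sin(theta_i - theta_t) / cos(theta_t)
        return thickness_mm * math.sin(theta_i - theta_t) / math.cos(theta_t)

    if __name__ == "__main__":
        # Hypothetical example: 10-mm-thick prism, rays incident at 30 degrees
        print(f"shift = {parallel_shift(10.0, 30.0):.2f} mm")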

2.2 Rhomboid prism converted into double-sided prism sheets

A rhomboid prism can be divided into prismlets [Fig. 2(a)], which can then be used to form a double-faced prism sheet [Fig. 2(b)]. If one of the lateral edges of the prism angle is vertically aligned with the prism plane, all the light rays are shifted in the same direction. The total thickness can be made adjustable by separating the double-faced prism sheet into two single-faced prism sheets, as shown in Fig. 2(c). The distance of the shift can be adjusted by changing the deflection angle of the prismlets and the distance between the two prism sheets. Light dispersion can be ignored because of the small thickness of the prism sheets. Since the viewing angle of the IP/IV image is limited, light rays passing through the sloped plane from a different direction cause little error. The pair of sheets is rotated relative to the lens array as illustrated in Fig. 2(d).

Fig. 2. (a) Division of rhomboid prism into prismlets; (b) formation of double-faced prism sheet; (c) two single-faced prism sheets; (d) sheet rotation.
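
As a rough illustration of how the gap controls the shift, the lateral displacement accumulated between the two single-faced sheets can be modeled to first order as gap × tan(deflection angle); the in-sheet path and the prism pitch are ignored, so this is only a sketch of the trend, not the exact relation realized by the fabricated sheets. The 5° example angle is borrowed from the refracting angle quoted in Section 3.1 and the simplified model may overestimate the shift measured there.

    import math

    def sheet_pair_shift(gap_mm, deflection_deg):
        """First-order shift produced by a pair of single-faced prism sheets.

        Between the sheets the rays travel tilted by the prismlet deflection angle;
        the second sheet restores the original direction, so the net parallel shift
        grows roughly as gap * tan(deflection).  Sheet thickness and prism pitch
        are neglected in this simplified model.
        """
        return gap_mm * math.tan(math.radians(deflection_deg))

    if __name__ == "__main__":
        for gap in (2, 4, 6, 8, 10):   # gap between the sheets, in mm
            print(f"gap {gap:2d} mm -> shift ~ {sheet_pair_shift(gap, 5.0):.2f} mm")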

2.3 Prism sheet rotation

Rotating the prism sheets causes the light rays from the IV display to appear to rotate around their original positions; the diameter of the apparent rotation is equal to the distance of the shift. The radius of the apparent circular motion must be smaller than one-half the pitch of a lenslet.

Figure 3(a) shows that the rays are shifted downwards when the pair of sheets is in the starting position, i.e., horizontally aligned. Rotating the pair 90° counterclockwise shifts the rays to the right, as shown in Fig. 3(b). Figures 3(c) and 3(d) show the corresponding shifts when the pair is rotated to the 180° and 270° positions. The frequency of the cycle can be controlled by changing the rotation speed.

Fig. 3. Effects of rotating the pair of prism sheets counterclockwise: (a) at the starting position (0°), the sheets are horizontally aligned and the light rays are shifted downwards; (b) at 90°, the sheets are vertically aligned and the rays are shifted rightwards; (c) at 180°, the sheets are again horizontally aligned and the rays are shifted upwards; (d) at 270°, the sheets are again vertically aligned and the rays are shifted leftwards. The rotation radius can be adjusted by changing the distance between the sheets and the lens array or by changing the gap between the sheets.

Fig. 4. Light rays appear to rotate around their original positions when prism sheets are rotated.
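
The apparent motion of the exit rays can be written as a simple parametric model: following the text, the diameter of the circular motion equals the shift distance, and the offset direction rotates with the sheets (downwards at 0°, rightwards at 90°, and so on). The snippet below is a minimal sketch of that model together with the half-pitch constraint; the 1.016-mm lenslet pitch is taken from the prototype described in Section 3.1, and the example shift value is arbitrary.

    import math

    LENS_PITCH_MM = 1.016   # hexagonal lenslet pitch of the prototype (Sec. 3.1)

    def apparent_offset(shift_mm, sheet_angle_deg):
        """Offset of a ray from its original position as the sheet pair rotates.

        Following the text, the diameter of the apparent circular motion equals
        the shift distance, so the ray orbits its original position at radius
        shift/2.  At 0 deg the offset points downwards (Fig. 3(a)) and rotates
        counterclockwise with the sheets.  Returns (x, y) in mm.
        """
        r = shift_mm / 2.0
        a = math.radians(sheet_angle_deg)
        return (r * math.sin(a), -r * math.cos(a))

    def within_half_pitch(shift_mm):
        """The rotation radius must stay below one-half of a lenslet pitch."""
        return shift_mm / 2.0 < LENS_PITCH_MM / 2.0

    if __name__ == "__main__":
        for ang in (0, 90, 180, 270):
            print(ang, apparent_offset(0.5, ang))
        print("within half-pitch limit:", within_half_pitch(0.5))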

2.4 Synchronized display of elemental images corresponding to the orientation of the prism sheets

The elemental images can be calculated by computer or captured with a camera. To reconstruct the IV image, the displayed elemental images must be updated in synchrony with the orientation of the rotating prism sheets. The system configuration we used to investigate the effects of prism sheet rotation is diagrammed in Fig. 5. A photosensor monitors the rotation; a marker attached to the edge of the prism sheets allows the sensor to detect the rotation phase. An oscilloscope (Tektronix TDS1002B) displays both the output voltage and the waveform captured by the photosensor and converts the calibrated waveform into a digital signal. The signals corresponding to the rotating prism sheets are then transferred to a computer via a USB cable, where a program with a user interface analyzes them and calculates the corresponding rotation cycle of the prism sheets. The elemental images are displayed and updated according to the rotation cycle. This method improves the image quality without requiring adjustment of the distance between the lens array and the display or the pickup camera.

Fig. 5. Configuration of system used to investigate effects of prism sheet rotation for improving image quality.
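
A minimal sketch of this synchronization logic is given below. It assumes a photosensor that timestamps one marker pulse per revolution and a set of precomputed elemental-image frames, one per rotation phase bin; the function names, the number of phase bins, and the polling structure are hypothetical and are not taken from the actual control program.

    import time

    NUM_PHASES = 4   # hypothetical: one precomputed elemental-image set per quadrant

    def phase_bin(last_pulse_t, period_s, now):
        """Map the time elapsed since the last marker pulse to a phase bin 0..NUM_PHASES-1."""
        frac = ((now - last_pulse_t) % period_s) / period_s
        return int(frac * NUM_PHASES) % NUM_PHASES

    def sync_loop(read_pulse_time, show_image, image_sets):
        """Skeleton loop: read_pulse_time() returns the timestamp of the most recent
        photosensor pulse, show_image() pushes one elemental-image set to the display.
        Both are placeholders standing in for the real sensor readout and display update.
        """
        prev = read_pulse_time()
        period = None
        shown = -1
        while True:
            t = read_pulse_time()
            if t != prev:                 # a new revolution started: update the period
                period, prev = t - prev, t
            if period:
                ph = phase_bin(prev, period, time.time())
                if ph != shown:           # refresh only when the phase bin changes
                    show_image(image_sets[ph])
                    shown = ph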

3. Experimental

3.1 Setup and methods

We fabricated a pair of prism sheets as described above and mounted them in front of an IV display (Fig. 6). The distance between the sheets could be adjusted to match the optical parameters of the lens array. The sheets were made of translucent plastic with a prism angle of 11° and a refracting angle of 5°. The facet spacing was 0.508 mm, and each sheet was 2 mm thick. The gap between them was adjusted by turning the screws connecting them. A 6-V DC motor (MFA, RE-540/1) was used to rotate them. The maximum rotation speed without a load was 7500 rpm.

Each lenslet in the lens array was hexagonal with a 1.016-mm base, covering eight by seven pixels of the displayed image. The elemental images were calculated by computer.

We estimated the parameters of the prism sheets and the rotation speed on the basis of the IP display specifications. We adjusted the ray shift from 0.1 to 0.5 mm by adjusting the gap between the sheets from 2 to 10 mm.

Fig. 6. Prototype of rotating prism sheet device.
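
A quick consistency check of these parameters is sketched below. It uses only the numbers quoted in this section plus an assumed ~1/30 s persistence-of-vision figure, which is a typical value and not taken from the paper.

    # Back-of-envelope check of the prototype parameters (sketch only):
    # lens pitch 1.016 mm, ray shift 0.1-0.5 mm, motor speed up to 7500 rpm.
    LENS_PITCH_MM = 1.016
    MAX_RPM = 7500

    half_pitch = LENS_PITCH_MM / 2.0
    for shift in (0.1, 0.5):
        # the diameter of the apparent rotation equals the shift, so radius = shift/2
        print(f"shift {shift} mm -> rotation radius {shift / 2:.2f} mm "
              f"(must stay below {half_pitch:.3f} mm)")

    rev_time_s = 60.0 / MAX_RPM
    print(f"one revolution at {MAX_RPM} rpm takes {rev_time_s * 1000:.0f} ms, "
          f"well within an assumed ~33 ms persistence of vision")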

We also evaluated the maximum spatial resolution of the displayed images by using an IV image with Japanese characters [the lower part of Fig. 7(a)]. The projected images were taken using a digital camera (Nikon D1X, 3008×1536 pixels). The focal length and F-number of the camera lens were 50 mm and 16, respectively. The pupil diameter of the camera iris was about 3 mm, which is similar to that of the human eye in an ordinarily lit indoor environment.

3.2 Results

Figure 7(a) shows the original IV image; the shape of each lenslet is clearly evident. Placing the prism sheets in front of the display shifted the image and reduced its brightness slightly, as shown in Fig. 7(b). Figure 7(c) shows the result of rotation. The light rays appear to have moved in a circular manner as observed from the front of the display. When the reconstructed image is viewed with the naked eye, the characters can be seen clearly without any visible effects of the lenslet shape. Moreover, the intensity distribution is clearly smoother.

With the manufactured prototype device, the updating of the elemental images was not well enough synchronized with the rotation of the prism sheets to fully reproduce the IV image. The quality of the reconstructed image should be much better than that shown in the figure if a device with precise rotation-speed capture and synchronized image updating is used.

Fig. 7. Improved viewing resolution due to the use of rotated prism sheets: (a) original IV image; (b) prism sheets placed in front of image; (c) IV image viewed after prism sheets rotated.

A movie of the IV images without and with the rotated prism sheets is shown in Fig. 8. Since the IV image was purely three-dimensional, it was difficult to record using a conventional video recorder; the quality of the actual image was much better than that shown in the video.

Fig. 8. Movie of IV image without and with rotated prism sheets (2.4 MB). [Media 1]

4. Discussion and summary

The experimental results show that the quality of IV images can be improved using the proposed technique. This rotated prism sheet technique is a simple and useful way to improve the viewing resolution of IP/IV images without reducing 3-D aspects of the reconstructed images, such as image depth.

Chromatic aberration is a common problem when using prisms. Since we used two thin prism sheets with only a small gap between them, this aberration could be ignored. The IV viewing resolution could be further improved by adding an antireflection treatment to the prism sheets.

We plan to investigate the relationship between the parameters of the prism sheets and the IV image quality. Since the multifacet structure degrades the quality of the observed image [22], factors influencing the IV image, such as the viewing distance and the lenslet fill factor, should also be considered. Furthermore, we plan to improve the synchronized updating of the elemental images corresponding to the rotation of the prism sheets.

In summary, we have developed a technique that improves the viewing resolution of integral videography and eliminates the need to move the lenslet array. The combined use of rotated prism sheets and the corresponding captured or calculated elemental images is a promising approach to overcoming the upper resolution limit of IP/IV images.

Acknowledgment

This work was supported in part by a Grant-in-Aid for Scientific Research (17680037) from the Ministry of Education, Culture, Sports, Science and Technology of Japan and a Grant-in-Aid of the Strategic Information and Communications R&D Promotion Programme (062103006) from the Ministry of Internal Affairs and Communications of Japan (both to H. Liao).

References and links

1. M. G. Lippmann, “Epreuves reversibles donnant la sensation du relief,” J. Phys. 7, 821–825 (1908).
2. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. 36, 1598–1603 (1997).
3. J. Arai, F. Okano, H. Hoshino, and I. Yuyama, “Gradient-index lens-array method based on real-time integral photography for three-dimensional images,” Appl. Opt. 37, 2034–2045 (1998).
4. Y. Igarishi, H. Murata, and M. Ueda, “3D display system using a computer generated integral photograph,” Jpn. J. Appl. Phys. 17, 1683–1684 (1978).
5. H. Liao, S. Nakajima, M. Iwahara, E. Kobayashi, I. Sakuma, N. Yahagi, and T. Dohi, “Intra-operative real-time 3-D information display system based on integral videography,” Fourth International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2001), Lecture Notes in Computer Science 2208, 392–400 (2001).
6. S.-W. Min, B. Javidi, and B. Lee, “Enhanced three-dimensional integral imaging system by use of double display devices,” Appl. Opt. 42, 4186–4195 (2003).
7. S. Jung, J.-H. Park, H. Choi, and B. Lee, “Wide-viewing integral three-dimensional imaging by use of orthogonal polarization switching,” Appl. Opt. 42, 2513–2520 (2003).
8. S.-W. Min, S. Jung, J.-H. Park, and B. Lee, “Study for wide-viewing integral photography using an aspheric Fresnel-lens array,” Opt. Eng. 41, 2572–2576 (2002).
9. J.-S. Jang, F. Jin, and B. Javidi, “Three-dimensional integral imaging with large depth of focus by use of real and virtual image fields,” Opt. Lett. 28, 1421–1423 (2003).
10. H. Liao, Y. Ito, K. Matsumiya, K. Masamune, and T. Dohi, “Object based image rendering and synthesis for computer generated integral videography,” ACM SIGGRAPH 2006 (Boston, USA, 2006), research poster, CD-ROM.
11. H. Liao, M. Iwahara, T. Koike, N. Hata, I. Sakuma, and T. Dohi, “Scalable high-resolution integral videography autostereoscopic display by use of seamless multi-projection,” Appl. Opt. 44, 305–315 (2005).
12. J. S. Jang, Y. S. Oh, and B. Javidi, “Spatiotemporally multiplexed integral imaging projector for large-scale high-resolution three-dimensional display,” Opt. Express 12, 557–563 (2004).
13. H. Liao, M. Iwahara, N. Hata, I. Sakuma, T. Dohi, T. Koike, Y. Momoi, T. Minakawa, M. Yamasaki, F. Tajima, and H. Takeda, “High-resolution integral videography autostereoscopic display using multi-projector,” in Proceedings of the Ninth International Display Workshops (IDW ’02) (Hiroshima, Japan, 2002), pp. 1229–1232.
14. H. Liao, M. Iwahara, N. Hata, and T. Dohi, “High-quality integral videography using a multiprojector,” Opt. Express 12, 1067–1076 (2004).
15. B. Lee, S.-W. Min, and B. Javidi, “Theoretical analysis for three-dimensional integral imaging systems with double devices,” Appl. Opt. 41, 4856–4865 (2002).
16. S. Jung, J.-H. Park, B. Lee, and B. Javidi, “Viewing-angle-enhanced integral 3-D imaging using double display devices with masks,” Opt. Eng. 41, 2389–2390 (2002).
17. J. S. Jang and B. Javidi, “Improved viewing resolution of 3-D integral imaging with nonstationary micro-optics,” Opt. Lett. 27, 324–326 (2002).
18. A. Stern and B. Javidi, “Three-dimensional image sensing and reconstruction with time-division multiplexed computational integral imaging,” Appl. Opt. 42, 7036–7042 (2003).
19. J. S. Jang, Y. S. Oh, and B. Javidi, “Spatiotemporally multiplexed integral imaging projector for large-scale high-resolution three-dimensional display,” Opt. Express 12, 557–563 (2004).
20. S. H. Hong and B. Javidi, “Improved resolution 3-D object reconstruction using computational integral imaging with time multiplexing,” Opt. Express 12, 4579–4588 (2004).
21. S. Kishk and B. Javidi, “Improved resolution 3-D object sensing and recognition using time multiplexed computational integral imaging,” Opt. Express 11, 3528–3541 (2003).
22. M. Martínez-Corral, B. Javidi, R. Martínez-Cuenca, and G. Saavedra, “Multifacet structure of observed reconstructed integral images,” J. Opt. Soc. Am. A 22, 597–603 (2005).

OCIS Codes
(100.6890) Image processing : Three-dimensional image processing
(110.2990) Imaging systems : Image formation theory
(220.2740) Optical design and fabrication : Geometric optical design

ToC Category:
Imaging Systems

History
Original Manuscript: February 13, 2007
Revised Manuscript: April 3, 2007
Manuscript Accepted: April 4, 2007
Published: April 5, 2007

Citation
Hongen Liao, Takeyoshi Dohi, and Makoto Iwahara, "Improved viewing resolution of integral videography by use of rotated prism sheets," Opt. Express 15, 4814-4822 (2007)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-15-8-4814




Supplementary Material


Media 1: MPG (2418 KB)
