Optics Express

  • Editor: C. Martijn de Sterke
  • Vol. 20, Iss. 21 — Oct. 8, 2012
  • pp: 23755–23768

Analysis of image distortion based on light ray field by multi-view and horizontal parallax only integral imaging display

Hee-Seung Kim, Kyeong-Min Jeong, Sung-In Hong, Na-Young Jo, and Jae-Hyeung Park

Optics Express, Vol. 20, Issue 21, pp. 23755–23768 (2012)
http://dx.doi.org/10.1364/OE.20.023755



Abstract

Three-dimensional image distortion caused by a mismatch between autostereoscopic displays and contents is analyzed. For a given three-dimensional object scene, the original light ray field encoded in the contents and the one deformed by the autostereoscopic display are calculated. From the deformation of the light ray field, the distortion of the resultant three-dimensional image is deduced. The light ray field approach enables a generalized distortion analysis across different autostereoscopic display techniques. The analysis is verified experimentally by applying multi-view contents to a multi-view display with non-matching parameters and to a horizontal parallax only integral imaging display.

© 2012 OSA

1. Introduction

Autostereoscopic displays provide observers with three-dimensional (3D) images without requiring special glasses [1,2]. Active research on autostereoscopic displays has been conducted at many research groups. Among the various autostereoscopic display techniques, the multi-view (MV) display is broadly considered the most practical method. An MV display provides observers with high-quality 3D images at several pre-defined viewpoints through special ray-guiding optics such as a parallax barrier or a lenticular lens sheet [3]. Many contents and products have been manufactured for MV displays. Integral imaging (InIm) is another autostereoscopic display technique. An InIm display generates 3D images by reconstructing the spatio-angular ray distribution using a lens array [3–7]. While the MV display is designed to provide different views to different viewpoints, the InIm display is designed to reconstruct the ray distribution in the observer space. This difference gives the InIm display distinguishable characteristics, including natural and continuous motion parallax. Motivated by these advantages, considerable effort has recently been made to commercialize InIm displays. Due to insufficient panel resolution, however, these approaches usually compromise by abandoning vertical parallax and reducing the number of rays per display cell to a level comparable to that of MV displays. With these compromises, the horizontal parallax only (HPO) InIm display is now considered another autostereoscopic display technique of practical value.

Image distortion is one of the primary issues in autostereoscopic displays. Only when image contents captured with the correct geometry are applied to the corresponding display can the 3D images be presented without distortion. In practice, however, the capturing condition of contents generated using multiple cameras or computer graphics techniques can deviate from the ideal one, resulting in distortion of the displayed 3D images. The situation becomes worse for the HPO InIm display: since the InIm display requires orthographic image contents, the usual perspective image contents captured with cameras cannot be used in principle.

In this paper, we analyze contents-induced 3D image distortion using the light ray field concept. The light ray field approach enables a unified, simple analysis of the 3D image distortion in both MV and HPO InIm displays, and it can potentially be extended to various other ray-based autostereoscopic displays. In the following sections, the concept of the light ray field and the distortion analysis based on it are presented.

2. Light ray field

2.1 Concept

The light ray field is defined as the spatio-angular distribution of light rays in a plane, L(x, y, θx, θy; zr), as shown in Fig. 1, where (x, y) represents the spatial position in a reference plane at z = zr and (θx, θy) represents the propagation angle measured with respect to the x and y axes, respectively [22–24].
Fig. 1 Concept of light ray field.
Hereafter, only the (x, θx) distribution is considered, omitting (y, θy) for simplicity. The reference plane is also assumed to be located at zr = 0 without loss of generality. Figure 2 shows an example of the light ray field of a single object point at a position (x1, z1). As shown in Fig. 2, the corresponding light ray field is represented by a straight line in the x-θx plot that intersects the x-axis at x1 with a slope of −1/z1. For a general 3D object composed of a 3D point cloud, the light ray field is a collection of such slanted lines.
Fig. 2 Light ray field of a single 3D point object.
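As a minimal illustration (my own sketch, not code from the paper; the function name and sample positions are assumptions), the single-point light ray field above can be written directly from the relation θx = (x1 − x)/z1:

```python
# Sketch of Section 2.1: the light ray field of a point object at (x1, z1).
# In the paraxial parameterization used in the text, a ray from the point
# crosses the reference plane z = 0 at position x with angle
#     theta_x = (x1 - x) / z1,
# i.e. a straight line of slope -1/z1 in the x-theta_x plot that
# intersects the x-axis at x = x1.

def point_ray_field(x1, z1, xs):
    """(x, theta_x) samples of the light ray field of a point at (x1, z1)."""
    return [(x, (x1 - x) / z1) for x in xs]

samples = point_ray_field(x1=5.0, z1=100.0, xs=[0.0, 5.0, 10.0])
# The sample at x = x1 = 5.0 lies on the x-axis (theta_x = 0), and all
# samples fall on a line of slope -1/z1 = -0.01.
```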

2.2 Light ray field captured in MV contents

Currently, most 3D contents are stereoscopic two-view or MV images. Two or more cameras are usually placed with a uniform spacing Δxc at a specific distance from the 3D object scene to capture the contents. Figure 3 shows the rays captured in the MV contents, where xcn represents the lateral position of the n-th camera and zc is the camera distance from the reference plane.
Fig. 3 Rays captured at several camera positions.
The x-θx plot of the corresponding light ray field at the reference plane is shown in Fig. 4. As shown in Fig. 4, a single view image corresponds to a single line with a slope of −1/zc in the light ray field. Therefore, MV contents with N views can be represented by N slanted lines in the light ray field. Note that only a small linear portion of the entire light ray field is included in the MV contents.
Fig. 4 Light ray field in the MV contents.
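This sampling of the light ray field by MV capture can be sketched as follows (illustrative only; the function name is mine):

```python
# Sketch of Section 2.2: MV contents sample the light ray field along N
# slanted lines. The n-th camera at (x_cn, z_c) = (n * dx_c, z_c) records,
# for each reference-plane position x, the single ray angle
#     theta_x = (n * dx_c - x) / z_c,
# so each view image is one line of slope -1/z_c in the x-theta_x plot.

def captured_angles(x, n_views, dx_c, z_c):
    """Ray angles recorded at reference-plane position x by cameras n = 0..N-1."""
    return [(n * dx_c - x) / z_c for n in range(n_views)]

# Six views with the 29 mm spacing and 600 mm distance that appear later in
# the experimental section:
angles = captured_angles(0.0, 6, 29.0, 600.0)
```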

2.3 Light ray field reconstructed by MV and HPO InIm display

The MV display and the HPO InIm display can be distinguished by their different light ray field reconstruction characteristics. Figure 5 shows the ray reconstruction of the two displays: the MV display reconstructs the light ray field such that the reconstructed rays converge at a few viewpoints, while the HPO InIm display reconstructs a few parallel light ray groups with a constant angular spacing Δθx.
Fig. 5 Ray reconstruction characteristics.

Figure 6 shows the x-θx plot of the reconstructed light ray field at the ray guiding optics plane of the two display methods. In the case of the MV display, shown in Fig. 6(a), the reconstructed light ray field is a collection of slanted lines, each of which corresponds to the light rays converging at a viewpoint. The slope and the x-axis intersection of the lines are −1/zv and xvn, respectively, where zv and xvn are the distance and the lateral position of the n-th viewpoint. On the other hand, in the case of the HPO InIm display, the reconstructed light ray field is a collection of horizontal lines, reflecting the parallel ray reconstruction, as shown in Fig. 6(b). Note that an ideal 3D display would reconstruct the whole x-θx plane; Fig. 6 shows that the MV and HPO InIm displays only reconstruct a few linear portions of the entire light ray field. Which portion is reconstructed depends on the display method and system specifications.
Fig. 6 Reconstructed light ray field.
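The two reconstruction characteristics of Fig. 6 can be summarized in two one-line relations (my own sketch; names are assumptions):

```python
# Sketch of Section 2.3 / Fig. 6: the two reconstruction characteristics.
# An MV display reconstructs rays converging at viewpoints (x_vn, z_v):
#     theta_x = (x_vn - x) / z_v   -> slanted lines of slope -1/z_v.
# An HPO InIm display reconstructs parallel ray groups:
#     theta_x = n * dtheta_x       -> horizontal lines (slope 0).

def mv_ray_angle(x, x_vn, z_v):
    """Angle of the reconstructed MV ray through (x, 0) toward viewpoint n."""
    return (x_vn - x) / z_v

def hpo_ray_angle(x, n, dtheta_x):
    """Angle of the n-th parallel ray group of an HPO InIm display (x-independent)."""
    return n * dtheta_x
```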

3. Analysis of image distortion

3.1 Light ray field and image distortion

3D images can be displayed without distortion only when the right contents are used for the right displays. In terms of the light ray field, distortion can be avoided only when the light ray field captured in the contents is reproduced as-is by the display panel. Figure 7 shows one example.
Fig. 7 Light ray field reconstruction by correct and incorrect display panels.
Figure 7(a) shows the light ray field in 4-view image contents captured at a distance zc and spacing Δxc. When these contents are supplied to a 4-view MV display with viewing distance zv = zc and viewpoint spacing Δxv = Δxc, the light ray field is reconstructed correctly, as shown in Fig. 7(b), and the 3D image can be observed without distortion at the designed viewpoints. On the other hand, when the specifications of the display deviate from this optimal condition, the light ray field is reconstructed with deformation, as shown in Fig. 7(c), and the 3D image is distorted accordingly, even when the observer is located at a designated viewpoint of the display. In the following, the distortions are analyzed in detail.

3.2 Image distortion caused by applying MV image contents to a non-matching MV display panel

Figure 9 shows four cases of applying MV contents to MV displays.
Fig. 9 Image distortions when MV image contents are applied to non-matching MV display panel.
When the camera positions in the capturing process coincide with the designed viewpoints of the display, as shown in Fig. 9(a), the light ray field is reconstructed as it is, i.e. l′(x, θx) = l(x, θx), and thus the 3D image is displayed without distortion, i.e. f′(x; z) = f(x; z).

Figure 9(c) shows the situation where the designed viewpoint distance zv of the display is shorter than the camera distance zc of the capture system by a distance b. In this case, a light ray propagating from a position x in the reference plane to a camera position (xcn, zc) with an angle (xcn − x)/zc is reconstructed as a light ray propagating from the same position x in the reference plane but to a different position (xvn, zv) = (xcn, zc − b) with an angle (xvn − x)/zv = (xcn − x)/(zc − b). Therefore, the light ray field is deformed by l′(x, θx) = l(x, ((zc − b)/zc)θx), giving

$$l'(x,\theta_x) = l\!\left(x,\, \frac{z_c-b}{z_c}\theta_x\right) = f\!\left(x + \frac{z_c-b}{z_c}\theta_x z;\, z\right), \tag{6}$$

where Eqs. (1) and (2) are used. In the same manner as the above case, substituting u′ = x + θxz′ into Eq. (6) gives

$$f'(u';z') = f\!\left(u' + \theta_x\!\left(\frac{z_c-b}{z_c}z - z'\right);\, z\right). \tag{7}$$

In order for Eq. (7) to hold for all θx, we must have z′ = (zc − b)z/zc, and thus Eq. (7) reduces to

$$f'(x;z) = f\!\left(x;\, \frac{z_c}{z_c-b}z\right), \tag{8}$$

where u′ and z′ are replaced by x and z. Equation (8) indicates that each content depth slice at z is reconstructed at a different distance (zc − b)z/zc, compressing the 3D image longitudinally.
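A quick numeric check of the longitudinal compression in Eq. (8), using the distance-mismatch parameters reported later in the experimental section (zc = 1200 mm, b = 600 mm); this is my own sketch, not the authors' code:

```python
# Eq. (8): a content depth slice at z is displayed at (z_c - b) * z / z_c,
# i.e. the 3D image is compressed longitudinally by the factor (z_c - b)/z_c.

def displayed_depth_distance_mismatch(z, z_c, b):
    return (z_c - b) * z / z_c

z_c, b = 1200.0, 600.0  # camera distance and distance mismatch (Section 4)
print(displayed_depth_distance_mismatch(20.0, z_c, b))   # -> 10.0 (mm)
print(displayed_depth_distance_mismatch(-20.0, z_c, b))  # -> -10.0 (mm)
```

The two plane objects at ±20 mm are therefore expected to appear at ±10 mm, consistent with the roughly 10 mm depth reported in Section 4.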

Finally, Fig. 9(d) shows the situation where the lateral spacing Δxv between designed viewpoints is wider than the camera spacing Δxc by a factor k. A light ray propagating from a position x in the reference plane to a camera position (xcn, zc) = (nΔxc, zc) with an angle (nΔxc − x)/zc is reconstructed as a light ray propagating from the same position x in the reference plane but to a different position (xvn, zv) = (nΔxv, zc) = (nkΔxc, zc) with an angle (nkΔxc − x)/zc, giving the relation

$$l'\!\left(x,\, \frac{nk\Delta x_c - x}{z_c}\right) = l\!\left(x,\, \frac{n\Delta x_c - x}{z_c}\right). \tag{9}$$

Using θx = (nkΔxc − x)/zc, Eq. (9) can be written as

$$l'(x,\theta_x) = l\!\left(x,\, \theta_x - \frac{(k-1)n\Delta x_c}{z_c}\right) = l\!\left(x,\, \frac{1}{k}\!\left(\theta_x - \frac{(k-1)x}{z_c}\right)\right), \tag{10}$$

where nΔxc = (zcθx + x)/k is used in the last equality. From Eqs. (1) and (2), we have

$$l'(x,\theta_x) = f\!\left(x + \frac{1}{k}\!\left(\theta_x - \frac{(k-1)x}{z_c}\right)z;\, z\right). \tag{11}$$

Again, substituting u′ = x + θxz′ into Eq. (11) gives

$$f'(u';z') = f\!\left(\left(1 - \frac{(k-1)z}{kz_c}\right)u' + \theta_x\!\left(\frac{z}{k} + \frac{(k-1)zz'}{kz_c} - z'\right);\, z\right). \tag{12}$$

Equation (12) holds for all θx when

$$z = \frac{kz'}{1 + (k-1)z'/z_c}. \tag{13}$$

By substituting Eq. (13) into Eq. (12), we finally have

$$f'(x;z) = f\!\left(\left(1 - \frac{(k-1)z}{z_c + (k-1)z}\right)x;\; \frac{kz}{1 + (k-1)z/z_c}\right), \tag{14}$$

where u′ and z′ are replaced by x and z. Equation (14) indicates that the 3D image of a hexahedral object is distorted into a trapezoidal shape with both longitudinal and lateral deformations.
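Inverting the depth mapping in Eq. (14) gives the depth at which a content slice appears. A numeric check with the spacing-mismatch parameters from the experimental section (k = 2, zc = 600 mm), as my own sketch:

```python
# Eq. (14) maps a displayed depth z to the content depth
#     k * z / (1 + (k - 1) * z / z_c).
# Solving for the displayed depth of a content slice at z gives
#     z_displayed = z / (k - (k - 1) * z / z_c).

def displayed_depth_spacing_mismatch(z, k, z_c):
    return z / (k - (k - 1) * z / z_c)

print(displayed_depth_spacing_mismatch(20.0, 2.0, 600.0))  # -> ~10.17 (mm)
```

A content slice at 20 mm is thus displayed at about 10.2 mm, again consistent with the roughly 10 mm depth reported in Section 4.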

3.3 Image distortion caused by using MV image contents to HPO InIm display panel

Unlike the MV display, the HPO InIm display reconstructs parallel light rays without specific pre-defined viewpoints. Since the MV contents are a collection of light rays converging at several camera positions, severe 3D image distortion is expected when the MV contents are used for the HPO InIm display without modification.

Figure 10 shows the corresponding light ray field representation.
Fig. 10 Image distortion when MV image contents are applied to HPO InIm display panel.
A light ray propagating from a position x in the reference plane to a camera position (xcn, zc) with an angle (xcn − x)/zc is reconstructed as a light ray emanating from the same position x in the reference plane but with a different, constant angle θxn, where θxn is the n-th parallel ray reconstruction angle of the HPO InIm display. This can be represented by the relation

$$l'(x, \theta_{xn}) = l\!\left(x,\, \frac{x_{cn} - x}{z_c}\right). \tag{15}$$

Assuming θxn = nΔθx and xcn = nΔxc, where Δθx is the angular ray spacing of the HPO InIm display, and treating the sampled angle θxn as a continuous variable θx (i.e. θxn = nΔθx = θx, xcn = nΔxc = (θx/Δθx)Δxc), Eq. (15) becomes

$$l'(x,\theta_x) = l\!\left(x,\, \frac{\theta_x\Delta x_c - x\Delta\theta_x}{z_c\Delta\theta_x}\right). \tag{16}$$

Again, from Eqs. (1) and (2), we have

$$l'(x,\theta_x) = f\!\left(x + \frac{\theta_x\Delta x_c - x\Delta\theta_x}{z_c\Delta\theta_x}z;\, z\right), \tag{17}$$

which reduces to

$$f'(x;z) = f\!\left(\left(1 - \frac{\Delta\theta_x z}{\Delta x_c + \Delta\theta_x z}\right)x;\; \left(\frac{z_c\Delta\theta_x}{\Delta x_c + \Delta\theta_x z}\right)z\right). \tag{18}$$

Equation (18) indicates that when MV contents are applied to an HPO InIm display panel, the 3D images are distorted with depth scaling and a depth-dependent lateral magnification. Note that Eq. (18) agrees with the result of T. Saishu et al. [21]. Figure 10 visualizes the distortion for a hexahedral object.
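Inverting the depth mapping in Eq. (18) in the same way gives the displayed depth of a content slice. In the numeric sketch below, Δxc = 13 mm (= 65/5 mm) follows the experimental section, while zc = 600 mm and Δθx = 0.05 rad are assumed values for illustration only (the excerpt does not state them for this case):

```python
# Eq. (18) maps a displayed depth z to the content depth
#     z_c * dtheta_x * z / (dx_c + dtheta_x * z).
# Solving for the displayed depth of a content slice at z gives
#     z_displayed = dx_c * z / (dtheta_x * (z_c - z)).
# NOTE: z_c = 600 mm and dtheta_x = 0.05 rad are ASSUMED here; only
# dx_c = 13 mm comes from the experimental section.

def displayed_depth_hpo(z, dx_c, dtheta_x, z_c):
    return dx_c * z / (dtheta_x * (z_c - z))

print(displayed_depth_hpo(20.0, 13.0, 0.05, 600.0))  # -> ~8.97 (mm)
```

Under these assumed parameters, a content slice at 20 mm appears at roughly 9 mm, illustrating the strong depth compression that the experiment in Section 4 observes.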

4. Experimental verification

The analysis was verified experimentally. In the experiment, 6-view MV contents and HPO InIm contents were synthesized for two plane objects located at +20 mm and −20 mm, as shown in Fig. 11.
Fig. 11 Experimental setup.
These contents were loaded onto an MV display panel or an HPO InIm display panel to display the 3D image. In order to check the longitudinal distortion expected from Eqs. (8), (14), and (18), a real cylindrical object was located at different distances in front of the display panel. By comparing the motion parallax of the cylindrical object with that of the displayed 3D image, the depth of the displayed image can be estimated.

In the first experiment, the 3D image distortion occurring when incorrect MV contents are applied to an MV display was examined. Between lateral and longitudinal distortions, only the longitudinal distortion was measured, since the lateral distortion is much smaller, as revealed in Eqs. (8), (14), and (18) and visualized in the rightmost part of Fig. 9. Among the three distortion cases shown in Fig. 9, only the two cases of viewpoint distance mismatch (Fig. 9(c)) and viewpoint spacing mismatch (Fig. 9(d)) were considered, because the viewpoint shift shown in Fig. 9(b) does not introduce longitudinal distortion.

In the experiment, the MV display panel was implemented using a slanted parallax barrier with a slant angle of tan−1(2/3) measured from the vertical axis. The pixel pitch of the panel was 294 μm. The designed viewpoint distance zv and spacing Δxv of the MV display were 600 mm and 29 mm, respectively. Two sets of 6-view MV contents were prepared: one synthesized with zc = 2zv = 1200 mm (i.e. b = 600 mm) and Δxc = Δxv = 29 mm (i.e. k = 1), and the other with zc = zv = 600 mm (i.e. b = 0 mm) and Δxc = 0.5Δxv = 14.5 mm (i.e. k = 2). For these parameters, Eqs. (8) and (14) indicate that in both cases the depth of the displayed ‘apple’ image will be reduced to around 10 mm.

The experimental results are shown in Figs. 12 and 13.
Fig. 12 Experimental result: MV contents to MV display, real cylindrical object located at +20 mm.
Fig. 13 Experimental result: MV contents to MV display, real cylindrical object located at +10 mm.
All images in Figs. 12 and 13 were captured at a distance of 600 mm from the panel. In Fig. 12, the motion parallax of the displayed ‘apple’ image is compared with that of the real cylindrical object located 20 mm in front of the display panel. Note that the horizontal offset of the captured images was adjusted so that the horizontal position of the cylindrical object is maintained in all images; hence the motion parallax can be observed by comparing the position of the left end of the ‘apple’ image. From Fig. 12, it can be observed that the motion parallax of the real cylindrical object is larger than that of the displayed ‘apple’ image in the two incorrect cases (left and center columns in Fig. 12), revealing that the depth of the displayed ‘apple’ image is smaller than the original depth of 20 mm. Figure 13 shows the results when the cylindrical object is located at 10 mm. Now the motion parallax of the displayed ‘apple’ image in the two incorrect cases is the same as that of the real cylindrical object, confirming that the reduced depth of the displayed ‘apple’ object is around 10 mm, as expected. In the right column of Fig. 13, it can also be observed that the motion parallax of the displayed ‘apple’ image in the ideal case is larger than that of the real cylindrical object at 10 mm, since the image is displayed at the original depth of 20 mm.

The experimental results are shown in Figs. 14 and 15.
Fig. 14 Experimental result: MV contents to HPO InIm display, real cylindrical object located at +20 mm.
Fig. 15 Experimental result: MV contents to HPO InIm display, real cylindrical object located at +9 mm.
In Fig. 14, the motion parallax of the displayed ‘apple’ image is compared with that of the real cylindrical object located at 20 mm. By observing the relative shift between the real cylindrical object and the displayed ‘apple’ image, it can be seen that the real cylindrical object moves more than the displayed ‘apple’ object in the case of MV image contents with Δxc = 65/5 mm, which indicates that the displayed ‘apple’ object is reconstructed with reduced depth. On the contrary, in the case of ideal HPO InIm image contents, the relative motion is negligible, confirming that the ‘apple’ image has a depth similar to that of the cylindrical object. In Fig. 15, the real cylindrical object was located at 9 mm. As expected, the relative motion becomes negligible in the case of MV image contents with Δxc = 65/5 mm, indicating that the depth of the ‘apple’ image is reduced to around 9 mm.

5. Conclusion

Acknowledgment

This work was supported by a research grant from LG Display Co., Ltd. This work was also supported by the research grant of Chungbuk National University in 2010.

References and links

1. T. Okoshi, “Three-dimensional displays,” Proc. IEEE 68(5), 548–564 (1980). [CrossRef]
2. P. Benzie, J. Watson, P. Surman, I. Rakkolainen, K. Hopf, H. Urey, V. Sainov, and C. von Kopylow, “A survey of 3DTV display: techniques and technologies,” IEEE Trans. Circ. Syst. Video Tech. 17(11), 1647–1658 (2007). [CrossRef]
3. J. Hong, Y. Kim, H.-J. Choi, J. Hahn, J.-H. Park, H. Kim, S.-W. Min, N. Chen, and B. Lee, “Three-dimensional display technologies of recent interest: principles, status, and issues [Invited],” Appl. Opt. 50(34), H87–H115 (2011). [CrossRef] [PubMed]
4. A. Stern and B. Javidi, “Three-dimensional image sensing, visualization, and processing using integral imaging,” Proc. IEEE 94(3), 591–607 (2006). [CrossRef]
5. J.-H. Jung, S.-G. Park, Y. Kim, and B. Lee, “Integral imaging using a color filter pinhole array on a display panel,” Opt. Express 20(17), 18744–18756 (2012). [CrossRef]
6. J.-H. Park, K. Hong, and B. Lee, “Recent progress in three-dimensional information processing based on integral imaging,” Appl. Opt. 48(34), H77–H94 (2009). [CrossRef] [PubMed]
7. K. Yamamoto, T. Mishina, R. Oi, T. Senoh, and M. Okui, “Cross talk elimination using an aperture for recording elemental images of integral photography,” J. Opt. Soc. Am. A 26(3), 680–690 (2009). [CrossRef] [PubMed]
8. D. B. Diner and D. H. Fender, Human Engineering in Stereoscopic Viewing Devices (Plenum, 1994).
9. A. Woods, T. Docherty, and R. Koch, “Image distortions in stereoscopic video systems,” Proc. SPIE 1915, 36–48 (1993). [CrossRef]
10. L. M. J. Meesters, W. A. IJsselsteijn, and P. J. H. Seuntiens, “A survey of perceptual evaluations and requirements of three-dimensional TV,” IEEE Trans. Circ. Syst. Video Tech. 14(3), 381–391 (2004). [CrossRef]
11. K.-H. Lee, M.-J. Lee, Y.-S. Yoon, and S.-K. Kim, “Incorrect depth sense due to focused object distance,” Appl. Opt. 50(18), 2931–2939 (2011). [CrossRef] [PubMed]
12. C. Ricolfe-Viala, A.-J. Sanchez-Salmeron, and E. Martinez-Berti, “Calibration of a wide angle stereoscopic system,” Opt. Lett. 36(16), 3064–3066 (2011). [CrossRef] [PubMed]
13. J.-Y. Son, Y. N. Gruts, K.-D. Kwack, K.-H. Cha, and S.-K. Kim, “Stereoscopic image distortion in radial camera and projector configurations,” J. Opt. Soc. Am. A 24(3), 643–650 (2007). [CrossRef] [PubMed]
14. V. Saveljev, “Image and observer regions in 3D displays,” J. Inform. Disp. 11(2), 68–75 (2010). [CrossRef]
15. B.-R. Lee, J.-J. Hwang, and J.-Y. Son, “Characteristics of composite images in multiview imaging and integral photography,” Appl. Opt. 51(21), 5236–5243 (2012). [CrossRef] [PubMed]
16. T. Horikoshi, S.-I. Uehara, T. Koike, C. Kato, K. Taira, G. Hamagishi, K. Mashitani, T. Nomura, A. Yuuki, N. Watanabe, Y. Hisatake, and H. Ujike, “Characterization of 3D image quality on autostereoscopic displays: proposal of interocular 3D purity,” SID Tech. Dig. 41(1), 331–334 (2010). [CrossRef]
17. S.-I. Uehara, T. Horikoshi, C. Kato, T. Koike, G. Hamagishi, K. Mashitani, T. Nomura, K. Taira, A. Yuuki, N. Umezu, N. Watanabe, Y. Hisatake, and H. Ujike, “Characterization of motion parallax on multi-view/integral-imaging displays,” SID Tech. Dig. 41(1), 661–664 (2010). [CrossRef]
18. H. Hoshino, F. Okano, H. Isono, and I. Yuyama, “Analysis of resolution limitation of integral photography,” J. Opt. Soc. Am. A 15(8), 2059–2065 (1998). [CrossRef]
19. J.-H. Park, Y. Kim, J. Kim, S.-W. Min, and B. Lee, “Three-dimensional display scheme based on integral imaging with three-dimensional information processing,” Opt. Express 12(24), 6020 (2004). [CrossRef] [PubMed]
20. M. Kawakita, H. Sasaki, J. Arai, F. Okano, K. Suehiro, Y. Haino, M. Yoshimura, and M. Sato, “Geometric analysis of spatial distortion in projection-type integral imaging,” Opt. Lett. 33(7), 684–686 (2008). [CrossRef] [PubMed]
21. T. Saishu, K. Taira, R. Fukushima, and Y. Hirayama, “Distortion control in a one-dimensional integral imaging autostereoscopic display system with parallel optical beam groups,” SID Tech. Dig. 35(1), 1438–1441 (2004). [CrossRef]
22. A. Stern and B. Javidi, “Ray phase space approach for 3-D imaging and 3-D optical data representation,” J. Display Technol. 1(1), 141–150 (2005). [CrossRef]
23. J.-H. Park and K.-M. Jeong, “Frequency domain depth filtering of integral imaging,” Opt. Express 19(19), 18729–18741 (2011). [CrossRef] [PubMed]
24. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” ACM Trans. Graph. (Proc. SIGGRAPH) 25, 924–934 (2006).

OCIS Codes
(100.2960) Image processing : Image analysis
(100.6890) Image processing : Three-dimensional image processing

ToC Category:
Image Processing

History
Original Manuscript: August 24, 2012
Revised Manuscript: September 21, 2012
Manuscript Accepted: September 22, 2012
Published: October 2, 2012

Citation
Hee-Seung Kim, Kyeong-Min Jeong, Sung-In Hong, Na-Young Jo, and Jae-Hyeung Park, "Analysis of image distortion based on light ray field by multi-view and horizontal parallax only integral imaging display," Opt. Express 20, 23755-23768 (2012)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-20-21-23755

