OSA's Digital Library

Virtual Journal for Biomedical Optics


  • Editors: Andrew Dunn and Anthony Durkin
  • Vol. 8, Iss. 5 — Jun. 6, 2013

Multiple ray cluster rendering for interactive integral imaging system

Shaohui Jiao, Xiaoguang Wang, Mingcai Zhou, Weiming Li, Tao Hong, Dongkyung Nam, Jin-Ho Lee, Enhua Wu, Haitao Wang, and Ji-Yeun Kim


Optics Express, Vol. 21, Issue 8, pp. 10070-10086 (2013)
http://dx.doi.org/10.1364/OE.21.010070


Abstract

In this paper, we present an efficient computer-generated integral imaging (CGII) method, called multiple ray cluster rendering (MRCR). Based on MRCR, an interactive integral imaging system is realized that provides accurate 3D images adapted to changing observer positions in real time. The MRCR method generates all elemental image pixels within a single rendering pass by ray reorganization of multiple ray clusters and 3D content duplication. It is compatible with various graphic contents, including meshes, point clouds, and medical data. Moreover, a multi-sampling method is embedded in MRCR to produce anti-aliased 3D images. To the best of our knowledge, the MRCR method outperforms existing CGII methods in both speed and display quality. Experimental results show that the proposed CGII method achieves real-time computational speed for large-scale 3D data with about 50,000 points.

© 2013 OSA

1. Introduction

Integral imaging technology [1] is one of the most promising methods allowing full-color, full-parallax, auto-stereoscopic 3D images to be observed. The technique comprises a capture part and a display part. In the capture part, 3D information is captured through a lens array and recorded as an elemental image array (EIA). In the display part, the 3D images are integrated from the elemental images through the lens array [2]. The capture part can be replaced with computer-generated integral imaging (CGII) technology. CGII is now important and widely used in integral imaging systems; it obtains the EIA by computer graphics techniques with a virtual lens array whose parameters are determined from the real lens array of the integral imaging display.

Early CGII methods, such as point retracing rendering (PRR) [3], render the EIA point by point by retracing the displayed 3D object. Such methods are simple and widely used, but very slow. Several more efficient CGII methods have been proposed, such as multiple viewpoint rendering (MVR) [4], parallel group rendering (PGR) [5, 6], and viewpoint vector rendering (VVR) [7–9]. MVR generates each elemental image sequentially by rendering the perspective image captured by the corresponding virtual lens. Its computational time increases linearly with the number of micro lenses, so MVR only allows off-line processing when the number of lens elements is large. PGR is a more efficient algorithm in which the EIA is obtained from directional scenes, i.e., imaginary scenes observed from certain directions. PGR reduces the number of scene rendering passes to the number of displayed pixels in one elemental image. The method is fast, but it is limited to the focused display mode. VVR also generates directional scenes like PGR; the difference is that VVR generates more directional scenes, which correspond to a larger elemental image. Hence VVR is suitable for all display modes, including the real, virtual, and focused modes. However, both VVR and PGR need multiple rendering passes, which deteriorates the speed performance. They are therefore limited to small-scale 3D scenes and small EIAs.

A fast CGII algorithm is essential because the ability to visualize and manipulate 3D data interactively is of great importance in analyzing and interpreting the data [10]. Inadequate computational speed greatly impedes the applications of integral imaging technology; for example, low speed may lead to cumbersome manipulation or even no feedback in interactive settings.

An effective means of improving the computational speed of CGII is to exploit hardware with high computational power, such as graphics processing units (GPUs). A good example is the image space parallel computing method [11], which calculates the pixel values of the EIA on the GPU, where multiple threads run in parallel. In each thread, the method calculates the intersection point between the corresponding ray and the 3D volume data. This solution significantly decreases the computation time, but it is not suitable for the widely used polygon-based graphic contents, for which ray-polygon intersection requires complex tests and the ray rendering cannot be efficiently parallelized.

In this paper, we propose a novel real-time CGII method, called the multiple ray cluster rendering (MRCR) method, to realize an interactive integral imaging system for almost all types of 3D graphics data. The proposed CGII method exploits the programmability of the GPU, on which multiple clusters of perspective rays are rendered in parallel. The MRCR method can generate an EIA of 1000 × 1000 pixels at about 50 frames per second (fps) from large-scale graphic data. Rather than grouping parallel rays as in PGR, the MRCR method clusters and manipulates perspective rays to achieve an optimized viewing zone, which allows users to perceive the 3D image over a maximal viewing range with the same integral imaging system [12]. It is worth noting that the multiple ray clusters are adaptively generated according to the view distance determined by the observers. Moreover, an anti-aliasing method is included in our rendering method to improve the display quality.

2. Multiple ray cluster rendering method

In this section, we first introduce our interactive integral imaging system in section 2.1. Then the principle of the MRCR method is described in section 2.2. Lastly, the implementation of MRCR on the GPU is given in section 2.3.

2.1 Configuration of the interactive integral imaging system

Figure 1 shows the setup of our interactive integral imaging system. The system mainly consists of two parts: the optical system for displaying the 3D image and the real-time calculation system using a GPU.

Fig. 1 An illustration of our interactive integral imaging system.

The optical system is based on a traditional integral imaging setup, consisting of an LCD panel and a lens array that bends the rays emitted from the EIA to form the 3D image. To adaptively produce the 3D image according to the observers' positions, the optical system also includes a depth camera, a PrimeSense 3D sensor [15], which captures a depth image of the observers. Our interactive integral imaging system is inspired by the tracking integral imaging system [13, 14], which uses an infrared camera to track viewer locations so that the EIA can be integrated for the different viewers' directions. The difference is that our work acquires the depth value of the viewers' locations as an input for generating the EIA, and achieves much faster EIA generation, fast enough to dynamically reconstruct the 3D image according to the viewers' locations.

The calculation system implements our MRCR method. First, the multiple ray clusters (introduced in section 2.2) are generated from the input parameters: the lens array parameters, the display panel parameters, and the view distance determined from the depth image of the observers. Second, the multiple ray clusters are efficiently rendered on the GPU from the graphic model. Finally, the rendering result is composited to form the displayed EIA. For each frame of the EIA, the view distance used by the calculation system is updated according to the observers' locations captured by the depth camera. The proposed calculation system can thus adaptively render the EIA for moving observers at different view distances in real time. In a traditional integral imaging system, the viewing zone is static once the parameters, such as the focal length of the lens array, the gap between the display device and the lens array, and the size of the EIA, are fixed [16]; in contrast, the viewing zone in our interactive integral imaging system changes dynamically and can be optimally enlarged.
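The per-frame flow described above can be sketched as follows. This is a minimal sketch; the function names and the depth-to-distance rule are hypothetical stand-ins (not the paper's implementation), and the two GPU stages are passed in as placeholder callables.

```python
# Sketch of the per-frame update loop of the calculation system.
# All names here are illustrative; render_mrc and composite_eia stand in
# for the one-pass MRC rendering and the EIA composition stages.
import statistics

def update_view_distance(depth_samples):
    # Estimate the observers' view distance D from depth-camera samples
    # (here simply their median; the real system segments the observers).
    return statistics.median(depth_samples)

def render_frame(depth_samples, lens_params, panel_params,
                 render_mrc, composite_eia):
    D = update_view_distance(depth_samples)              # step 1: update D
    clusters = render_mrc(lens_params, panel_params, D)  # step 2: one-pass MRC
    return composite_eia(clusters)                       # step 3: form the EIA
```

In the real system, render_mrc would correspond to the one-pass MRC rendering of section 2.2 and composite_eia to the pixel re-arrangement of section 2.3.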

2.2 Multiple ray cluster calculation

To achieve 3D imaging with a maximized viewing zone at an arbitrary view distance, our interactive integral imaging system calculates the multiple ray clusters (MRC), which reconstruct the optimal light field rays (OLFRs) of the viewing zone control method [12]. In a conventional integral imaging system, the light field rays are arranged as multiple groups of parallel rays, and therefore the size of the elemental image equals the size of the elemental lens, as shown in Fig. 2(b). When a viewer stands near the display, he or she perceives crosstalk because the sight rays may reach pixels under neighboring elemental lenses. To overcome this problem, the viewing zone control method was proposed [12–14]. In this method, the elemental image is slightly larger than the elemental lens and is not exactly under the corresponding elemental lens but laterally shifted by a small amount, as shown in Fig. 2(a). By constructing the OLFRs of the viewing zone control method, no crosstalk is perceived even when the viewer stands near the display, because all perceived pixels are rendered for the viewing zone. For each frame, the OLFRs are generated according to the view distance determined by the current observers.

Fig. 2 Viewing zone of integral imaging system with light field rays: (a) the proposed integral imaging system; (b) conventional integral imaging system.

The principle of the proposed MRC is illustrated in Fig. 3. The light rays that converge at a point on the viewing width line are grouped into one ray cluster; three example clusters (C1, C2, and C3) are indicated in Fig. 3(a). The rays are grouped into clusters to enable efficient EIA rendering, since the multiple rays in one cluster can be configured as one shear perspective view frustum (SPVF), as shown in Fig. 3(b).

Fig. 3 Illustration of the multiple ray cluster: (a) ray clusters in the integral imaging system; (b) multiple perspective view frustums for rendering.

In the MRC method, we first compute the viewing width W and the elemental image width E with the following equations:

W = p × (D + g) / g,
(1)

E = W × g / D,
(2)

where p stands for the pitch of a lens element in the lens array, g is the gap between the lens array and the display panel, and D is the current view distance.

The number of ray clusters n equals the number of pixels in one elemental image, rounded to the nearest integer. Let n_x represent the ray cluster number in the horizontal direction; n_x is determined by Eq. (3):

n_x ≈ E / p_d,   n_x ∈ N,
(3)

where p_d is the pixel pitch of the display panel, and n_x must be a non-zero integer. The ray cluster number in the vertical direction, n_y, is calculated similarly.
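As a numeric illustration of Eqs. (1)-(3), the parameter computation can be sketched as follows. The parameter values in the example are made up for illustration; they are not the configuration of the paper's system.

```python
# Numeric sketch of Eqs. (1)-(3): viewing width W, elemental image width E,
# and horizontal ray-cluster count n_x.

def mrc_parameters(p, g, D, pd):
    """p: lens pitch, g: lens-to-panel gap, D: view distance,
    pd: display pixel pitch (all in the same length unit)."""
    W = p * (D + g) / g           # Eq. (1): viewing width
    E = W * g / D                 # Eq. (2): elemental image width
    nx = max(1, round(E / pd))    # Eq. (3): nearest non-zero integer
    return W, E, nx

# e.g. 1 mm lens pitch, 3 mm gap, 600 mm view distance, 0.1 mm pixel pitch
W, E, nx = mrc_parameters(p=1.0, g=3.0, D=600.0, pd=0.1)
```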

The rendering parameters of each SPVF in the horizontal direction include the viewpoint V_i and the view angle θ_i, which are given by the following equations:

V_i = −W/2 + W/(n_x − 1) × i,
(4)

θ_i = arctan((L/2 − p/2 − V_i) / D) − arctan((−L/2 + p/2 − V_i) / D),
(5)

where L stands for the width of the whole lens array, and i is the index of the ray cluster in the horizontal direction, with i ∈ [0, n_x). With these rendering parameters, each shear perspective view is generated with an image resolution of L/p.
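The per-cluster frustum parameters can be sketched as below, using the reconstructed forms of Eqs. (4)-(5) above; the example values are illustrative only.

```python
import math

# Sketch of Eqs. (4)-(5): viewpoint V_i and view angle theta_i of the i-th
# shear perspective view frustum.

def spvf_params(i, W, nx, L, p, D):
    Vi = -W / 2 + W / (nx - 1) * i                    # Eq. (4)
    theta = (math.atan((L / 2 - p / 2 - Vi) / D)      # Eq. (5)
             - math.atan((-L / 2 + p / 2 - Vi) / D))
    return Vi, theta

V0, t0 = spvf_params(i=0, W=201.0, nx=10, L=100.0, p=1.0, D=600.0)
```

A quick sanity check of the reconstruction: clusters i and n_x − 1 − i sit symmetrically about the display axis, so they should have equal view angles.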

Considering the rendering process for calculating the n shear perspective views, a straightforward algorithm is to perform n rendering passes, where a rendering pass means the whole procedure for rendering one image. Assuming the rendering time of a single view image is t, the time cost of the n-pass rendering is n × t. For efficient calculation, we propose a method that calculates the MRC in one perspective view frustum: all the ray clusters are translated to one joint viewpoint V, as shown in Fig. 4.

Fig. 4 MRC calculation in one perspective view frustum.

By rendering the above perspective view frustum, the MRC can be computed in only one rendering pass. Compared to the straightforward algorithm, the computational time of MRC is significantly reduced. An analysis of the reduced time is given in section 2.3.

To acquire the perspective view image, the rendering parameters are defined as:

V = (0, 0),
(6)

θ = 2 arctan((L − p) × n_x / (2D)).
(7)

The resolution of the perspective view image is L × n_x / p. The EIA is computed by pixel rearrangement (introduced in section 2.3) of the perspective view image. It is worth noting that the desired EIA must be scaled by a scaling factor s, defined as:

s = E / (p_d × n_x).
(8)

The reason for scaling the composition result is to correct a calculation error: the MRC method takes the resolution of the elemental image to be an integer, but in some situations the actual resolution of the elemental image is not an integer, so the scaling step is essential for achieving the correct EIA.
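The joint-frustum parameters and the scaling factor can be sketched as below, using the reconstructed forms of Eqs. (6)-(8) above; the example values are illustrative only.

```python
import math

# Sketch of Eqs. (6)-(8): joint viewpoint V, joint view angle theta, and
# the EIA scaling factor s.

def joint_frustum(L, p, nx, D, pd, E):
    V = (0.0, 0.0)                                 # Eq. (6): joint viewpoint
    theta = 2 * math.atan((L - p) * nx / (2 * D))  # Eq. (7): joint view angle
    s = E / (pd * nx)                              # Eq. (8): EIA scaling factor
    return V, theta, s

V, theta, s = joint_frustum(L=100.0, p=1.0, nx=10, D=600.0, pd=0.1, E=1.005)
```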

2.3 Rapid calculation of EIA on GPU

In recent years, vertex, geometry, and pixel shaders on the GPU have been widely used to speed up rendering and improve its quality [18]. Our method makes good use of shader programming for efficient EIA rendering. The EIA computation consists of two rendering passes: the first computes the MRC, and the second composites the EIA from the rendered MRC. The computation flowchart is shown in Fig. 5.

Fig. 5 Flowchart of EIA computation process on GPU.

In the first rendering pass, the displayed 3D content is duplicated by the geometry shader on the GPU, as shown in Fig. 6.

Fig. 6 Description of the geometry duplication used in the MRC rendering.

The duplicated 3D content is transformed by the translation matrix T, which (acting on homogeneous row vectors) is defined as:

T_{i',j'} = [ 1/n_x                    0                        0   0
              0                        1/n_y                    0   0
              0                        0                        1   0
              −W/2 + W/(n_x−1) × i'    −W/2 + W/(n_y−1) × j'    0   1 ],
(9)

where i' and j' stand for the horizontal and vertical indices of the duplicated 3D content, which is rendered to obtain the corresponding ray cluster. Here, i' = n_x − i and j' = n_y − j.

Assume the displayed 3D content is M, which can be expressed as:

M = {v_1, v_2, …, v_m},   v_k ∈ M,
(10)

where v_k is a 3D point in M, given in homogeneous coordinates by Eq. (11):

v_k = [x_k  y_k  z_k  1].
(11)

Specifically, each 3D point v_{i',j',k} belonging to a cloned 3D content M_{i',j'} = {v_{i',j',1}, v_{i',j',2}, …, v_{i',j',m}} is generated by the following equation:

v_{i',j',k} = v_k · T_{i',j'}.
(12)

By the above method, n new 3D contents are generated for the n ray clusters. The MRCs can be calculated by rendering the n new 3D contents in one perspective view frustum (shown in Fig. 6) within only one rendering pass. Although the geometry data to be rendered increases by a factor of n, the rendering time for generating the MRC is still lower than that of the straightforward n-pass method, since the transmission time between CPU and GPU is greatly reduced. In addition, the duplication is implemented after the vertex shader, so we only need to calculate the translated and rotated points of one 3D content. In all, the proposed GPU method relieves the heavy burden of the traditional multi-pass rendering method.

The initial rendering result of the multiple ray clusters is shown in Fig. 8(b); undesirable pixels can be observed due to the overlap of different ray clusters in one perspective view frustum. To eliminate these artifacts, a pixel-translation image processing step, shown in Fig. 8(a), is applied after the super-sampling process: each view result corresponding to a ray cluster is translated to its correct position.

Fig. 8 Description of (a) the pixel translation calculation, (b) the render result of MRC, and (c) the rectified result of MRC.

As shown in Fig. 8(a), R_i stands for the center coordinate of the rendering result of the i-th ray cluster:

R_i = V_i.
(13)

S_i represents the center coordinate of the correct position of the ray cluster:

S_i = −(L − p) × n_x / 2 + (L − p) × (n_x − i) + (L − p) / 2.
(14)

Thus, the offset O_i of each view result is calculated by the following equation:

O_i = S_i − R_i = −V_i + (L − p) × (n_x + 1) / 2 − (L − p) × i.
(15)
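The offset calculation, using the reconstructed forms of Eqs. (13)-(15) above, can be sketched as follows; the sample values are illustrative only.

```python
# Sketch of Eqs. (13)-(15): the pixel-translation offset O_i that rectifies
# the one-pass render result of cluster i.

def cluster_offset(i, Vi, L, p, nx):
    Ri = Vi                                                    # Eq. (13)
    Si = -(L - p) * nx / 2 + (L - p) * (nx - i) + (L - p) / 2  # Eq. (14)
    return Si - Ri                                             # Eq. (15)
```

Expanding Eq. (14) and subtracting R_i reproduces the closed form of Eq. (15) term by term, which is a useful consistency check on the reconstruction.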

In the second rendering pass, the EIA is calculated by interleaving the rendered ray clusters acquired in the first pass. Figure 9 depicts the pixel re-arrangement method used for the interleaving. To speed up this stage, the re-arrangement is also implemented in parallel in the fragment shader on the GPU.

Fig. 9 Illustration of the pixel re-arrangement method.
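One simple way to picture the interleaving is a 1-D grayscale sketch in which pixel u of cluster i lands at position u × n_x + i of the EIA row, so each elemental image collects one pixel from every directional cluster. This illustrates the idea of the re-arrangement, not the paper's exact fragment-shader mapping.

```python
# 1-D sketch of the second-pass pixel re-arrangement (interleaving).

def interleave_row(clusters):
    """clusters: list of nx equal-length rendered rows -> one EIA row."""
    nx, lp = len(clusters), len(clusters[0])
    eia = [0] * (nx * lp)
    for i, row in enumerate(clusters):
        eia[i::nx] = row          # cluster i fills every nx-th EIA pixel
    return eia
```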

After the two rendering passes, the EIA is generated. Figure 10 shows an example including mesh data with textures, the set of elemental images generated by the proposed method, and the reconstructed 3D image.

Fig. 10 Example of (a) 3D data (mesh with 4798 vertices and texture), (b) elemental image set generated by the proposed method (Media 1), and (c) 3D image optically reconstructed using the interactive integral imaging display system.

3. Experimental results

We have implemented the proposed MRCR method with the computing parameters and experiment environment listed in Table 1 (Experiment Environment and Computing Parameters). The configuration parameters of our integral imaging system are given in Table 2 (Integral Imaging System Characteristics).

The experimental setup of our interactive integral imaging system is shown in Fig. 11, in which the user's motion is captured by the depth camera to manipulate the displayed 3D image. In each frame, the EIA is generated by the MRCR method and displayed on the interactive integral imaging system; the 3D image is successfully reconstructed, and motion parallax can be observed, as shown in Fig. 12.

Fig. 11 Interactive integral imaging system with (a) optical experimental setup and (b) user-controlled 3D image (Media 2). (The tiled lens array consists of 2x2 small lens arrays.)

Fig. 12 Displayed EIA accompanied with the reconstructed view images and the 3D image. (Media 3)

To test the adaptability of our MRC method, multiple types of 3D input data were experimented with, including triangle meshes with and without texture data, volumetric medical data (whose slices can be represented as triangle meshes with alpha textures), and scanned point data. Figure 13 shows the 3D input data, the generated elemental images, and the displayed 3D images.

Fig. 13 3D objects and displayed 3D images in experiments: (a) Dragon: 50,000 vertices, (b) Bunny: 2503 vertices, (c) MRI: 128x128x40, (d) Buddha: 49,990 points, (e) CT: 128x128x256.

Figure 14 shows that the data size of the 3D content influences the speed of EIA calculation. To analyze this relationship, we generated a set of EIAs from the same 3D content with different numbers of vertices. The measured calculation speed in Fig. 15 clearly indicates that the computational speed decreases as the number of vertices increases; however, the speed drop slows down as the vertex count grows.

Fig. 15 The measurement result of speed performance with different 3D data size.

It is worth noting that the proposed MRCR method on the GPU is faster than the same method implemented on the CPU, for two reasons: (a) the 3D content duplication is accelerated on the GPU; and (b) to render the MRC on the CPU, the method must compute all the translated and rotated vertices of the duplicated 3D contents, whereas the GPU method only computes the vertices of one 3D content.

To evaluate the enlargement of the viewing zone by our interactive integral imaging system, cases with different view distances were analyzed; the resulting viewing zones are given in Fig. 16. As shown in Figs. 16(a)-16(c), viewers who stand close to the lens array can observe a correct 3D image within the viewing zone. However, if the light rays are organized in the manner used in a conventional integral imaging system, as shown in Fig. 16(f), the viewers cannot observe a correct 3D image because they stand outside the viewing zone. Figure 16 thus illustrates that our interactive integral imaging system achieves an optimally enlarged viewing zone through the OLFRs.

Fig. 16 Viewing zone results in our interactive integral imaging system at a view distance of (a) 0.6 m, (b) 0.8 m, (c) 1.0 m, (d) 2.0 m, and (f) the viewing zone in a conventional integral imaging system.

Experimental parameters for our interactive integral imaging system are given in Table 3 (Experimental parameters for the interactive integral imaging system), including the image location, the observer's view distance, and the viewing angle. We give the depth range for the image location, where 0 mm represents the depth of the lens array. The calculated viewing angle is illustrated in Fig. 16. For comparison, we also calculated the viewing angle of a traditional integral imaging system at a view distance of 2 meters; the result is 8.8 degrees. Moreover, when the view distance is smaller than 1.025 meters, the viewing angle becomes zero, meaning that viewers cannot observe a correct 3D image when they stand closer than 1.025 meters.

From Fig. 17 and Table 3, we can see that an enlarged viewing zone and viewing angle are demonstrated by our interactive integral imaging system.

Fig. 17 The reconstructed 3D images from (a) the traditional EIA rendering method and (b) our MRC method.

4. Conclusion

A new type of CGII method, the MRCR method, has been presented and demonstrated. In this method, the MRCs are calculated and reconstructed in only one rendering pass on the GPU. This drastically reduces the processing time, making a high-resolution, real-time integral imaging system possible. Moreover, since the light field rays are optimized in this method, a maximal viewing zone is obtained by the rendering process. As an experiment, an interactive integral imaging system based on the MRCR method with an EIA resolution of 1000 × 1000 pixels was implemented, in which 3D images are rendered in real time according to the user's position and motion captured by a depth camera. In future work, we will present more real-time applications of the integral imaging system and further enlarge the viewing zone of our interactive integral imaging system.

Acknowledgments

The 3D content data used in our experiments are taken from the Stanford University Computer Graphics Laboratory.

References and links

1. G. Lippmann, “La photographie integrale,” C.R. Acad. Sci. 146, 446–451 (1908).
2. J.-H. Park, G. Baasantseren, N. Kim, G. Park, J. M. Kang, and B. Lee, “View image generation in perspective and orthographic projection geometry based on integral imaging,” Opt. Express 16(12), 8800–8813 (2008), http://www.opticsinfobase.org/oe/abstract.cfm?uri=oe-16-12-8800. [CrossRef] [PubMed]
3. Y. Igarashi, H. Murata, and M. Ueda, “3D display system using a computer generated integral photography,” Jpn. J. Appl. Phys. 17(9), 1683–1684 (1978). [CrossRef]
4. M. Halle, “Multiple viewpoint rendering,” SIGGRAPH’98, Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, 243–254 (1998).
5. S.-W. Min, J. Kim, and B. Lee, “New characteristics equation of three-dimensional integral imaging system and its applications,” Jpn. J. Appl. Phys. 44(2), L71–L74 (2005). [CrossRef]
6. R. Yang, X. Huang, and S. Chen, “Efficient rendering of integral images,” SIGGRAPH’05, Proceedings of the 32nd Annual Conference on Computer Graphics and Interactive Techniques, 44 (2005).
7. S.-W. Min, K. S. Park, B. Lee, Y. Cho, and M. Hahn, “Enhanced image mapping algorithm for computer-generated integral imaging system,” Jpn. J. Appl. Phys. 45(28), L744–L747 (2006). [CrossRef]
8. B.-N.-R. Lee, Y. Cho, K. S. Park, S.-W. Min, J.-S. Lim, M. C. Whang, and K. R. Park, “Design and implementation of a fast integral image rendering method,” International Conference on Electronic Commerce, 135–140 (2006). [CrossRef]
9. K. S. Park, S.-W. Min, and Y. Cho, “Viewpoint vector rendering for efficient elemental image generation,” IEICE Trans. Inf. Syst. E90-D, 231–241 (2007).
10. F. P. Brooks, “What’s real about virtual reality?” IEEE Comput. Graph. Appl. 19(6), 16–27 (1999). [CrossRef]
11. K.-C. Kwon, C. Park, M.-U. Erdenebat, J.-S. Jeong, J.-H. Choi, N. Kim, J.-H. Park, Y.-T. Lim, and K.-H. Yoo, “High speed image space parallel processing for computer-generated integral imaging system,” Opt. Express 20(2), 732–740 (2012), http://www.opticsinfobase.org/oe/abstract.cfm?uri=oe-20-2-732. [CrossRef] [PubMed]
12. R. Fukushima, K. Taira, T. Saishu, and Y. Hirayama, “Novel viewing zone control method for computer generated integral 3-D imaging,” Proceedings of SPIE – IS&T Electronic Imaging, SPIE Vol. 5291, 81–92 (2004). [CrossRef]
13. G. Park, J.-H. Jung, K. Hong, Y. Kim, Y.-H. Kim, S.-W. Min, and B. Lee, “Multi-viewer tracking integral imaging system and its viewing zone analysis,” Opt. Express 17(20), 17895–17908 (2009), http://www.opticsinfobase.org/oe/abstract.cfm?uri=oe-19-5-4129. [CrossRef] [PubMed]
14. G. Park, J. Hong, Y. Kim, and B. Lee, “Enhancement of viewing angle and viewing distance in integral imaging by head tracking,” Digital Holography and Three-Dimensional Imaging, OSA Technical Digest, DWB27 (1990).
15. PrimeSense 3D sensor: http://www.primesense.com/solutions/sensor/.
16. H. Choi, Y. Kim, J.-H. Park, S. Jung, and B. Lee, “Improved analysis on the viewing angle of integral imaging,” Appl. Opt. 44(12), 2311–2317 (2005). [CrossRef] [PubMed]
17. J.-D. Foley, D. Van, Feiner, and Hughes, Computer Graphics: Principles and Practice, 2nd ed. (Addison-Wesley, 1990).
18. R. Fernando, GPU Gems: Programming Techniques, Tips and Tricks for Real-Time Graphics (Addison-Wesley, 2004).
19. F. de Sorbier, V. Nozick, and V. Biri, “GPU rendering for autostereoscopic displays,” 4th International Symposium on 3D Data Processing, Visualization and Transmission (3DPVT’08), (2008).
20. F. de Sorbier, V. Nozick, and H. Saito, “Multi-view rendering using GPU for 3-D displays,” Computer Games, Multimedia and Allied Technology (CGAT’10), (2010). [CrossRef]

OCIS Codes
(100.6890) Image processing : Three-dimensional image processing
(120.2040) Instrumentation, measurement, and metrology : Displays

ToC Category:
Image Processing

History
Original Manuscript: January 23, 2013
Revised Manuscript: March 25, 2013
Manuscript Accepted: April 10, 2013
Published: April 16, 2013

Virtual Issues
Vol. 8, Iss. 5 Virtual Journal for Biomedical Optics

Citation
Shaohui Jiao, Xiaoguang Wang, Mingcai Zhou, Weiming Li, Tao Hong, Dongkyung Nam, Jin-Ho Lee, Enhua Wu, Haitao Wang, and Ji-Yeun Kim, "Multiple ray cluster rendering for interactive integral imaging system," Opt. Express 21, 10070-10086 (2013)
http://www.opticsinfobase.org/vjbo/abstract.cfm?URI=oe-21-8-10070




Supplementary Material


» Media 1: MOV (2760 KB)     
» Media 2: MOV (2234 KB)     
» Media 3: MOV (703 KB)     

