Optics Express

  • Editor: C. Martijn de Sterke
  • Vol. 20, Iss. 2 — Jan. 16, 2012
  • pp: 732–740

High speed image space parallel processing for computer-generated integral imaging system

Ki-Chul Kwon, Chan Park, Munkh-Uchral Erdenebat, Ji-Seong Jeong, Jeong-Hun Choi, Nam Kim, Jae-Hyeung Park, Young-Tae Lim, and Kwan-Hee Yoo


Optics Express, Vol. 20, Issue 2, pp. 732-740 (2012)
http://dx.doi.org/10.1364/OE.20.000732



Abstract

In an integral imaging display, the computer-generated integral imaging method is widely used to create the elemental images from given three-dimensional object data. Long processing times, however, have been problematic, especially when the three-dimensional object data set or the number of elemental lenses is large. In this paper, we propose an image space parallel processing method, implemented in the Open Computing Language (OpenCL), for the rapid generation of elemental image sets from large three-dimensional volume data. Using the proposed technique, it is possible to realize a real-time interactive integral imaging display system for 3D volume data constructed from computed tomography (CT) or magnetic resonance imaging (MRI) data.

© 2012 OSA

1. Introduction

Integral imaging technology is distinguished from other three-dimensional (3D) display methods in that it can display full-parallax, full-color, auto-stereoscopic 3D images and can be implemented on existing two-dimensional (2D) monitor devices. An integral imaging system can be divided into a pickup part, which captures the elemental images of 3D objects, and a display part, which integrates the elemental images into 3D images using a lens array. An elemental image is a 2D perspective image of the object formed by an elemental lens in the lens array. The elemental images differ from one another because each elemental lens has a different position relative to the 3D objects. The 3D object information stored as the set of elemental images is optically reconstructed as 3D images in the display part [1,2]. The computer-generated integral imaging (CGII) technique is a computational substitute for the optical pickup part. CGII creates the set of elemental images using computer graphics techniques with the parameters of a virtual lens array, without a real optical system.

Several methods have been proposed for CGII [3–9], including point retracing rendering (PRR) [4], multiple viewpoint rendering (MVR) [5], parallel group rendering (PGR) [6], and viewpoint vector rendering (VVR) [7–9]. PRR is a simple method in which the set of elemental images is drawn point by point to retrace the display image. It is easy to implement but unsuitable for real-time processing because of its heavy computational requirements. MVR treats the rendering of a set of perspective images as a unit and generates each elemental image with computer graphics libraries such as OpenGL [10]. Its performance, however, depends on the number of elemental lenses and the data size of the 3D object. A method that structures the 3D object data with an Octree before applying MVR has been proposed to improve processing efficiency, but real-time rendering of large 3D volume data has not been demonstrated [11]. PGR exploits the viewing characteristics of the focused mode, in which each elemental lens appears as a single pixel to the observer. In PGR, a set of elemental images is obtained from imaginary scenes observed in certain directions, called directional scenes. The number of directional scenes equals the number of display pixels in the elemental lens area, and the directions correspond to the vectors from each display pixel to the center of the corresponding elemental lens. Elemental image generation is therefore fast and only weakly affected by the number of elemental lenses and 3D object polygons. However, PGR can be used only in the focused mode, not in other display modes. VVR is similar to PGR in that it generates the elemental images from directional scenes. The larger the elemental images, the more directional scenes must be generated, so the computation of the elemental images slows; the method can also suffer from distortion of the displayed 3D images.

In this paper, we propose a new method that we call the image space parallel processing technique. The proposed method enhances speed by using a graphics processing unit (GPU) based parallel processing scheme implemented in OpenCL [12,13]. The existing method [11] also uses GPU processing to improve rendering speed. However, the use of OpenCL and the assignment of one thread per elemental image pixel in the proposed method reduce the rendering time far below that of the existing method. Using the proposed method, we achieved 24.39 frames per second (fps) when creating the elemental images for a large 512 × 512 × 512 volume with a lens array of 200 × 200 lenses and 3 × 3 pixels per elemental image.

2. Image space parallel computing of CGII

2.1 Principle of conventional CGII computation

CGII generates the elemental images for given 3D volume data and virtual lens array parameters. Figure 1 shows the concept of elemental image generation.

Fig. 1 (a) Geometry of elemental image generation and (b) an example of elemental images.

For each elemental lens in the array, the corresponding perspective image of the 3D object is calculated. In this calculation, blurring from defocus and diffraction from the finite aperture size are intentionally ignored, as they would degrade the quality of the optically reconstructed 3D images in the later display stage. Hence the perspective image calculated in CGII is a simple perspective projection of the 3D object onto the pickup plane through a pinhole located at the elemental lens position. The calculated perspective image is cropped to the elemental image area of the corresponding elemental lens to avoid overlap between the images of neighboring elemental lenses. In the usual configuration, the elemental image area has the same lateral location and size as the corresponding elemental lens. This process is repeated for all elemental lenses to obtain the full set of elemental images. Figure 1(b) shows an example of the elemental images generated by CGII.
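To make the projection step concrete, the following minimal sketch computes where one object point lands on the pickup plane through one pinhole. The coordinate conventions (lens plane at z = 0, pickup plane at z = -g, object points at z > 0) and all names are illustrative assumptions, not the authors' code.

/* Pinhole projection of an object point onto the pickup plane.
   Assumed conventions: lens plane at z = 0, pickup plane at z = -g,
   object point at z > 0 in front of the lens array. */
typedef struct { float x, y, z; } Vec3;

Vec3 project_to_pickup(Vec3 p, Vec3 c, float g)
{
    /* similar triangles through the pinhole at lens center c */
    Vec3 q;
    q.x = c.x + g * (c.x - p.x) / p.z;
    q.y = c.y + g * (c.y - p.y) / p.z;
    q.z = -g;
    return q;
}

The cropping step described above then keeps q only when it falls inside the elemental image area of the lens centered at c.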

2.2 Proposed image space parallel processing technique

In the input stage, the pixel information of the display panel and the parameters of the virtual lens array, including the focal length of an elemental lens, the central depth plane, and the number of elemental lenses, are entered. The 3D volume object data is also loaded, along with translation, rotation, and scale parameters supplied through user mouse and keyboard input. These 3D volume object parameters can be controlled interactively, so that the corresponding elemental images are generated in real time. Parameters of the system configuration, such as the gap between the lens array and the pickup plane, are also entered at this stage.
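As a rough illustration, the input-stage parameters listed above might be grouped as in the following sketch; the structure and field names are hypothetical, chosen for readability rather than taken from the authors' implementation.

/* Hypothetical container for the input-stage parameters. */
typedef struct {
    int   panel_w, panel_h;    /* display panel resolution (pixels)  */
    float focal_length;        /* focal length of an elemental lens  */
    float central_depth;       /* central depth plane position       */
    int   lens_nx, lens_ny;    /* number of elemental lenses         */
    float gap;                 /* lens array to pickup plane gap     */
    float translate[3];        /* user-controlled volume transform:  */
    float rotate[3];           /*   updated interactively via mouse  */
    float scale[3];            /*   and keyboard input               */
} CGIIInput;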

In the calculation stage, the virtual lens array properties and the viewpoint direction are computed from the input parameters. OpenGL graphics library functions render objects as seen from a camera in 3D virtual space, where both objects and camera can be translated, rotated, and scaled using 4 × 4 homogeneous matrices [14]. In this stage, the view matrix and the transformation matrix are also computed, encoding the virtual lens array properties and the viewpoint direction, respectively.

The view matrix is a 4 × 4 matrix containing the orientation and location of an elemental lens. The transformation matrix is also a 4 × 4 matrix, containing the translation, rotation, and scale of the 3D volume data. The elements of the view and transformation matrices and their physical meaning are illustrated in Fig. 3.

Fig. 3 View matrix of an elemental lens and transformation matrix of the 3D volume data, where CTn is the center of the (i, j)-th elemental lens and nCD is the direction vector of the lens array.

Note that in the general case, where the elemental lenses are aligned in a plane, the view matrices of the elemental lenses differ only in their fourth column, which encodes the lateral position of each lens. Therefore, instead of creating a view matrix for every elemental lens separately, the proposed method prepares a single view matrix for one elemental lens together with a separate array of the lateral positions of all elemental lenses. This reduces the amount of data transferred to the OpenCL kernel in the later rendering process; a sketch of this layout follows below. Note that the transformation matrix for the 3D volume data is common to all elemental lenses.
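A minimal sketch of this data reduction, under the assumption of a regular lens pitch and an array centered at the origin, is given below; only two floats per lens, rather than a full 4 × 4 matrix, then travel to the OpenCL kernel.

/* Fill a per-lens offset table: each lens contributes only its
   lateral (x, y) center, since the view matrices differ only in
   their fourth column. Assumes regular pitch, centered array. */
void build_lens_offsets(float *offsets, int nx, int ny, float pitch)
{
    for (int j = 0; j < ny; ++j) {
        for (int i = 0; i < nx; ++i) {
            int k = 2 * (j * nx + i);
            offsets[k + 0] = (i - (nx - 1) * 0.5f) * pitch;
            offsets[k + 1] = (j - (ny - 1) * 0.5f) * pitch;
        }
    }
}

For the 200 × 200 array used in the experiments, this replaces about 2.5 MB of per-lens matrices with roughly 0.3 MB of offsets.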

The pixel value calculation in each thread is performed with an image based algorithm in the proposed method. Figure 5 shows the concept of the image based algorithm.

Fig. 5 Concept of image based rendering for generating an elemental image.

For a given pixel location, the corresponding ray is first calculated. The intersection points between this ray and the 3D volume data are then computed using the view and transformation matrices together with the positional data of the corresponding elemental lens. Finally, the maximum or average of the colors and intensities at the intersection points in the 3D volume data is assigned to the elemental image pixel handled by the thread. Hence, in the proposed image based algorithm, the mapping is performed from the elemental image pixel to the 3D volume object. In an object based algorithm, where the mapping is performed from each 3D object point to the elemental image pixels, the mapping procedure may run multiple times for a single elemental image pixel, since rays from multiple object points can fall onto the same elemental image pixel area. The image based algorithm eliminates this possibility and assigns the proper pixel value in a single mapping pass, reducing the processing time; a kernel sketch follows below. Note also that the speed of the proposed image based rendering is not affected by the position of the 3D volume data relative to the lens array; the elemental images of volume objects in the real field and in the virtual field are rendered at the same speed.
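The OpenCL kernel below sketches this per-pixel scheme for a maximum intensity projection: one work-item per elemental image pixel, a ray from the pixel through its lens center, and a fixed-step march through the volume texture. The argument layout, sampler settings, step count and length, and the use of a single inverse transform to reach normalized texture coordinates are all assumptions made for illustration.

__constant sampler_t samp = CLK_NORMALIZED_COORDS_TRUE |
                            CLK_ADDRESS_CLAMP | CLK_FILTER_LINEAR;

__kernel void render_elemental(__global uchar *out,
                               read_only image3d_t volume,
                               __global const float2 *lens_pos,
                               const float16 inv_xform, /* world -> texture, row-major */
                               const float gap,         /* lens array to pickup plane  */
                               const float pitch,       /* elemental lens pitch        */
                               const int ei_res,        /* pixels per elemental image  */
                               const int lens_nx)       /* lenses per row              */
{
    int gx = get_global_id(0), gy = get_global_id(1);
    int li = gx / ei_res, lj = gy / ei_res;            /* lens indices */
    float2 lens = lens_pos[lj * lens_nx + li];         /* lens center  */

    /* pixel center on the pickup plane, inside this lens's area */
    float px = lens.x + (((gx % ei_res) + 0.5f) / ei_res - 0.5f) * pitch;
    float py = lens.y + (((gy % ei_res) + 0.5f) / ei_res - 0.5f) * pitch;

    /* ray from the pixel through the lens center toward the volume */
    float3 org = (float3)(px, py, -gap);
    float3 dir = normalize((float3)(lens.x, lens.y, 0.0f) - org);

    float best = 0.0f;                  /* maximum intensity projection */
    for (int s = 0; s < 512; ++s) {     /* fixed-step march (assumed)   */
        float4 p = (float4)(org + dir * (float)s, 1.0f);
        float4 t = (float4)(dot(inv_xform.s0123, p),   /* world point    */
                            dot(inv_xform.s4567, p),   /* to normalized  */
                            dot(inv_xform.s89ab, p),   /* texture coords */
                            0.0f);
        best = fmax(best, read_imagef(volume, samp, t).x);
    }
    out[gy * get_global_size(0) + gx] = convert_uchar_sat(best * 255.0f);
}

An average-value variant would accumulate the samples and divide by the step count instead of taking fmax. The host enqueues one work-item per elemental image pixel, e.g. a 600 × 600 global work size for a 200 × 200 lens array with 3 × 3 pixel elemental images.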

The final elemental image set is sent to the frame buffer and can be visualized on a display panel such as a liquid crystal display (LCD). Since the proposed method generates the elemental image set at high speed, the 3D image can be displayed in real time under user interaction. Figure 6 shows an example of the 3D volume data, a set of elemental images generated by the proposed method, and the reconstructed 3D image.

Fig. 6 Example of (a) 3D volume data, (b) elemental image set generated by the proposed method, and (c) 3D image optically reconstructed using an integral imaging display system.

3. Experimental results

The proposed image space parallel computing for CGII was implemented with MS Visual Studio 2008, the OpenGL library, and OpenCL. The PC hardware consisted of an Intel® Core2 Quad (2.66 GHz) CPU with 4 GB of RAM and an NVIDIA Quadro 4000 (256 GPU cores) graphics card. The performance of the proposed method was evaluated by comparing the generation rate of the elemental image sets. Five 3D volume data sets (Bucky, Mummy, Male, CTA, and Mouse) were used in the evaluation. Figure 7 shows the 3D volume data, the generated elemental images, and the displayed 3D images.

Fig. 7 3D objects and displayed 3D images in experiments; (a) Bucky: 32 × 32 × 32, (b) Mummy: 256 × 128 × 128, (c) Male: 128 × 256 × 256, (d) CTA: 512 × 512 × 79 and (e) Mouse: 512 × 512 × 512.

Table 1 shows the processing speed comparison between the method proposed by Jang et al. [11], which uses a GPU and an Octree, and our proposed method when the lens array consists of 20 × 20 lenses and each elemental image has 25 × 25 pixels. The proposed method is much faster than the GPU & Octree method in all cases considered.

Table 1. Measured generation speed of elemental image sets for CGII

Figure 9 shows a screenshot of the elemental images in the experiment.

Fig. 9 The generation time of the elemental images for 512 × 512 × 512 input 3D volume data with (a) a 200 × 200 lens array, each lens having 3 × 3 pixels, and (b) a 30 × 30 lens array, each lens having 20 × 20 pixels (Media 1).

During the experiment, the worst case performance was 24.39 fps (0.041 s per frame) for the large 512 × 512 × 512 input volume, where the number of elemental lenses is 200 × 200 and the set of elemental images consists of 600 × 600 pixels, as shown in Fig. 9(a). In Fig. 9(b), the processing time was similar to that of Fig. 9(a), with 30 × 30 elemental lenses and 20 × 20 pixels per elemental lens.

In this work, the processing time is proportional to the total resolution of the set of elemental images. Because the elemental image resolutions differ, Figs. 8 and 9 may appear inconsistent with each other. However, in Fig. 8(c) and Fig. 9(b) the total resolution of the elemental image set is the same as in Fig. 9(a), namely 600 × 600 pixels (200 × 200 lenses × 3 × 3 pixels = 600 × 600 = 30 × 30 lenses × 20 × 20 pixels), and we obtain the same result as in Fig. 9(a).

4. Conclusion

Acknowledgments

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (Grant 2011-0025849) and by a grant from the Korean Ministry of Education, Science and Technology (Regional Core Research Program / Chungbuk BIT Research-Oriented University Consortium).

References and links

1. G. Lippmann, “La photographie integrale,” C. R. Acad. Sci. 146, 446–451 (1908).
2. J.-H. Park, G. Baasantseren, N. Kim, G. Park, J. M. Kang, and B. Lee, “View image generation in perspective and orthographic projection geometry based on integral imaging,” Opt. Express 16(12), 8800–8813 (2008). [CrossRef] [PubMed]
3. M. Levoy and P. Hanrahan, “Light field rendering,” SIGGRAPH '96, Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, 31–36 (1996).
4. Y. Igarashi, H. Murata, and M. Ueda, “3D display system using a computer generated integral photography,” Jpn. J. Appl. Phys. 17(9), 1683–1684 (1978). [CrossRef]
5. M. Halle, “Multiple viewpoint rendering,” SIGGRAPH '98, Proceedings of the 25th annual conference on Computer graphics and interactive techniques, 243–254 (1998).
6. S.-W. Min, J. Kim, and B. Lee, “New characteristic equation of three-dimensional integral imaging system and its applications,” Jpn. J. Appl. Phys. 44(2), L71–L74 (2005). [CrossRef]
7. S.-W. Min, K. S. Park, B. Lee, Y. Cho, and M. Hahn, “Enhanced image mapping algorithm for computer-generated integral imaging system,” Jpn. J. Appl. Phys. 45(28), L744–L747 (2006). [CrossRef]
8. B.-N.-R. Lee, Y. Cho, K. S. Park, S.-W. Min, J.-S. Lim, M. C. Whang, and K. R. Park, “Design and implementation of a fast integral image rendering method,” International Conference on Electronic Commerce 2006, 135–140 (2006).
9. K. S. Park, S.-W. Min, and Y. Cho, “Viewpoint vector rendering for efficient elemental image generation,” IEICE Trans. Inf. Syst. E90-D, 233–241 (2007).
10. R. Fernando, GPU Gems: Programming Techniques, Tips and Tricks for Real-Time Graphics (Addison-Wesley, 2004).
11. Y.-H. Jang, C. Park, J.-S. Jung, J.-H. Park, N. Kim, J.-S. Ha, and K.-H. Yoo, “Integral imaging pickup method of bio-medical data using GPU and Octree,” J. Korea Contents Assoc. 10(6), 1–9 (2010). [CrossRef]
12. NVIDIA, “OpenCL programming guide for the CUDA architecture,” Ver. 2.3 (2009).
13. NVIDIA, “CUDA C programming guide,” Ver. 3.1.1 (2010).
14. E. Angel, Interactive Computer Graphics: A Top-Down Approach with OpenGL, 2nd ed. (Addison-Wesley, 2000).

OCIS Codes
(100.0100) Image processing : Image processing
(100.6890) Image processing : Three-dimensional image processing

ToC Category:
Image Processing

History
Original Manuscript: November 7, 2011
Revised Manuscript: December 4, 2011
Manuscript Accepted: December 5, 2011
Published: January 3, 2012

Citation
Ki-Chul Kwon, Chan Park, Munkh-Uchral Erdenebat, Ji-Seong Jeong, Jeong-Hun Choi, Nam Kim, Jae-Hyeung Park, Young-Tae Lim, and Kwan-Hee Yoo, "High speed image space parallel processing for computer-generated integral imaging system," Opt. Express 20, 732-740 (2012)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-20-2-732



Supplementary Material


Media 1: MOV (13847 KB)
