Optica Publishing Group

MPEG-based novel look-up table for rapid generation of video holograms of fast-moving three-dimensional objects

Open Access

Abstract

A new robust MPEG-based novel look-up table (MPEG-NLUT) is proposed for accelerated computation of video holograms of fast-moving three-dimensional (3-D) objects in space. Here, the input 3-D video frames are sequentially grouped into sets of four, in which the first frame of each set becomes the reference frame (RF) and the remaining three become the general frames (GFs). The frame images are then divided into blocks, motion vectors between the RF and each of the GFs are estimated from these blocks, and with these estimated motion vectors the object motions in all blocks are compensated. Subsequently, only the difference images between the motion-compensated RF and each of the GFs are applied to the NLUT for computer-generated hologram (CGH) calculation, based on the NLUT's unique shift-invariance property. Experiments with three types of test 3-D video scenarios confirm that the average number of calculated object points and the average calculation time of the proposed method are reduced to 27.34%, 55.46% and 45.70%, and to 19.88%, 44.98% and 30.72%, respectively, of those of the conventional NLUT, temporal redundancy-based NLUT (TR-NLUT) and motion compensation-based NLUT (MC-NLUT) methods.
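The frame-grouping step described in the abstract can be sketched in Python. This is a minimal illustration of grouping a video sequence into GOPs of four frames (one RF plus three GFs); the function name and data layout are illustrative, not from the paper.

```python
import numpy as np

def group_into_gops(frames, gop_size=4):
    """Group video frames into GOPs: the first frame of each group
    is the reference frame (RF) and the remaining frames are the
    general frames (GFs), as in the proposed MPEG-NLUT scheme."""
    gops = []
    for i in range(0, len(frames), gop_size):
        group = frames[i:i + gop_size]
        gops.append({"rf": group[0], "gfs": group[1:]})
    return gops
```

Each GF in a group is later motion-compensated against its group's RF, so only the residual (difference image) needs a fresh CGH calculation.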

© 2014 Optical Society of America

More Like This
Fast generation of video holograms of three-dimensional moving objects using a motion compensation-based novel look-up table

Seung-Cheol Kim, Xiao-Bin Dong, Min-Woo Kwon, and Eun-Soo Kim
Opt. Express 21(9) 11568-11584 (2013)

Supplementary Material (15)

Media 1: AVI (5678 KB)     
Media 2: AVI (6060 KB)     
Media 3: AVI (6035 KB)     
Media 4: AVI (4979 KB)     
Media 5: AVI (5174 KB)     
Media 6: AVI (5158 KB)     
Media 7: AVI (5682 KB)     
Media 8: AVI (6066 KB)     
Media 9: AVI (6041 KB)     
Media 10: AVI (2765 KB)     
Media 11: AVI (2941 KB)     
Media 12: AVI (2926 KB)     
Media 13: AVI (11136 KB)     
Media 14: AVI (14097 KB)     
Media 15: AVI (14296 KB)     



Figures (17)

Fig. 1 Geometry for generating the Fresnel hologram pattern of a 3-D object.
Fig. 2 Schematic showing the shift-invariance property of the NLUT.
Fig. 3 (a) Reference image, (b) Input image, (c) Object-based motion vector, (d) Block-based motion vectors, (e) Object points of the input image, (f)-(h) Object points of the difference images extracted between the reference and input images without a motion vector, with the object-based motion vector, and with the block-based motion vectors, respectively.
Fig. 4 Operational flowchart of the proposed MPEG-NLUT.
Fig. 5 Structure of the input 3-D video frames grouped into a sequence of GOPs.
Fig. 6 A reference frame divided into M × N blocks and the B-CGHs calculated for each block with the TR-NLUT.
Fig. 7 Intensity and depth images of the 1st frame of each test 3-D video: (a) Case I (Media 1), (b) Case II (Media 2) and (c) Case III (Media 3).
Fig. 8 Block-based motion estimation between the RF and one of the GFs.
Fig. 9 An example of block-based motion estimation: (a) An optical-flow map showing the MVs of 9 blocks of the RF, represented by arrows, (b) A motion-vector map showing the corresponding motion-vector values given by the number of displaced pixels along the x and y directions.
Fig. 10 Motion vectors of the (a) 1st frame of ‘Case I’, (b) 2nd frame of ‘Case I’ (Media 4), (c) 71st frame of ‘Case I’, (d) 1st frame of ‘Case II’, (e) 2nd frame of ‘Case II’ (Media 5), (f) 71st frame of ‘Case II’, (g) 1st frame of ‘Case III’, (h) 2nd frame of ‘Case III’ (Media 6), (i) 71st frame of ‘Case III’.
Fig. 11 An example of block-based motion compensation: (a) Motion compensation with the estimated motion vectors, (b) Motion-compensated version of the RF.
Fig. 12 Motion-compensated object images of the 2nd, 32nd and 71st frames of each test video: (a) Case I (Media 7), (b) Case II (Media 8), and (c) Case III (Media 9).
Fig. 13 Difference images of the 2nd, 32nd and 71st frames of each test video: (a) Case I (Media 10), (b) Case II (Media 11), and (c) Case III (Media 12).
Fig. 14 Flowchart of the CGH generation process for each of the GFs.
Fig. 15 Shifting process of the B-CGHs of the RF in Fig. 9: (a), (d) and (g) represent the B-CGHs of blocks A, B and I, (b), (e) and (h) show their shifted versions with the corresponding MVs, (c), (f) and (i) show their shifted versions compensated with the hologram patterns for the blank areas, and (j) the CGH of the MC-RF obtained by adding all shifted B-CGHs.
Fig. 16 Reconstructed 3-D ‘Car’ object images at distances of 630 mm and 700 mm for each test video: (a) ‘Case I’ (Media 13), (b) ‘Case II’ (Media 14) and (c) ‘Case III’ (Media 15).
Fig. 17 Comparison results: (a)-(c) Numbers of calculated object points, (d)-(f) Calculation times for one object point in the conventional NLUT, TR-NLUT, MC-NLUT and proposed methods for each test video, ‘Case I’, ‘Case II’ and ‘Case III’.

Tables (1)


Table 1 Average numbers of calculated object points and average calculation times for one object point in the conventional NLUT, TR-NLUT, MC-NLUT and proposed methods for each test video, ‘Case I’, ‘Case II’ and ‘Case III’

Equations (11)


$$T(x, y; z_p) \approx \frac{1}{r_p}\cos\left[k r_p + k x \sin\theta_R + \varphi_p\right] \tag{1}$$
$$r_p = \sqrt{(x - x_p)^2 + (y - y_p)^2 + z_p^2} \tag{2}$$
$$I(x, y) = \sum_{p=1}^{N} a_p\, T(x - x_p,\, y - y_p;\, z_p) \tag{3}$$
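Equation (3) is the NLUT's shift-invariance property: the hologram of an object point at $(x_p, y_p, z_p)$ is a laterally shifted, amplitude-scaled copy of the pre-computed principal fringe pattern (PFP) for depth $z_p$. A minimal Python sketch, assuming square sampled holograms and using `np.roll` to model the lateral shift (parameter values and function names are illustrative, not from the paper):

```python
import numpy as np

def precompute_pfp(size, z, wavelength=633e-9, pitch=10e-6, theta_r=0.0):
    """Pre-compute the principal fringe pattern T(x, y; z) of Eq. (1)
    for one depth plane z (sampling parameters are illustrative)."""
    k = 2 * np.pi / wavelength
    coords = (np.arange(size) - size // 2) * pitch
    x, y = np.meshgrid(coords, coords)
    r = np.sqrt(x**2 + y**2 + z**2)
    return (1.0 / r) * np.cos(k * r + k * x * np.sin(theta_r))

def synthesize_cgh(points, pfps, size):
    """Eq. (3): shift the stored PFP to each object point's (x, y)
    pixel location and accumulate, exploiting shift-invariance."""
    hologram = np.zeros((size, size))
    for xp, yp, zp, ap in points:
        # np.roll models the lateral shift T(x - xp, y - yp; zp);
        # a real implementation would pad rather than wrap around
        hologram += ap * np.roll(pfps[zp], shift=(yp, xp), axis=(0, 1))
    return hologram
```

Because only shift-and-add operations are needed per point, reducing the number of object points (as the difference-image approach does) directly reduces the CGH calculation time.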
$$I_R = \sum_{m=1}^{M}\sum_{n=1}^{N} I_B(m, n) \tag{4}$$
$$B_{m,n}(x_2, y_2, t + \Delta t) = A_{m,n}(x_1 + d_x,\, y_1 + d_y,\, t) \tag{5}$$
$$\mathrm{MAD} = \frac{1}{S_L^2}\sum_{i=0}^{S_L - 1}\sum_{j=0}^{S_L - 1}\left|C_{ij} - R_{ij}\right| \tag{6}$$
$$(d_x, d_y) = (x_2 - x_1,\, y_2 - y_1) \tag{7}$$
$$A'(x, y) = A(x + 3,\, y) \tag{8}$$
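Equations (5)-(7) describe block matching: for each block of the RF, the displacement minimizing the mean absolute difference (MAD) within a search window gives the motion vector $(d_x, d_y)$. A full-search sketch in Python, assuming grayscale frames as 2-D arrays; block and search-window sizes are illustrative:

```python
import numpy as np

def mad(c, r):
    """Eq. (6): mean absolute difference between a candidate
    block C and a reference block R of size S_L x S_L."""
    return np.mean(np.abs(c.astype(float) - r.astype(float)))

def estimate_motion_vector(ref, cur, top, left, block=8, search=4):
    """Full-search block matching: find the displacement (dx, dy)
    of Eq. (7) that minimizes the MAD within +/-search pixels."""
    r = ref[top:top + block, left:left + block]
    best, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            t, l = top + dy, left + dx
            if t < 0 or l < 0 or t + block > cur.shape[0] or l + block > cur.shape[1]:
                continue  # candidate block falls outside the frame
            score = mad(cur[t:t + block, l:l + block], r)
            if score < best:
                best, best_mv = score, (dx, dy)
    return best_mv
```

Full search is the simplest strategy; MPEG encoders typically use faster approximate searches, but the MAD criterion of Eq. (6) is the same.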
$$I_S(x, y) = \sum_{m=1}^{M}\left[I_{R_m}(x - d_x,\, y - d_y) + I_{RB_m}(x, y)\right] \tag{9}$$
$$I_C(x, y) = I_S(x, y) - I_{RO}(x, y) \tag{10}$$
$$I(x, y) = I_C(x, y) + I_D(x, y) \tag{11}$$
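Equations (9)-(11) assemble the GF hologram on the hologram plane: each block CGH of the RF is shifted by its motion vector and combined with the hologram patterns for the blank areas ($I_{RB}$), the CGH of the removed object points ($I_{RO}$) is subtracted, and the CGH of the difference image ($I_D$) is added. A minimal sketch of this combination step, with argument names chosen for illustration:

```python
import numpy as np

def combine_cgh(block_cghs, mvs, blank_cghs, removed_cgh, diff_cgh):
    """Eqs. (9)-(11): shift each block CGH by its motion vector and
    add the blank-area hologram patterns (Eq. 9), subtract the CGH of
    removed object points (Eq. 10), then add the CGH of the difference
    image (Eq. 11) to obtain the hologram of the general frame."""
    shifted = np.zeros_like(block_cghs[0])
    for cgh, (dx, dy), blank in zip(block_cghs, mvs, blank_cghs):
        # np.roll models I_Rm(x - dx, y - dy); real shifts would not wrap
        shifted += np.roll(cgh, shift=(dy, dx), axis=(0, 1)) + blank
    compensated = shifted - removed_cgh   # Eq. (10)
    return compensated + diff_cgh         # Eq. (11)
```

Only $I_{RB}$, $I_{RO}$ and $I_D$ require fresh NLUT calculations; the bulk of the hologram is reused from the RF by shifting, which is where the reported speed-up comes from.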