## Enhanced compression rate of integral images by using motion-compensated residual images in three-dimensional integral-imaging

Optics Express, Vol. 20, Issue 5, pp. 5440-5459 (2012)

http://dx.doi.org/10.1364/OE.20.005440


### Abstract

In this paper, we propose a new approach to notably enhance the compression rate of integral images by using motion-compensated residual images (MCRIs). In the proposed method, sub-images (SIs) transformed from the picked-up elemental images of a three-dimensional (3-D) object are sequentially rearranged with a spiral scanning topology. The motion vectors among the SIs are then estimated and compensated with the block-matching algorithm. Furthermore, spatial redundancy among the SIs is also removed by computing the differences between the local SIs and their motion-compensated versions, from which a sequence of MCRIs is finally generated and compressed with the MPEG-4 algorithm. Experimental results show that the compression efficiency of the proposed method has been improved up to 861.1% on average from that of the JPEG-based elemental images (EIs) method, and up to 1,497.0% and 118.8% on average from those of the MPEG-based MCSIs and the MPEG-based RIs methods, respectively.

© 2012 OSA

## 1. Introduction


The compression performance of the proposed method is evaluated in terms of the compression ratio (*CR*) and the peak-to-peak signal-to-noise ratio (*PSNR*).

## 2. EIA-to-SIA transformation

The lenslet array is located at the distance *z* from the 3-D object, and the EIA of the 3-D object is captured at the distance *g* from the lenslet array by using a CCD camera.

Here, *s*_{x} and *s*_{y} are the numbers of pixels for each elemental image, and *l*_{x} and *l*_{y} are the numbers of elemental images along the *x* and *y* axes, respectively. Then, the EIA becomes (*i*_{ey} = *s*_{y}*l*_{y}) × (*j*_{ex} = *s*_{x}*l*_{x}) pixels. Therefore, the number of pixels in the SIA, which is denoted as *S*, can be calculated by Eq. (1). Here, *p*_{x} = *j*_{x}%2, *p*_{y} = *i*_{y}%2, *q*_{x} = (*j*_{x} + 1)%2, *q*_{y} = (*i*_{y} + 1)%2, *r*_{x} = (*j*_{x} + 1)/2, *r*_{y} = (*i*_{y} + 1)/2, *t*_{x} = (*j*_{x} + *p*_{sx})/2 and *t*_{y} = (*i*_{y} + *p*_{sy})/2, where *p*_{sx} and *p*_{sy} are the numbers of EIs along each axis on the sub-plane. In addition, a%b is the remainder on division of a by b. In other words, when all pixels located at the position [*i*_{x}, *j*_{y}] in each elemental image are rearranged to generate the SIA, the corresponding pixels are collected to form the sub-image at the coordinate [*i*_{m}, *j*_{n}] on the sub-plane. Here, *i*_{x} and *j*_{y} denote the coordinates of each pixel in the EIA, and *i*_{m} and *j*_{n} denote the coordinates of the corresponding pixels in the SIA. The SIA shows a high similarity among adjacent sub-images, since all pixels of each sub-image contain the ray information of the same view point from each lenslet.
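As an illustration, the pixel rearrangement described above can be sketched in a few lines. The sketch below assumes the simple mapping in which sub-image (*m*, *n*) collects pixel (*m*, *n*) from every elemental image, and omits the exact index arithmetic of Eq. (1); the function and variable names are illustrative, not the paper's code.

```python
import numpy as np

def eia_to_sia(eia, s_x, s_y):
    """Rearrange an elemental-image array (EIA) into a sub-image array (SIA).

    eia  : 2-D array of (s_y * l_y) x (s_x * l_x) pixels
    s_x, s_y : pixels per elemental image along x and y
    Sub-image (m, n) collects pixel (m, n) of every elemental image,
    so each sub-image holds the rays of a single view direction.
    """
    H, W = eia.shape
    l_y, l_x = H // s_y, W // s_x          # numbers of elemental images
    # split into (l_y, s_y, l_x, s_x) blocks, then swap pixel and lens indices
    blocks = eia.reshape(l_y, s_y, l_x, s_x)
    sia = blocks.transpose(1, 0, 3, 2).reshape(s_y * l_y, s_x * l_x)
    return sia
```

Because the swap is an involution on the (pixel, lens) index pair, applying the same function with the lens counts as arguments performs the inverse SIA-to-EIA transformation.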


The *i*^{th} sub-image is obtained from the specific angle *θ*_{i}, which is given by Eq. (2), where *f* and *y*_{i} represent the focal length of the lenslet and the position of the *i*^{th} pixel, respectively. In case the 3-D object is shifted by the distance *b* between two object positions, the distance *L* can be calculated from the rays obtained from two points of the 3-D object, which are represented as A and B; this distance is given by Eq. (3). Equation (3) shows that the distance between two object positions is proportional to the pitch of the lenslet, which means the object position in the SIs may change as a function of the shifting distance of the 3-D object, as shown in Fig. 3.

## 3. Proposed method

### 3.1 Rearrangement of the SIA into a sequence of SIs


### 3.2 Motion estimation and compensation among the sequentially rearranged SIs


Here, *C*_{ij} and *R*_{ij} represent one of the local SIs and the reference sub-image, respectively, and *i* and *j* denote the pixel indices along the *x*-axis and *y*-axis. A block of the reference sub-image sized *N* × *M* is fully scanned over each local sub-image. That is, as shown in Fig. 5, a block of the reference sub-image containing the object is sequentially matched with those of the local SIs by using the MSE criterion of Eq. (4) to estimate their motion vectors.
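The full-search block matching described above can be sketched as follows; Eq. (4) is not reproduced in this text, so the standard mean-squared-error criterion is assumed, and the function name and argument layout are illustrative.

```python
import numpy as np

def full_search(reference, local, top, left, N, M):
    """Full-search block matching with the MSE criterion.

    An N x M block taken from `reference` at (top, left) is compared
    against every candidate position in `local`; the position with the
    minimum mean-squared error and the resulting motion vector
    (d_row, d_col) are returned.
    """
    block = reference[top:top + N, left:left + M].astype(np.float64)
    H, W = local.shape
    best_mse, best_pos = np.inf, (top, left)
    for r in range(H - N + 1):
        for c in range(W - M + 1):
            cand = local[r:r + N, c:c + M].astype(np.float64)
            mse = np.mean((block - cand) ** 2)
            if mse < best_mse:
                best_mse, best_pos = mse, (r, c)
    d = (best_pos[0] - top, best_pos[1] - left)   # motion vector
    return best_pos, d
```

With the numbers of the Fig. 7 example, a reference block at *R*(10,11) whose best match lies at *C*(4,16) yields the motion vector *D*(−6,5).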


Figure 7 illustrates the motion estimation and compensation process for the *i*^{th} local sub-image based on the block-matching algorithm. The block *α* in the reference sub-image of Fig. 7(a) represents a predetermined macro-block (*N* × *M*) of the reference sub-image to be matched with those of the local SIs, where *R*(10,11) represents the location coordinates of the reference block. Now, the block *α* of the reference sub-image is matched against the *i*^{th} sub-image of Fig. 7(b) over the full image area to find the corresponding block *β* with the MSE criterion. From this block-matching process, the corresponding point *C*(4,16) in the *i*^{th} sub-image can be estimated, and the difference between these two points, *D*(−6,5), is computed as the shifted location coordinates of the block *α* of the reference sub-image.

Then, the block *α* is shifted from the point *R*(10,11) to the point *C*(4,16) to generate the motion-compensated version of the *i*^{th} sub-image, which is shown in Fig. 7(d). Next, by computing the difference between the *i*^{th} sub-image of Fig. 7(b) and its motion-compensated version of Fig. 7(d), the corresponding *i*^{th} MCRI can be generated, which is shown in Fig. 7(e). Finally, the *i*^{th} sub-image reconstructed from the *i*^{th} MCRI is also included in Fig. 7(f) for comparison.

### 3.3 Generation of motion-compensated residual images


Here, *P*_{r}, *P*_{o} and *P*_{c} represent the pixel intensity values of the MCRIs, the local SIs and the motion-compensated versions of the local SIs, respectively. In addition, *P*_{max} and *ω* represent the maximum pixel value of the MCSIs and the quality factor, respectively. In this paper, the quality factor *ω*, which determines the number of quantum steps for normalization, is assumed to be in the range of [2, 10]. For *ω* = 2 the intensity values of the pixels are normalized into the range of [0, 127], whereas they are normalized into the range of [0, 25] for *ω* = 10. Accordingly, as the value of *ω* gets smaller, the number of quantum steps for normalization increases, which results in an improvement of the reconstructed image quality as well.
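Since Eq. (5) itself is not reproduced in this text, the sketch below shows one normalization that is consistent with the quoted ranges ([0, 127] for *ω* = 2 and [0, 25] for *ω* = 10): the residual is shifted by the maximum pixel value and divided by 2*ω*. The function name and the exact mapping are assumptions, not the paper's definition.

```python
import numpy as np

P_MAX = 255  # assumed maximum pixel value of the SIs

def encode_mcri(local_si, compensated_si, w):
    """Quantize a motion-compensated residual with quality factor w.

    The residual (local SI minus its motion-compensated version) lies
    in [-P_MAX, P_MAX]; shifting it by P_MAX and dividing by 2*w maps
    it onto [0, 127] for w = 2 and [0, 25] for w = 10, matching the
    ranges quoted in the text.
    """
    residual = local_si.astype(np.int32) - compensated_si.astype(np.int32)
    return (residual + P_MAX) // (2 * w)
```

Under this mapping, a smaller *ω* leaves more quantum steps and hence a smaller reconstruction error, in line with the observation above.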

## 4. Compression of MCRIs with MPEG-4

## 5. Reconstruction process

First, the motion-compensated version *P*_{c} is decoded by using the received reference sub-image and the motion vectors. Second, the pixel values of the received MCRIs, *P*_{r}, are decoded into the original pixel values of the local SIs, *P*_{o}, which range from −255 to 255, by using Eq. (6). Then, the decoded local SIs are inversely transformed into the EIs to reconstruct a 3-D object image.
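Equation (6) is not reproduced in this text; assuming the same forward normalization sketched in Section 3.3 (residuals shifted by the maximum pixel value and divided by 2*ω*), the decoding step can be sketched as below. The names and the exact mapping are assumptions.

```python
import numpy as np

P_MAX = 255  # assumed maximum pixel value of the SIs

def decode_mcri(mcri, compensated_si, w):
    """Invert the assumed residual quantization (cf. Eq. (6)).

    The quantized residual is rescaled by 2*w, shifted back by P_MAX,
    and added to the motion-compensated sub-image; the result is
    clipped to the valid pixel range.  Because of the quantization,
    this inverse is approximate, with an error bounded by 2*w.
    """
    residual = mcri.astype(np.int32) * (2 * w) - P_MAX
    return np.clip(residual + compensated_si.astype(np.int32), 0, P_MAX)
```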

## 5. Performance evaluation

A metric called the ‘correlation quality (*CQ*)’ is employed to comparatively evaluate the similarity characteristics between the rearranged EIs and SIs [18]. Suppose two EIs, *E*_{i} and *E*_{j}, which are composed of *M* × *N* pixels. Then, the *CQ* value between these two EIs can be defined by Eq. (7), where *m* and *n* are the coordinates of the elemental image. Also, the average correlation quality (*ACQ*) value for all of the EIs can be given by Eq. (8), where *P* is the total number of EIs. Now, the *ACQ* values for each of the EIA and the SIA are computed with Eq. (8), and they can be used to measure the similarity among the adjacent EIs or SIs. In case two images are exactly the same, the *ACQ* value turns out to be 1, which means that as the *ACQ* value gets closer to 1, the similarity among the adjacent EIs or SIs increases correspondingly.

Two performance metrics are employed: one is the compression ratio (*CR*) and the other is the peak-to-peak signal-to-noise ratio (*PSNR*). Here, the *CR* and the *PSNR* are defined by Eq. (9) and Eq. (10), respectively, where *I*_{o} and *I*_{c} denote the originally picked-up EIA before encoding and the decoded EIA at the receiver, respectively.
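Equations (9) and (10) are not reproduced in this text; the sketch below uses the standard definitions of the two metrics, which are assumed to match the paper's.

```python
import numpy as np

def compression_ratio(original_bits, compressed_bits):
    """CR: size of the original EIA over the size of the encoded stream."""
    return original_bits / compressed_bits

def psnr(I_o, I_c, peak=255.0):
    """Peak-to-peak signal-to-noise ratio (in dB) between the original
    EIA I_o and the decoded EIA I_c."""
    mse = np.mean((I_o.astype(np.float64) - I_c.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```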

## 6. Experiments and results

### 6.1. Experiments

A toy ‘Car’ with a size of … mm × 40 mm × 30 mm is used as the test object. Figure 10 shows the experimental setup for picking up the EIA of the test object ‘Car’.

The lenslet array is composed of lenslets, each sized … mm × 1.08 mm. In the experiment, the distance between the CCD camera and the lenslet array (*g*) is fixed at 30 mm, but the distance from the lenslet array to the test object (*z*) is changed from 30 mm to 60 mm and 120 mm for an effective evaluation of the proposed method’s compression performance.

The EIAs picked up at these three object distances are denoted as *Car_30*, *Car_60* and *Car_120*, respectively.

The *ACQ* values computed for each picked-up EIA and its corresponding SIA by using Eq. (8) are shown in Table 1. Table 1 reveals that the *ACQ* values of the SIAs have been improved up to 95.1%, 132.0% and 56.9%, respectively, from those of the EIAs for the three cases. This means that a much higher similarity exists among the SIs than among the originally picked-up EIs, since the SIs are composed of de-magnified images of the test object. Accordingly, a motion estimation scheme such as MSE-based block matching can be efficiently applied to these SIs for their enhanced compression.

Then, each SIA is sequentially rearranged along the spiral direction starting from the center of the SIA for the three cases, in which the center sub-image is assigned as the reference sub-image. Even though the SIs have a high similarity among their object images, the object in each local sub-image may be somewhat displaced from the center location depending on the distance from the center sub-image, which means that motion vectors must exist among the SIs.
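The spiral ordering of the SIs can be sketched as follows. Only ‘a spiral starting from the center of the SIA’ is specified above, so the first step direction and the sense of rotation in this sketch are assumptions.

```python
def spiral_order(rows, cols):
    """Visit the cells of a rows x cols grid in a spiral starting from
    the center, as used to order the SIs into a sequence.
    Returns a list of (row, col) indices; out-of-grid steps are skipped.
    """
    r, c = rows // 2, cols // 2           # start at the center cell
    order = [(r, c)]
    dr, dc = 0, 1                          # first step: to the right (assumed)
    step = 1
    while len(order) < rows * cols:
        for _ in range(2):                 # two legs per step length
            for _ in range(step):
                r, c = r + dr, c + dc
                if 0 <= r < rows and 0 <= c < cols:
                    order.append((r, c))
            dr, dc = dc, -dr               # rotate 90 degrees
        step += 1
    return order
```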

### 6.2. Comparison of CR values between the proposed and the JPEG-based EIs method

Figures 11(a)–11(c) show the *CR* and *PSNR* values of the proposed (MPEG-based MCRIs) method for each case, in which the compression results of the conventional JPEG-based EIs method [13] are also included for comparison. The *CR* of the proposed method has been improved up to 157.5% for *Car_30*, 912.7% for *Car_60*, and 1,513.4% for *Car_120*, respectively, when compared to those of the conventional JPEG-based EIs method under the condition of *PSNR* = 30 dB. That is, the compression efficiency of the proposed method has been improved up to 861.1% on average from that of the JPEG-based EIs method.

In addition, the *CR* of the proposed method improves as the value of *ω* increases and as the distance from the lenslet array to the test object gets longer. Furthermore, the calculated *CR* values showing a *PSNR* above 30 dB range widely from 31.0 to 314.6 in the proposed method. Here it must be noted that the decoded object images can be easily recognized by the human visual system, in that they all show more than 30 dB in *PSNR* [25].

On the other hand, the *CR* values of the JPEG-based EIs method showing a PSNR above 30 dB range very narrowly from 15.3 to 28.3, which means that the conventional method may not be efficient for compression of the picked-up EIs.

### 6.3. Comparison of CR values between the proposed and the MPEG-based MCSIs method

The *CR* and *PSNR* values of the conventional MPEG-based MCSIs method [13,19] are also measured for comparison. The proposed method shows much higher *CR* values than the MPEG-based MCSIs method for all cases of *Car_30*, *Car_60* and *Car_120*.

In addition, the *CR* of the proposed method has been improved up to 1,497.0% for *Car_120* as compared to that of the MPEG-based MCSIs method under the condition of *PSNR* = 30 dB.

In the MPEG-based MCSIs method, however, the *CR* values having a PSNR above 30 dB are obtained only in the case of *Car_120*, and they are almost fixed at 19.7. Accordingly, the compression performance of the MPEG-based MCSIs method has been found to be much worse than that of the conventional JPEG-based EIs method, even though the motion vectors among the SIs have been compensated. These results can be visually confirmed in Figs. 15 and 16.

### 6.4. Performance comparison between the proposed and the MPEG-based RIs method

The *CR* and *PSNR* values for the conventional MPEG-based RIs (residual images) method [20] are also compared with those of the proposed method. The *CR* of the proposed method has been improved up to 52.7% for *Car_30*, 114.7% for *Car_60*, and 189.0% for *Car_120*, respectively, as compared to those of the conventional MPEG-based RIs method under the condition of *PSNR* = 30 dB. That is, the compression efficiency of the proposed method has been improved up to 118.8% on average from that of the MPEG-based RIs method.

The *CR* values of the MPEG-based RIs method having a PSNR above 30 dB also range widely, from 24.7 to 119.6, and these results confirm that this method outperforms the other previous methods, though not the proposed one.

### 6.5. Analysis of experimental results

Table 2 summarizes the *CR* components having 30 dB of PSNR for each case of the proposed, the conventional JPEG-based EIs, the MPEG-based MCSIs and the MPEG-based RIs methods. In Table 2, a notation of ‘-’ signifies that there exist no *CR* components having a PSNR value above 30 dB in that method.

As seen in Table 2, the proposed method shows the highest *CR* for all cases of *Car_30*, *Car_60* and *Car_120* in the experiment. For each case of *Car_30* and *Car_60*, the compression rate of the proposed method has been enhanced by 1.57- and 0.53-fold, and by 9.13- and 1.15-fold, respectively, when compared to those of the conventional JPEG-based EIs and MPEG-based RIs methods. Moreover, for the case of *Car_120*, the compression rate of the proposed method has been enhanced by 15.13-, 14.97- and 1.89-fold, respectively, on average when compared to the conventional JPEG-based EIs, MPEG-based MCSIs and MPEG-based RIs methods.

### 6.6. Reconstructed object images

Figure 18 shows the object images reconstructed for the case of *Car_120*. Here, the *PSNR* values of the reconstructed object images, having the compression rates of 65.8 for *ω* = 2, 244.7 for *ω* = 5 and 320.8 for *ω* = 9, have been found to be 37.5, 33.5 and 30.2 dB, respectively. That is, the compression rate increases as *ω* increases, whereas the PSNR value of the reconstructed object image gets somewhat lower. However, all the reconstructed object images have PSNR values above 30 dB in the proposed method, so that the reconstructed object images shown in Fig. 18 can be easily recognized by the human visual system.

### 6.7. Discussions

The computation times are measured to be … ms, 135.16 ms and 128.05 ms in the encoding process, and 129.51 ms, 126.56 ms and 126.96 ms in the decoding process, respectively, for each case of *Car_30*, *Car_60* and *Car_120*.


These processes take … s and 150.05 s on average, respectively, by using a Core *i5* processor (2.67 GHz, Intel) and the Matlab *R2008a* version (MathWorks Inc.).


## 7. Conclusions

## Acknowledgment


**OCIS Codes**

(080.0080) Geometric optics : Geometric optics

(110.0110) Imaging systems : Imaging systems

(110.6880) Imaging systems : Three-dimensional image acquisition

**ToC Category:**

Imaging Systems

**History**

Original Manuscript: January 3, 2012

Revised Manuscript: February 11, 2012

Manuscript Accepted: February 12, 2012

Published: February 21, 2012

**Citation**

Ho-Hyun Kang, Ju-Han Lee, and Eun-Soo Kim, "Enhanced compression rate of integral images by using motion-compensated residual images in three-dimensional integral-imaging," Opt. Express **20**, 5440-5459 (2012)

http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-20-5-5440


## References and links

- G. Lippmann, “La photographie integrale,” Comptes-Rendus Academie des Sciences 146, 446–451 (1908).
- S. A. Benton, ed., Selected Papers on Three-Dimensional Displays (SPIE Optical Engineering Press, 2001).
- F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. 36(7), 1598–1603 (1997). [CrossRef] [PubMed]
- D.-C. Hwang, J.-S. Park, S.-C. Kim, D.-H. Shin, and E.-S. Kim, “Magnification of 3D reconstructed images in integral imaging using an intermediate-view reconstruction technique,” Appl. Opt. 45(19), 4631–4637 (2006). [CrossRef] [PubMed]
- B.-G. Lee, H.-H. Kang, and E.-S. Kim, “Occlusion removal method of partially occluded object using variance in computational integral imaging,” 3D Res. 1(2), 2.1–2.5 (2010). [CrossRef]
- S.-C. Kim, C.-K. Kim, and E.-S. Kim, “Depth-of-focus and resolution-enhanced three-dimensional integral imaging with non-uniform lenslets and intermediate-view reconstruction technique,” 3D Res. 2(2), 2.1–2.9 (2011). [CrossRef]
- P. B. Han, Y. Piao, and E.-S. Kim, “Accelerated reconstruction of 3-D object images using estimated object area in backward computational integral imaging reconstruction,” 3D Res. 1, 4.1–4.8 (2011).
- S.-H. Hong and B. Javidi, “Improved resolution 3-D object reconstruction using computational II with time multiplexing,” Opt. Express 12(19), 4579–4588 (2004). [CrossRef] [PubMed]
- J.-B. Hyun, D.-C. Hwang, D.-H. Shin, and E.-S. Kim, “Curved computational integral imaging reconstruction technique for resolution-enhanced display of three-dimensional object images,” Appl. Opt. 46(31), 7697–7708 (2007). [CrossRef] [PubMed]
- Y. Piao and E.-S. Kim, “Resolution-enhanced reconstruction of far 3-D objects by using a direct pixel mapping method in computational curving-effective integral imaging,” Appl. Opt. 48(34), H222–H230 (2009). [CrossRef] [PubMed]
- J.-S. Jang and B. Javidi, “Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics,” Opt. Lett. 27(5), 324–326 (2002). [CrossRef] [PubMed]
- J.-S. Park, D.-C. Hwang, D.-H. Shin, and E.-S. Kim, “Enhanced-resolution computational integral imaging reconstruction using an intermediate-view reconstruction technique,” Opt. Eng. 45(11), 117004 (2006). [CrossRef]
- H.-H. Kang, B.-G. Lee, and E.-S. Kim, “Efficient compression of rearranged time-multiplexed elemental image arrays in MALT-based three-dimensional integral imaging,” Opt. Commun. 284(13), 3227–3233 (2011). [CrossRef]
- O. Matoba, E. Tajahuerce, and B. Javidi, “Real-time three-dimensional object recognition with multiple perspectives imaging,” Appl. Opt. 40(20), 3318–3325 (2001). [CrossRef] [PubMed]
- M. Forman and A. Aggoun, “Quantization strategies for 3D-DCT based compression of full parallax 3D images,” in Proceedings of IEEE 6th International Conference on Image Processing and Applications, IPA97, No. 443, 32–35 (1997).
- S. Yeom, A. Stern, and B. Javidi, “Compression of 3D color integral images,” Opt. Express 12(8), 1632–1642 (2004). [CrossRef] [PubMed]
- J.-S. Jang, S. Yeom, and B. Javidi, “Compression of ray information in three-dimensional integral imaging,” Opt. Eng. 44(12), 127001 (2005). [CrossRef]
- H.-H. Kang, D.-H. Shin, and E.-S. Kim, “Compression scheme of sub-images using Karhunen-Loeve transform in three-dimensional integral imaging,” Opt. Commun. 281(14), 3640–3647 (2008). [CrossRef]
- H.-H. Kang, D.-H. Shin, and E.-S. Kim, “Efficient compression of motion-compensated sub-images with Karhunen-Loeve transform in three-dimensional integral imaging,” Opt. Commun. 283(6), 920–928 (2010). [CrossRef]
- C.-H. Yoo, H.-H. Kang, and E.-S. Kim, “Enhanced compression of integral images by combined use of residual images and MPEG-4 algorithm in three-dimensional integral imaging,” Opt. Commun. 284(20), 4884–4893 (2011). [CrossRef]
- J.-H. Park, J.-H. Kim, and B.-H. Lee, “Three-dimensional optical correlator using a sub-image array,” Opt. Express 13(13), 5116–5126 (2005). [CrossRef] [PubMed]
- J.-S. Lee, J.-H. Ko, and E.-S. Kim, “Real-time stereo object tracking system by using block matching algorithm and optical binary phase extraction joint transform correlator,” Opt. Commun. 191(3-6), 191–202 (2001). [CrossRef]
- R. C. Gonzalez, R. E. Woods, and S. L. Eddins, eds., Digital Image Processing (Pearson Prentice Hall, 2008).
- I. E. G. Richardson, ed., H.264 and MPEG-4 video compression (Wiley, 2003).
- D. S. Taubman and M. W. Marcellin, eds., JPEG2000-Image Compression Fundamentals, Standards and Practice, (Kluwer Academic Publishers, 2002).
- A. Barjatya, “Block matching algorithms for motion estimation,” (2005), Matlab Central: http://www.mathworks.com/matlabcentral/fileexchange/8761.


