Optics Express

  • Editor: C. Martijn de Sterke
  • Vol. 20, Iss. 5 — Feb. 27, 2012
  • pp: 5440–5459

Enhanced compression rate of integral images by using motion-compensated residual images in three-dimensional integral-imaging

Ho-Hyun Kang, Ju-Han Lee, and Eun-Soo Kim


Optics Express, Vol. 20, Issue 5, pp. 5440-5459 (2012)
http://dx.doi.org/10.1364/OE.20.005440



Abstract

In this paper, we propose a new approach to substantially enhance the compression rate of integral images by using motion-compensated residual images (MCRIs). In the proposed method, sub-images (SIs) transformed from the picked-up elemental images of a three-dimensional (3-D) object are sequentially rearranged with a spiral scanning topology. The moving vectors among the SIs are then estimated and compensated with the block-matching algorithm. Furthermore, spatial redundancy among the SIs is removed by computing the differences between the local SIs and their motion-compensated versions, from which a sequence of MCRIs is finally generated and compressed with the MPEG-4 algorithm. Experimental results show that the compression efficiency of the proposed method is improved by up to 861.1% on average over that of the JPEG-based elemental images (EIs) method, and by up to 1,497.0% and 118.8% on average over those of the MPEG-based MCSIs and the MPEG-based RIs methods, respectively.

© 2012 OSA

1. Introduction

Basically, the conventional integral-imaging system is composed of two processes: pick-up and reconstruction. In the pick-up process, the ray information emanating from a three-dimensional (3-D) object is picked up by a CCD camera through the pickup lenslet array and recorded in the form of elemental images (EIs), or an elemental image array (EIA), representing different perspectives of the 3-D object. In the reconstruction process, a 3-D object image can be reconstructed from the picked-up EIA by the combined use of the display panel and the lenslet array [1–3].

In fact, reconstruction of a 3-D object image from the picked-up EIA at high resolution has been regarded as one of the challenging issues in the conventional integral-imaging system [4–8]. Here, the resolution of the reconstructed 3-D object image depends highly on the number of picked-up EIs, as well as on the resolution of each elemental image [9–11]. Thus, as the number of picked-up EIs increases and the resolution of each elemental image is enhanced, the resultant resolution of the reconstructed object image improves correspondingly. However, this simultaneously results in a massive increase in the 3-D data to be processed, stored and transmitted in the conventional integral-imaging system [12–14].

Thus far, several approaches to effectively reduce the data of the picked-up EIA, based on conventional image compression algorithms, have been reported. M. Forman et al. proposed a 3-D discrete cosine-transform (DCT)-based compression algorithm using a variable number of micro-lens images [15]. S. Yeom et al. used the MPEG-2 video compression algorithm for data reduction of the picked-up EIA [16]. J.-S. Jang et al. also used the Karhunen-Loeve transform (KLT) algorithm for compression of the picked-up EIs [17].

On the other hand, H.-H. Kang et al. applied the KLT compression algorithm to the sub-images (SIs) transformed from the picked-up EIs. Since the perspective size of each sub-image is invariant regardless of the object depth [18], the similarity among adjacent SIs is larger than that among the originally picked-up EIs, which means that the compression efficiency can be much improved compared to that of the EIA.

The SIs, however, have their own perspectives, so motion vectors among the SIs must exist, which may cause an additional increase in the image data to be compressed. Therefore, H.-H. Kang et al. employed motion-compensated sub-images (MCSIs) for compression of integral images [19].

However, much spatial redundancy still exists among the SIs in addition to the motion vectors, so the compression efficiency is expected to be further improved if the spatial redundancy is taken into account during the compression process. Recently, C.-H. Yoo et al. proposed a concept of residual images (RIs) for effective compression of the SIs [20], in which the RIs were derived by sequentially computing the differences between the reference sub-image and the other SIs. This method, however, could not achieve a high compression rate because the redundant data among the SIs were not effectively removed and the motion vectors among the SIs were not considered.

Accordingly, in this paper, we propose a novel approach to effectively enhance the compression rate of integral images by simultaneous consideration of both the moving vectors and the spatial redundancy among the SIs.

That is, the sub-image array (SIA) is sequentially rearranged by using the spiral scanning topology, and the first sub-image originally located at the center of the SIA is designated as the reference sub-image. A block of the reference sub-image containing the object is selected and this image block is sequentially matched with those of the other local SIs to estimate the moving vectors between them by means of the mean-square-error (MSE) algorithm.

With these estimated moving vectors, the reference sub-image is shifted to each matching point of the local SIs, from which the motion-compensated versions of the local SIs are generated. By sequentially computing the differences between the local SIs and their motion-compensated versions, a sequence of motion-compensated residual images (MCRIs) are generated. Finally, these MCRIs are compressed with the MPEG-4 algorithm for transmission.

To test the feasibility of the proposed method, experiments with a 3-D object are performed, and the results are compared to those of the conventional methods and discussed in terms of compression rate (CR) and peak signal-to-noise ratio (PSNR).

2. EIA-to-SIA transformation

Fig. 1. Optical setup for picking up the EIA of a 3-D object.

Figure 1 depicts an optical setup to pick up the EIA of a 3-D object through the lenslet array in the conventional integral-imaging system. The lenslet array is located at the distance z from the 3-D object, and the EIA of the 3-D object is captured at the distance g from the lenslet array by using a CCD camera.

Fig. 2. Conceptual diagram of an EIA-to-SIA transformation process.

Figure 2 shows the process of EIA-to-SIA transformation (EST) that generates the SIA from the picked-up EIA. As we can see in Fig. 2, all pixels located at the same position in each elemental image are collected and rearranged to form the corresponding sub-image. Here, a sub-image can be generated from the pixels located at the positions corresponding to each number in the elemental images, so that as many SIs can be generated as there are pixels in an elemental image.

Suppose that sx and sy are the numbers of pixels of each elemental image, and lx and ly are the numbers of elemental images along the x and y axes, respectively. Then the EIA, which is denoted as Ē, consists of (i_ey = s_y·l_y) × (j_ex = s_x·l_x) pixels. Therefore, the pixels of the SIA, which is denoted as S, can be calculated by Eq. (1).
S(i_sy, j_sx) = Ē(p_y·r_y + q_y·t_y, p_x·r_x + q_x·t_x)
(1)
Here, p_x = j_x%2, p_y = i_y%2, q_x = (j_x + 1)%2, q_y = (i_y + 1)%2, r_x = (j_x + 1)/2, r_y = (i_y + 1)/2, t_x = (j_x + p_sx)/2 and t_y = (i_y + p_sy)/2, where p_sx and p_sy are the numbers of EIs along each axis of the sub-image plane, and a%b is the remainder of the division of a by b. In other words, all pixels located at the position [i_x, j_y] in each elemental image are collected to generate the corresponding sub-image at the coordinate [i_m, j_n] in the sub-image plane, where i_x and j_y denote the coordinates of each pixel in the EIA, and i_m and j_n the coordinates of the corresponding pixels in the SIA.
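The pixel rearrangement above can be sketched in a few lines. This is a simplified version of the mapping (collect the pixel at position (p, q) of every elemental image into sub-image (p, q)) rather than a literal transcription of the index bookkeeping of Eq. (1); the function name and NumPy layout are illustrative.

```python
import numpy as np

def eia_to_sia(eia, sy, sx):
    """Rearrange an elemental-image array (EIA) into a sub-image array (SIA).

    eia has shape (sy*ly, sx*lx); each elemental image is sy x sx pixels.
    The pixel at position (p, q) inside every elemental image is collected
    into the (p, q)-th sub-image, which is ly x lx pixels.
    """
    H, W = eia.shape[:2]
    ly, lx = H // sy, W // sx
    sia = np.empty_like(eia)
    for p in range(sy):              # pixel row inside each elemental image
        for q in range(sx):          # pixel column inside each elemental image
            # the (p, q) pixels of all ly x lx elemental images form one sub-image
            sia[p * ly:(p + 1) * ly, q * lx:(q + 1) * lx] = eia[p::sy, q::sx]
    return sia
```

Since the mapping only permutes pixels, the resulting SIA contains exactly the same number of pixels as the EIA.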

This newly generated SIA has a couple of useful features for data compression when compared to the EIA. One is the invariance of the perspective size of the 3-D object among the SIs, where the perspective is defined as the 3-D object size projected into each sub-image through the lenslet array [21].

The other feature is the high similarity among the SIs regardless of the 3-D object depth, because the pixels at the same position in each elemental image are rearranged to generate the corresponding sub-image [18,19]. That is, the SIA has a high similarity among adjacent SIs since all pixels of each sub-image contain the ray information of the same viewpoint on each lenslet.

Fig. 3. Configuration of the ray analysis for the transformed SIs.

Figure 3 shows a configuration of the ray analysis for the transformed SIA. From Fig. 3, it can be seen that the object size is invariant even though the object location relative to the lenslet array shifts, and the 3-D object imaged on the SIA is simply translated as the 3-D object moves under the pickup condition [20]. Ray information emanating from the 3-D object reaches the display panel through the lenslet array.

Here, the transmitted ray does not change when the position of the object is shifted, but the ray information reaching the display panel may include the information of the changed object position. In Fig. 3, the SIA corresponds to a ray diagram at a specific angle from which the 3-D object is observed, regardless of the position of the object. Suppose the ith sub-image is obtained from the specific angle θ_i; then this angle is given by
tan θ_i = y_i / f
(2)
where f and y_i represent the focal length of the lenslet and the position of the ith pixel, respectively. In case the 3-D object is shifted by the distance b between two object positions, the distance L can be calculated from the rays obtained from two points of the 3-D object, which are represented as A and B. This distance is given as follows.
L = b·tan θ_i = b·y_i / f
(3)
Equation (3) shows that the distance L is proportional to the shifting distance of the 3-D object, which means the object position in the SIs changes as a function of the shifting distance of the 3-D object, as shown in Fig. 3.
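As a quick sanity check of Eqs. (2)-(3), the lateral shift of the object in a sub-image can be computed directly; the numeric values below are illustrative and are not taken from the paper.

```python
def sub_image_shift(b, y_i, f):
    """Lateral shift L of the object in the i-th sub-image (Eqs. (2)-(3)).

    b: shifting distance of the 3-D object, y_i: position of the i-th pixel,
    f: focal length of the lenslet.  L = b * tan(theta_i) = b * y_i / f.
    """
    return b * y_i / f

# Illustrative numbers: an 8 mm object shift with y_i = 0.5 mm and f = 2 mm
# shifts the object in the i-th sub-image by 2 mm; doubling b doubles L.
L = sub_image_shift(8.0, 0.5, 2.0)
```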

3. Proposed method

Fig. 4. Overall block diagram of the proposed compression method.

Figure 4 shows an overall block diagram of the proposed compression method employing a new concept of motion-compensated residual images (MCRIs). The proposed method largely consists of four steps: 1) rearrangement of the SIA into a sequence of SIs by using the spiral scanning topology, 2) motion estimation and compensation of the SIs with the block-matching algorithm, 3) generation of MCRIs by sequential computation of the differences between the local SIs and their motion-compensated versions (MCSIs), and 4) compression of the MCRIs with MPEG-4.

3.1 Rearrangement of the SIA into a sequence of SIs

In this paper, a spiral scanning topology is employed for rearrangement of the SIA into a sequence of SIs [20], in which the SIA is sequentially scanned along the spiral direction starting from the center of the SIA, as shown in Fig. 4.

Thus far, several approaches to enhance the similarity among the EIs and SIs have been researched [16], and a scanning topology has mostly been employed for rearrangement of the originally picked-up 2-D EIA into a sequence of EIs. Three kinds of scanning topologies are used in the integral-imaging system: the parallel, perpendicular and spiral methods [18,19]. The parallel scanning topology is known to be suitable for integral images having different sizes in the horizontal and perpendicular directions. The perpendicular and spiral scanning topologies are mostly employed to minimize the motion vectors among the SIs of the SIA transformed from the picked-up EIA.

In particular, the spiral scanning method is preferred when the EIs are picked up from a 3-D object located on the center axis of the pick-up system. In other words, with the spiral scanning method, we can get a highly focused image containing the most perspective data of the 3-D object at the center of the SIA, and this focused sub-image can be used as the reference sub-image to be compared with the other rearranged SIs for compression.

That is, the SIA represents a 2-D array of de-magnified images of a 3-D object with different perspectives, so the spiral scanning method may show the best performance in enhancing the similarity among the rearranged SIs.

Fig. 5. Rearrangement of the SIA into a sequence of SIs by spiral scanning: (a) Segmentation of SIs with spiral scanning topology, (b) Sequentially rearranged SIs.

Figure 5 shows the process of spiral scanning, in which the first sub-image, outlined in red, is assigned as the reference sub-image and the other SIs are sequentially rearranged following the dotted red line. The sub-image outlined in blue becomes the last one of the sequence in Fig. 5(b).
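The spiral rearrangement can be sketched as a walk that starts at the central sub-image and spirals outward, visiting each grid position once. The routine below is a generic center-out spiral for an n × n array; the exact scan direction used in the paper may differ.

```python
def spiral_order(n):
    """Indices of an n x n sub-image array in spiral order from the center.

    Walk right, down, left, up with growing run lengths, keeping only
    in-bounds positions, until all n*n positions have been visited.
    """
    r = c = n // 2                                # start at (or near) the center
    order, step, d = [(r, c)], 1, 0
    moves = [(0, 1), (1, 0), (0, -1), (-1, 0)]    # right, down, left, up
    while len(order) < n * n:
        for _ in range(2):                        # each run length is used twice
            dr, dc = moves[d % 4]
            for _ in range(step):
                r, c = r + dr, c + dc
                if 0 <= r < n and 0 <= c < n:     # skip positions outside the SIA
                    order.append((r, c))
            d += 1
        step += 1
    return order
```

Here `order[0]` is the position of the reference sub-image, and the last element corresponds to the final sub-image of the sequence.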

3.2 Motion estimation and compensation among the sequentially rearranged SIs

The SIs transformed from the picked-up EIs have high similarity among their object images, but the object in each sub-image may be somewhat displaced from the center location depending on its distance from the reference sub-image. That is, motion vectors must exist among the SIs, which may cause an additional increase in image data to be compressed [19].

Accordingly, in this paper, the motion vectors among the rearranged SIs of Fig. 5(b) are estimated and compensated by using the block-matching algorithm [22].

Fig. 6. Flowchart of the proposed motion estimation and compensation process based on the block-matching algorithm.

Figure 6 shows a flowchart of the proposed motion estimation and compensation process based on block matching between the reference sub-image and the other local SIs.

In order to estimate the motion vectors between the reference sub-image and the other SIs through the block-matching process, a mean-squared-error (MSE) criterion, Eq. (4), is employed in this paper [23].
MSE = (1/(N × M)) Σ_{i=1}^{N} Σ_{j=1}^{M} (C_ij − R_ij)²
(4)
where C_ij and R_ij represent pixels of one of the local SIs and of the reference sub-image, respectively, and i and j denote the pixel indices on the x-axis and y-axis. That is, a block of the reference sub-image of size N × M is fully scanned over each local sub-image: as shown in Fig. 5, the block of the reference sub-image containing the object is sequentially matched with those of the local SIs by using the MSE criterion of Eq. (4) to estimate their moving vectors.
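A full-search version of this block matching can be sketched as follows. The exhaustive scan is shown for clarity, whereas practical implementations usually restrict the search window around the block's original position; the function and variable names are illustrative.

```python
import numpy as np

def estimate_motion(ref_block, sub_image):
    """Full-search block matching with the MSE criterion of Eq. (4).

    Slides the N x M reference block over every position of a local
    sub-image and returns the top-left coordinate with minimum MSE,
    together with that MSE value.
    """
    N, M = ref_block.shape
    H, W = sub_image.shape
    best_mse, best_pos = np.inf, (0, 0)
    for i in range(H - N + 1):
        for j in range(W - M + 1):
            candidate = sub_image[i:i + N, j:j + M].astype(float)
            mse = np.mean((candidate - ref_block) ** 2)   # Eq. (4)
            if mse < best_mse:
                best_mse, best_pos = mse, (i, j)
    return best_pos, best_mse
```

The moving vector is then the difference between the matched position and the block's position in the reference sub-image.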

With these estimated moving vectors, the reference sub-image is then shifted to each matching point of the local SIs to compensate for their moving vectors, from which the motion-compensated versions of each local sub-image are generated. In the proposed method, the motion-compensated versions of the local SIs are generated by shifting the reference sub-image to each matching point of the local SIs, contrary to the conventional method in which the local SIs have been shifted to their matching points on the reference sub-image [19].

Accordingly, in the proposed method, a much enhanced similarity among the motion-compensated versions of the local SIs can be achieved, which means that the spatially redundant data between the local SIs and their motion-compensated versions (MCSIs) can be effectively removed in the following computation process of motion-compensated residual images (MCRIs).

Fig. 7. A process of motion estimation and compensation and computation of MCRI: (a) Block α in the reference sub-image, (b) Matching block β in the ith sub-image, (c) Shifting the reference sub-image from α to β for motion compensation, (d) Motion-compensated version of the ith sub-image of (b), (e) MCRI computed between the ith sub-image of (b) and its motion-compensated version of (d), (f) Reconstructed object image.

Figure 7 shows an example of the proposed motion estimation and compensation process for the ith local sub-image based on the block-matching algorithm.

The outlined area α in the reference sub-image of Fig. 7(a) represents a predetermined macro-block (N × M) of the reference sub-image to be matched with those of the local SIs, where R(10,11) represents the location coordinates of the reference block. The block α of the reference sub-image is then matched against the ith sub-image of Fig. 7(b) over the full image area to find the corresponding block β with the MSE algorithm. From this block-matching process, the corresponding point C(4,16) in the ith sub-image can be estimated, and the difference between these two points, D(−6,5), is computed as the shifted location coordinates of the block α of the reference sub-image.

With this estimated motion vector, the object in the reference sub-image is shifted from the point R(10,11) to the point C(4,16) to generate the motion-compensated version of the ith sub-image, which is shown in Fig. 7(d). Then, by computing the difference between the ith sub-image of (b) and its motion-compensated version of (d), the corresponding ith MCRI can be generated, which is shown in Fig. 7(e). Finally, the ith sub-image reconstructed from the ith MCRI is also included in Fig. 7(f) for comparison.

In the proposed method, the reference sub-image including the most perspective data of the object is shifted to the matching points of each local sub-image. Therefore, the spatial redundancy among the local SIs can be effectively removed in the following process of generation of MCRIs, from which much enhanced compression efficiency can be expected.

3.3 Generation of motion-compensated residual images

Fig. 8. Generation process of a sequence of MCRIs in the proposed method: (a) Local SIs, (b) Motion-compensated versions of the local SIs, (c) MCRIs.

Figure 8 shows the generation process of the MCRIs. The reference sub-image and the other consecutive local SIs are shown in Fig. 8(a), and the motion-compensated versions of the local SIs (MCSIs) are shown in Fig. 8(b). By sequentially computing the differences between the consecutive local SIs of Fig. 8(a) and their motion-compensated versions of Fig. 8(b), a sequence of MCRIs can be generated, as shown in Fig. 8(c).

Here, in the proposed method, the MCRIs are generated by computing the differences between the local SIs and their motion-compensated versions, so the resultant MCRIs may contain both negative and positive pixel intensity values in the range of [−255 ~ 255]. However, since the pixel values ranging from −255 to −1 in the MCRIs may cause some loss of the negative redundancy, pixels with negative intensity values could be degraded in the decoding process. Therefore, it is necessary to normalize the negative pixel values into the positive range of 0 to 255 for efficient transmission of the MCRIs [13].

Now, each pixel value of the MCRIs can be transformed into a new one with a positive intensity value ranging from 0 to 255, preventing the data loss that would otherwise occur in the reconstruction process, by using Eq. (5) as follows.
P_r(i,j) = (P_o(i,j) − P_c(i,j) + P_max) / ω,  for i = 1, …, n and j = 1, …, m
(5)
Here, P_r, P_o and P_c represent the pixel intensity values of the MCRIs, the local SIs and the motion-compensated versions of the local SIs, respectively. In addition, P_max and ω represent the maximum pixel value of the MCSIs and the quality factor, respectively. Here, the quality factor ω, which determines the number of quantum steps for normalization, is assumed to be in the range of [2, 10] considering the decoding process [20]. That is, for ω = 2 the pixel intensity values are normalized into the range of [0~127], whereas for ω = 10 they are normalized into the range of [0~25]. Accordingly, as the value of ω gets smaller, the number of quantum steps for normalization increases, which results in an improvement of the reconstructed image quality as well.
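Equation (5) amounts to an offset-and-scale of the residual. A minimal float sketch is shown below; an actual codec would round the result to 8-bit integers, which makes the mapping slightly lossy for ω > 1, and the function name is illustrative.

```python
def encode_mcri(p_o, p_c, p_max=255.0, omega=2.0):
    """Normalize a motion-compensated residual into a positive range (Eq. (5)).

    p_o: pixel of the local sub-image, p_c: pixel of its motion-compensated
    version, p_max: maximum pixel value of the MCSIs, omega: quality factor.
    """
    return (p_o - p_c + p_max) / omega

# e.g. p_o = 100, p_c = 120: the residual -20 maps to (100 - 120 + 255) / 2 = 117.5
```

The same expression works elementwise on whole image arrays.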

4. Compression of MCRIs with MPEG-4

A sequence of MCRIs can be regarded as a video stream, so the well-established motion-picture algorithm of MPEG-4 [24] is employed for their compression in this paper. That is, MPEG-4 compresses the MCRIs based on the correlation between them. The encoder performs color conversion to YCbCr with chroma subsampling of 4:2:0 or 4:2:2, in which Y is the luminance component, and Cb and Cr are the chrominance components.

MPEG-4 has two strategies for video compression: intra-frame coding and inter-frame coding. The former is similar to still-image compression with JPEG. For the latter, MPEG subtracts macro-blocks aligned with the motion vector and performs DCT coding of their difference. The macro-block is the smallest coded unit, which consists of four 8×8 blocks of Y, one 8×8 block of Cb and one 8×8 block of Cr. The compressed MCRIs (C-MCRIs) are finally transmitted to the recipient.

5. Reconstruction process

Fig. 9. Reconstruction process of the SIs from the received C-MCRIs: (a) Decoded MCRIs, (b) Decoded motion-compensated versions of SIs (decoded MCSIs), (c) Decoded local SIs.

Figure 9 shows the reconstruction process of the proposed method, which is just the inverse of the compression process of Fig. 8. Here, the C-MCRIs are received with pixel values ranging from 0 to 255, so the original SIs, including the negative and positive redundancy, must be decoded from these received MCRIs.

At first, each pixel value of the motion-compensated versions of the local SIs, P_c, is decoded by using the received reference sub-image and motion vectors. Secondly, the pixel values of the received MCRIs, P_r, are decoded into the original pixel values of the local SIs, P_o, ranging from −255 to 255, by using Eq. (6).
P_o(i,j) = ω × (P_r(i,j) − P_max/ω) + P_c(i,j),  for i = 1, …, n and j = 1, …, m
(6)
Then, the decoded local SIs are inversely transformed into the EIs to reconstruct a 3-D object image.
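Equation (6) simply inverts the normalization of Eq. (5). A minimal sketch follows; in float arithmetic the round trip is exact, whereas the 8-bit quantization of a real transmission makes it approximate. The function name is illustrative.

```python
def decode_mcri(p_r, p_c, p_max=255.0, omega=2.0):
    """Recover a local sub-image pixel from a received MCRI pixel (Eq. (6)).

    p_r: received MCRI pixel, p_c: decoded motion-compensated pixel,
    p_max and omega: the same constants used on the encoder side.
    """
    return omega * (p_r - p_max / omega) + p_c

# Inverting the Eq. (5) example: p_r = 117.5 with p_c = 120 gives back p_o = 100.
```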

6. Performance evaluation

Here, a concept of 'correlation quality (CQ)' is employed to comparatively evaluate the similarity characteristics between the rearranged EIs and SIs [18]. For comparison, the originally picked-up EIA is also repositioned into a sequence of EIs by using the spiral scanning topology, just as for the SIs mentioned above.

Assume that two adjacent images in the sequence of EIs are E_i and E_j, each composed of M × N pixels. Then, the CQ value between these two EIs can be defined by
CQ = [Σ_{m=1}^{M} Σ_{n=1}^{N} E_i(m,n)·E_j(m,n)] / [Σ_{m=1}^{M} Σ_{n=1}^{N} E_i²(m,n)],
(7)
where m and n are the pixel coordinates of the elemental image. Also, the average correlation quality (ACQ) value over all of the EIs is given by
ACQ = (1/P) Σ_{i=1}^{P} {[Σ_{m=1}^{M} Σ_{n=1}^{N} E_i(m,n)·E_j(m,n)] / [Σ_{m=1}^{M} Σ_{n=1}^{N} E_i²(m,n)]},
(8)
where P is the total number of EIs. Now, the ACQ values for the EIA and the SIA are computed with Eq. (8) and used to measure the similarity among the adjacent EIs or SIs. That is, in case two images are perfectly the same, the CQ value turns out to be 1; hence, as the ACQ value gets closer to 1, the similarity among the adjacent EIs or SIs increases correspondingly.
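Reading Eq. (7) as a ratio of sums (which is what makes two identical images give CQ = 1), the metric can be sketched as:

```python
import numpy as np

def correlation_quality(e_i, e_j):
    """CQ between two adjacent images (Eq. (7)): the sum of pixel-wise
    products of E_i and E_j normalized by the energy of E_i, so
    identical images yield exactly 1."""
    e_i = e_i.astype(float)
    e_j = e_j.astype(float)
    return float((e_i * e_j).sum() / (e_i * e_i).sum())

def average_cq(images):
    """ACQ (Eq. (8)): CQ averaged over consecutive image pairs."""
    cqs = [correlation_quality(a, b) for a, b in zip(images, images[1:])]
    return sum(cqs) / len(cqs)
```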

Furthermore, to confirm the performance of the proposed method, its compression efficiency is evaluated by two parameters: the compression rate (CR) and the peak signal-to-noise ratio (PSNR). The CR and the PSNR are defined in Eq. (9) and Eq. (10), respectively.
CR = Original image size (bytes) / Compressed image size (bytes)
(9)
PSNR(I_o, I_c) = 10·log10(255² / MSE),
(10)
where I_o and I_c denote the originally picked-up EIA before encoding and the decoded EIA at the receiver, respectively.
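Both figures of merit are one-liners; a sketch assuming 8-bit images (peak value 255):

```python
import numpy as np

def compression_rate(original_bytes, compressed_bytes):
    """CR (Eq. (9)): original file size over compressed file size."""
    return original_bytes / compressed_bytes

def psnr(i_o, i_c):
    """PSNR in dB between the original EIA i_o and the decoded EIA i_c (Eq. (10))."""
    mse = np.mean((i_o.astype(float) - i_c.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```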

7. Experiments and results

7.1. Experiments

To confirm the feasibility of the proposed method, optical experiments have been carried out. In the experiments, a 3-D object 'Car' with a volume of 50mm × 40mm × 30mm is used as the test object.

Fig. 10. Experimental setup for picking up the EIA of the test object 'Car'.

Figure 10 shows the experimental setup for picking up the EIA of the test object 'Car'.

The EIA of the test object is optically picked up with a CCD camera through the lenslet array. Here, the lenslet array is composed of 32 × 32 lenslets, in which each lenslet is square-shaped and has a uniform size of 1.08mm × 1.08mm. In the experiment, the distance between the CCD camera and the lenslet array (g) is fixed at 30mm, but the distance from the lenslet array to the test object (z) is set to 30mm, 60mm and 120mm in turn for effective evaluation of the proposed method's compression performance.

Fig. 11. Three cases of the test object 'Car' for experiments: (a) Car_30, (b) Car_60, (c) Car_120.

Figure 11 shows the three cases of the test object used in the experiments, denoted here as Car_30, Car_60 and Car_120.

Accordingly, by sequentially changing the distance between the lenslet array and the test object, three kinds of EIAs composed of 32 × 32 EIs are picked up, in which each elemental image consists of 32 × 32 pixels, so that 1,024 (32 × 32) SIs are generated for each picked-up EIA.

Figure 12(a)
Fig. 12 Three kinds of picked-up EIAs (a)-(c) and its corresponding SIAs (a′)-(c′) for each case of Car_30, Car_60, and Car_120.
12(c) shows three kinds of picked-up EIAs composed of 1024 × 1024 pixels for three cases of Fig. 10, in which magnified areas of each picked-up EIA are also shown. As we can see in each magnified portion of the EIAs, the sampled ray information of the test object and the similarity among adjacent EIs increase as the distance between the object and lenslet array increases.

Then, these EIAs are transformed into their corresponding SIAs, as shown in Figs. 12(a′)–12(c′). As Fig. 12 shows, each SIA consists of de-magnified images of the test object, so the similarity among adjacent SIs is much higher than that among the EIs. Here, the three picked-up EIAs and their corresponding SIAs are recorded in the BMP (Windows bitmap) file format, and the original file size of each EIA and SIA shown in Fig. 12 is 3,145,782 bytes.

The ACQ values computed for each picked-up EIA and its corresponding SIA by using Eq. (8) are shown in Table 1.

Table 1. ACQ values of the EIA and the SIA for each case of Fig. 11

Table 1 reveals that the ACQ values of the SIAs are improved by up to 95.1%, 132.0% and 56.9%, respectively, over those of the EIAs for the three cases. This means that much higher similarity exists among the SIs than among the originally picked-up EIs, since the SIs are composed of de-magnified images of the test object. Accordingly, an MSE-based motion-estimation scheme can be efficiently applied to these SIs for their enhanced compression.

Then, each SIA is sequentially rearranged along a spiral direction starting from the center of the SIA for the three cases, with the center sub-image assigned as the reference sub-image. Even though the SIs show high similarity among their object images, the object in each local sub-image may be somewhat displaced from the center location depending on its distance from the center sub-image, which means that motion vectors must exist among the SIs.
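The spiral rearrangement described above can be sketched as an outward walk over the sub-image grid starting at the center cell. This is an illustrative implementation (the grid indexing convention and starting direction are our assumptions, not from the paper):

```python
def spiral_order(n):
    """Indices of an n x n grid visited in an outward spiral from the center."""
    r = c = n // 2          # start at (or near) the center cell
    order = [(r, c)]
    # Direction cycle: right, down, left, up; leg length grows every two turns
    moves = [(0, 1), (1, 0), (0, -1), (-1, 0)]
    step, d = 1, 0
    while len(order) < n * n:
        for _ in range(2):                 # two legs per step length
            dr, dc = moves[d % 4]
            for _ in range(step):
                r, c = r + dr, c + dc
                if 0 <= r < n and 0 <= c < n:  # keep only in-grid cells
                    order.append((r, c))
            d += 1
        step += 1
    return order
```

Applied with n = 32, this produces the sequence in which the 1,024 SIs are fed to the motion-estimation stage, with the center sub-image first.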

Here, these motion vectors among the SIs are estimated and compensated by using the block-matching method. Figures 13(a)–13(c) show the sequences of original SIs rearranged from each SIA of Figs. 12(a′)–12(c′), and Figs. 13(a′)–13(c′) illustrate the sequences of their motion-compensated versions for the three cases. As mentioned above, the reference sub-image is shifted to each matching point of the local SIs for the effective generation of their motion-compensated versions.

Fig. 13 Sequences of local SIs (a)-(c) and their motion-compensated versions (a′)-(c′) for each case of Car_30, Car_60 and Car_120.

Now, by using Eq. (2), motion-compensated residual images (MCRIs) are generated from the local SIs of Figs. 13(a)–13(c) and their motion-compensated versions (MCSIs) of Figs. 13(a′)–13(c′); the results are shown in Figs. 14(a)–14(c).

Fig. 14 Sequences of MCRIs generated from the local SIs and their motion-compensated versions for each case of (a) Car_30, (b) Car_60 and (c) Car_120.
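As a hedged sketch of these two steps, full-search matching can shift the reference sub-image to its best alignment with each local sub-image, after which the MCRI is their pixel-wise difference. The circular shift via `np.roll` and the sum-of-squared-error criterion are simplifications for illustration; a real codec would pad rather than wrap, and the paper's block-matching details may differ:

```python
import numpy as np

def best_shift(reference, target, search=4):
    """Full-search estimate of the (dy, dx) shift that best aligns the
    reference sub-image with the target, by minimum sum of squared error."""
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(reference, (dy, dx), axis=(0, 1))
            err = np.sum((shifted.astype(np.int64) - target.astype(np.int64)) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def mcri(reference, target, search=4):
    """Motion-compensated residual image: target minus the shifted reference."""
    dy, dx = best_shift(reference, target, search)
    compensated = np.roll(reference, (dy, dx), axis=(0, 1))
    return target.astype(np.int16) - compensated.astype(np.int16)
```

When the displacement is compensated well, the residual is sparse and near zero, which is what makes the MCRI sequence compress so much better than the raw SIs.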

Finally, these sequences of MCRIs, treated as consecutive video frames, are compressed by using the MPEG-4 algorithm of the ISO/IEC 14496 video codec software and transmitted to the recipient.

6.2. Comparison of CR values between the proposed and the JPEG-based EIs method

Figure 15 shows the calculated CR and PSNR values of the proposed (MPEG-based MCRIs) method for each case of Figs. 11(a)–11(c), in which the compression results of the conventional JPEG-based EIs method [13] are also included for comparison. Here, the JPEG quality factor is varied from 1 to 8 in converting the BMP files into the JPEG format.

Fig. 15 Calculated CR and PSNR values of the proposed and the JPEG-based EIs method.
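The compression ratio (CR) reported here is the raw size divided by the encoded size. The following sketch uses `zlib` from the Python standard library purely as a stand-in codec to illustrate how CR is measured and how it rises with image redundancy; the paper itself uses JPEG and MPEG-4 encoders:

```python
import zlib

import numpy as np

def compression_ratio(image, level=6):
    """CR = raw byte count / compressed byte count, with zlib as the codec."""
    raw = image.tobytes()
    return len(raw) / len(zlib.compress(raw, level))
```

A highly redundant image (e.g. a near-zero residual frame) yields a far larger CR than a noisy one, which mirrors why the sparse MCRIs compress so much better than the raw EIs.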

From Fig. 15, it is found that the CR of the proposed method is improved by up to 157.5% for Car_30, 912.7% for Car_60 and 1,513.4% for Car_120, compared to those of the conventional JPEG-based EIs method under the condition of PSNR = 30 dB. That is, the compression efficiency of the proposed method is improved by up to 861.1% on average over that of the JPEG-based EIs method.

As seen in Fig. 15, the CR of the proposed method improves as the value of ω increases and as the distance from the lenslet array to the test object gets longer. Furthermore, the calculated CR values for which the PSNR stays above 30 dB range widely, from 31.0 to 314.6, in the proposed method. It should be noted that the decoded object images are easily recognizable by the human visual system, since they all show a PSNR of more than 30 dB [25].

On the other hand, in the conventional JPEG-based EIs method, the calculated CR values with a PSNR above 30 dB range only narrowly, from 15.3 to 28.3, which means that the conventional method is not efficient for compressing the picked-up EIs.

6.3. Comparison of CR values between the proposed and the MPEG-based MCSIs method

Figure 16 shows the CR and PSNR values of the conventional MPEG-based MCSIs method [13,19] together with those of the proposed method. As seen in Fig. 16, the proposed method achieves a much higher and wider range of CR values than the MPEG-based MCSIs method for all cases of Car_30, Car_60 and Car_120.

Fig. 16 Calculated CR and PSNR values of the proposed and the conventional MPEG-based MCSIs method.

The CR of the proposed method is improved by up to 1,497.0% for Car_120 compared to that of the MPEG-based MCSIs method under the condition of PSNR = 30 dB.

In the MPEG-based MCSIs method, CR values with a PSNR above 30 dB are obtained only in the case of Car_120, and they are almost fixed at 19.7. Accordingly, the compression performance of the MPEG-based MCSIs method is found to be much worse than that of the conventional JPEG-based EIs method, even though the motion vectors among the SIs are compensated. These results can be seen visually in Figs. 15 and 16.

Actually, in the MPEG-based MCSIs method, the normalized cross-correlation (NCC) method is employed to estimate the motion vectors among the SIs. However, because the correlation between the reference sub-image and each of the consecutive SIs is performed on an image-by-image basis, the correlation results can be strongly affected by the pickup conditions of the 3-D object, such as brightness and illumination angle. Therefore, the motion vectors among the SIs cannot be accurately compensated, and the additional image data resulting from the perspective differences of the object in each sub-image is not effectively removed.

Moreover, the object in each local sub-image has its own perspective, so part of the object in the MCSIs is cut off depending on the distance from the reference sub-image, which results in some loss of object-image data in the MCSIs even though their motion vectors are removed.

In short, inaccurate estimation of motion vectors and loss of object-image data during the motion-compensation process deteriorate the compression efficiency of the conventional MPEG-based MCSIs method. In addition, this method does not remove the spatially redundant data among the SIs, which also greatly increases the amount of image data to be compressed.

6.4. Performance comparison between the proposed and the MPEG-based RIs method

Figure 17 shows the calculated CR and PSNR values for the conventional MPEG-based RIs (residual images) method [20] together with those of the proposed method for each case of Figs. 11(a)–11(c). The results of Fig. 17 reveal that the CR of the proposed method is improved by up to 52.7% for Car_30, 114.7% for Car_60 and 189.0% for Car_120, compared to those of the conventional MPEG-based RIs method under the condition of PSNR = 30 dB. That is, the compression efficiency of the proposed method is improved by up to 118.8% on average over that of the MPEG-based RIs method.

Fig. 17 Calculated CR and PSNR values of the proposed and the conventional MPEG-based RIs method.

As mentioned above, the SIs have their own perspectives, so motion vectors exist among the SIs, which cause an additional increase in the image data to be compressed. In the conventional MPEG-based RIs method, however, these motion vectors are not considered, which results in lower compression efficiency than that of the proposed method.

In addition, in the conventional MPEG-based RIs method, a sequence of RIs is generated by computing the differences between the reference sub-image and the other consecutive sub-images. Because noticeable differences exist between the object image in the reference sub-image and those in SIs located far from the center, the redundant data between them is removed inefficiently.

On the other hand, in the proposed method, the motion vectors are accurately estimated and compensated by using the pixel-based block-matching algorithm. Moreover, spatially redundant data is effectively removed by calculating the differences between the local SIs and their motion-compensated versions.

In other words, the proposed method simultaneously considers both the motion vectors and the spatial redundancy among the SIs, so its compression rate is significantly enhanced compared to the conventional methods. The experimental results indicate that removing the redundant data among the SIs plays an important role in the overall compression performance of the integral-imaging system.

6.5. Analysis of experimental results

Table 2 shows the averaged CR over the CR components with a PSNR of 30 dB for each of the proposed, the conventional JPEG-based EIs, the MPEG-based MCSIs and the MPEG-based RIs methods. In Table 2, the notation ‘-’ signifies that no CR components with a PSNR above 30 dB exist for that method.

Table 2. Comparison of the CR between the Proposed and the Conventional Methods

Table 2 confirms that the proposed method shows the highest CR for all cases of Car_30, Car_60 and Car_120 in the experiment. For the cases of Car_30 and Car_60, the compression rate of the proposed method is enhanced by 1.57- and 0.53-fold, and by 9.13- and 1.15-fold, respectively, compared to those of the conventional JPEG-based EIs and MPEG-based RIs methods. Moreover, for the case of Car_120, the compression rate of the proposed method is enhanced by 15.13-, 14.97- and 1.89-fold on average compared to the conventional JPEG-based EIs, MPEG-based MCSIs and MPEG-based RIs methods, respectively.

6.6. Reconstructed object images

Figure 18 illustrates the SIs and the EIs decoded from the received MCRIs, together with the object images reconstructed from the decoded EIs, for the case of Car_120. Here, the PSNR values of the reconstructed object images, with compression rates of 65.8 for ω = 2, 244.7 for ω = 5 and 320.8 for ω = 9, are found to be 37.5, 33.5 and 30.2 dB, respectively.

Fig. 18 Decoded SIs, EIs and reconstructed object images for each case of (a) Car_120 and ω = 2, (b) Car_120 and ω = 5 and (c) Car_120 and ω = 9.

These experimental results reveal that the compression rate of the EIs increases as the quality factor ω increases, while the PSNR of the reconstructed object image becomes somewhat lower. Nevertheless, all the reconstructed object images in the proposed method have PSNR values above 30 dB, so the reconstructed object images shown in Fig. 18 can be easily recognized by the human visual system.

6.7. Discussions

Figure 19 shows the calculated time responses for motion compensation of each local sub-image in both the encoding and decoding processes. The average time responses for motion compensation are found to be 123.00 ms, 135.16 ms and 128.05 ms in the encoding process, and 129.51 ms, 126.56 ms and 126.96 ms in the decoding process, for the cases of Car_30, Car_60 and Car_120, respectively.

Fig. 19 Time responses for motion compensation of each local sub-image: (a) in the encoding process, (b) in the decoding process.

This delayed time response partly results from the fact that a fully searched, serially processed MSE-based block-matching algorithm was employed for motion estimation and compensation.

As mentioned above, the time response can be much improved simply by using parallel matching algorithms and hardware systems [26]. However, these experimental results also reveal one advantage of the proposed method: because the SIs, rather than the originally picked-up EIs, are used for motion compensation, the time response for motion compensation does not depend on the distance between the pickup lenslet array and the 3-D object.

Even though the block-matching algorithm (BMA) was successfully used for accurate estimation of the motion vectors between the SIs, the motion-estimation process takes considerable time because the BMA is processed in a full, serial search mode. Table 3 shows the time responses for the encoding and decoding processes: on average, 139.6 s for encoding and 150.05 s for decoding, computed on a Core i5 processor (2.67 GHz, Intel) with Matlab R2008a (MathWorks Inc.).

Table 3. Time Responses for the Encoding and Reconstruction Processes for Three Cases

For practical application of the proposed method, however, the encoding and decoding processes must be sped up. Many approaches for enhancing the speed of the BMA during compression have already been reported [26], and we plan to apply them to our scheme in future work to improve the time response of the encoding and decoding processes.

7. Conclusions

In this paper, we have proposed a novel approach for effectively enhancing the compression rate of integral images by employing a new concept of MCRIs, which are generated by simultaneously considering both the temporal and spatial redundancy among the SIs. Experimental results show that the compression efficiency of the proposed method is improved by up to 861.1% on average over that of the JPEG-based EIs method, and by up to 1,497.0% and 118.8% on average over those of the MPEG-based MCSIs and MPEG-based RIs methods, respectively.

These experimental results confirm the feasibility of the proposed method for practical application.

Acknowledgment

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. 2011-0030815). The present research has also been partially conducted under the Research Grant of Kwangwoon University in 2011.

References and links

1. G. Lippmann, “La photographie integrale,” Comptes-Rendus Academie des Sciences 146, 446–451 (1908).
2. S. A. Benton, ed., Selected Papers on Three-Dimensional Displays (SPIE Optical Engineering Press, 2001).
3. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. 36(7), 1598–1603 (1997). [CrossRef] [PubMed]
4. D.-C. Hwang, J.-S. Park, S.-C. Kim, D.-H. Shin, and E.-S. Kim, “Magnification of 3D reconstructed images in integral imaging using an intermediate-view reconstruction technique,” Appl. Opt. 45(19), 4631–4637 (2006). [CrossRef] [PubMed]
5. B.-G. Lee, H.-H. Kang, and E.-S. Kim, “Occlusion removal method of partially occluded object using variance in computational integral imaging,” 3D Res. 1(2), 2.1–2.5 (2010). [CrossRef]
6. S.-C. Kim, C.-K. Kim, and E.-S. Kim, “Depth-of-focus and resolution-enhanced three-dimensional integral imaging with non-uniform lenslets and intermediate-view reconstruction technique,” 3D Res. 2(2), 2.1–2.9 (2011). [CrossRef]
7. P. B. Han, Y. Piao, and E.-S. Kim, “Accelerated reconstruction of 3-D object images using estimated object area in backward computational integral imaging reconstruction,” 3D Res. 1, 4.1–4.8 (2011).
8. S.-H. Hong and B. Javidi, “Improved resolution 3-D object reconstruction using computational II with time multiplexing,” Opt. Express 12(19), 4579–4588 (2004). [CrossRef] [PubMed]
9. J.-B. Hyun, D.-C. Hwang, D.-H. Shin, and E.-S. Kim, “Curved computational integral imaging reconstruction technique for resolution-enhanced display of three-dimensional object images,” Appl. Opt. 46(31), 7697–7708 (2007). [CrossRef] [PubMed]
10. Y. Piao and E.-S. Kim, “Resolution-enhanced reconstruction of far 3-D objects by using a direct pixel mapping method in computational curving-effective integral imaging,” Appl. Opt. 48(34), H222–H230 (2009). [CrossRef] [PubMed]
11. J.-S. Jang and B. Javidi, “Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics,” Opt. Lett. 27(5), 324–326 (2002). [CrossRef] [PubMed]
12. J.-S. Park, D.-C. Hwang, D.-H. Shin, and E.-S. Kim, “Enhanced-resolution computational integral imaging reconstruction using an intermediate-view reconstruction technique,” Opt. Eng. 45(11), 117004 (2006). [CrossRef]
13. H.-H. Kang, B.-G. Lee, and E.-S. Kim, “Efficient compression of rearranged time-multiplexed elemental image arrays in MALT-based three-dimensional integral imaging,” Opt. Commun. 284(13), 3227–3233 (2011). [CrossRef]
14. O. Matoba, E. Tajahuerce, and B. Javidi, “Real-time three-dimensional object recognition with multiple perspectives imaging,” Appl. Opt. 40(20), 3318–3325 (2001). [CrossRef] [PubMed]
15. M. Forman and A. Aggoun, “Quantization strategies for 3D-DCT based compression of full parallax 3D images,” in Proceedings of IEEE 6th International Conference on Image Processing and Applications, IPA97, No. 443, 32–35 (1997).
16. S. Yeom, A. Stern, and B. Javidi, “Compression of 3D color integral images,” Opt. Express 12(8), 1632–1642 (2004). [CrossRef] [PubMed]
17. J.-S. Jang, S. Yeom, and B. Javidi, “Compression of ray information in three-dimensional integral imaging,” Opt. Eng. 44(12), 127001 (2005). [CrossRef]
18. H.-H. Kang, D.-H. Shin, and E.-S. Kim, “Compression scheme of sub-images using Karhunen-Loeve transform in three-dimensional integral imaging,” Opt. Commun. 281(14), 3640–3647 (2008). [CrossRef]
19. H.-H. Kang, D.-H. Shin, and E.-S. Kim, “Efficient compression of motion-compensated sub-images with Karhunen-Loeve transform in three-dimensional integral imaging,” Opt. Commun. 283(6), 920–928 (2010). [CrossRef]
20. C.-H. Yoo, H.-H. Kang, and E.-S. Kim, “Enhanced compression of integral images by combined use of residual images and MPEG-4 algorithm in three-dimensional integral imaging,” Opt. Commun. 284(20), 4884–4893 (2011). [CrossRef]
21. J.-H. Park, J.-H. Kim, and B.-H. Lee, “Three-dimensional optical correlator using a sub-image array,” Opt. Express 13(13), 5116–5126 (2005). [CrossRef] [PubMed]
22. J.-S. Lee, J.-H. Ko, and E.-S. Kim, “Real-time stereo object tracking system by using block matching algorithm and optical binary phase extraction joint transform correlator,” Opt. Commun. 191(3-6), 191–202 (2001). [CrossRef]
23. R. C. Gonzalez, R. E. Woods, and S. L. Eddins, eds., Digital Image Processing (Pearson Prentice Hall, 2008).
24. I. E. G. Richardson, ed., H.264 and MPEG-4 Video Compression (Wiley, 2003).
25. D. S. Taubman and M. W. Marcellin, eds., JPEG2000: Image Compression Fundamentals, Standards and Practice (Kluwer Academic Publishers, 2002).
26. A. Barjatya, “Block matching algorithms for motion estimation,” (2005), Matlab Central: http://www.mathworks.com/matlabcentral/fileexchange/8761.

OCIS Codes
(080.0080) Geometric optics : Geometric optics
(110.0110) Imaging systems : Imaging systems
(110.6880) Imaging systems : Three-dimensional image acquisition

ToC Category:
Imaging Systems

History
Original Manuscript: January 3, 2012
Revised Manuscript: February 11, 2012
Manuscript Accepted: February 12, 2012
Published: February 21, 2012

Citation
Ho-Hyun Kang, Ju-Han Lee, and Eun-Soo Kim, "Enhanced compression rate of integral images by using motion-compensated residual images in three-dimensional integral-imaging," Opt. Express 20, 5440-5459 (2012)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-20-5-5440


