Optics Express

  • Editor: Andrew M. Weiner
  • Vol. 22, Iss. 2 — Jan. 27, 2014
  • pp: 1533–1550

Optical three-dimensional refocusing from elemental images based on a sifting property of the periodic δ-function array in integral-imaging

Jae-Young Jang, Donghak Shin, and Eun-Soo Kim


Optics Express, Vol. 22, Issue 2, pp. 1533-1550 (2014)
http://dx.doi.org/10.1364/OE.22.001533




Abstract

We propose a novel approach to optically refocus three-dimensional (3-D) objects on their real depth from the captured elemental image array (EIA) by using a sifting property of the periodic δ-function array (PDFA) in integral-imaging. By convolving the PDFAs whose spatial periods correspond to each object’s depth with the sub-image array (SIA) transformed from the EIA, a set of spatially filtered-SIAs (SF-SIAs) for each object’s depth can be extracted. These SF-SIAs are then inverse-transformed into the corresponding versions of the EIAs, and from these, 3-D objects with their own perspectives can be reconstructed to be refocused on their depth in the space. The feasibility of the proposed method has been confirmed through optical experiments as well as ray-optical analysis.

© 2014 Optical Society of America

1. Introduction

Thus far, digital two-dimensional (2-D) refocusing from light fields has attracted much attention in the fields of integral-imaging and digital light-field photography, because it allows us to computationally refocus on each object in turn after a single exposure by using computational integral-imaging reconstruction and ray-tracing techniques [1–11].

Here, digital light-field photography mostly uses a microlens array in front of the photosensor for recording the light field inside the camera, and ray-tracing techniques are employed for processing the final photographs from the recorded light field [1–7]. In this system, digital refocusing is based on a physical simulation of the way photographs are formed inside a real camera. That is, the software simulates a camera configured as desired and traces the recorded light rays through its optics to its virtual imaging plane [4]. The desired photograph is then produced by summing the light rays in this imaginary image and is digitally displayed on a 2-D monitor.

Hence, in the conventional integral-imaging system, a set of 2-D images with different perspectives of the 3-D objects may be captured as an elemental image array (EIA) through a lenslet array [12–19]. Then, from this captured EIA, 3-D object images can be digitally or optically reconstructed. For digital reconstruction, a computational integral-imaging reconstruction technique based on simulated ray-optics has been used to reconstruct the 3-D object images from the captured EIA [8–11]. However, with this method, all images are reconstructed as a 2-D set of center-viewed plane object images along the output plane, owing to the unique feature of the computational integral-imaging reconstruction algorithm. Simply stated, the reconstructed object images can be displayed only on a 2-D monitor, without showing any real depth or perspective variation of the 3-D objects.

In fact, in many application fields, including biomedical 3-D imaging and display, 3-D spatial interfacing and interaction, 3-D object detection and recognition, and 3-D photography, there has been a strong need for an optical scheme that refocuses the perspective-variant 3-D object images on their real depth from the captured EIA [20–25]. We call this method optical 3-D refocusing, in contrast to the conventional digital 2-D refocusing.

Recently, we proposed a depth extraction method using the periodic δ-function array (PDFA) in computational integral-imaging [26]. This PDFA can also be applied to the new optical 3-D refocusing, because depth-dependent spatial filtering of the captured EIA is possible based on its sifting property.

However, with this method the capture range of the 3-D object is very limited [26, 27]. That is, object points must be located within a so-called effective-capture-zone (ECZ), somewhat away from the lenslet array, in which all components of the lenslet array can be seen. Only then can all of the point images resulting from a specific depth point be uniquely extracted by the PDFA having the corresponding spatial period.

In contrast, object points located outside the ECZ cannot see all components of the lenslet array, but only part of them. The captured EIA may then be composed of point images resulting from two or more object points having the same depth but different lateral coordinates, which means that two or more different object points are represented by just one PDFA. Under this circumstance, the EIA extracted by the PDFA's sifting operation no longer uniquely represents a specific target object's depth, which causes a critical problem when it is applied to optical 3-D refocusing.

To overcome this drawback, an EIA-to-SIA (sub-image array) transformation scheme can be employed for effective extraction of the depth-dependent EIA, because the transformed SIA provides whole projected images of the 3-D object without any limitation on the object's distance from the lenslet array [21]. That is, contrary to the EIA, each sub-image may contain all image points coming from the object points. However, for the PDFA to be applicable to this transformed SIA, a spatial periodicity of the image points recorded on the SIA must be confirmed, just as in the case of the captured EIA [26].

Accordingly, in this paper, we propose a novel approach for optical 3-D refocusing of 3-D objects with their own perspectives on their real depth from the captured EIA, via the SIA transformed from it, based on the sifting property of the PDFA. For this, the periodicity of the image points of the SIA depending on the object depth is analyzed with ray-optics. Then, by convolving the PDFAs whose spatial periods correspond to each object's depth with the transformed SIA, a set of spatially filtered SIAs (SF-SIAs), one per object depth, is extracted. These SF-SIAs are inverse-transformed into the corresponding versions of the EIA, called SF-EIAs, and from these, 3-D object images with their own perspectives are optically reconstructed to be refocused on their real depth in space.

Moreover, to show the practical feasibility of the proposed method, optical experiments with 3-D test objects, as well as a theoretical ray-optics analysis, are carried out, and the results are discussed.

2. Proposed PDFA-based optical 3-D refocusing

Fig. 1 Block-diagram of the proposed system: (a) Capturing process, (b) EIA-to-SIA transformation and sifting processes, (c) Optical 3-D refocusing process.
Figure 1 shows a schematic diagram of the proposed PDFA-based optical 3-D refocusing system, which is largely composed of three processes: 1) a capturing process, 2) a spatial-filtering process and 3) an optical 3-D refocusing process. In the first step, 3-D information of the objects having different depths is captured in the form of a 2-D EIA through a lenslet array, as shown in Fig. 1(a). In the second step, as shown in Fig. 1(b), the captured EIA is transformed into the corresponding SIA, and by convolving this transformed SIA with the PDFAs whose spatial periods correspond to each object's depth, a set of SF-SIAs is extracted. These spatially filtered SIAs are then inverse-transformed into the corresponding versions of the EIA. In the third step, 3-D object images showing their own perspectives are optically reconstructed to be refocused on their real depths in turn from the SF-EIAs, as shown in Fig. 1(c).

2.1 Capturing the EIA of 3-D objects

Fig. 2 Geometrical relation between a point object and its corresponding imaging points in the lens array-based integral-imaging system.
The first process of the proposed method is to capture the EIA of the 3-D objects. In the conventional integral-imaging system, 3-D information of the objects can be captured in the form of a 2-D EIA through a lenslet array. In this system, as shown in Fig. 2, a geometrical relationship exists between a point object and its corresponding point images captured on the elemental image plane, and it can be given by Eq. (1) based on ray-optics.

x_E^k = x_O + \frac{z_O}{z_O + f}\left[ \left( k - \frac{1}{2} \right) P - x_O \right].
(1)

Here, the origin of the coordinates is assumed to be at the edge of the elemental lens located at the bottom of the lenslet array. The object point is located at (x_O, y_O, z_O) along the x, y, and z axes. P represents the pitch of an elemental lens in the lenslet array, which equals the diameter of an elemental lens (the lateral length of an elemental image), and f denotes the focal length of an elemental lens. In addition, x_E^k represents the point image formed by the k-th elemental lens, where the valid x_E^k is restricted to (k − 1)P ≤ x_E^k ≤ kP under the direct capture condition and k is a natural number (the lens index) [26, 27].

In Fig. 2, the imaging distance of a point object measured from the lenslet array can be given by Eq. (2).
z_E = \frac{z_O f}{z_O + f}.
(2)
According to Eq. (2), the imaging distance of a point object depends on the z-coordinate of the object point and on the focal length of the elemental lens. However, because of the limited resolution of the pickup sensor, the unit of Eq. (1) may be converted from length into pixels. Thus, the pixelated version of Eq. (1) can be denoted by
x_{CE}^k = \mathrm{ceil}\!\left[ x_E^k \times \frac{N}{P\, k_{\max}} \right],
(3)
where N and k_max represent the lateral resolution of the EIA (number of pixels) in the one-dimensional (1-D) case and the lateral number of elemental lenses (number of lenses), respectively.

From this geometrical relationship, the 1-D form of the spatial period of the EIA depending on the object's depth is given by |x_E^i − x_E^{i−1}|, where 2 ≤ i ≤ k_max. This spatial period can be derived as |x_E^i − x_E^{i−1}| = |z_O P / (z_O + f)|. Hence, the pixelated form of the spatial period can be denoted by

X_{z_O} = \left| x_{CE}^i - x_{CE}^{i-1} \right|.
(4)
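As a quick numerical check, Eqs. (1)–(4) can be sketched in Python; the parameter values and the object point below are illustrative assumptions, not the experimental settings of this paper:

```python
import math

# Illustrative lenslet-array parameters (assumed for this sketch,
# not the paper's experimental values)
P = 1.0        # pitch / diameter of an elemental lens [mm]
f = 3.0        # focal length of an elemental lens [mm]
k_max = 100    # lateral number of elemental lenses
N = 1000       # lateral resolution of the EIA [pixels]

def x_E(k, x_o, z_o):
    """Eq. (1): image point of the object (x_o, z_o) through lenslet k."""
    return x_o + z_o / (z_o + f) * ((k - 0.5) * P - x_o)

def x_CE(k, x_o, z_o):
    """Eq. (3): pixelated image position on the capture sensor."""
    return math.ceil(x_E(k, x_o, z_o) * N / (P * k_max))

# Eq. (4): pixelated spatial period for an object point at depth z_o
x_o, z_o = 50.0, -60.0                      # object point (z_o < 0 convention)
X_pix = x_CE(51, x_o, z_o) - x_CE(50, x_o, z_o)

# Analytic period |z_o P / (z_o + f)|, converted to pixels for comparison
X_analytic = abs(z_o * P / (z_o + f)) * N / (P * k_max)
```

For this object point the pixelated period differs from the analytic period |z_O P/(z_O + f)| by at most one pixel, which is the quantization introduced by the ceil operation in Eq. (3).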

2.2 Operational limitation of the PDFA to the captured EIA

The operation of the PDFA may be based on the superimposition property among the same periodic δ-functions. Therefore, a spatial periodicity of the object images recorded in the EIA corresponding to the object’s depth would be a crucial factor in the process of depth-dependent spatial filtering of the captured EIA by using the PDFAs.

Moreover, a captured EIA in the lens-array-based integral-imaging system may be regarded as a sum of spatially periodic image points depending on each object’s depth. Therefore, through the convolution operations between the captured EIA and the PDFAs whose spatial periods correspond to each target depth, the targeting periodic spatial information can be uniquely extracted from the captured EIA.

In the conventional 2-D imaging system, the image intensity can be represented as g(x_E) = f(x_E) ⊗ h(x_E), in which x_E and ⊗ denote the x coordinate on the imaging plane and the convolution operator, and the two functions f(x_E) and h(x_E) represent the scaled object intensity and the intensity impulse response, respectively. For a 3-D object, the image intensity acquires a z_O dependence and can be represented as g(x_E)|_{z_O} = f(x_E)|_{z_O} ⊗ h(x_E)|_{z_O}.

Assuming the geometrical-optics condition λ→0, the intensity impulse response can be represented by a δ-function, i.e., by an array of δ-functions with unit-height spikes located at the points x_E = x_E^k. Then, the 1-D description of the intensity of the EIA corresponding to the object depth z_O can be written as [26]
g(x_E)\big|_{z_O} = f(x_E)\big|_{z_O} \otimes \sum_{k=1}^{k_{\max}} \delta\!\left( x_E - x_E^k \big|_{z_O} \right),
(5)
where f(x_E)|_{z_O} and Σ_k δ(x_E − x_E^k|_{z_O}) denote the scaled object intensity and the intensity impulse response at the object depth z_O, respectively.

Although the object intensity is continuously distributed along the z-axis, the intensity impulse response of the EIA can be represented as a sum of discrete spatial periods because of the limited resolution of the capturing sensor. Here, the EIA for a 3-D object can be represented as G(x_E) = Σ g(x_E)|_{z_O}, and to extract the depth information from the EIA, the PDFA's sifting operation may be performed on the EIA as G(x_E) ⊗ h(x_E)|_{z_O}, where h(x_E)|_{z_O} represents the PDFA whose spatial period corresponds to the target object's depth. In addition, Eq. (5) confirms that a one-point object is represented by one PDFA (intensity impulse response) operating on the EIA. Thus, for n point objects, n corresponding PDFAs are needed to uniquely represent all of the object points.
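The sifting idea behind Eq. (5) can be illustrated on a toy 1-D signal. The comb periods, the signal length, and the shifted-mask matching score below are simplifying assumptions for illustration only; the score is a correlation-style stand-in for the convolution of the EIA with a trial PDFA:

```python
import numpy as np

def pdfa(length, period, shift=0):
    """Periodic δ-function array: unit spikes every `period` samples."""
    d = np.zeros(length)
    d[shift::period] = 1.0
    return d

# Toy 1-D "EIA": two point objects whose depths map to periods 7 and 11
N = 231                       # common multiple of 7 and 11
eia = pdfa(N, 7) + pdfa(N, 11)

def match_score(signal, period):
    """Best alignment of the trial-period PDFA with the signal; a matching
    period superimposes on every spike of its comb, mismatches spread out."""
    return max(signal @ np.roll(pdfa(len(signal), period), s)
               for s in range(period))

# The scores peak only at the periods actually present in the EIA
scores = {p: match_score(eia, p) for p in (5, 7, 11, 13)}
```

A matching period accumulates one hit per spike of its comb (33 hits for period 7, 21 for period 11), whereas a mismatched period only picks up the sparse accidental coincidences, which is the uniqueness the sifting operation relies on.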

Here, it must be noted that the PDFA’s sifting property can be applicable only to the EIA captured from the object points which are located within the effective-capture-zone (ECZ) as shown in Fig. 3(a).
Fig. 3 Effective-capture-zone (ECZ), two kinds of EIAs captured from 2 and 3 objects locating within the ECZ and their SF-EIAs obtained with the PDFAs: (a) ECZ, (b) Captured EIAs, (c) SF-EIAs.
Based on the geometrical relationship of Fig. 2, the condition that a point object is to be imaged through all elemental lenses can be obtained as follows.
\begin{cases} x_O \ge \dfrac{P}{2f}\, z_O + \left( k_{\max} - \dfrac{1}{2} \right) P \\ x_O \le -\dfrac{P}{2f}\, z_O + \dfrac{P}{2} \\ z_O < 0 \end{cases}
(6)
The first and second rows of Eq. (6) are derived from the relationships between the object point of (zO, xO) and each center coordinate of the elemental lenses located on the top and bottom of the lenslet array, (0, (kmax-1/2)P) and (0, P/2). Now, the minimum object distance of (zmin, xO) to the ECZ from the lenslet array can be calculated to be ((1-kmax)f, Pkmax/2) just by equating the first and second rows of Eq. (6). As seen in Fig. 3(a) and Eq. (6), the ECZ condition can be determined by the employed lenslet array’s specifications such as the total number of elemental lenses in the lenslet array (kmax) and the focal length of an elemental lens (f). For visual convenience, the ECZ satisfying the condition of Eq. (6) is displayed with the blue color in Fig. 3(a).
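Eq. (6) and the resulting ECZ apex can be checked numerically; the parameters below are illustrative assumptions, and the sign convention (z_O < 0 on the object side of the array) follows Eq. (6):

```python
# Illustrative parameters (assumed): pitch, focal length, number of lenses
P, f, k_max = 1.0, 3.0, 100

def in_ecz(x_o, z_o):
    """Eq. (6): True if the object point sees every elemental lens
    (object side of the array, z_o < 0 in this sign convention)."""
    lower = P / (2 * f) * z_o + (k_max - 0.5) * P   # bound from the top lens
    upper = -P / (2 * f) * z_o + P / 2              # bound from the bottom lens
    return z_o < 0 and lower <= x_o <= upper

# Equating the two bounds gives the ECZ apex (z_min, x_O):
z_min = (1 - k_max) * f      # minimum ECZ distance from the lenslet array
x_apex = P * k_max / 2       # lateral coordinate of the apex
```

Points farther from the array than z_min (and laterally between the two bounds) lie inside the ECZ; points closer than z_min never do, since the two bounds then cross.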

Here, an object point located within the ECZ can see all components of the lenslet array, so the point images for this object point are captured on the elemental image plane through all elemental lenses of the lenslet array. Therefore, the point images resulting from this object point at a specific depth can be uniquely extracted by the sifting operation of the PDFA with the corresponding spatial period. In other words, n object points satisfying the ECZ condition are normally represented by the same number of PDFAs with corresponding spatial periods.

Figure 3(b) shows two examples of EIAs: one captured from two objects, '3' and 'D', having the same depth but different lateral coordinates, (z_O, x_O3) and (z_O, x_OD), and the other from three objects, 'K', 'W' and 'U', having different depth and lateral coordinates, (z_OK, x_OK), (z_OW, x_OW) and (z_OU, x_OU). Here they are colored red and green, and red, green and blue, respectively, for visual convenience. As seen in the magnified portions of Fig. 3(b), each elemental image of the captured EIAs is composed of mutually separated red, green and blue point images resulting from the two or three object points. The object images may overlap, but they are not blended together.

From these captured EIAs, two kinds of SF-EIAs, SF-EIA(3, D) and SF-EIA(K), can be obtained by convolving each of the captured EIAs of Fig. 3(b) with the PDFAs having spatial periods corresponding to the object depths z_O and z_OK, respectively. As seen in Fig. 3(c), the red and green object images of '3' and 'D' are extracted simultaneously in the two-object case because they are located on the same depth plane z_O, whereas only the red object image of 'K' is extracted in the three-object case because it alone is located on the depth plane z_OK. As mentioned above, two PDFAs having the same period but different lateral shifts are needed for the two object images, while one PDFA is needed for the single object image 'K'.

Fig. 4 Two kinds of EIAs captured from 2 and 3 objects locating outside the ECZ and their SF-EIAs obtained with the PDFAs: (a) Captured EIAs, (b) SF-EIAs.
However, if the object points move close to the lenslet array and are eventually located beyond the ECZ, the spatially-periodic intensity impulse response in Eq. (5), h(xE)|Zo, begins to represent more than one object point as shown in Figs. 4(a) and 4(b).

That is, object points located outside the ECZ cannot see all of the components of the lenslet array, but only part of them. The captured EIAs may therefore be composed of mixed point images resulting from two object points having the same depth but different lateral coordinates, or from three object points having different depths and lateral coordinates, as in Fig. 4(a). This means those point objects cannot be uniquely reconstructed from the SF-EIAs, because two or three object images are represented by one PDFA. As seen in Fig. 4(a), each elemental image of the captured EIAs consists of a mixture of the red, green and blue point images resulting from two or three object points, contrary to the cases of Fig. 3(b). In other words, the color images are not simply overlapped but mixed together. Therefore, as seen in Fig. 4(b), the two kinds of SF-EIAs, SF-EIA(3, D) and SF-EIA(K), extracted from the captured EIAs of Fig. 4(a) by sifting operations of the PDFAs with spatial periods corresponding to the depth planes z_O and z_OK, are given as mixed forms of elemental images: the red and green object images of '3' and 'D' in the two-object case, and the red, green and blue object images of 'K', 'W' and 'U' in the three-object case, because all of these objects are located outside the ECZ.

Accordingly, under this circumstance the resultant SF-EIAs extracted from the picked-up EIAs with the PDFAs no longer uniquely represent the scaled object intensity corresponding to a specific target object's depth; these SF-EIAs merely represent blended EIAs coming from two or three object points with different depths and lateral locations, which is a critical drawback for optical 3-D refocusing. In addition, these examples confirm that depth-dependent object points cannot be uniquely reconstructed with SF-EIAs extracted from EIAs captured from objects located outside the ECZ. Therefore, at least in these situations, the sifting operation of the PDFA cannot be applied directly to the optical 3-D refocusing.

2.3 Transformation of the EIAs to the corresponding SIAs

As discussed above, the EIA captured from objects located outside the ECZ may not uniquely represent the scaled object intensity corresponding to a specific target object's depth. For this reason, we employ the EIA-to-SIA transformation scheme, because the SIA can provide all of the projected object images contained in the EIA [21].

Fig. 5 Conceptual block-diagram of an EIA-to-SIA transformation process.
Figure 5 shows the process of the EIA-to-SIA transformation that generates the SIA from the captured EIA. As seen in Fig. 5, all pixels located at the same position in each elemental image are collected and rearranged to form the corresponding SIA. Here, a sub-image is generated from the pixels located at the position corresponding to the marked number in each elemental image, so that as many sub-images can be generated as there are pixels in an elemental image.

Suppose that n_x and n_y are the numbers of pixels of each elemental image, and k_x and k_y are the numbers of elemental images along the x and y axes, respectively. Then the EIA, denoted as Ē, has (N_ey = n_y k_y) × (N_ex = n_x k_x) pixels. The pixels of the SIA, denoted as S, are given by Eq. (7).
S(N_{ny}, N_{nx}) = \bar{E}\left( p_y r_y + q_y t_y,\; p_x r_x + q_x t_x \right)
(7)
Here, p_x = N_x % 2, p_y = N_y % 2, q_x = (N_x + 1) % 2, q_y = (N_y + 1) % 2, r_x = (N_x + 1)/2, r_y = (N_y + 1)/2, t_x = (N_x + p_sx)/2 and t_y = (N_y + p_sy)/2, where p_sx and p_sy are the numbers of elemental images along each axis on the sub-image plane, and a % b is the remainder on division of a by b. All pixels located at the position (N_x, N_y) in each elemental image are rearranged to generate the SIA; that is, the corresponding pixels are collected to generate the sub-images at the coordinates (i_m, j_n) on the sub-image plane. Here, i_x and j_y denote the coordinates of each pixel in the EIA, and i_m and j_n denote the coordinates of the corresponding pixels in the SIA. The resulting SIA has a high similarity among adjacent sub-images, since all pixels of each sub-image contain the ray information of the same view-point on each lenslet.
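In a simplified 1-D model with c = 1 (not the full 2-D indexing of Eq. (7)), the EIA-to-SIA transformation is just a blockwise transpose: pixel m of elemental image k becomes pixel k of sub-image m, and the inverse transform is the same rearrangement with the roles of elemental images and sub-images swapped. A minimal numpy sketch under these assumptions:

```python
import numpy as np

def eia_to_sia(eia, n):
    """1-D EIA-to-SIA (c = 1): pixel m of elemental image k becomes
    pixel k of sub-image m, i.e. a blockwise transpose."""
    k_max = eia.size // n
    return eia.reshape(k_max, n).T.reshape(-1)

def sia_to_eia(sia, k_max):
    """Inverse transform: the same rearrangement with roles swapped."""
    n = sia.size // k_max
    return sia.reshape(n, k_max).T.reshape(-1)

eia = np.arange(12)              # 3 elemental images of 4 pixels each
sia = eia_to_sia(eia, n=4)       # 4 sub-images of 3 pixels each
back = sia_to_eia(sia, k_max=3)  # round-trip recovers the EIA
```

Because the rearrangement is a permutation of pixels, no ray information is lost, which is why the SF-SIAs can later be inverse-transformed into SF-EIAs for optical reconstruction.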

Fig. 6 Two kinds of EIAs and transformed SIAs for two or three object cases of Fig. 4(a): (a) Captured EIAs, (b) Transformed SIAs.
Figure 6 shows the captured EIAs from two or three object points located outside of the ECZ of Fig. 4(a) and the SIAs transformed from these by using Eq. (7). The captured EIA consists of 100 × 100 elemental images, in which the resolution of each elemental image is assumed to be 10 × 10 pixels. In addition, the SIA consists of 10 × 10 sub-images, in which the resolution of each sub-image is assumed to be 100 × 100 pixels.

As seen in the magnified portions of Fig. 6(b), each sub-image of the transformed SIAs is composed of mutually separated object images, just like the case of Fig. 3(b), even though the EIAs were picked up from objects located outside the ECZ, whereas the captured EIAs are mixed forms of the two or three object images, as shown in Fig. 6(a). These results confirm that depth-dependent spatial filtering of EIAs captured from objects located outside the ECZ becomes possible with the transformed SIAs.

2.4 Analysis of a spatial periodicity of the SIA for the PDFA’s sifting operation

As mentioned above, PDFA-based spatial filtering of the captured EIA is possible only when the target object is located within the ECZ, as shown in Fig. 3(a). Here, the output image resolution obtained from the PDFA operation is proportional to the number of sampled object images to be convolved.

In the conventional lenslet-array-based integral imaging system, however, the number of sampled 3-D object data, which can be represented as the spatial periods recorded on the EIA, may decrease, or even act like serious noise, as the distance between the object and the lenslet array shrinks below the ECZ boundary. Thus, this ECZ restriction may introduce unwanted image distortion into the output image obtained from the PDFA operation.

To overcome this inapplicability of the PDFA to EIAs captured from objects located near the lenslet array, we employ the SIA transformed from the EIA, which can provide all of the projected object images of the 3-D object without any restriction on the capture zone of the objects [21].

It must be noted here that the PDFA-based extraction of the SF-EIAs corresponding to each object's depth from the captured EIA is possible because the object images recorded on the EIA have been proved to have a spatial periodicity [26]. Therefore, for the PDFA to be applied to the newly transformed SIA for depth-dependent spatial filtering of the captured integral images, the periodicity of the object images recorded on the SIA must also be confirmed, just as in the case of the EIA.

Fig. 7 Pixel correspondence between the EIA and its transformed SIA.
Now, for a theoretical demonstration of the periodic structure of the SIA depending on the object's depth, the spatial period of the SIA can be derived as follows. Here, the lateral resolution of the EIA, N, is given by N = n × k_max when each elemental image consists of n pixels. Furthermore, x_CE^k in Eq. (3) can be represented as the ordered pair (k, m) shown in Fig. 7(b), where k is the index of an elemental image and m is given by
m = x_{CE}^k - (k - 1)n,
(8)
where m is the order of the pixel in the k-th elemental image and 1 ≤ m ≤ n. The lateral resolution of the SIA is also N, the same as that of the EIA. If the number of sub-images is j_max and each sub-image is composed of ξ pixels, then N can be described as N = ξ × j_max. Here, a pixel in the SIA can also be represented as an ordered pair (j, η), as shown in Fig. 7(c), where j is the index of the sub-image and η is the order of the pixel in the j-th sub-image, with 1 ≤ η ≤ ξ.

Now, the relationship between the EIA and the corresponding SIA is given by n = c·j_max and ξ = c·k_max, where c is the number of pixels extracted from each elemental image in the EIA-to-SIA transformation. To show the periodic property of the SIA, let us consider the condition c = 1, as shown in Fig. 8.
Fig. 8 Three kinds of SF-SIAs extracted the transformed SIA with the PDFAs having the spatial periods corresponding to each depth of ‘K’, ‘W’ and ‘U’.
The pixel’s correspondence between the EIA and the corresponding SIA can be represented as follows
(k, m) \xrightarrow{c\,=\,1} (j, \eta) = (m, k).
(9)
The ordered pair of pixel’s index (j, η) on the SIA domain can be represented as the pixel’s position measured from the origin of the coordinates by
s_j(\eta) = (j - 1)\xi + \eta,
(10)
where (j − 1)ξ < s_j(η) ≤ jξ.

Here, under the condition c = 1, Eq. (10) can be expressed as a function of EIA-domain variables by substituting Eq. (8) into Eq. (10) and using the relations j = m, η = k, and ξ = k_max, as follows.

s(x_{CE}^k) = \left( x_{CE}^k - (k - 1)n - 1 \right) k_{\max} + k.
(11)

In Eq. (11), since x_CE^k depends on the object's depth, the function s(x_CE^k) also depends on the object's depth. Hence, the 1-D form of the spatial period in the SIA domain is given by |s(x_CE^i) − s(x_CE^{i−1})|, 2 ≤ i ≤ j_max. The explicit form of the spatial period in the SIA domain is
X_{sub} = \left| s(x_{CE}^i) - s(x_{CE}^{i-1}) \right| = \left| \left( x_{CE}^i - x_{CE}^{i-1} - n \right) k_{\max} + 1 \right|,
(12)
where, considering the constraint (j − 1)ξ < s_j(η) ≤ jξ of Eq. (10), the range of the spatial period in the SIA is 0 < X_sub < 2k_max.
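Eqs. (8), (11) and (12) can be verified numerically for the c = 1 case; the sizes and pixel positions below are illustrative assumptions:

```python
# Illustrative sizes (assumed): n pixels per elemental image, k_max lenses
n, k_max = 10, 100

def s_of(x_ce, k):
    """Eq. (11): SIA-domain position of the EIA pixel x_CE^k (c = 1 case)."""
    m = x_ce - (k - 1) * n        # Eq. (8): pixel order within image k
    return (m - 1) * k_max + k    # Eq. (10) with j = m, eta = k, xi = k_max

# Two adjacent image points with an EIA-domain period of d pixels
d = 11
x1 = 5            # pixel position in elemental image k = 1
x2 = x1 + d       # corresponding pixel in elemental image k = 2
X_sub = abs(s_of(x2, 2) - s_of(x1, 1))
```

For these values the SIA-domain period equals |(d − n)·k_max + 1|, exactly as Eq. (12) predicts, and it stays within the range 0 < X_sub < 2k_max.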

Here, the 1-D description of the SIA intensity corresponding to the object depth z_O may be written as follows.

g_{sub}\!\left( s(x_{CE}^k) \right)\!\big|_{z_O} = f(x_{CE}^k)\big|_{z_O} \otimes \sum_{j=0}^{j_{\max}-1} \delta\!\left[ s(x_{CE}^k) - \left( s_1(\eta) + j\, X_{sub}\big|_{z_O} \right) \right].
(13)

In the SIA domain, the PDFA-based spatial filtering process can be performed through the convolution between the SIA and a sequence of 2-D δ-function arrays whose spatial periods are in the range of 0 < Xsub < 2kmax.

As a result, a set of SF-SIAs can be obtained just like the same structure of the SIA. Then, these SF-SIAs for each object’s depth are inverse-transformed to the corresponding SF-EIAs for the optical reconstruction of the 3-D objects to be refocused on their real depth.

2.5 Spatial-filtering of the SIAs with the corresponding PDFAs and their reconstruction

Here, the location coordinates of the three objects 'K', 'W', and 'U' of Fig. 4(a), z_OK, z_OW and z_OU, are assumed to be 10, 60, and 110 mm from the lenslet array, respectively. The lenslet array is composed of 100 × 100 elemental lenses, in which the focal length and the diameter of an elemental lens are assumed to be 3 mm and 1 mm, respectively. The ECZ boundary of this integral-imaging system is then calculated to be 297 mm, which means that all of the objects are located outside the ECZ. In addition, the captured EIA consists of 100 × 100 elemental images, and the resolution of each elemental image is assumed to be 10 × 10 pixels.

As seen in Fig. 8, the SF-SIA(K), representing the partial SIA extracted from the specific depth plane of 10 mm where 'K' is located, shows that the object 'K' appears focused while the other objects, 'W' and 'U', appear defocused, because they are located on the different depth planes of 60 and 110 mm. The same results are obtained for SF-SIA(W) and SF-SIA(U).

Fig. 9 SF-EIAs inverse-transformed from the SF-SIAs of Fig. 8 and reconstructed object images to be refocused on their depth: (a) SF-EIAs, (b) Reconstructed object images.
The SF-SIAs of Fig. 8 are then inverse-transformed into the corresponding versions of the EIA called SF-EIAs, and from these SF-EIAs each object can be reconstructed refocused at its real depth. Figures 9(a) and 9(b) show the inverse-transformed SF-EIAs and the object images reconstructed from each of them, respectively. As seen in Fig. 9, each object has been uniquely reconstructed at its real depth from the corresponding SF-EIA, even for objects located outside the ECZ, because the PDFA's sifting operation is applied to the transformed SIA, not to the captured EIA.

As mentioned above, in the conventional optical integral-imaging system all 3-D objects are simultaneously reconstructed from the whole captured EIA, whereas in the proposed method each 3-D object can be selectively reconstructed refocused at its depth from its own SF-EIA by using the proposed PDFA-based spatial filtering scheme, as shown in Fig. 9(b). That is, depth-dependent reconstruction of the 3-D objects along the output plane from the captured EIA is possible in the proposed 3-D refocusing system.

Thus, we can implement a new type of optical 3-D camera that selectively reconstructs the refocused 3-D objects at their depths in real space. This contrasts with the conventional 2-D refocusing-based digital camera, in which all objects are reconstructed as 2-D center-viewed plane images along the output plane and displayed only on a 2-D monitor, without showing any real depth of the 3-D objects in space [2].

3. Experiments on the optical 3-D refocusing with 3-D volumetric objects

To demonstrate the practical feasibility of the proposed method as well as to verify the theoretical analysis explained above, experiments on the optical 3-D refocusing of 3-D volumetric objects on their real depth are performed.

Here, the captured EIA is assumed to be composed of 100 × 100 elemental images, each with a resolution of 10 × 10 pixels. A lenslet array composed of 100 × 100 elemental lenses is used for both capture and reconstruction of the 3-D objects, in which the focal length and the diameter of an elemental lens are given by 8mm and 1.6mm, respectively. In addition, an LCD monitor located at the back focal plane of the lenslet array is used to display the reconstructed object images from each of the spatially filtered EIAs.

In the experiment, two 3-D volumetric objects, a 'Car', whose surface is covered with an Arabic numeral and the alphabet letters of '3D research center', and a 'Plant', are used as the test objects, as shown in Fig. 10(a).
Fig. 10 Two 3-D volumetric objects of 'Car' and 'Plant', and the EIA captured from them: (a) Test 3-D objects, (b) Captured EIA.
Here, the two 3-D volumetric objects 'Car' and 'Plant', located on the back and front sides of the lenslet array, are assumed to be distributed over the ranges of −1 to −70mm and +25 to +70mm along the z-axis, respectively. The 'Car' object is rotated 30° counterclockwise against the lenslet array to give it a large depth variation. Moreover, from the specifications of the employed lenslet array, the minimum distance from the lenslet array satisfying the ECZ is calculated to be 792mm, which means both test objects are located outside the ECZ.

During the first step, the EIA is captured from the 3-D test objects of ‘Car’ and ‘Plant’ as shown in Fig. 10(b). Here, the EIA is computationally captured by using a virtual lenslet array having the same specifications as the real lenslet array used for optical reconstruction.

Fig. 11 Optically reconstructed 3-D objects of ‘Car’ and ‘Plant’ from the whole captured EIA and observed object images at three view-points of the left, center and right along the horizontal direction.
During the second step, the objects are optically reconstructed from the whole captured EIA of Fig. 10(b) using the lenslet array-based integral-imaging method, for comparison with the proposed optical 3-D refocusing method explained below. Figure 11 shows the optically reconstructed 3-D images observed at three viewpoints along the horizontal direction: left, center and right. As expected from the capture conditions mentioned above, the 'Car' and 'Plant' objects are reconstructed in the back (virtual image) and front (real image) spaces of the lenslet array, respectively.

As shown in Fig. 11, different views of the 3-D objects are seen depending on the viewing direction, even though the 'Car' object appears significantly occluded by the 'Plant' object. That is, compared with the center view of Fig. 11, a much wider portion of the scene is visible on the far right (marked with the yellow circle) in the left view and on the far left (marked with the blue circle) in the right view, which means perspective-variant 3-D object images can be observed in the reconstruction.

Fig. 12 Three kinds of SF-EIAs obtained from the captured EIA of Fig. 10(b) on each depth of + 35, −40 and −60mm, and their optically reconstructed 3-D objects: (a) SF-EIAs, (b) Optically reconstructed 3-D objects.
Figure 12(a) shows the SF-EIAs obtained from the captured EIA of Fig. 10(b) at the depths of +35, −40 and −60mm through spatial-filtering operations with the corresponding PDFAs; here, minus and plus signs represent imaging coordinates behind and in front of the lenslet array, respectively. As expected, the SF-EIAs of Fig. 12(a) reveal nothing of the 'Car' and 'Plant' images: because the objects are all positioned outside the ECZ, these SF-EIAs do not contain elemental images uniquely extracted from each specific depth, but rather mixtures of the elemental images produced by object points on all depth planes. Accordingly, the object images optically reconstructed from these SF-EIAs show nothing but noise, as seen in Fig. 12(b).

Thus, during the third step, the captured EIA of Fig. 10(b) is transformed into the corresponding SIA for performing the proposed PDFA-based spatial-filtering operation, in which the one-pixel extraction method (c = 1) is employed in the EIA-to-SIA transformation process. The transformed SIA is therefore composed of 10 × 10 sub-images, and the resolution of each sub-image is 100 × 100 pixels.
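A minimal NumPy sketch of this one-pixel-extraction (c = 1) transform and its inverse, assuming the usual sub-image definition in which sub-image (k, l) collects pixel (k, l) from every elemental image (dimensions follow the text: 100 × 100 elemental images of 10 × 10 pixels):

```python
import numpy as np

M, P = 100, 10                       # M×M elemental images, each P×P pixels
eia = np.arange((M * P) ** 2, dtype=np.int64).reshape(M * P, M * P)

# split the EIA into its elemental-image grid:
# axes = (image row, pixel row, image col, pixel col)
blocks = eia.reshape(M, P, M, P)

# SIA: a P×P grid of M×M sub-images; sub-image (k, l) holds
# pixel (k, l) taken from every elemental image
sia = blocks.transpose(1, 0, 3, 2).reshape(P * M, P * M)

# sub-image (0, 0) equals the strided slice eia[0::P, 0::P]
assert np.array_equal(sia[:M, :M], eia[0::P, 0::P])

# the inverse transform (SIA -> EIA) recovers the original exactly
eia_back = sia.reshape(P, M, P, M).transpose(1, 0, 3, 2).reshape(M * P, M * P)
assert np.array_equal(eia_back, eia)
```

The same rearrangement, applied in reverse, is what maps each SF-SIA back to an SF-EIA for the optical reconstruction step.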

Fig. 13 SIA transformed from the captured EIA and the SF-SIAs obtained from the SIA with the PDFAs: (a) Sampled 1x3 sub-images of the transformed SIA, (b) Sampled 6 subsets of 1x3 SF-SIAs, (c) 6 SF-EIAs inverse-transformed from the corresponding SF-SIAs.
Figure 13(a) shows a subset of 1 × 3 sub-images sampled from the whole SIA composed of 10 × 10 sub-images. By applying the spatial-filtering operation to this transformed SIA with PDFAs having fifteen spatial periods, ranging from 93 pixels (−60mm) to 107 pixels (+60mm) in steps of 1 pixel, the same number of SF-SIAs, one per spatial period, is obtained. Figure 13(b) shows six subsets of 1 × 3 SF-SIAs sampled from the fifteen SF-SIAs, and in contrast to Fig. 12(a), the object images are clearly visible in them. These results confirm that depth-dependent spatial filtering of the SIA with the corresponding PDFAs is possible in the sub-image domain even though the objects are located outside the ECZ.
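The fifteen-period sweep can be sketched as a bank of 2-D δ-function arrays (a hypothetical illustration; the depth-to-period mapping itself comes from Eq. (12) and is not recomputed here):

```python
import numpy as np

H = W = 1000                 # SIA size: 10 x 10 sub-images of 100 x 100 px
periods = range(93, 108)     # fifteen periods: 93 px (-60 mm) ... 107 px (+60 mm)

pdfas = {}
for X in periods:
    pdfa = np.zeros((H, W))
    pdfa[0::X, 0::X] = 1.0   # impulses on a 2-D grid with pitch X
    pdfas[X] = pdfa

assert len(pdfas) == 15
# impulse spacing along each axis equals the spatial period
rows = np.unique(np.nonzero(pdfas[100])[0])
assert set(np.diff(rows)) == {100}
```

Each SF-SIA would then be obtained by convolving the SIA with one of these arrays, e.g. `scipy.signal.convolve2d(sia, pdfas[X], mode='same')`.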

Fig. 14 Optically reconstructed 3-D object images to be refocused on their depth.
Then, the SF-SIAs are inverse-transformed into the corresponding SF-EIAs for optical reconstruction of the 3-D objects, which are shown in Fig. 13(c). Now, Fig. 14 shows the optically reconstructed 3-D objects to be refocused on their real depth from each of the SF-EIAs, which are viewed here from three different lateral directions of the left, center and right to confirm the perspective variations of the reconstructed object images.

From Fig. 14, we can observe a continuous variation of the perspectives of the object images, contrary to the conventional 2-D refocusing method. That is, as the refocusing plane moves along the longitudinal direction, continuously focus-varying depth images of the 3-D objects are seen because the 3-D volumetric objects are distributed over a range along the depth direction. At any specific output plane, only the depth image of the 3-D objects positioned on that plane is refocused while the others are defocused.

Figure 14 also shows a continuous variation of the refocused depth images of the 3-D objects along the output plane. On the depth plane of −2mm in the virtual domain, only the alphabet letters 'D, r, c', located on the left-hand side of the 'Car' and marked with a yellow rectangle, are refocused. Moreover, on the depth planes of −20, −40 and −60mm, the letters 'ese, ent', 'arc, er' and 'h', also marked with yellow rectangles, are refocused, respectively. In other words, as the depth plane gets deeper, the focused image plane of the 'Car' moves from left to right because the 'Car' object was originally rotated 30° counterclockwise against the lenslet array, so its right-hand side lies at a greater depth than its left.

These results confirm that the 3-D objects can be reconstructed refocused at their depths from the depth-dependently extracted SF-EIAs based on the proposed PDFA sifting operation. Furthermore, on the depth planes of +25, +35, +45 and +55mm in the real domain, the same behavior is seen for the 'Plant' object: as the depth plane gets deeper, the focused image plane of the 'Plant' moves from the bottom area (marked with the yellow circles), where some leaves were located at the front-most depth plane, to the upper and middle areas (marked with the yellow circles), where some leaves were located at the middle and back depth planes, respectively.

At the same time, Fig. 14 also reveals the perspective variations in the reconstructed 3-D object images. As shown in Fig. 14, different views of the 3-D objects are seen depending on the viewing direction at each depth plane. That is, much wider right portions of the objects (marked with red circles) are seen in the left views than in the center views, whereas wider left portions (marked with blue circles) are seen in the right views, which also confirms the perspective-variant reconstruction of the refocused 3-D object images in the proposed method.

In summary, the theoretical analysis and experimental results above confirm that depth-dependent reconstruction of 3-D objects, refocused at their real depths, is possible from the captured EIA via the transformed SIA in the proposed 3-D refocusing system, and they suggest the practical feasibility of a novel optical 3-D camera that optically reconstructs 3-D objects refocused at their real depths from the captured EIA.

4. Conclusions

In this paper, we have proposed a new optical 3-D refocusing method that reconstructs refocused 3-D objects at their real depths from the captured EIA by using the sifting property of the PDFA. With the theoretical verification of the spatial periodicity of the object images recorded in the transformed SIA, spatial filtering of the captured EIA becomes possible in the sub-image domain, which also allows the 3-D objects to be processed regardless of their distances from the lenslet array. The feasibility of the proposed method has been confirmed through theoretical analysis and experiments with 3-D test objects.

Acknowledgments

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (Ministry of Science, ICT and Future Planning) (No. 2013-067321).

References and links

1. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light Field Microscopy,” ACM Trans. Graph. 25(3), 924–934 (2006).
2. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light Field Photography with a Hand-Held Plenoptic Camera,” Technical Report CTSR 2005-02, Dept. of Computer Science, Stanford Univ., 2005.
3. M. Levoy and P. Hanrahan, “Light Field Rendering,” Proc. ACM SIGGRAPH (1996), pp. 31–42.
4. R. Ng, “Digital light field photography,” Ph.D. dissertation (Stanford University, Stanford, CA, USA, 2006).
5. T. Bishop, S. Zanetti, and P. Favaro, “Light Field Superresolution,” Proc. International Conference on Computational Photography (2009), pp. 1–9.
6. A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” Proc. International Conference on Computational Photography (2009), pp. 1–8.
7. R. Raskar and A.-K. Agrawal, “4D light field cameras,” US patent 772423 (September 2010).
8. S.-H. Hong, J.-S. Jang, and B. Javidi, “Three-dimensional volumetric object reconstruction using computational integral imaging,” Opt. Express 12(3), 483–491 (2004).
9. D.-H. Shin, B. Lee, and E.-S. Kim, “Improved viewing quality of 3-D images in computational integral imaging reconstruction based on lenslet array model,” ETRI Journal 28(4), 521–524 (2006).
10. J.-S. Park, D.-C. Hwang, D.-H. Shin, and E.-S. Kim, “Enhanced-resolution computational integral imaging reconstruction using an intermediate-view reconstruction technique,” Opt. Eng. 45(11), 117004 (2006).
11. H.-J. Lee, D.-H. Shin, H. Yoo, J.-J. Lee, and E.-S. Kim, “Computational integral imaging reconstruction scheme of far 3D objects by additional use of an imaging lens,” Opt. Commun. 281(8), 2026–2032 (2008).
12. A. Stern and B. Javidi, “Three-dimensional image sensing, visualization, and processing using integral imaging,” Proc. IEEE 94(3), 591–607 (2006).
13. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. 36(7), 1598–1603 (1997).
14. J.-S. Jang and B. Javidi, “Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics,” Opt. Lett. 27(5), 324–326 (2002).
15. Y. Kim, K. Hong, and B. Lee, “Recent researches based on integral imaging display method,” 3D Research 1, 17–27 (2010).
16. B. Lee, S. Jung, and J.-H. Park, “Viewing-angle-enhanced integral imaging by lens switching,” Opt. Lett. 27(10), 818–820 (2002).
17. M. Martínez-Corral, B. Javidi, R. Martínez-Cuenca, and G. Saavedra, “Integral imaging with improved depth of field by use of amplitude-modulated microlens arrays,” Appl. Opt. 43(31), 5806–5813 (2004).
18. J.-H. Park, J. Kim, Y. Kim, and B. Lee, “Resolution-enhanced three-dimension / two-dimension convertible display based on integral imaging,” Opt. Express 13(6), 1875–1884 (2005).
19. D.-H. Shin, S.-H. Lee, and E.-S. Kim, “Optical display of true 3D objects in depth-priority integral imaging using an active sensor,” Opt. Commun. 275(2), 330–334 (2007).
20. B. Javidi, R. Ponce-Díaz, and S.-H. Hong, “Three-dimensional recognition of occluded objects by using computational integral imaging,” Opt. Lett. 31(8), 1106–1108 (2006).
21. J.-H. Park, J. Kim, and B. Lee, “Three-dimensional optical correlator using a sub-image array,” Opt. Express 13(13), 5116–5126 (2005).
22. J.-S. Park, D.-C. Hwang, D.-H. Shin, and E.-S. Kim, “Resolution-enhanced 3D image correlator using computationally reconstructed integral images,” Opt. Commun. 276(1), 72–79 (2007).
23. M. Zhang, Y. Piao, and E.-S. Kim, “Occlusion-removed scheme using depth-reversed method in computational integral imaging,” Appl. Opt. 49(14), 2571–2580 (2010).
24. D. Shin and B. Javidi, “Three-dimensional imaging and visualization of partially occluded objects using axially distributed stereo image sensing,” Opt. Lett. 37(9), 1394–1396 (2012).
25. B.-G. Lee, H.-H. Kang, and E.-S. Kim, “Occlusion removal method of partially occluded object using variance in computational integral imaging,” 3D Research 1:2 (2010).
26. J.-Y. Jang, J.-I. Ser, S. Cha, and S.-H. Shin, “Depth extraction by using the correlation of the periodic function with an elemental image in integral imaging,” Appl. Opt. 51(16), 3279–3286 (2012).
27. J.-Y. Jang, D. Shin, and E.-S. Kim, “Improved 3-D image reconstruction using the convolution property of periodic functions in curved integral-imaging,” Opt. Lasers Eng. 54, 14–20 (2014).

OCIS Codes
(110.4190) Imaging systems : Multiple imaging
(110.6880) Imaging systems : Three-dimensional image acquisition
(150.5670) Machine vision : Range finding

ToC Category:
Imaging Systems

History
Original Manuscript: November 8, 2013
Revised Manuscript: December 27, 2013
Manuscript Accepted: January 9, 2014
Published: January 15, 2014

Citation
Jae-Young Jang, Donghak Shin, and Eun-Soo Kim, "Optical three-dimensional refocusing from elemental images based on a sifting property of the periodic δ-function array in integral-imaging," Opt. Express 22, 1533-1550 (2014)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-22-2-1533


