OSA's Digital Library

Optics Express

  • Editor: Andrew M. Weiner
  • Vol. 21, Iss. 7 — Apr. 8, 2013
  • pp: 8873–8878

Axially moving a lenslet array for high-resolution 3D images in computational integral imaging

Hoon Yoo


Optics Express, Vol. 21, Issue 7, pp. 8873-8878 (2013)
http://dx.doi.org/10.1364/OE.21.008873




Abstract

This paper presents a new high-resolution computational integral imaging system employing a pickup process with axial movement of a lenslet array and a computational reconstruction algorithm based on pixel-to-pixel mapping. In the proposed method, a lenslet array and its image sensor are moved together along the z-axis (axial) direction, and a series of elemental image arrays is obtained while moving. The elemental image arrays are then processed by pixel-to-pixel mapping without interpolation to reconstruct 3D slice images. An analysis of the proposed reconstruction method is also provided. To show the usefulness of the proposed method, experiments were conducted. The results indicate that the proposed method is superior to existing methods such as MALT in terms of image quality.

© OSA

1. Introduction

The integral imaging technique, first proposed by G. Lippmann in 1908 [1], has been actively studied because it provides important features such as glasses-free 3D display, full parallax, and continuous viewing points [2–8]. Thus, it is considered one of the next-generation three-dimensional techniques. An integral imaging system normally has two stages: pickup and display/reconstruction. A pickup device, such as a camera attached to a lenslet array, captures perspectives of 3D objects into an image array called the elemental image array. A display or reconstruction device shows the 3D objects by simply placing the same lenslet array in front of a 2D screen displaying the elemental image array. Besides 3D display, a computational reconstruction method that simulates geometrical optics using the pinhole model produces a series of slice images representing a volume of 3D space. This reconstruction, called computational integral imaging reconstruction (CIIR), is considered a substitute for optical reconstruction [2].

Despite its advantages over optical reconstruction, the image quality of the slice images from CIIR still depends on the resolution of each elemental image; CIIR suffers from low image quality due to the limitations of capture devices. To remedy this problem, several studies have been reported [9–16]. One approach is to increase the sampling rate of the 3D scene by moving a lenslet array in two-dimensional directions, which is called the moving array-lenslet technique (MALT) [9]. In this technique, shifting the lenslet array and taking multiple snapshots provides a set of elemental image arrays. The technique, however, requires a very accurate xy-moving stage. A double-snapshot-based technique was also recently reported [12]. Another approach is to increase the resolution of the elemental images by moving a camera [13–15], which is called synthetic aperture integral imaging (SAII). Every snapshot from the moving camera is considered an elemental image, whose resolution is much higher than that of the lenslet array-based method. However, the number of elemental images in SAII is much smaller than in the lenslet array-based method, which may restrict the usefulness of the SAII technique.

Axial movement, a one-dimensional movement, has advantages over the two-dimensional movement used in MALT: the former is much simpler in terms of physical design. In addition, digitization error and the limitations of optical pickup devices can destroy the subtle lateral movement of a lenslet array in MALT (a typical moving step is only a few hundred micrometers); thus, two elemental image arrays can be almost identical despite being obtained from different snapshots. Therefore, the axial movement of a lenslet array and its corresponding reconstruction deserve to be addressed. This paper analyzes the axial movement of a lenslet array and proposes a high-resolution computational reconstruction method. This work also reveals that pixel-to-pixel mapping without interpolation is well suited to axial movement.

2. Overview of computational integral imaging

The CII system, as shown in Fig. 1, is composed of two processes: a pickup process for 3D objects and a computational reconstruction process from elemental images [2,4,6,10].

Fig. 1 Computational integral imaging. (a) Pickup process. (b) Reconstruction process.

In the pickup process of CII, elemental images are recorded by use of a lenslet array and an image sensor. In the computational reconstruction process, 3D images are reconstructed from the elemental images in a digital computer; in the computer domain, 3D images are easily reconstructed by simulating ray optics without optical devices. As shown in Fig. 1(b), the elemental images are back-projected onto an image plane located away from the virtual pinhole array. Here, the elemental images are magnified by a factor of z/g, where z is the distance between the reconstruction plane and the virtual pinhole array and g is the distance between the elemental images and the virtual pinhole array. Then, the magnified elemental images are superimposed on the reconstruction plane. After normalization to compensate for intensity irregularity, a slice image is produced for the reconstruction plane at distance z. To generate the complete volume data for the 3D space, this process is repeated while adjusting the distance z.
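The back-projection just described can be sketched as follows. This is an illustrative simplification with hypothetical function names, not the author's exact implementation: it assumes a square grid of elemental images behind a virtual pinhole array, magnifies each by z/g with nearest-neighbor resampling, superimposes the magnified images at offsets of one elemental-image pitch, and normalizes by the per-pixel overlap count.

```python
import numpy as np

def magnify_nn(img, m):
    """Nearest-neighbor magnification of a square image to m x m pixels."""
    n = img.shape[0]
    idx = np.arange(m) * n // m     # map target pixel to nearest source pixel
    return img[np.ix_(idx, idx)]

def ciir_slice(elemental, num_lens, z, g):
    """Reconstruct one CIIR slice at distance z.

    elemental : 2D array of shape (num_lens*n, num_lens*n), n pixels per lenslet
    num_lens  : lenslets per side of the (square) lenslet array
    z, g      : distances from the text; the magnification factor is z/g
    """
    n = elemental.shape[0] // num_lens
    m = max(1, int(round(n * z / g)))       # magnified elemental-image size
    size = (num_lens - 1) * n + m           # canvas covering all projections
    acc = np.zeros((size, size))
    cnt = np.zeros((size, size))
    for i in range(num_lens):
        for j in range(num_lens):
            e = elemental[i*n:(i+1)*n, j*n:(j+1)*n]
            acc[i*n:i*n+m, j*n:j*n+m] += magnify_nn(e, m)
            cnt[i*n:i*n+m, j*n:j*n+m] += 1
    # normalization compensates the intensity irregularity of the overlaps
    return acc / np.maximum(cnt, 1)
```

Repeating `ciir_slice` over a range of z values yields the volume of slice images described above.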

3. Ray analysis of axial movement of a lenslet array

Figure 2 illustrates the operational principle of the proposed method.

Fig. 2 (a) Pickup with longitudinal (or axial) movement. (b) Reconstruction with a series of elemental image arrays.

The method consists of two stages: optical pickup and computational reconstruction. In the optical pickup stage, as shown in Fig. 2(a), a lenslet array and an image sensor array are moved together along the longitudinal direction (the z-direction) with a fixed moving step.

While moving, multiple snapshots produce a number of elemental image arrays, each captured at a different distance. Thus, each elemental image array contains different perspectives of the 3D objects. This increases the sampling rate of rays from the 3D objects and gathers more 3D information.

In the computational reconstruction stage, as shown in Fig. 2(b), the set of elemental image arrays from the different locations is used to computationally reconstruct 3D slice images. The conventional CIIR method produces a slice image at a certain reconstruction plane by overlapping all magnified elemental images in one elemental image array. Here, many slice images are available at the same reconstruction plane because multiple elemental image arrays are prepared. In this case, MALT produces a resulting slice image at a certain reconstruction plane by averaging the slice images. These multiple slice images, however, can be utilized in a different way.
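The MALT-style combination of multiple slices at the same plane is a simple average; a minimal sketch (assuming all slice images share one shape and the helper name is illustrative):

```python
import numpy as np

def malt_average(slices):
    """MALT-style combination: average the slice images reconstructed
    at the same plane from different elemental image arrays."""
    stack = np.stack(slices)    # shape: (num_arrays, H, W)
    return stack.mean(axis=0)
```

The proposed method instead keeps the individual back-projections and fills each reconstruction pixel directly, as developed in the next section.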

4. Sufficient interpolation-free conditions

As shown in Fig. 3, let us consider a longitudinal moving step Δ and two parallel rays coming from two adjacent elemental image arrays, projected onto the same reconstruction plane. Let δ be the distance between the two projected pixels in the reconstruction plane. If δ equals the pixel size, the reconstruction plane is completely covered by projected pixels, and empty pixels no longer exist. Thus, the maximum moving step Δ that avoids empty pixels is given by

Δ = 2gδ/w = 2g/N,    (2)

where w is the width of a lenslet, δ = w/N is the pixel size, and N is the number of pixels across each elemental image. The condition derived above is a sufficient condition because only the longitudinally distributed elemental images are taken into account. A sufficient number of elemental image arrays can be chosen by considering the worst case of emptiness, when reconstructing the slice image at the distance z = Ng (where the magnification z/g equals N). The length of the empty area between two pixels in this case equals N times the pixel size; thus, N projections are required to cover the empty area. Consequently, N is a sufficient number of arrays to cover any empty area.
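To make Eq. (2) concrete, the sketch below (illustrative helper name, not from the paper) evaluates the maximum interpolation-free moving step. With the experimental values used later in the paper (g = 3 mm, w = 1.08 mm, N = 30 pixels), it gives Δ = 0.2 mm.

```python
def max_moving_step(g, w, num_pixels):
    """Maximum axial moving step avoiding empty pixels, Eq. (2).

    g          : gap between elemental images and the pinhole array
    w          : lenslet width
    num_pixels : pixels N across one elemental image (so pixel size is w/N)
    """
    delta = w / num_pixels          # pixel size delta = w/N
    return 2.0 * g * delta / w      # 2*g*delta/w, which reduces to 2*g/N
```

The derivation's simplification 2gδ/w = 2g/N is what the function's two forms express: the lenslet width w cancels once δ = w/N is substituted.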

Another sufficient condition for eliminating empty pixels is obtained by setting the moving step to the focal length of the lenslet. This follows from the experimental results in [7], which indicated that at least one slice image among those located at multiples of the focal length has no empty pixels at all. Conversely, every pixel in a slice image located at a given distance can be one-to-one mapped from at least one elemental image array among the multiple arrays located at multiples of the focal length. In the proposed method, these multiple elemental image arrays are obtained by moving the lenslet array. This yields a flexible and larger moving step compared with the moving step in (2).
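The pixel-to-pixel mapping idea can be sketched in one dimension. This is an illustrative simplification, not the paper's exact index mapping: when the magnification z/g is an integer M, each source pixel lands exactly on a reconstruction-grid position, so pixels are copied rather than interpolated, and the positions left unfilled are precisely the "empty pixels" that the additional axially shifted arrays (with different effective M) are meant to cover.

```python
import numpy as np

def p2p_slice_1d(elemental, num_lens, M):
    """1D pixel-to-pixel mapping at integer magnification M = z/g.

    elemental : 1D array of length num_lens*n (n pixels per elemental image)
    Source pixel p of elemental image i is copied, without interpolation,
    to reconstruction index i*n + p*M; overlapping copies are averaged.
    Returns (slice, filled) where filled marks non-empty pixels.
    """
    n = elemental.shape[0] // num_lens
    size = (num_lens - 1) * n + (n - 1) * M + 1
    acc = np.zeros(size)
    cnt = np.zeros(size)
    for i in range(num_lens):
        for p in range(n):
            r = i * n + p * M               # exact grid position, no resampling
            acc[r] += elemental[i * n + p]
            cnt[r] += 1
    filled = cnt > 0
    out = np.full(size, np.nan)             # NaN marks empty pixels
    out[filled] = acc[filled] / cnt[filled]
    return out, filled
```

Running this for a single array leaves gaps between the mapped pixels; merging the outputs from arrays captured at different axial positions fills those gaps, which is the role of the multiple snapshots in the proposed pickup.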

5. Experiments and discussions

To show the usefulness of the proposed method, a preliminary experiment with a partially occluded 3D object was carried out. Figure 4(a) shows the experimental setup for optical pickup using a lenslet array.

Fig. 4 (a) Experimental setup. (b) 1st elemental image array. (c) 21st elemental image array.

A target object, a toy car, is occluded by an occluding object, a tree. The tree and the car are longitudinally located at z = 3 mm and z = 21 mm, respectively. The lenslet array, with 30 × 30 lenslets, is initially located at z1 = 0 mm. Each lenslet has a size of 1.08 mm, a pixel array of 30 × 30, and a focal length of 3 mm; thus, each elemental image array has a size of 900 × 900 pixels. The lenslet array and a camera (a CCD array) are moved together backward with a step of 3 mm, equal to the focal length, and multiple snapshots produce a series of twenty-one elemental image arrays. For example, the first and twenty-first elemental image arrays are shown in Figs. 4(b) and 4(c), respectively. The recorded elemental images show the car at different scales.
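The scale difference between snapshots follows from a simple pinhole model: the apparent size of the car scales as g divided by its distance from the array. The back-of-envelope check below uses the setup numbers above and assumes the gap g stays equal to the focal length throughout; it is an illustration, not a measurement from the paper.

```python
g = 3.0          # gap = focal length (mm)
z_car = 21.0     # car distance from the array at the first snapshot (mm)
step = 3.0       # axial moving step (mm)
moved = 20 * step                    # array has moved 60 mm back by snapshot 21

scale_first = g / z_car              # pinhole image scale at snapshot 1
scale_last = g / (z_car + moved)     # scale at snapshot 21
ratio = scale_first / scale_last     # car appears roughly 3.9x larger initially
```

This scale variation between snapshots is exactly the perspective diversity that the axial pickup exploits.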

Note that the lenslet array and the CCD array are moved together; thus, while moving, the lenslets in the array are observed at a constant size. Therefore, the size of all captured elemental images is the same regardless of the pickup location. However, elemental images from different locations show the object at different scales, as depicted in Figs. 4(b) and 4(c).

To evaluate the proposed method, the conventional CIIR with a single elemental image array and MALT with the twenty-one arrays are compared with the proposed method.

Fig. 5 Reconstructed slice images from (a) a single elemental image array, (b) 21 elemental image arrays with MALT, and (c) 21 elemental image arrays with the proposed method.

Figure 5 shows three slice images reconstructed at z = 21 mm, where the object was originally located. The conventional CIIR with a single elemental image array provided low image quality, as shown in Fig. 5(a). MALT with the 21 elemental image arrays improved the quality of the reconstructed slice image, as depicted in Fig. 5(b). However, the slice image from MALT still suffers from blurring, since MALT retains interpolation errors from magnification and the effect of the occluding object. In contrast, the proposed method with the 21 arrays enhanced the image quality because pixel-to-pixel mapping without interpolation reduces those errors, as indicated in Fig. 5(c). In particular, the zoomed area, showing the word ‘car’ on the object, indicates that the proposed method substantially improved the image quality of the slice image. Therefore, it is much more useful in object recognition than existing techniques such as MALT.

6. Conclusions

A novel high-resolution reconstruction method for computational integral imaging has been proposed, employing a pickup process with longitudinal (axial) movement and a pixel-to-pixel mapping algorithm without interpolation. Sufficient conditions for eliminating empty pixels have also been introduced. An experiment indicated that the proposed method, based on pixel-to-pixel mapping without interpolation over a set of elemental image arrays, improves the image quality of the slice images. In addition, pixel-to-pixel mapping without interpolation is naturally very fast; thus, the proposed method is expected to be applicable to various integral imaging applications, such as 3D object recognition.

References and links

1. G. Lippmann, “La photographie intégrale,” C.R. Acad. Sci. 146, 446–451 (1908).
2. H. Arimoto and B. Javidi, “Integral three-dimensional imaging with digital reconstruction,” Opt. Lett. 26(3), 157–159 (2001). [CrossRef] [PubMed]
3. A. Stern and B. Javidi, “Three-dimensional image sensing, visualization, and processing using integral imaging,” Proc. IEEE 94(3), 591–607 (2006). [CrossRef]
4. D.-H. Shin and H. Yoo, “Scale-variant magnification for computational integral imaging and its application to 3D object correlator,” Opt. Express 16(12), 8855–8867 (2008). [CrossRef] [PubMed]
5. J.-H. Park, K. Hong, and B. Lee, “Recent progress in three-dimensional information processing based on integral imaging,” Appl. Opt. 48(34), H77–H94 (2009). [CrossRef] [PubMed]
6. H. Yoo, “Artifact analysis and image enhancement in three-dimensional computational integral imaging using smooth windowing technique,” Opt. Lett. 36(11), 2107–2109 (2011). [CrossRef] [PubMed]
7. D.-H. Shin and H. Yoo, “Computational integral imaging reconstruction method of 3D images using pixel-to-pixel mapping and image interpolation,” Opt. Commun. 282(14), 2760–2767 (2009). [CrossRef]
8. B. Lee, S.-Y. Jung, S.-W. Min, and J.-H. Park, “Three-dimensional display by use of integral photography with dynamically variable image planes,” Opt. Lett. 26(19), 1481–1482 (2001). [CrossRef] [PubMed]
9. J.-S. Jang and B. Javidi, “Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics,” Opt. Lett. 27(5), 324–326 (2002). [CrossRef] [PubMed]
10. S.-H. Hong and B. Javidi, “Improved resolution 3D object reconstruction using computational integral imaging with time multiplexing,” Opt. Express 12(19), 4579–4588 (2004). [CrossRef] [PubMed]
11. Y.-T. Lim, J.-H. Park, K.-C. Kwon, and N. Kim, “Resolution-enhanced integral imaging microscopy that uses lens array shifting,” Opt. Express 17(21), 19253–19263 (2009). [CrossRef] [PubMed]
12. H. Navarro, J. C. Barreiro, G. Saavedra, M. Martínez-Corral, and B. Javidi, “High-resolution far-field integral-imaging camera by double snapshot,” Opt. Express 20(2), 890–895 (2012). [CrossRef] [PubMed]
13. R. Schulein, M. DaneshPanah, and B. Javidi, “3D imaging with axially distributed sensing,” Opt. Lett. 34(13), 2012–2014 (2009). [CrossRef] [PubMed]
14. D.-H. Shin, M. Cho, and B. Javidi, “Three-dimensional optical microscopy using axially distributed image sensing,” Opt. Lett. 35(21), 3646–3648 (2010). [CrossRef] [PubMed]
15. D.-H. Shin and B. Javidi, “Three-dimensional imaging and visualization of partially occluded objects using axially distributed stereo image sensing,” Opt. Lett. 37(9), 1394–1396 (2012). [CrossRef] [PubMed]
16. A. Stern and B. Javidi, “Three-dimensional image sensing and reconstruction with time-division multiplexed computational integral imaging,” Appl. Opt. 42(35), 7036–7042 (2003). [CrossRef] [PubMed]

OCIS Codes
(100.6890) Image processing : Three-dimensional image processing
(110.3010) Imaging systems : Image reconstruction techniques

ToC Category:
Imaging Systems

History
Original Manuscript: January 16, 2013
Revised Manuscript: March 28, 2013
Manuscript Accepted: March 31, 2013
Published: April 3, 2013

Citation
Hoon Yoo, "Axially moving a lenslet array for high-resolution 3D images in computational integral imaging," Opt. Express 21, 8873-8878 (2013)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-21-7-8873


