Optics Express

Editor: C. Martijn de Sterke
Vol. 16, Iss. 22 — Oct. 27, 2008, pp. 17154–17160

Digital slicing of 3D scenes by Fourier filtering of integral images

G. Saavedra, R. Martínez-Cuenca, M. Martínez-Corral, H. Navarro, M. Daneshpanah, and B. Javidi  »View Author Affiliations


Optics Express, Vol. 16, Issue 22, pp. 17154-17160 (2008)
http://dx.doi.org/10.1364/OE.16.017154


Abstract

We present a novel technique to extract depth information from 3D scenes recorded using an Integral Imaging system. The technique exploits the periodic structure of the recorded integral image to implement a Fourier-domain filtering algorithm. A proper projection of the filtered integral image permits reconstruction of different planes that constitute the 3D scene. The main feature of our method is that the Fourier-domain filtering allows the reduction of out-of-focus information, providing the InI system with real optical sectioning capacity.

© 2008 Optical Society of America

1. Introduction

Integral Imaging (InI) is a 3D imaging technique based on the principle of Integral Photography (IP) [1-4]. IP uses a microlens array (MLA) to record a collection of 2D elemental images onto a photographic plate. Since each of these images conveys a different perspective of the 3D scene, the system acquires 3D depth information. We refer to the complete set of elemental images as the integral image. When the integral image is displayed through an MLA, the rays of light follow the same directions as in the pickup stage. Any observer in front of the MLA then sees a 3D image of the original scene without the need for special glasses, and this image can be observed from a certain range of angles. It was not until the late 20th century that the principle of IP attracted the attention of researchers in 3D television and imaging [5-9]. InI systems have been developed thanks to advances in the fabrication of lenticular systems and the rising resolution of the digital devices used for the pickup and reconstruction stages [10]. In the past few years, research efforts have been directed at improving the performance of InI: researchers strive to increase the depth of field [11-14], the viewing angle [15,16], and the quality of the displayed images [17,18]. There have also been remarkable practical advances in the design of 2D-3D convertible displays [19] and of multiview video architectures and rendering [20,21]. Among the new applications of InI, the reconstruction of the original 3D scene from the corresponding integral image is especially interesting. Although the reconstruction algorithms are still far from a true profilometric technique, the reconstructed images allow the visualization of partially occluded objects [22-25] as well as the recognition of 3D objects [26-28].

The concept of optical sectioning was coined in the field of optical microscopy to refer to the capacity of providing sharp images of the sections of a 3D sample [29]. In scanning microscopes, optical sectioning is obtained with a pinhole that rejects light scattered from out-of-focus sections. Here we implement this concept by means of comb filtering in the integral-image spectrum, and thus extract depth information from the 3D scene with true optical sectioning.

2. Fourier filtering of integral images

Let us start by analyzing the pickup stage of an InI system in the simple case in which a 2D object is set parallel to the MLA at a distance zS. Assuming that the paraxial approximation holds, it is clear that each microlens provides a scaled image of the object, and therefore the integral image is composed of a set of equally spaced replicas of the object. In Fig. 1 we have drawn a scheme of the pickup. For the sake of simplicity, the scheme and the following equations are described in one dimension; the extension to 2D is straightforward.

Fig. 1. Each microlens is labeled by its position in the array. The origin for the indexes is the center of the central microlens. The images of a point source through the microlenses are depicted.

We assume that the MLA has an odd number of microlenses, Nx, and that the central microlens is aligned with the optical axis of the pickup system. We label this lens as L0 and the other microlenses as Lm, m being an integer lens index ranging from -(Nx-1)/2 to (Nx-1)/2. As shown in Fig. 1, the integral image of a point object placed at (xS, zS) is composed of a series of replicas positioned at

\[ x_m(z_S) = M_S\, x_S + m\, T_S, \tag{1} \]

where M_S = -g/z_S is the scale factor between the object and the image plane. The pickup period, T_S, is the distance between the replicas of S, and depends on the MLA pitch, p, through

\[ T_S = \left( 1 + \frac{g}{z_S} \right) p. \tag{2} \]
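The replica geometry of Eqs. (1)-(2), together with the field-of-view condition |x_m(z_S) - mp| < p/2 discussed next, can be sketched in a few lines. The function name and sample values below are ours, purely for illustration:

```python
# Sketch of Eqs. (1)-(2): positions of the replicas of a point source
# S = (x_s, z_s), keeping only the microlenses that satisfy the barrier
# condition |x_m - m p| < p/2. Names and values are illustrative.
def replicas(x_s, z_s, g, p, n_lenses):
    M_s = -g / z_s                       # scale factor M_S = -g/z_S
    T_s = (1 + g / z_s) * p              # pickup period, Eq. (2)
    half = (n_lenses - 1) // 2
    recorded = {}
    for m in range(-half, half + 1):
        x_m = M_s * x_s + m * T_s        # replica position, Eq. (1)
        if abs(x_m - m * p) < p / 2:     # lens m actually records S
            recorded[m] = x_m
    return recorded, T_s
```

With the experimental parameters of Section 4 (g = 3.10 mm, p = 1.03 mm) and an on-axis point at z_S = 70 mm, 23 of the 41 lenses in a row record a replica, spaced by T_S ≈ 1.076 mm.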

The periodic structure of the integral image is the key to our Fourier-filtering procedure. Note that if the pickup is performed with optical barriers [30], only the microlenses whose index satisfies the condition |x_m(z_S) - mp| < p/2 are able to record the replica of S. This constraint sets the maximum, m_max, and the minimum, m_min, index of the lenses that record the image of S. The number of microlenses contributing to the integral image is then n_x(z_S) = m_max - m_min + 1. Thus, the integral image of a plane object centered at S can be calculated as

\[ I_S(x) = \mathrm{rect}\!\left( \frac{x - M_S x_S}{\Delta x} \right) \cdot \left[ \sum_{m=-\infty}^{\infty} \delta(x - x_m) \otimes \frac{1}{M_S}\, O\!\left( \frac{x}{M_S} \right) \right], \tag{3} \]

where ⊗ denotes the convolution product, Δx = (n_x - 1) T_S, and O(x) is the object intensity distribution. The Fourier transform of the expression above is

\[ \tilde{I}_S(u) = \left[ \Delta x\, \mathrm{sinc}(\Delta x\, u)\, e^{\, i 2\pi \frac{g}{z_S} u\, x_S} \right] \otimes \left[ \tilde{O}(M_S u)\, \frac{1}{T_S} \sum_{m=-\infty}^{\infty} \delta\!\left( u - \frac{m}{T_S} \right) \right]. \tag{4} \]

Since a 3D scene can be regarded as a continuous superposition of such plane sections, the spectrum of the full integral image is obtained by integrating over all depths:

\[ \tilde{I}(u) = \int_0^{\infty} \tilde{I}_S(u)\, \mathrm{d}z_S. \tag{5} \]

This particular structure of the spectrum of the integral image allows the use of Fourier-filtering tools to discriminate the spectral components corresponding to a particular depth. The filtering corresponding to a depth position zR can be written as

\[ \tilde{I}_R(u) = \tilde{I}(u)\, F_R(u), \tag{6} \]

where the frequency filter is simply the comb function

\[ F_R(u) = \sum_{m=-\infty}^{\infty} \delta\!\left( u - \frac{m}{T_R} \right). \tag{7} \]

The inverse Fourier transform of the filtered image provides a new integral image that only includes the information corresponding to the selected depth. We have illustrated the filtering process in Fig. 2. Owing to the finite extent and pixelated nature of the sensor, each depth section actually consists of an array of sinc functions in the Fourier domain. Since the sinc function does not fall sharply to zero, the signals generated by objects at different depths cannot be perfectly discriminated.

Fig. 2. The Fourier transform of an integral image. The left side illustrates the signals that correspond to two sources at different depths, namely S and S'. On the right, we show the effect of the filtering: only signals with a pitch close to TR pass through the filter.
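A 1-D numerical sketch of the comb filtering of Eqs. (6)-(7) is given below. Since ideal delta functions cannot be sampled, the comb is relaxed into narrow Gaussians around the harmonics of 1/T_R; the function name, the Gaussian relaxation, and the width parameter are our assumptions, not part of the paper's method:

```python
import numpy as np

def comb_filter_spectrum(signal, T_R, dx, width=0.02):
    """Keep only the spectral components near multiples of 1/T_R.

    A relaxed version of Eqs. (6)-(7): the ideal delta comb F_R(u)
    is replaced by narrow Gaussians so it can act on sampled spectra.
    """
    n = signal.size
    u = np.fft.fftfreq(n, d=dx)                # spatial frequencies
    # distance from each frequency to the nearest harmonic of 1/T_R
    dist = np.abs(u - np.round(u * T_R) / T_R)
    F_R = np.exp(-(dist / width) ** 2)         # relaxed comb filter
    return np.real(np.fft.ifft(np.fft.fft(signal) * F_R))
```

A signal whose period matches T_R passes through essentially unchanged, while periodic content at a different pitch is strongly attenuated, which is exactly the depth discrimination exploited in the text.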

3. Volumetric reconstruction of Fourier filtered integral image

The volumetric reconstruction by the back-projection technique described in [24] can be applied to the filtered integral image to reconstruct an arbitrary plane parallel to the MLA. In this approach, each elemental image is back projected onto the desired hypothetical reconstruction plane through its associated pinhole. The back-projected elemental images are then superimposed computationally to obtain the intensity distribution on the reconstruction plane. The intensity of each point is determined by averaging the intensity information carried by all rays intersecting at that point of the reconstruction plane. It should be noted that the number of rays conveying information about each object point may vary from point to point, depending on the field of view of each elemental image. For instance, in Fig. 3, point R1 lies within the field of view of 7 elemental images, whereas the intensity information about R2 is carried by only 6 rays. This difference must be taken into account in the averaging process. Note, besides, that the ray cones emanating from a single object point at the reconstruction plane recombine under back projection to accurately recreate the intensity of that point, whereas rays emanating from object points away from the reconstruction plane mix with their neighbors, producing a defocused effect. Thus, with computational reconstruction one obtains a focused image of an object at the correct reconstruction distance, while the rest of the scene appears blurred.

Fig. 3. The volumetric reconstruction calculates the reconstructed field by projecting the integral image through the pinhole array. Optical barriers are also simulated to avoid overlapping.

Let the filtered k-th elemental image be denoted by O_k(x). For image reconstruction, each filtered elemental image is flipped and shifted according to the reconstruction distance, and the shifted images are superimposed to generate the desired plane. The final reconstruction plane, I(x, z_R), therefore consists of the partial overlap of the flipped and shifted filtered elemental images:

\[ I(x, z_R) = \sum_{k=0}^{K-1} \frac{O_k\!\left( x + (1 - M_S)\, T_S\, k \right)}{R_k^2(x)}, \tag{8} \]

in which K denotes the number of elemental images acquired, and R_k compensates for the intensity variation due to the different distances from the object plane to the elemental image O_k(x) on the sensor:

\[ R_k^2(x) = (z_S + g)^2 + \left( M_S^{-1} x + T_S\, k \right)^2 (1 - M_S)^2. \tag{9} \]

However, in most cases of interest, where the sensor size is much smaller than the reconstruction distance, Eq. (9) is dominated by the term (z_S + g)^2 and can be assumed constant for a given reconstruction distance.

If the computational reconstruction by back projection is applied to the filtered integral image, true optical sectioning becomes possible. This is due to the fact that, after filtering, the objects away from the reconstruction plane are removed from each elemental image, whereas the objects at the reconstruction plane remain as sharp images.
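The shift-and-average structure of Eq. (8) can be sketched in 1-D as follows. This is a simplified pinhole model: the flipping step is omitted (the elemental images are assumed already rectified), the disparity rule is our approximation of the geometry, and the 1/R_k^2 factor is taken as constant, as the text argues is valid when the sensor is much smaller than the reconstruction distance:

```python
import numpy as np

def back_project(elemental_images, pitch_px, z_r, g):
    """Average shift-compensated elemental images at depth z_r."""
    K, n = elemental_images.shape
    accum = np.zeros(n)
    count = np.zeros(n)
    for k in range(K):
        # approximate disparity (in pixels) of image k at depth z_r
        shift = int(round(k * pitch_px * g / z_r))
        lo, hi = max(0, shift), min(n, n + shift)  # in-frame region
        accum[lo:hi] += elemental_images[k, lo - shift:hi - shift]
        count[lo:hi] += 1                          # rays seen per pixel
    count[count == 0] = 1      # pixels seen by no elemental image
    return accum / count
```

Contributions from points actually located at z_r are re-aligned and reinforce one another, while points at other depths spread over neighboring pixels, reproducing the focused/blurred behavior described above.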

4. Experimental results

To show the feasibility of our method, we conducted optical experiments to obtain the integral image of a 3D scene consisting of two toy cars located at different distances from the MLA, as depicted in Fig. 4. In the experiment, the images were recorded with a square MLA composed of 41×27 lenslets with focal length f = 3 mm, pitch p = 1.03 mm, and gap g = 3.10 mm. The cars labeled 6 and 2 were located approximately 70 mm and 90 mm away from the MLA, respectively.

Fig. 4. The 3D scene was composed by two toy cars. The cars were about 20–10 mm in size.

In Fig. 5(a) we show the subset of 1×3 elemental images obtained with the 3 central microlenses. Note that, from this perspective, there is significant occlusion of the blue car by the red car. Figure 5(b) shows the periodic structure of the spectrum of the integral image (the spectrum is not strictly periodic, since it is modulated by a sinc function, which is the Fourier transform of the pixel shape). Note the spreading of each order in this figure due to the presence of signals at different depths. In Fig. 5(c) we mark the filtering positions corresponding to two different planes.

Fig. 5. a) Set of 1×3 elemental images of the recorded integral image. b) Central part of its spectrum. c) Filtering with comb functions of different periods permits discrimination of the information at a given depth in the 3D scene. We show two different filtering pitches: in red for zR=70 mm and in blue for zR=120 mm.

After performing the Fourier filtering, we obtained a stack of 35 filtered integral images, for reconstruction depths ranging from 50 mm to 120 mm in steps of 2 mm. In Fig. 6 we show two subsets of elemental images from the stacks filtered at zR=70 mm and zR=92 mm. Each filtered integral image is then used as the input for the subsequent volumetric reconstruction. This allows slice-by-slice reconstruction of the 3D scene with optical sectioning.
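For reference, the comb-filter period that Eq. (2) assigns to each reconstruction depth in the 50-120 mm sweep can be tabulated directly from the MLA parameters quoted above; the variable names here are ours, purely illustrative bookkeeping:

```python
# Filter period T_R = (1 + g/z_R) * p for each candidate depth, using
# the gap and pitch from Section 4 of the text.
g_mm, p_mm = 3.10, 1.03
depths_mm = list(range(50, 121, 2))        # z_R = 50, 52, ..., 120 mm
periods_mm = {z: (1 + g_mm / z) * p_mm for z in depths_mm}
# Deeper planes yield replica periods closer to the bare pitch p.
```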

Fig. 6. a) Set of 1×3 elemental images of the integral image filtered at zR=70 mm. b) The same part of the integral image, but filtered at zR=92 mm.

In Fig. 7 we show a set of reconstructed images at different depths. As is apparent from the figure, the proposed procedure permits the two cars to be brought into focus separately. In the accompanying movie we show the result of the reconstruction over the full set of filtered planes.

Fig. 7. Five frame excerpts, corresponding to depth planes at 50, 70, 80, 92, and 120 mm, from a video showing the reconstruction over depth planes between 50 and 120 mm (Media 1).

5. Conclusions

We have presented an alternative computational reconstruction method for integral imaging based on Fourier filtering. The technique exploits the fact that each point in the object space is imaged as a set of replicas whose spatial period depends on the distance of the object point. By performing a volumetric reconstruction from the filtered integral image, the system achieves optical sectioning, since the signals that are out of focus are blurred twice. Experimental results demonstrate the feasibility of the technique in providing sharp slices of the reconstructed scene.

Acknowledgment

This work was funded in part by Ministerio de Ciencia e Innovación (DPI2003-8309), Spain.

References and Links

1. M. G. Lippmann, "Epreuves reversibles donnant la sensation du relief," J. Phys. (Paris) 7, 821–825 (1908).
2. H. E. Ives, "Optical properties of a Lippman lenticulated sheet," J. Opt. Soc. Am. 21, 171–176 (1931). [CrossRef]
3. C. B. Burckhardt, "Optimum parameters and resolution limitation of Integral Photography," J. Opt. Soc. Am. 58, 71–76 (1968). [CrossRef]
4. T. Okoshi, "Three-dimensional displays," Proc. IEEE 68, 548–564 (1980). [CrossRef]
5. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, "Real time pickup method for a three-dimensional image based on integral photography," Appl. Opt. 36, 1598–1603 (1997). [CrossRef] [PubMed]
6. J. Arai, F. Okano, H. Hoshino, and I. Yuyama, "Gradient-index lens-array method based on real-time integral photography for three-dimensional images," Appl. Opt. 37, 2034–2045 (1998). [CrossRef]
7. H. Arimoto and B. Javidi, "Integral 3D imaging with digital reconstruction," Opt. Lett. 26, 157–159 (2001). [CrossRef]
8. J.-S. Jang and B. Javidi, "Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics," Opt. Lett. 27, 324–326 (2002). [CrossRef]
9. S. Jung, J.-H. Park, H. Choi, and B. Lee, "Viewing-angle-enhanced integral three-dimensional imaging along all directions without mechanical movement," Opt. Express 12, 1346–1356 (2003). [CrossRef]
10. J. Arai, M. Okui, T. Yamashita, and F. Okano, "Integral three-dimensional television using a 2000-scanning-line video system," Appl. Opt. 45, 1704–1712 (2006). [CrossRef] [PubMed]
11. J.-S. Jang and B. Javidi, "Large depth-of-focus time-multiplexed three-dimensional integral imaging by use of lenslets with nonuniform focal lengths and aperture sizes," Opt. Lett. 28, 1924–1926 (2003). [CrossRef] [PubMed]
12. R. Martínez-Cuenca, G. Saavedra, M. Martínez-Corral, and B. Javidi, "Enhanced depth of field integral imaging with sensor resolution constraints," Opt. Express 12, 5237–5242 (2004). [CrossRef] [PubMed]
13. R. Martínez-Cuenca, G. Saavedra, M. Martínez-Corral, and B. Javidi, "Extended depth-of-field 3-D display and visualization by combination of amplitude-modulated microlenses and deconvolution tools," J. Disp. Technol. 1, 321–327 (2005). [CrossRef]
14. J.-H. Park, H.-R. Kim, Y. Kim, J. Kim, J. Hong, S.-D. Lee, and B. Lee, "Depth-enhanced three-dimensional two-dimensional convertible display based on modified integral imaging," Opt. Lett. 29, 2734–2736 (2004). [CrossRef] [PubMed]
15. H. Choi, S.-W. Min, S. Yung, J.-H. Park, and B. Lee, "Multiple viewing zone integral image using dynamic barrier array for three-dimensional displays," Opt. Express 11, 927–932 (2003). [CrossRef] [PubMed]
16. R. Martínez-Cuenca, H. Navarro, G. Saavedra, B. Javidi, and M. Martínez-Corral, "Enhanced viewing-angle integral imaging by multiple-axis telecentric relay system," Opt. Express 15, 16255–16260 (2007). [CrossRef] [PubMed]
17. J.-S. Jang and B. Javidi, "Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics," Opt. Lett. 27, 324–326 (2002). [CrossRef]
18. M. Martínez-Corral, B. Javidi, R. Martínez-Cuenca, and G. Saavedra, "Multifacet structure of observed reconstructed integral images," J. Opt. Soc. Am. A 22, 597–603 (2005). [CrossRef]
19. J.-H. Park, H.-R. Kim, Y. Kim, J. Kim, J. Hong, S.-D. Lee, and B. Lee, "Depth-enhanced three-dimensional two-dimensional convertible display based on modified integral imaging," Opt. Lett. 29, 2734–2736 (2004). [CrossRef] [PubMed]
20. W. Matusik and H. Pfister, "3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes," ACM Trans. Graph. 23, 814–824 (2004). [CrossRef]
21. H. Liao, S. Nakajima, M. Iwahara, N. Hata, and T. Dohi, "Real-time 3D image-guided navigation system based on integral videography," Proc. SPIE 4615, 36–44 (2002). [CrossRef]
22. Y. Frauel and B. Javidi, "Digital three-dimensional image correlation by use of computer-reconstructed integral imaging," Appl. Opt. 41, 5488–5496 (2002). [CrossRef] [PubMed]
23. S. Yeom and B. Javidi, "Three-dimensional distortion-tolerant object recognition using integral imaging," Opt. Express 12, 5795–5809 (2004). [CrossRef] [PubMed]
24. S.-H. Hong, J.-S. Jang, and B. Javidi, "Three-dimensional volumetric object reconstruction using computational integral imaging," Opt. Express 12, 483–491 (2004). [CrossRef] [PubMed]
25. C. Wu, A. Aggoun, M. McCormick, and S. Y. Kung, "Depth measurement from integral images through viewpoint image extraction and a modified multibaseline disparity analysis algorithm," J. Electron. Imaging 14, 023018 (2005). [CrossRef]
26. J.-S. Jang and B. Javidi, "Three-dimensional synthetic aperture integral imaging," Opt. Lett. 27, 1144–1146 (2002). [CrossRef]
27. S.-H. Hong and B. Javidi, "Distortion-tolerant 3D recognition of occluded objects using computational integral imaging," Opt. Express 14, 12085–12095 (2006). [CrossRef] [PubMed]
28. B. Javidi, R. Ponce-Díaz, and S.-H. Hong, "Three-dimensional recognition of occluded objects by using computational integral imaging," Opt. Lett. 31, 1106–1108 (2006). [CrossRef] [PubMed]
29. T. Wilson, ed., Confocal Microscopy (Academic, London, 1990).
30. R. Martínez-Cuenca, A. Pons, G. Saavedra, M. Martínez-Corral, and B. Javidi, "Optically-corrected elemental images for undistorted integral image display," Opt. Express 14, 9657–9663 (2006). [CrossRef] [PubMed]

OCIS Codes
(100.6890) Image processing : Three-dimensional image processing
(110.4190) Imaging systems : Multiple imaging
(110.6880) Imaging systems : Three-dimensional image acquisition
(150.5670) Machine vision : Range finding

ToC Category:
Image Processing

History
Original Manuscript: July 8, 2008
Revised Manuscript: September 11, 2008
Manuscript Accepted: September 11, 2008
Published: October 13, 2008

Citation
G. Saavedra, R. Martinez-Cuenca, M. Martinez-Corral, H. Navarro, M. Daneshpanah, and B. Javidi, "Digital slicing of 3D scenes by Fourier filtering of integral images," Opt. Express 16, 17154-17160 (2008)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-16-22-17154




Multimedia

Media 1: AVI (4846 KB)
