## Three dimensional imaging with randomly distributed sensors

Optics Express, Vol. 16, Issue 9, pp. 6368-6377 (2008)

http://dx.doi.org/10.1364/OE.16.006368


### Abstract

As a promising three dimensional passive imaging modality, Integral Imaging (II) has been investigated widely within the research community. In virtually all such investigations, there is an implicit assumption that the collection of elemental images lies on a simple geometric surface (e.g., flat or concave), also known as the pickup surface. In this paper, we present a generalized framework for 3D II with arbitrary pickup surface geometry and randomly distributed sensor configuration. In particular, we study the case of Synthetic Aperture Integral Imaging (SAII) with random camera locations in space, where all cameras have parallel optical axes but different distances from the 3D scene. We assume that the sensors are randomly distributed in the 3D volume of the pickup space. For 3D reconstruction, a finite number of sensors with known coordinates are randomly selected from within this volume. The mathematical framework for 3D scene reconstruction is developed based on an affine transform representation of imaging under the geometrical optics regime. We demonstrate the feasibility of the methods proposed here with experimental results. To the best of our knowledge, this is the first report on 3D imaging using randomly distributed sensors.

© 2008 Optical Society of America

## 1. Introduction


## 2. Sensor distribution

The position of the *i*th elemental image, 𝒫_i, is measured in a universal frame of reference in Cartesian coordinates, Φ: (x, y, z), which is used later for 3D computational reconstruction of the scene. The origin of the frame of reference is rather arbitrary, but it has to be fixed during all position measurements. However, since the proposed mathematical image reconstruction framework relies only on the relative distances of the elemental images in space, it stays consistent if the origin moves and all position measurements are adjusted accordingly. Also, a local coordinate system, Ψ_i: (u_i, ν_i, w_i), is defined for each sensor with its origin lying on the sensor's midpoint. We assume that the sensor size (L_x, L_y), the effective focal length of the i-th imaging optics, g_i, and the position of each sensor in the pickup stage are known. In our analysis, we make no assumption on the distribution of elemental images in space, in order to achieve a generic reconstruction scheme. Without loss of generality, to demonstrate the feasibility of the proposed technique, the random pickup locations, 𝒫_i = (x_i, y_i, z_i), are chosen from three independent uniform random variables as follows:

x_i ~ U(a_1, b_1), y_i ~ U(a_2, b_2), z_i ~ U(a_3, b_3), (1)

where U(a, b); b > a denotes a uniform distribution with probability density function f(x) = (b − a)^(-1) for a ≤ x ≤ b and f(x) = 0 elsewhere. The actual distribution of elemental images is dictated by the specific application of interest. We have used a uniform distribution to give all locations in the pickup volume an equal chance of being selected as sensor positions.
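As a concrete illustration of this sampling step, the three independent uniform draws can be sketched as follows. This is our own hypothetical snippet; the bounds shown mirror the ranges used later in the experimental section and are not prescribed by the framework:

```python
import numpy as np

# Sketch of Eq. (1): each sensor position is drawn from three independent
# uniform distributions. Bounds follow the experimental section (+/-4 cm
# laterally, 25-27 cm longitudinally); any other bounds work equally well.
rng = np.random.default_rng(0)
N = 100                                   # number of sensors to select
x = rng.uniform(-4.0, 4.0, N)             # x_i ~ U(-4 cm, 4 cm)
y = rng.uniform(-4.0, 4.0, N)             # y_i ~ U(-4 cm, 4 cm)
z = rng.uniform(25.0, 27.0, N)            # z_i ~ U(25 cm, 27 cm)
positions = np.stack([x, y, z], axis=1)   # P_i = (x_i, y_i, z_i), one row each
```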

The z axis lies along the optical axis of the reference elemental image, i.e. the elemental image acquired from the perspective from which reconstruction is desired [see Fig. 1]. Also, as the numbering sequence of elemental images is arbitrary, we label this elemental image 𝓔_0.

## 3. Computational image reconstruction

The Fourier-domain reconstruction method [41] has a computational complexity of O(n² log n), n being the total number of samples. However, this method is intrinsically based on the assumption of periodic sampling of the light field and thus may require heuristic adjustments if the elemental images are not regularly ordered. In the spatial domain, a fast, ray-tracing-based reconstruction from the observer's point of view has been proposed [14] with computational complexity O(m), m being the number of elemental images. Although fast and simple, this method yields low-resolution reconstructions. Yet another spatial-domain reconstruction method is based on a series of 2D image back-projections [27, 14] with computational complexity O(n), since usually n ≫ m. For instance, m is typically in the range of 100-200 elemental images, while n can be as large as 10^7 pixels. In the context of this paper, we stay within the spatial domain in order to provide a generic reconstruction algorithm with minimum assumptions about the pickup geometry.

The imaging process under the geometrical optics regime can be represented as an affine transform [26]:

Φ = 𝒜Ψ + 𝒫, (2)

where 𝒜 and 𝒫 denote the compound linear transformation and translation matrices, respectively. Since the numbering sequence of elemental images is completely arbitrary, we take the viewpoint from which we want to reconstruct the 3D scene to be that of one of the elemental images, which we label 𝓔_0. Assume that reconstruction at distance z_r from the reference sensor at 𝒫_0 is desired. According to Fig. 1, the distance of the i-th elemental image from the desired reconstruction plane is:

z_i = z_r + (p_i^z − p_0^z). (3)

For the i-th elemental image, the matrix 𝒜_i and the translation vector 𝒫_i can be written as:

where 𝒫_i denotes the position of the i-th sensor and M_i = z_i/g_i is the associated magnification between the i-th elemental image and its projection at distance z_i. According to Fig. 1, the position of the midpoint of the plane that we are interested in reconstructing can be written as:

Here, 𝒫_i needs to be augmented by the vector 𝒫_i − 𝒫_r, which yields the following expression:

where Φ = [x, y, z]^T and Ψ_i = [u_i, ν_i, w_i]^T denote points in the reconstruction space and in the i-th elemental image space, respectively. Each captured elemental image (as shown in Fig. 1) can be expressed as a scalar function in space. For instance, for the i-th elemental image:

where L_x and L_y are the sizes of the CCD imaging sensor in the horizontal (x) and vertical (y) directions, respectively. Using Eq. (4), one can expand Eq. (6) to obtain the relationship between Φ and Ψ_i explicitly as:

One can then write the i-th elemental image in the Φ coordinate system at the plane z = p_0^z − z_r as:

Reconstruction in this form requires magnifying each elemental image by M_i, which may become very large for z_i ≫ g_i. Such magnification does not introduce new information beyond that already contained in the collection of original elemental images. Here, we formulate a substitute reconstruction method which, with no sacrifice in information content, is more tractable in terms of required computational resources and speed, especially in synthetic aperture applications in which each elemental image can be (and usually is) captured on a full-frame imaging sensor array of megapixel size. In practice, magnifying the elemental images by large ratios is memory-inefficient if not impractical, especially when one tries to reconstruct full-frame elemental images captured with the SAII technique. Since the elemental images are taken at arbitrary z_i, their corresponding magnifications vary. Nevertheless, using Eq. (3) it is possible to decompose the magnification factor, M_i, into a fixed magnification and a differential term as follows:

where Δp_i^z = p_i^z − p_0^z denotes the difference between the longitudinal distance of the i-th sensor and that of the 0-th (reference) sensor. Also, Δg_i = g_i − g_r signifies the difference between the focal lengths of the optics used to capture the i-th and the reference images, respectively. It should be noted that Δp_i^z and Δg_i can be either positive or negative, resulting in varying γ_i. From Eq. (10) it is clear that as long as all g_i > 0, γ_i stays bounded. In the experimental results section, we will show that for most applications of concern, γ_i remains close to 1.
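To make the boundedness claim concrete, the following sketch evaluates γ_i = M_i/M_r using the definitions above (M_i = z_i/g_i with z_i = z_r + Δp_i^z and g_i = g_r + Δg_i), plugged with the parameter values reported later in the experimental section. The function name and the closed form are our own restatement, not a reproduction of the paper's Eq. (10):

```python
def gamma(z_r, dp_z, g_r=25.0, dg=0.0):
    """gamma_i = M_i / M_r with M_i = z_i/g_i, z_i = z_r + dp_z, g_i = g_r + dg.
    All lengths in millimetres; equal focal lengths correspond to dg = 0."""
    return ((z_r + dp_z) / (g_r + dg)) / (z_r / g_r)

# Worst cases from the experimental section: dp_z in [-9.1 mm, +10.4 mm],
# shortest reconstruction distance z_r = 160 mm, g_r = 25 mm, dg = 0.
lo = gamma(160.0, -9.1)    # ~0.943
hi = gamma(160.0, 10.4)    # ~1.065
```

With Δg_i = 0 this reduces to γ_i = 1 + Δp_i^z/z_r, which reproduces the γ_i ∈ [0.943, 1.065] range quoted in the experimental section.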

Defining the scaled coordinates x̃ = x/M_r and ỹ = y/M_r, it is possible to reduce Eq. (9) to the following:

In other words, instead of magnifying the elemental images by M_i and translating them laterally by large amounts (p_i^{x/y} − p_0^{x/y}), we only adjust the size of the elemental images by the factor γ_i and reduce the lateral translation by the factor M_i. Thus, Eq. (11) can simply be interpreted as magnifying (or demagnifying) the i-th elemental image by the factor γ_i and shifting it by a scaled version of its lateral distance from the reference elemental image.

In contrast, with conventional back-projection, since M = z/g, each back-projected elemental image would be M² times the size of the original elemental image. That technique requires algebraic operations as well as memory resources that grow quadratically with the reconstruction distance. Such a resource-demanding approach makes the end-to-end system costly and slows down the reconstruction process, due to the large number of machine cycles needed in addition to memory access delays.
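A back-of-envelope calculation with the experimental parameters (g = 25 mm, full-frame 3072×2048 elemental images, reconstruction distances up to 300 mm; our own illustrative arithmetic, not from the paper) shows how quickly plain back-projection becomes impractical:

```python
g = 25.0                             # focal length (mm)
z = 300.0                            # reconstruction distance (mm)
M = z / g                            # back-projection magnification, linear in z
pixels = 3072 * 2048                 # one full-frame elemental image
magnified_pixels = pixels * M ** 2   # image area scales as M^2
print(M, magnified_pixels / 1e6)     # 12.0 and roughly 906 megapixels per image
```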

For a given pixel size u, the shift (p_i^x − p_0^x)·g_i/(z_i·u) has a fractional part that varies randomly across elemental images. This is due in part to the non-periodic nature of (p_i^x − p_0^x), as well as to the varying magnification, M_i = z_i/g_i, of each elemental image, in contrast to II with a lenslet array [30]. Here, N denotes the total number of elemental images. The described reconstruction technique, similar to its counterpart in [27], superimposes the appropriately scaled and shifted elemental images to form the reconstruction at each desired plane.
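The scale-and-shift superposition described above can be sketched as follows. This is a minimal illustration under our own assumptions (equal focal lengths, nearest-neighbour resampling in place of proper interpolation, and wrap-around shifts via `np.roll` instead of zero-padded translation); it is not the authors' implementation:

```python
import numpy as np

def reconstruct_plane(elemental_images, positions, g, z_r, pitch):
    """Superimpose scaled and shifted elemental images at distance z_r from
    the reference sensor (positions[0]). All lengths share one unit (e.g. mm)."""
    p0 = positions[0]
    H, W = elemental_images[0].shape
    acc = np.zeros((H, W))
    for img, p in zip(elemental_images, positions):
        z_i = z_r + (p[2] - p0[2])     # longitudinal distance to the plane
        gamma = z_i / z_r              # relative magnification (Delta g = 0)
        # nearest-neighbour rescale by gamma about the image centre
        yy = np.clip(((np.arange(H) - H / 2) / gamma + H / 2).astype(int), 0, H - 1)
        xx = np.clip(((np.arange(W) - W / 2) / gamma + W / 2).astype(int), 0, W - 1)
        scaled = img[np.ix_(yy, xx)]
        # lateral shift reduced by M_i = z_i/g, converted to pixels by the pitch
        dy = int(round((p[1] - p0[1]) * g / (z_i * pitch)))
        dx = int(round((p[0] - p0[0]) * g / (z_i * pitch)))
        acc += np.roll(np.roll(scaled, dy, axis=0), dx, axis=1)
    return acc / len(elemental_images)
```

In practice one would replace `np.roll` with a zero-padded shift and track per-pixel overlap counts, so that regions seen by fewer sensors are normalized correctly.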

## 4. Experimental results

The 3D scene consists of two objects, a toy tank and a car model; the car model fits in a volume of 4×3×2.5 cm³. The tank and the car are placed approximately 19 cm and 24 cm away from the reference elemental image, respectively.

The random pickup positions, 𝒫_i = (p_i^x, p_i^y, p_i^z), are obtained from three uniform random variable generators. For the lateral position variables, p_i^x and p_i^y, a parallax of 8 cm is considered, i.e. p_i^x, p_i^y ~ U(-4 cm, 4 cm), whereas the variation in the longitudinal direction is set to 20% of the object distances, i.e. p_i^z ~ U(25 cm, 27 cm), assuming the desired reconstruction range lies within [19 cm, 24 cm] from the reference sensor. Figure 2 illustrates the random distribution of the cameras in the pickup stage.

The i-th elemental image is then taken with a digital camera at its associated random pickup position, 𝒫_i. The focal lengths of all lenses are set equal, i.e. g_i = 25 mm; however, according to Eq. (11), the set of 100 randomly generated positions from Eq. (1) results in γ_i ∈ [0.943, 1.065], due to the variability of Δp_i^z ∈ [-0.91 cm, 1.04 cm] over the entire reconstruction range z_r ∈ [160 mm, 300 mm]. This has the same effect as variability in g_i. The CMOS sensor size is 22.7×15.1 mm² with a 7 µm pixel pitch. The field of view (FOV) of each elemental image is then 48°×33° in the horizontal and vertical directions, respectively, which covers an area of 18×12 cm² at 20 cm from the pickup location in object space. A single camera is translated between the acquisition points such that it passes each location only once, and at each location a full-frame image of 3072×2048 pixels is captured. The camera is translated in x, y, and z using off-the-shelf translation components.
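The quoted field of view and coverage follow from the pinhole geometry of the stated sensor (22.7 × 15.1 mm, g = 25 mm). The small check below is our own arithmetic, not taken from the paper, and reproduces the quoted numbers to rounding:

```python
import math

sensor_w, sensor_h, g = 22.7, 15.1, 25.0                  # mm
fov_h = 2 * math.degrees(math.atan(sensor_w / (2 * g)))   # ~48.8 deg (quoted 48)
fov_v = 2 * math.degrees(math.atan(sensor_h / (2 * g)))   # ~33.6 deg (quoted 33)
d = 200.0                                                 # object distance, mm
cover_w = sensor_w * d / g                                # 181.6 mm ~ 18 cm
cover_h = sensor_h * d / g                                # 120.8 mm ~ 12 cm
```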

Computational reconstruction is performed from the reference position 𝒫_0 = (-0.2, 1.8, 25.9) cm with varying z_r ∈ [160 mm, 300 mm]. Two such reconstruction planes are shown in Fig. 4, at z_r = 185 mm and z_r = 240 mm, respectively.

The proposed method is thus M_r² times more computationally and memory efficient for the parameters used in the experimental results.

## 5. Conclusion



**OCIS Codes**

(100.3010) Image processing : Image reconstruction techniques

(100.6890) Image processing : Three-dimensional image processing

(110.6880) Imaging systems : Three-dimensional image acquisition

**ToC Category:**

Image Processing

**History**

Original Manuscript: February 7, 2008

Revised Manuscript: March 17, 2008

Manuscript Accepted: April 2, 2008

Published: April 21, 2008

**Citation**

Mehdi DaneshPanah, Bahram Javidi, and Edward A. Watson, "Three dimensional imaging with randomly distributed sensors," Opt. Express **16**, 6368-6377 (2008)

http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-16-9-6368


### References

- B. Javidi and F. Okano, Eds., Three Dimensional Television, Video, and Display Technology (Springer-Verlag, Berlin, Germany, 2002).
- B. Javidi, S. H. Hong, and O. Matoba, "Multidimensional optical sensor and imaging system," Appl. Opt. 45, 2986-2994 (2006). [CrossRef] [PubMed]
- T. Okoshi, "Three-dimensional displays," Proceedings of the IEEE 68, 548-564 (1980). [CrossRef]
- L. Yang, M. McCormick, and N. Davies, "Discussion of the optics of a new 3-D imaging system," Appl. Opt. 27, 4529-4534 (1988). [CrossRef] [PubMed]
- Y. Igarashi, H. Murata, and M. Ueda, "3D display system using a computer-generated integral photograph," Jpn. J. Appl. Phys. 17, 1683-1684 (1978). [CrossRef]
- A. R. L. Travis, "The display of three dimensional video images," Proc. of the IEEE 85, 1817-1832 (1997). [CrossRef]
- K. Itoh, W. Watanabe, H. Arimoto, and K. Isobe, "Coherence-based 3-D and spectral imaging and laser-scanning microscopy," Proceedings of the IEEE 94, 608-628 (2006). [CrossRef]
- B. Javidi, Ed., Optical Imaging Sensors and Systems for Homeland Security Applications (Springer, New York, 2006). [CrossRef]
- Y. Frauel, E. Tajahuerce, O. Matoba, A. Castro, and B. Javidi, "Comparison of passive ranging integral imaging and active imaging digital holography for three-dimensional object recognition," Appl. Opt. 43, 452-462 (2004). [CrossRef] [PubMed]
- S. Yeom, B. Javidi, and E. Watson, "Photon-counting passive 3D image sensing for reconstruction and recognition of occluded objects," Opt. Express 15, 16189-16195 (2007). [CrossRef] [PubMed]
- L. Lypton, Foundation of Stereoscopic Cinema, A Study in Depth (Van Nostrand Reinhold, New York, 1982).
- F. Okano, H. Hoshino, J. Arai, and I. Yuyama, "Real-time pickup method for a three-dimensional image based on integral photography," Appl. Opt. 36, 1598-1603 (1997). [CrossRef] [PubMed]
- F. Okano, J. Arai, H. Hoshino, and I. Yuyama, "Three-dimensional video system based on integral photography," Opt. Eng. 38, 1072-1078 (1999). [CrossRef]
- H. Arimoto and B. Javidi, "Integral three-dimensional imaging with digital reconstruction," Opt. Lett. 26, 157-159 (2001). [CrossRef]
- A. Stern and B. Javidi, "Three-dimensional image sensing, visualization, and processing using integral imaging," Proceedings of the IEEE 94, 591-607 (2006). [CrossRef]
- R. Martínez, A. Pons, G. Saavedra, M. Martinez-Corral, and B. Javidi, "Optically-corrected elemental images for undistorted integral image display," Opt. Express 14, 9657-9663 (2006). [CrossRef]
- R. Martínez-Cuenca, G. Saavedra, M. Martínez-Corral, and B. Javidi, "Extended Depth-of-Field 3-D Display and Visualization by Combination of Amplitude-Modulated Microlenses and Deconvolution Tools," IEEE J. Display Technol. 1, 321-327 (2005). [CrossRef]
- T. Okoshi, Three-Dimensional Imaging Techniques (Academic, 1976).
- M. G. Lippmann, "La photographie intégrale," Comptes-rendus de l'Académie des Sciences 146, 446-451 (1908).
- A. P. Sokolov, Autostereoscopy and Integral Photography by Professor Lippmann's Method (Moscow State Univ. Press, Moscow, 1911).
- H. E. Ives, "Optical properties of a Lippmann lenticuled sheet," J. Opt. Soc. Am. 21, 171-176 (1931). [CrossRef]
- K. Perlin, S. Paxia, and J. S. Kollin, "An autostereoscopic display," in Proceedings of the 27th Ann. Conf. on Computer Graphics and Interactive Techniques (ACM Press/Addison-Wesley, 2000), pp.319-326.
- M. Levoy and P. Hanrahan, "Light field rendering," in Proc. of ACM SIGGRAPH (New Orleans, 1996), pp. 31-42.
- M. Levoy, "Light fields and computational imaging," IEEE Computer 39, 46-55 (2006). [CrossRef]
- J.-Y. Son, V. V. Saveljev, Y.-J. Choi, J.-E. Bahn, S.-K. Kim, and H. Choi, "Parameters for designing autostereoscopic imaging systems based on lenticular, parallax barrier, and integral photography plates," Opt. Eng. 42, 3326-3333 (2003). [CrossRef]
- A. Stern and B. Javidi, "Three-dimensional image sensing and reconstruction with time-division multiplexed computational integral imaging," Appl. Opt. 42, 7036-7042 (2003). [CrossRef] [PubMed]
- S. -H. Hong, J. -S. Jang, and B. Javidi, "Three-dimensional volumetric object reconstruction using computational integral imaging," Opt. Express 12, 483-491 (2004). [CrossRef] [PubMed]
- S. -H. Hong and B. Javidi, "Improved resolution 3D object reconstruction using computational integral imaging with time multiplexing," Opt. Express 12, 4579-4588 (2004). [CrossRef] [PubMed]
- D. -H. Shin and H. Yoo, "Image quality enhancement in 3D computational integral imaging by use of interpolation methods," Opt. Express 15, 12039-12049 (2007). [CrossRef] [PubMed]
- H. Yoo and D.-H. Shin "Improved analysis on the signal property of computational integral imaging system," Opt. Express 15, 14107-14114 (2007). [CrossRef] [PubMed]
- A. Castro, Y. Frauel, and B. Javidi, "Integral imaging with large depth of field using an asymmetric phase mask," Opt. Express 15, 10266-10273 (2007). [CrossRef] [PubMed]
- R. Martínez-Cuenca, G. Saavedra, M. Martínez-Corral, and B. Javidi, "Enhanced depth of field integral imaging with sensor resolution constraints," Opt. Express 12, 5237-5242 (2004). [CrossRef] [PubMed]
- J. Arai, F. Okano, H. Hoshino, and I. Yuyama, "Gradient-index lens-array method based on real-time integral photography for three-dimensional images," Appl. Opt. 37, 2034-2045 (1998). [CrossRef]
- J. S. Jang and B. Javidi, "Improved viewing resolution of 3-D integral imaging with nonstationary micro-optics," Opt. Lett. 27, 324-326 (2002). [CrossRef]
- J. -S. Jang and B. Javidi, "Three-dimensional integral imaging with electronically synthesized lenslet arrays," Opt. Lett. 27, 1767-1769 (2002). [CrossRef]
- J. -S. Jang and B. Javidi, "Three-dimensional synthetic aperture integral imaging," Opt. Lett. 27, 1144-1146 (2002). [CrossRef]
- Y. Hwang, S. Hong, and B. Javidi, "Free View 3-D Visualization of Occluded Objects by Using Computational Synthetic Aperture Integral Imaging," IEEE J. Display Technol. 3, 64-70 (2007). [CrossRef]
- B. Tavakoli, M. DaneshPanah, B. Javidi, and E. Watson, "Performance of 3D integral imaging with position uncertainty," Opt. Express 15, 11889-11902 (2007). [CrossRef] [PubMed]
- R. Zaharia, A. Aggoun, and M. McCormick, "Adaptive 3D-DCT compression algorithm for continuous parallax 3-D integral imaging," Signal Process. Image Commun. 17, 231-242 (2002). [CrossRef]
- S. Yeom, A. Stern, and B. Javidi, "Compression of color integral images using MPEG 2," Opt. Express 12, 1632-1642 (2004). [CrossRef] [PubMed]
- R. Ng, "Fourier slice photography," in Proceedings of ACM SIGGRAPH 24, 735-744 (2005).
