Advances in three-dimensional integral imaging: sensing, display, and applications [Invited]

Xiao Xiao, Bahram Javidi, Manuel Martinez-Corral, and Adrian Stern


Applied Optics, Vol. 52, Issue 4, pp. 546-560 (2013)
http://dx.doi.org/10.1364/AO.52.000546


Abstract

Three-dimensional (3D) sensing and imaging technologies have been extensively researched for many applications in the fields of entertainment, medicine, robotics, manufacturing, industrial inspection, security, surveillance, and defense due to their diverse and significant benefits. Integral imaging is a passive multiperspective imaging technique that records multiple two-dimensional images of a scene from different perspectives. Unlike holography, it can capture scenes, such as outdoor events, under incoherent or ambient light. Integral imaging can display a true 3D color image with full parallax and continuous viewing angles using incoherent light; thus it does not suffer from speckle degradation. Because of these unique properties, integral imaging has been revived over the past decade or so as a promising approach for large-scale 3D commercialization. A series of key articles on this topic have appeared in OSA journals, including Applied Optics. Thus, it is fitting that this Commemorative Review presents an overview of the literature on the physical principles and applications of integral imaging. Several data capture configurations, reconstruction methods, and display methods are overviewed. In addition, applications including 3D underwater imaging, 3D imaging in photon-starved environments, 3D tracking of occluded objects, 3D optical microscopy, and 3D polarimetric imaging are reviewed.

© 2013 Optical Society of America

1. Introduction

In 1908, Lippmann proposed a novel technique, named integral photography (IP), which can reconstruct true 3D images that can be observed with full parallax and quasi-continuous viewing angles. This technique, which is based on the reversibility principle of light rays, produces autostereoscopic images. Thus, no special viewing devices are required to perceive 3D images. Besides early follow-ups on Lippmann's IP [1–5], there were no substantial activities in this field for much of the 20th century. This was mainly due to the unavailability of mature technologies for cost-effective devices such as high-resolution image sensors, displays, and microlens arrays (MLAs).

However, thanks to advances in optoelectronic sensors such as CMOS and CCD devices, display devices such as LCDs, and commercially available digital computers, the principles of IP have recently been resurrected and developed further. In its present form, integral imaging belongs to a broader class of multiview imaging systems and has become a promising approach for 3D sensing and imaging. Integral imaging has been extensively researched for 3D sensing, capture, and visualization of objects utilizing state-of-the-art optical and digital devices and various imaging techniques [6]. Numerous research results have been achieved, including 3D display and television [7–12], automatic target recognition [13–16], target ranging [17,18], 3D photon counting imaging [19–24], 3D imaging of objects that are partially occluded or are in scattering media [25,26], 3D underwater imaging [16,27], medical imaging [28–30], and others [31–33], to name a few.

In this paper, we present an overview of the literature on recent developments and applications in integral imaging. The history and principles of integral imaging are reviewed in Section 2. Several different sensor configurations for 3D sensing are discussed in Section 3. Section 4 covers 3D displays, or optical reconstruction, and several computational 3D reconstruction methodologies. Some applications of 3D sensing and display technologies are presented in Section 5.

As is the case with any technical overview paper of this type, it is not possible to present exhaustive coverage of the field. Therefore, we may have inadvertently overlooked some relevant work, for which we apologize in advance. A number of references [1–90] are provided to aid the readers with various aspects of this technology.

2. History of Development and Principles of Integral Imaging

A typical 3D imaging process includes a series of stages such as image capture, digital processing, and finally display. Wheatstone introduced the first stereoscope about 170 years ago [34]. Since then, many 3D techniques have been proposed; however, none has clearly demonstrated superiority over the others for mass commercialization. The human visual system perceives the 3D information of a scene from what could be called depth cues. Among them, binocular disparity, or stereopsis, is the most decisive. Stereopsis arises from the parallax between the retinal images, which are formed from slightly different perspectives in the two eyes.

Stereoscopic 3D display technology is based on the use of special glasses that induce binocular disparity by providing a different image to each eye of the observer. The first proposal in this sense dates from 1853, when Rollmann proposed the use of anaglyph glasses [35]. In this approach, the stereoscopic information is encoded in two complementary colors. The method is still widely used due to its simplicity and low price. However, it reproduces colors poorly and is very sensitive to chromatic anomalies of the observer. A more recent approach uses temporal multiplexing of left and right images with liquid-crystal shutter glasses [36,37]. Other stereoscopic techniques are based on polarization to induce binocular disparity. In this case, the left and right images are emitted with orthogonal polarizations [38].

These techniques have the disadvantage of requiring special viewing glasses to observe the images, which causes discomfort over long observation periods. Thus, there is motivation for the development of autostereoscopic techniques, which do not require special glasses. Among the techniques that provide autostereoscopic images, the most fascinating and elegant is holography. Holographic technology offers continuous parallax in all directions. However, holography requires coherent illumination and dark conditions during data recording, and the images carry a very large volume of information. Thus, holographic techniques may not be ready for commercial 3D displays in the near future [39–41]. Another alternative is the volumetric display [42]; however, this technology is still at a very early stage of development.

The most developed autostereoscopic techniques so far are those in which the monitor itself implements the function of sending the corresponding images to the left and right eyes. Among these techniques, the best known is the use of a lenticular sheet [43], an array of flat cylindrical lenses imprinted on a transparent panel to project a stereoscopic image. The major disadvantage of this technique is that it provides only binocular parallax, for a single position of the observer and a unique perspective. Extensions of this principle to multiple views have also been developed to allow horizontal parallax; multiview autostereoscopic techniques typically use five to nine views. Several techniques have been developed to avoid the lateral resolution loss caused by generating multiple views, such as slanting the display or exploiting the RGB subpixel structure of the monitor [44]. Another alternative approach is the use of parallax barriers, which are arrays of vertical slits that allow the left eye to see a different view than the right eye [45,46]. A major disadvantage of parallax barriers is their optical loss.

In any case, these autostereoscopic techniques, as well as the stereoscopic ones described above, share an essential problem that may prevent their use in applications requiring prolonged observation: the conflict between eye accommodation and convergence of the visual axes. During the observation of a stereoscopic monitor, the eyes relax their accommodation to focus on the screen. The projected stereoscopic images then produce disparity in the retinal images, and this disparity stimulates convergence movements of the visual axes to allow fusion and depth perception. Such convergence normally triggers an accommodation effort to focus on the closer, perceived scene. However, accommodation must not change, because the eyes have to remain focused on the screen to perceive sharp images. This discrepancy forces the visual system into an ongoing, unnatural effort. As a result, viewers experience visual fatigue and, sometimes, strong discomfort [47,48].

A very interesting alternative to these techniques is IP, proposed by Lippmann in 1908 [49]. Basically, Lippmann's idea was to record many 2D images of a scene from different perspectives. This can be done on a macroscopic scale using an array of cameras or, on a smaller scale, by inserting an MLA in front of the optical sensor (see Fig. 1).

Fig. 1. Image capture stage in IP.

In Fig. 1, we see that the array of microlenses permits the capture of the 3D scene from many different perspectives. The individual images are usually called elemental images (EIs), and the matrix of all the EIs is called the integral image of the 3D scene. Note that, to avoid overlap between neighboring EIs, it is necessary to insert barriers, physically or optically, between them. Because of the imaging capability of the microlenses, only one plane of the 3D scene, the conjugate plane, produces sharp images on the sensor. Other planes, known as out-of-focus planes, produce blurred images on the sensor. However, the microlenses usually have low numerical aperture and produce only slightly blurred images over the region of interest; this blurring is negligible compared with the pixel size of the sensor. Thus, in the following sections, we consider that all parts of the 3D scene are captured in focus. In Fig. 1, we have marked in grey the field of view (FOV) of a microlens, which is defined by the angle subtended by the EI from the center of the corresponding microlens. Note that each microlens captures a frontal view of the 3D scene. To capture sufficient 3D information, each part of the scene must be captured by multiple EIs.

As we will present in this paper, this 3D recorded information can be processed in different ways to be used in a variety of applications.

Now we concentrate on the original application presented by Lippmann. The idea was to project the integral image onto a 2D display placed in front of an MLA; the display stage is thus the reverse of the pickup stage. The microlenses used in the display can be similar to the ones used in the capture stage or can be scaled proportionally. As we can see in Fig. 2, the different perspectives are integrated into a 3D image. Integral imaging is essentially different from stereoscopy. In stereoscopy, two different 2D images are projected to the right and left eyes, respectively, and the brain fuses them to perceive depth. In integral imaging, the microlenses produce differences in the light density in the space in front of the observer, so there is a real reconstruction of the light structure produced by the original 3D scene. Figure 2 shows a very simple example: the reconstruction of a point source. The ray bundles produced by the pixels of the display intersect in the same region of the reconstruction space. After the intersection, the bundles continue to propagate toward the observer. What the observer receives is then a diverging ray beam equivalent to the one produced by a real point source on the object. Thus, the visual system perceives this virtual point source as a real image. In this case, there is agreement between accommodation and convergence: both the eye axes and the accommodation adjust to the position of the virtual point object. The scene is perceived as 3D by the observer regardless of the observation position and without eye strain. Although this concept is a century old, it has been widely researched only recently, thanks to technological advances in MLAs, imagers, displays, numerical data processing, and communication systems [50].

Fig. 2. Display stage in integral imaging.

Over the past decade, there has been much effort to develop this technology, seeking to improve its performance in terms of resolution, viewing angle, and continuity of perspectives, and to extend its applications, such as computational reconstruction of 3D scenes, 3D object recognition, and 3D imaging at very low levels of illumination.

3. Sensing Stage of Integral Imaging

A. Direct Image Capture

The original Lippmann concept is based on the capture of many perspectives of a 3D scene by means of a lens array. As shown in Fig. 1, the image sensor is placed behind the lenslet array. The proper selection of the capture parameters strongly depends on the application. For example, when the aim is to record EIs intended to be displayed on an integral imaging monitor, one has to take into account that the microlens pitch is the display resolution unit (DRU) in integral imaging displays [62]. Thus, for this kind of application, a large number of EIs with a moderate number of pixels is required.

If the 3D scene is far from the camera, the angular extent of the array of lenses, as seen from the center of the scene, is small. In this case, a camera lens, also known as a depth-control lens [63,64], is necessary to image the reference plane of the far 3D scene onto the MLA. In that case, some parts of the 3D scene are imaged in front of the MLA, and other parts are imaged behind it. Since this capture modality, shown in Fig. 3, is different from the one described in Section 2, we denote it as far-field integral imaging (FInI). In the computer-graphics community, FInI cameras involve the same optical principles as plenoptic cameras [65].

Fig. 3. Capture setup of FInI.

Note that using the camera lens has the effect of transposing the resolution constraints [66]. Thus, in FInI the MLA pitch determines the spatial resolution of reconstructed sections of the 3D scene, while the angular resolution, or segmentation capacity of the 3D reconstruction, is restricted by the number of pixels per EI. To guarantee good spatial resolution in the reconstructed sections of the 3D scene, a large number of small microlenses is required. From the captured EIs, one can calculate the so-called subimages, or view images, by extracting and composing the pixels at the same local position in every EI [67]. As we can see in Fig. 4, all the pixels of a subimage (for example, the red pixels in the figure) receive only the light proceeding from the 3D scene that passes through a specific subaperture of the camera lens. Each subimage therefore sees a different perspective of the scene and has a high depth of field, corresponding to images obtained through smaller subapertures. The number of pixels of each subimage is equal to the number of microlenses in the array.
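As a concrete illustration of the subimage (view image) computation just described, the following Python sketch rearranges a lenslet-style integral image into its view images by collecting the pixel at the same local position under every microlens. The array shapes and variable names are illustrative assumptions, not taken from the paper.

import numpy as np

def extract_subimages(integral_image, num_lenses_y, num_lenses_x):
    """Rearrange an integral image into subimages (view images).

    integral_image : 2D array of shape (num_lenses_y * py, num_lenses_x * px),
                     where (py, px) is the pixel count under each microlens.
    Returns an array of shape (py, px, num_lenses_y, num_lenses_x): each
    slice result[u, v] is one view image, built from the pixel at local
    position (u, v) of every EI.
    """
    H, W = integral_image.shape
    py, px = H // num_lenses_y, W // num_lenses_x
    # Split into (lens row, local row, lens col, local col) and reorder so the
    # local pixel indices come first.
    eis = integral_image.reshape(num_lenses_y, py, num_lenses_x, px)
    return eis.transpose(1, 3, 0, 2)

# Example: a synthetic 60x60 integral image with 20x20 microlenses (3x3 pixels each).
ii = np.random.rand(60, 60)
views = extract_subimages(ii, 20, 20)
print(views.shape)  # (3, 3, 20, 20): nine view images of 20x20 pixels each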

Fig. 4. FInI scene capture. (a) Subimages are synthesized from the pixels of each EI. (b) Subimages are equivalent to the EIs that could be obtained by one integral imaging sensor.

This direct pickup procedure is very useful because it allows the capture of the EIs with only one sensor and one snapshot. The obtained parallax is determined by the angle subtended by the camera lens as seen from the center of the scene. The integral images captured by this procedure can be very useful for the depth reconstruction of far scenes with good optical segmentation capacity. Also, since FInI cameras record an image of the scene that lies in the close neighborhood of the MLA, the acquired EIs are ready for direct display on an integral imaging monitor.

B. Synthetic Aperture Integral Imaging

In lenslet-based integral imaging systems, the achievable resolution is limited by the size of the lenslets and the number of pixels allocated to each lenslet. In essence, the resolution of each EI is limited by three parameters: the pixel size, the lenslet point spread function, and the lenslet depth of focus [8,19]. In addition, aberrations and diffraction are significant because the lenslets are relatively small.

In contrast to lenslet-based systems, integral imaging can be performed either in a synthetic aperture mode or with an array of high-resolution imaging sensors. Each perspective image can be recorded by a full-size CCD or CMOS sensor of several megapixels [66,68]. Moreover, instead of a sensor array, a single sensor can be translated in a 2D plane to capture multiple 2D images. This approach may be considered synthetic aperture integral imaging (SAII). SAII provides a larger FOV and high-resolution 2D images because each 2D image makes full use of the detector array and the optical aperture. In addition, SAII potentially creates pickup apertures that are much larger than what is practical with conventional lenslet-array-based integral imaging. Larger pickup apertures are important to obtain the required range resolution at longer distances. It should be noted that this scanning approach may not be suitable for dynamic scenes. Figure 5 illustrates the pickup stage using a sensor array.

Fig. 5. Pickup stage of integral imaging using a camera array.

SAII also allows for the entire integral imaging system to move. For example, a small array of cameras or a camera with a single lenslet array can be moved to increase parallax and improve both range and FOV [68].

C. Randomly Distributed Sensing

Integral imaging has typically been investigated under the assumption that EIs are captured on a known, equally spaced, planar, regular grid of lenslets or lenticular elements. In the synthetic aperture regime, however, maintaining a regular pickup pattern might not be feasible for certain applications such as aerial 3D imaging. In the process of collecting imagery across a large aperture, the positions of the sensors can hardly be restricted to what is prescribed by conventional pickup strategies, i.e., a regular, planar, rectangular grid. In [69], a generalized framework for 3D integral imaging with arbitrary 3D pickup geometry has been presented: a finite number of sensors with known coordinates are randomly distributed in 3D space. In addition, it is assumed that all the sensors have parallel optical axes and no rotation with respect to each other. Figure 6 illustrates an integral imaging system with randomly distributed sensors. The reference EI (E0) and the kth EI (Ek) are shown with their respective FOVs in blue and green. The pickup locations of all the sensors are measured in a universal frame of reference, Φ:(X,Y,Z); a local coordinate system, Ψ:(u,v,w), is defined for each sensor with its origin at the center of the sensor. To reconstruct 3D images, a computational reconstruction framework based on the backprojection method has been developed using a variable affine transform between the image space and the object space. More details are given in Section 4.

Fig. 6. Illustration of an integral imaging system with randomly distributed sensors.

D. Axially Distributed Sensing

3D imaging with axially distributed sensing (ADS) is another multiperspective 3D imaging architecture. In an ADS system, either a single image sensor is translated along its optical axis or the objects move parallel to the optical axis of the sensor. In the ADS architecture [70], nonuniform longitudinal perspective information is recorded across the FOV of the sensor: no parallax is available for points residing on the optical axis, while parallax increases as objects approach the periphery of the FOV. A diagram of the ADS architecture is shown in Fig. 7. For simplicity, a pinhole approximation is considered for each pickup position, with the distance between the pinhole and the sensor defined as g. We assume that the total number of EIs is K. The ability of the ADS architecture to collect 3D information can be described by the total angle subtended by the nearest and the farthest pickup positions. Consider an object point located at a longitudinal distance z_0 from the nearest pickup position and a radial distance r from the optical axis (see Fig. 7). The total angle subtended by the sensors with respect to the object point can be written as
\Omega = \tan^{-1}\left( \frac{r s}{r^{2} + z_{0}^{2} + z_{0} s} \right),   (1)
where s denotes the distance between the nearest and the farthest pickup positions along the optical axis. Note the special case r = 0, where no parallax is available; hence no 3D information is obtained for on-axis object points. The total angle increases with increasing r, which implies a greater capacity for 3D information collection when objects are farther from the optical axis (large r). The collected imagery can be reconstructed using a modified backprojection technique that takes into account the varying magnification ratio of each intermediate 2D image (see Section 4 for details).
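As a quick numerical check of Eq. (1), the short Python snippet below evaluates the subtended angle for an illustrative geometry (the numbers are arbitrary assumptions, not values from the paper) and verifies the equivalent geometric form Ω = tan⁻¹(r/z_0) − tan⁻¹(r/(z_0+s)).

import numpy as np

def subtended_angle(r, z0, s):
    """Total angle (rad) subtended by the nearest and farthest pickup
    positions, as in Eq. (1)."""
    return np.arctan(r * s / (r**2 + z0**2 + z0 * s))

def subtended_angle_geometric(r, z0, s):
    # The same quantity written directly from the pickup geometry.
    return np.arctan(r / z0) - np.arctan(r / (z0 + s))

# Illustrative values (meters): object 2 m away, 0.3 m off axis, 0.5 m scan range.
r, z0, s = 0.3, 2.0, 0.5
print(np.degrees(subtended_angle(r, z0, s)))            # ~1.7 degrees
print(np.degrees(subtended_angle_geometric(r, z0, s)))  # matches Eq. (1)
print(subtended_angle(0.0, z0, s))                      # 0.0: no parallax on axis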

Fig. 7. Optical pickup stage for the ADS architecture.

E. Unknown Sensor Position Estimation

Prior knowledge of the sensor positions in the pickup stage is required for conventional integral imaging 3D reconstruction using backprojection algorithms. In certain pickup geometries, it may be difficult to obtain an accurate measurement of the sensor positions, such as for sensors on moving platforms and/or randomly distributed sensors. In [71], a multisensor position estimator combined with the 3D reconstruction method of integral imaging has been presented to extend its applications to scenarios where sensor positions are not available and/or cannot be measured accurately. This method assumes that the relative position of two sensors is known, whereas all other sensor positions are unknown. In addition, all the sensors are located within the same plane with parallel optical axes. The method combines image correspondence extraction and matching, the pinhole perspective model, two-view geometry, and computational integral imaging 3D reconstruction to overcome this limitation of conventional integral imaging systems. The steps to estimate the sensor positions are briefly as follows.
  • Find image correspondences (matching points) in the image sequence. For example, for an object point M_i, its image correspondences are m_1^i, m_2^i, ..., m_k^i, respectively (see Fig. 8).
  • Calculate the 3D coordinates of the object points (M_i = [X_i, Y_i, Z_i]) by using their corresponding image points (m_1^i and m_2^i) and the first two known sensor positions.
  • Estimate the remaining sensor positions by using the calculated 3D object points (M_i) and their corresponding image points (m_k^i = [u_k^i, v_k^i]).
The expression to estimate the kth sensor position, (S_x^k, S_y^k), can be written as
S_x^k = \frac{Z_i (u_k^i - a_x) - f X_i}{f}, \qquad S_y^k = \frac{Z_i (v_k^i - a_y) - f Y_i}{f},   (2)
where f is the focal length and (a_x, a_y) are the coordinates of the principal point on the image.
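The sketch below illustrates the flavor of this estimation under a simple pinhole-camera assumption: two calibrated views triangulate the object points, and each remaining sensor position is then solved from the pinhole projection of those points. It is a minimal, hypothetical illustration; the sign conventions, noise handling, and robust matching of [71] are omitted.

import numpy as np

# Simple pinhole model (an assumed convention, not necessarily that of [71]):
# u = a_x + f*(X - Sx)/Z,  v = a_y + f*(Y - Sy)/Z, all sensors in the plane Z = 0.
f, ax, ay = 50.0, 0.0, 0.0

def project(points, sensor):
    X, Y, Z = points.T
    sx, sy = sensor
    return np.stack([ax + f * (X - sx) / Z, ay + f * (Y - sy) / Z], axis=1)

def triangulate(m1, m2, baseline):
    """Recover 3D points from two views with known sensors (0,0) and (baseline,0)."""
    Z = f * baseline / (m1[:, 0] - m2[:, 0])      # disparity along x gives depth
    X = Z * (m1[:, 0] - ax) / f
    Y = Z * (m1[:, 1] - ay) / f
    return np.stack([X, Y, Z], axis=1)

def estimate_sensor(points, mk):
    """Estimate an unknown sensor position from known 3D points and their images."""
    X, Y, Z = points.T
    sx = np.mean(X - Z * (mk[:, 0] - ax) / f)     # average over all correspondences
    sy = np.mean(Y - Z * (mk[:, 1] - ay) / f)
    return sx, sy

# Synthetic scene and sensors (arbitrary illustrative numbers).
pts = np.array([[10.0, 5.0, 400.0], [-20.0, 8.0, 500.0], [15.0, -12.0, 450.0]])
s1, s2, s_unknown = (0.0, 0.0), (30.0, 0.0), (12.0, -7.0)

m1, m2, mk = project(pts, s1), project(pts, s2), project(pts, s_unknown)
recovered = triangulate(m1, m2, baseline=30.0)
print(estimate_sensor(recovered, mk))   # ~ (12.0, -7.0)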

Fig. 8. Illustration of estimating unknown sensor positions.

Experimental results [71] also demonstrate that this sensor position estimation method may be used to improve the reconstruction quality in situations where the recorded sensor positions are subject to measurement errors.

F. Computer-Synthesized EIs

Instead of using a lenslet array or multiple image sensors in the pickup stage, a composite of 2D images of the same scene from different perspectives can be generated by computer graphics. In image rendering techniques, multiview images are generated by computing the fixed mapping between the object points and their corresponding image points. The key problem is how to efficiently render a 3D scene from multiple perspectives. The typical and simple method to render a single image is ray tracing [72]. However, single-view rendering is not efficient for large numbers of 2D images and complex 3D scenes. Methods to render multiple images simultaneously have been studied in [73,74]. The parallel group rendering technique renders multiple image points located in a certain direction at the same time, which brings large savings in rendering time in scenarios where the total number of images is a few orders of magnitude higher than the resolution of each image. The multiple viewpoint rendering technique can render images for a row of cameras simultaneously, which is better suited to generating high-resolution, dense light fields on a regular grid. Note that computer-generated integral imaging requires prior knowledge of the 3D model of the scene.
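As a minimal illustration of computer-synthesized EIs, the sketch below projects a 3D point cloud through a virtual pinhole array onto a pixel grid; a real renderer would of course handle surfaces, occlusion, and shading. All array sizes and geometry parameters are arbitrary assumptions.

import numpy as np

# Virtual pinhole-array pickup: pitch p, gap g between pinholes and sensor,
# num x num elemental images of ei_px x ei_px pixels each (illustrative values).
p, g, num, ei_px = 1.0, 3.0, 8, 32
pixel_size = p / ei_px

def synthesize_eis(points):
    """Render binary EIs of a 3D point cloud (points: N x 3, with z > 0)."""
    eis = np.zeros((num, num, ei_px, ei_px))
    centers = (np.arange(num) - (num - 1) / 2.0) * p   # pinhole centers on a grid
    for k, cy in enumerate(centers):
        for l, cx in enumerate(centers):
            for x, y, z in points:
                # Central projection through pinhole (cx, cy) onto the sensor plane.
                u = cx - g * (x - cx) / z
                v = cy - g * (y - cy) / z
                col = int(round((u - cx) / pixel_size + ei_px / 2))
                row = int(round((v - cy) / pixel_size + ei_px / 2))
                if 0 <= row < ei_px and 0 <= col < ei_px:
                    eis[k, l, row, col] = 1.0
    return eis

pts = np.array([[0.0, 0.0, 20.0], [1.5, -1.0, 30.0]])
eis = synthesize_eis(pts)
print(eis.shape, eis.sum())   # (8, 8, 32, 32) and the number of rendered point hits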

4. 3D Visualization of Integral Imaging

A. Optical Display

As stated in Section 2, one of the important applications of integral imaging technology is the implementation of screens for the 3D display of images. An integral imaging monitor potentially provides the observer with a light distribution that reconstructs the original scene. The viewer can perceive this reconstruction as 3D, independent of his or her position relative to the screen.

The second problem that has to be overcome is the angular resolution, which is closely linked to the lateral resolution of the sensor. The number of pixels under each microlens determines the angular resolution limit: the higher the number of pixels, the better the angular resolution. This is currently the resolution bottleneck of integral imaging monitors. Although a large number of pixels per EI would be desirable, in practice a density of about 16×16 pixels per microlens would typically provide a smooth transition between different views. A good integral imaging monitor would therefore require microlenses with a pitch of about p = 250 μm and a pixel width of w ≈ 16 μm. Current realizations of integral imaging displays (w = 79 μm in [76] and w = 75 μm in [12]) are still far from these numbers. It is worth noting that at present there is intense competition among smartphone manufacturers to produce screens with very high resolution [90]. It is expected that this competition will result in the mass production of ultrahigh-resolution displays, which will be very useful for the implementation of integral imaging monitors.

The third problem is the pseudoscopic, or depth-reversed, nature of optically reconstructed images when the EIs are displayed without any preprocessing. Okano et al. proposed a simple solution to this problem [50]: the EIs are captured with the standard pickup architecture, and then each EI is rotated by 180° around its center. Taking into account the pixelated structure of the EIs, this operation simply implies a local pixel mapping. As shown in Fig. 9, when these rotated EIs are displayed at a distance g_v = g - 2f^2/(d - f), a virtual, undistorted, orthoscopic image is obtained at a distance d_v = d - f from the lenslet array. Note that, to obtain a reconstructed virtual image, two conditions are required. First, each microlens must produce a virtual image of its corresponding EI in the reference plane. Second, the set of ray cones corresponding to the same point of the object must intersect at the same virtual point of the reconstructed image, as shown in Fig. 9. Although in this scheme there is no degradation of the image due to the introduction of additional elements or stages, it still has the drawback that the reconstructed images are virtual.
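The pixel mapping behind this pseudoscopic-to-orthoscopic correction is simply a 180° rotation of every EI about its own center. A minimal NumPy sketch (the array dimensions are assumed for illustration) is:

import numpy as np

def rotate_elemental_images(integral_image, ei_h, ei_w):
    """Rotate every elemental image by 180 degrees about its own center.

    integral_image : 2D array whose height and width are multiples of
                     the elemental-image size (ei_h, ei_w).
    """
    H, W = integral_image.shape
    # View as (EI row, pixel row, EI col, pixel col), flip both local axes,
    # then flatten back to the original layout.
    eis = integral_image.reshape(H // ei_h, ei_h, W // ei_w, ei_w)
    return eis[:, ::-1, :, ::-1].reshape(H, W)

# Example: 4 x 4 elemental images of 16 x 16 pixels each.
ii = np.random.rand(64, 64)
corrected = rotate_elemental_images(ii, 16, 16)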

Fig. 9. Schematic drawing of the orthoscopic, virtual reconstruction.

More recently, a more flexible digital method has been reported that allows the calculation of a new set of synthetic EIs for a display configuration that can be essentially different from the one used in the capture. The algorithm is called the smart pseudoscopic-to-orthoscopic conversion (SPOC) algorithm [58]. It allows the calculation of EIs ready to be displayed on an integral imaging monitor in which the pitch, the microlens focal length, the number of pixels per EI, the depth position of the reference plane, and even the grid geometry of the MLA can be selected to fit the conditions of the display architecture. Figure 10 demonstrates the utility of the algorithm.

Fig. 10. (a) Collection of EIs obtained with a conventional integral imaging pickup system, (b) synthetic EIs calculated with SPOC, and (c) reconstruction of the orthoscopic, floating 3D image through an MP4 player.

The last important problem to be faced in the implementation of integral imaging monitors is the limited viewing angle. The viewing angle of these devices is determined by the angle subtended by an EI from the center of the corresponding microlens (Fig. 1). Naturally, the first research effort must be directed toward the production of microlenses with high numerical aperture and free from aberrations. One optical solution to this problem comes from the use of a telecentric relay for the parallel acquisition and display of three sets of EIs [77]. Although elegant, this method cannot be used when one aims to build flat integral imaging monitors. A different method was suggested by Choi et al. [78]. The idea is to capture three sets of EIs and display them simultaneously, with three different barrier arrangements, so that the viewing angle is expanded by a factor of three. To do this, Choi et al. suggested tilting the barrier fast enough to induce an afterimage effect and synchronizing the display of the corresponding EIs. Another clever method was recently proposed in [79]. In this paper the authors propose the use of an MLA arranged in a hexagonal grid both in the capture and in the display. By capturing in the FInI configuration, or by proper application of the SPOC algorithm, it is possible to compose a set of EIs much wider in the horizontal direction than in the vertical one. Then, by tilting the MLA in the way shown in Fig. 11, it is possible to enlarge the horizontal viewing angle by a factor of 1.75, at the cost of reducing the vertical viewing angle by a factor of 2. This reduction has little relevance, since our visual system demands horizontal parallax much more than vertical parallax.

Fig. 11. Method for the enlargement of the horizontal viewing angle in integral imaging monitors.

B. Computational Volumetric Reconstruction

1. Computational Reconstruction in Conventional Integral Imaging

3D reconstruction of images can be achieved computationally by simulating the optical backprojection of the multiple 2D images in a computer [25,66,80–82]. There are numerous activities in this domain in the computer science community (e.g., [66,82,89]), and there are numerous algorithms to accomplish this task, including restoring occluded objects [82]. One reconstruction method uses a computer-synthesized virtual pinhole array to inversely map the EIs into the object space, as illustrated in Fig. 12. In this method, each EI is projected onto the desired reconstruction plane, where it overlaps with all other backprojected EIs. With this process, the volumetric 3D information of a scene can be represented by multiple plane-by-plane images.

Fig. 12. Illustration of computational reconstruction method in integral imaging.

For simplicity, we assume that the number of pixels of the reconstructed 3D image is the same as that of each EI. The 3D reconstructed image consists of the average of the superimposed pixels from all the EIs. The computational reconstruction algorithm can be written as follows:
R(x,y,z) = \frac{1}{O(x,y)} \sum_{k=0}^{K-1} \sum_{l=0}^{L-1} E_{kl}\!\left( x - k \frac{N_x \times p}{c_x \times M},\; y - l \frac{N_y \times p}{c_y \times M} \right),   (3)
where R(x,y,z) represents the intensity of the reconstructed 3D image at depth z; x and y are the pixel indices; E_{kl} represents the intensity of the EI in the kth column and lth row; N_x and N_y are the total numbers of pixels of each EI; M is the magnification factor, equal to z/g; g is the focal length; p is the pitch between image sensors; c_x and c_y are the dimensions of the image sensor; and O(x,y) is the overlapping number matrix.
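A compact NumPy sketch of this shift-and-average backprojection is given below. The parameter names follow Eq. (3); the integer rounding of the shifts and the zero-padding policy are implementation assumptions.

import numpy as np

def reconstruct_plane(eis, z, g, p, cx, cy):
    """Shift-and-average backprojection of Eq. (3) at depth z.

    eis : array of shape (K, L, Ny, Nx) holding the elemental images
          (k = column index, l = row index, as in Eq. (3)).
    g   : gap between lens and sensor; p : pitch between image sensors;
    cx, cy : physical sensor dimensions (same length units as z, g, p).
    """
    K, L, Ny, Nx = eis.shape
    M = z / g                               # magnification factor in Eq. (3)
    shift_x = Nx * p / (cx * M)             # pixel shift per column index k
    shift_y = Ny * p / (cy * M)             # pixel shift per row index l
    acc = np.zeros((Ny, Nx))
    overlap = np.zeros((Ny, Nx))
    ones = np.ones((Ny, Nx))
    for k in range(K):
        for l in range(L):
            dx = int(round(k * shift_x))    # integer shifts: an implementation choice
            dy = int(round(l * shift_y))
            if dy >= Ny or dx >= Nx:
                continue                    # this EI falls outside the output window
            for img, target in ((eis[k, l], acc), (ones, overlap)):
                shifted = np.zeros((Ny, Nx))
                shifted[dy:, dx:] = img[:Ny - dy, :Nx - dx]
                target += shifted
    return acc / np.maximum(overlap, 1)     # divide by the overlap matrix O(x, y)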

Compared with 2D reconstruction, 3D reconstruction may mitigate the effect of occlusion because, when the reconstruction plane is located at the range of the object of interest, only that object is in focus, while the occlusion and background are out of focus. Figure 13 shows 3D experimental results using the computational reconstruction method in an outdoor environment. A subset of the EIs is shown in Fig. 13(a); a total of 37 EIs are used in the 3D reconstruction. In the 3D scene, the fronts of the red and gray cars are located approximately 4.5 and 5.3 m away from the pickup plane, respectively. The red car is heavily occluded by the trees. The 3D reconstructed images at z = 4.5 m and z = 5.3 m and image details are given in Fig. 13(b), where the profile of the red car is recognizable despite the fact that it is occluded by the trees in every EI.

Fig. 13. Computational reconstruction results using integral imaging data capture. (a) Three examples of EIs and (b) reconstructed 3D images at z = 4.5 m and z = 5.3 m.

2. Computational Reconstruction in an ADS System

In an ADS system, an image sensor moves parallel to its optical axis to record longitudinal perspective information of a 3D scene (see Fig. 7). The reconstructed images are obtained by superposing the magnified EIs. Assume that the total number of EIs is K. The reconstructed image R(x,y,z) at distance z from the nearest pickup position is given as follows [70]:
R(x,y,z) = \frac{1}{O(x,y)} \sum_{k=0}^{K-1} E_k\!\left( \frac{x}{M_k}, \frac{y}{M_k} \right), \qquad M_k = \frac{z_k}{z_0},   (4)
where z_0 is the distance between the nearest pickup position and the reconstruction plane (z) and z_k is the distance between the kth pickup position and the reconstruction plane. Compared to the computational reconstruction method in conventional integral imaging [see Eq. (3)], the magnification changes with the pickup position, and the overlapped EIs have no lateral shift with respect to each other.
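In the same spirit as the sketch after Eq. (3), a minimal version of this magnification-based superposition might look as follows; the nearest-neighbor resampling, the center-of-image optical axis, and the uniform pickup spacing are assumptions made for brevity.

import numpy as np

def reconstruct_ads_plane(eis, z0, delta_z):
    """Eq. (4): superpose EIs rescaled by M_k = z_k / z_0.

    eis     : array of shape (K, Ny, Nx); eis[0] is the nearest pickup position.
    z0      : distance from the nearest pickup position to the reconstruction plane.
    delta_z : spacing between consecutive pickup positions along the optical axis.
    """
    K, Ny, Nx = eis.shape
    yy, xx = np.mgrid[0:Ny, 0:Nx]
    # Coordinates measured from the image center (assumed to be the optical axis).
    yc, xc = yy - Ny / 2.0, xx - Nx / 2.0
    acc = np.zeros((Ny, Nx))
    for k in range(K):
        Mk = (z0 + k * delta_z) / z0                  # magnification of the kth EI
        # Sample E_k at (x / Mk, y / Mk): nearest-neighbor resampling.
        src_y = np.clip(np.round(yc / Mk + Ny / 2.0).astype(int), 0, Ny - 1)
        src_x = np.clip(np.round(xc / Mk + Nx / 2.0).astype(int), 0, Nx - 1)
        acc += eis[k][src_y, src_x]
    return acc / K       # with clipping, every EI covers every pixel, so O(x, y) = K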

Figure 14 shows a subset of the EIs and the reconstructed images of objects located in front of and behind a concentric-ring occluding pattern. The center of the ring pattern coincides with the optical axis and hence is always in focus (no parallax) in the reconstructed images, while more parallax (more blurring) appears toward the periphery (see the discussion in Section 3).

Fig. 14. Computational reconstruction results in ADS system. (a) EIs. The left is the closest EI to the scene, the right is the farthest one. (b) Reconstructed 3D images, where the green car and the red fire truck are in focus, respectively.

3. Computational Reconstruction in Integral Imaging with Randomly Distributed Sensors

In integral imaging with randomly distributed sensors, an affine transformation can be conveniently used to model the relationship between object space and image space for each sensor. Given the relative position (p_x^k, p_y^k) between the kth randomly distributed sensor and the reference sensor in the X and Y directions, the reconstructed 3D images can be obtained as follows [69]:
R(x,y,z) = \sum_{k=0}^{K-1} E_k\!\left( \frac{x}{M_k} - \frac{N_x \times p_x^k}{c_x \times M},\; \frac{y}{M_k} - \frac{N_y \times p_y^k}{c_y \times M} \right), \qquad M_k = \frac{z_k}{z},   (5)
where z is the depth of the reconstruction plane measured from the reference sensor and z_k is the distance between the kth sensor and the reconstruction plane. In fact, Eq. (5) can be treated as a general computational reconstruction method, since it accounts for both the different magnifications of and the lateral shifts among the EIs.
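Equation (5) merges the two previous sketches: each EI is both rescaled by M_k and shifted according to its lateral offset (p_x^k, p_y^k). A condensed sketch under the same resampling assumptions:

import numpy as np

def reconstruct_general_plane(eis, offsets, z, z_k, g, cx, cy):
    """Eq. (5): backprojection for randomly distributed sensors.

    eis     : array of shape (K, Ny, Nx).
    offsets : array of shape (K, 2) with the lateral positions (p_x^k, p_y^k)
              relative to the reference sensor.
    z, z_k  : reconstruction depth and per-sensor distances to that plane (length K).
    """
    K, Ny, Nx = eis.shape
    M = z / g                                        # reference-sensor magnification
    yy, xx = np.mgrid[0:Ny, 0:Nx]
    acc = np.zeros((Ny, Nx))
    for k in range(K):
        Mk = z_k[k] / z
        u = xx / Mk - Nx * offsets[k, 0] / (cx * M)  # x/M_k - N_x*p_x^k/(c_x*M)
        v = yy / Mk - Ny * offsets[k, 1] / (cy * M)  # y/M_k - N_y*p_y^k/(c_y*M)
        su = np.clip(np.round(u).astype(int), 0, Nx - 1)
        sv = np.clip(np.round(v).astype(int), 0, Ny - 1)
        acc += eis[k][sv, su]
    return acc   # Eq. (5) is written as a plain sum; divide by an overlap count to average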

C. 3D Profilometric Reconstruction

Instead of obtaining the plane-by-plane reconstruction images discussed above, a 3D profile of the scene can also be reconstructed [17]. A spectral radiation pattern (SRP) can be used to capture the radiation intensity at a certain wavelength and direction in object space. In [17], a method has been proposed to infer the depth of Lambertian surfaces from the statistics of the SRP in a multiview imaging system.

For each point (x,y,z) in 3D space, the SRP, L(θ,φ,λ), describes its radiation intensity as a function of direction (θ,φ) and wavelength (λ). In practice, each pixel of an image provides one sample of the SRP along the associated chief ray. For simplicity, an integral imaging scheme in which the image sensors are located on a planar K×L grid is considered. The ray diagram between two points in space and their corresponding images in the integral imaging system is illustrated in Fig. 15. Note that, for clarity, this figure only shows the y–z plane. For an object surface point, P_o(x_o, y_o, z_o), K×L intensity samples of this point are collected by the K×L image sensors from nonidentical perspectives (different SRP samples). If this point satisfies the Lambertian or semi-Lambertian assumption, the intensities at wavelength λ among these SRP samples are expected to be highly correlated. If, instead, the point P_v(x_v, y_v, z_v) lies in free space, that is, it does not belong to any object surface in the 3D scene, then the K×L collected intensity samples are likely to vary, because they probably originate from different parts of the scene (see Fig. 15). This intensity variation among the collected SRP samples can be used to estimate the depth of object points with the following estimator:
\hat{z}(x,y) = \arg\min_{z \in Z} \sum_{k=0}^{K-1} \sum_{l=0}^{L-1} \left[ L(\theta_{kl}, \varphi_{kl}, \lambda) - \bar{L}(\theta, \varphi, \lambda) \right]^{2},   (6)
where \bar{L} denotes the mean of the SRP over all directions and Z is the depth range of the objects of interest. Equation (6) is proportional to the variance of the SRP and is expected to reach a local minimum at the true depth of an object point. Once the depth of the object points is recovered, the 3D profile of the scene can be reconstructed.
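A toy version of this variance-minimization depth search is sketched below: for each lateral position, the candidate depth whose samples have the smallest variance across the K×L views is selected. For self-containedness it uses the regular-grid backprojection geometry of Eq. (3) as the sampling model, which is an assumption; [17] gathers the SRP samples more generally.

import numpy as np

def depth_map_from_variance(eis, z_candidates, g, p, cx, cy):
    """Per-pixel depth estimation in the spirit of Eq. (6).

    eis : array of shape (K, L, Ny, Nx). Returns an (Ny, Nx) array of depths.
    """
    K, L, Ny, Nx = eis.shape
    best_var = np.full((Ny, Nx), np.inf)
    best_z = np.zeros((Ny, Nx))
    for z in z_candidates:
        M = z / g
        sx, sy = Nx * p / (cx * M), Ny * p / (cy * M)
        samples = np.full((K * L, Ny, Nx), np.nan)
        for k in range(K):
            for l in range(L):
                dx, dy = int(round(k * sx)), int(round(l * sy))
                if dy >= Ny or dx >= Nx:
                    continue
                samples[k * L + l, dy:, dx:] = eis[k, l, :Ny - dy, :Nx - dx]
        var = np.nanvar(samples, axis=0)        # variance across views at each pixel
        better = var < best_var
        best_var[better] = var[better]
        best_z[better] = z
    return best_z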

Fig. 15. Ray diagram for object surface point and free-space point in integral imaging.

Figure 16 illustrates the depth information recovered from an ensemble of 121 EIs based on Eq. (6). The poorly estimated points on the helicopter window are due to specular reflection off the glossy surface, which departs from the Lambertian surface assumption.

Fig. 16. Recovered depth information (3D profile) of the scene, estimated from 121 EIs using Eq. (6).

5. Applications of 3D Integral Imaging

In this section, we present several applications of 3D integral imaging in various fields. These applications further demonstrate the advantages of this 3D sensing and imaging approach.

A. 3D Imaging of Objects in Turbid Water

Integral imaging can be used for 3D visualization of underwater objects. Underwater imaging is inherently different from imaging in air because of the absorption and scattering of light by various underwater particles and molecules. In [27], a statistical image processing approach combined with the computational 3D reconstruction algorithms of integral imaging has been proposed to remedy the effects of scattering and to visualize a 3D scene in turbid water. To obtain the actual reconstruction distance, this method accounts for the geometrical alteration caused by the refractive index of water. Figure 17 demonstrates the 3D visualization results in turbid water. A 3D scene in clear water and one EI in turbid water are shown in Figs. 17(a) and 17(b), respectively. Figures 17(c) and 17(d) show the 3D reconstruction results in clear water and in turbid water, respectively. It is evident that the reconstructed image for turbid water [Fig. 17(d)] cannot visualize the objects due to light scattering. The 3D reconstruction results with statistical processing [27] in turbid water are illustrated in Figs. 17(e) and 17(f), which focus on the insect (bug) and the small fish, respectively.

Fig. 17. 3D visualization in turbid water. (a) 3D scene in clear water, (b) one sample of EI in turbid water, (c), (d) 3D reconstruction results in clear water and in turbid water, respectively, and (e), (f) 3D reconstruction results in turbid water at various depth planes with statistical image processing.

B. Photon Counting and Photon-Starved 3D Visualization

Photon counting integral imaging has been proposed to perform 3D visualization [21,84] and 3D target recognition [20,23,83] in photon-starved conditions or at very low light levels in the scene. 3D reconstruction of a photon-starved scene can be performed by applying computational reconstruction based on various estimators, such as the maximum likelihood estimator [21].

In general, photon counting images contain far fewer photons than conventional irradiance images because of the extremely low light levels. As a result, they have far fewer nonzero pixels than conventional intensity images, and many of the details contained in the original 3D scene disappear. To achieve 3D visualization, approaches based on the total variation constraint [24,84,87] have been applied to the photon counting integral imaging system. In [24], total variation maximum likelihood expectation maximization has been applied to restore the reconstructed 3D images, and in [84], an iterative method to restore photon counting EIs based on the total variation maximum a posteriori expectation maximization (MAP-EM) algorithm and the photon counting detection model has been presented.
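For readers who want to reproduce photon-starved inputs such as those in Fig. 18, a common photon counting detection model (stated here as an assumption, since the exact model of [84] is not reproduced above) treats the count at each pixel as Poisson distributed with a mean proportional to the normalized irradiance, scaled so that the expected total number of photons in the image is N_p:

import numpy as np

def photon_counting_image(irradiance, n_photons, rng=None):
    """Simulate a photon counting EI from a conventional irradiance image.

    Each pixel count is drawn from a Poisson distribution whose mean is the
    normalized irradiance scaled by the expected total photon number n_photons.
    """
    rng = np.random.default_rng() if rng is None else rng
    rate = n_photons * irradiance / irradiance.sum()
    return rng.poisson(rate).astype(float)

# Example: a synthetic 256x256 irradiance image reduced to ~10,000 photons,
# then binarized as in Fig. 18 (1 where at least one photon was detected).
img = np.random.rand(256, 256)
counts = photon_counting_image(img, n_photons=10_000)
binary_ei = (counts > 0).astype(float)
print(counts.sum(), binary_ei.mean())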

Based on the MAP-EM algorithm [86], the restored object intensity, r, can be obtained by the following iterative equation:

$$ r_j^{(n+1)} = \frac{r_j^{(n)}}{\sum_i H_{ij} + \beta\,\partial U(r^{(n)})/\partial r_j}\;\sum_i H_{ij}\,\frac{c_i}{\sum_k H_{ik}\, r_k^{(n)}}, \qquad (8) $$

where H_{ij} is constructed from the discrete point spread function of the pickup imaging lens, c_i is the measured photon count at pixel i, U(r) is the prior energy functional (e.g., the total variation constraint [85]), β is a regularization parameter, n is the iteration index, and i, j, k are pixel indices. The iteration is stopped when the mean-square error between r^{(n+1)} and r^{(n)} falls below a given threshold.
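A minimal sketch of the iteration in Eq. (8) is given below, assuming a dense system matrix H, measured counts c, and a smoothed isotropic total-variation surrogate for U(r); the function names (tv_gradient, map_em_step, map_em) and the small stabilizing constants are illustrative choices, not part of the cited algorithms.

```python
import numpy as np

def tv_gradient(r, shape, eps=1e-3):
    """Gradient of a smoothed total-variation prior U(r) for a flattened image r."""
    img = r.reshape(shape)
    dx = np.diff(img, axis=1, append=img[:, -1:])
    dy = np.diff(img, axis=0, append=img[-1:, :])
    mag = np.sqrt(dx**2 + dy**2 + eps**2)
    # TV gradient = minus the divergence of the normalized image gradient field
    div = (np.diff(dx / mag, axis=1, prepend=0) +
           np.diff(dy / mag, axis=0, prepend=0))
    return (-div).ravel()

def map_em_step(r, H, c, beta, shape):
    """One 'one-step-late' MAP-EM update following Eq. (8).

    H    : (measurements x pixels) system matrix built from the discrete PSF.
    c    : measured photon counts, one per measurement.
    beta : regularization weight (kept small so the denominator stays positive).
    """
    sens = np.maximum(H.sum(axis=0) + beta * tv_gradient(r, shape), 1e-12)
    ratio = c / (H @ r + 1e-12)          # c_i / sum_k H_ik r_k^(n)
    return (r / sens) * (H.T @ ratio)

def map_em(H, c, shape, beta=0.01, n_iter=50, tol=1e-8):
    """Iterate Eq. (8) until the mean-square change falls below tol."""
    r = np.full(H.shape[1], c.mean() + 1e-6)
    for _ in range(n_iter):
        r_new = map_em_step(r, H, c, beta, shape)
        if np.mean((r_new - r) ** 2) < tol:
            return r_new
        r = r_new
    return r
```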

In Fig. 18, two groups of simulated photon counting EIs are shown. Figures 18(a) and 18(c) show the EIs, and the corresponding EIs restored with the total variation MAP-EM algorithm are shown in Figs. 18(b) and 18(d). Np is the total number of photons in each image.

Fig. 18. (a), (c) Binary photon counting EIs for Np=10,000 and Np=30,000, respectively, and (b), (d) the corresponding restoration results using the TV MAP-EM algorithm.

The total variation constraint has been found to be an efficient prior for reconstruction of integral images captured with photon counting sensors [24]. In [87], it was also demonstrated to be efficient for reconstruction of photon-starved integral images captured with conventional CCD or CMOS sensors. When conventional image sensors are used, the image is typically corrupted by thermal and readout noise in addition to the Poisson noise caused by the random photon arrival process; therefore, a significantly larger photon flux is required than with photon counting sensors. Figure 19(a) shows a simulation of an EI captured with a commercial cooled camera. The signal-to-noise ratio (SNR) of this image is 0.18; therefore, the object is not distinguishable. However, by applying the total variation maximum likelihood expectation maximization algorithm to an array of 7×7 EIs, the image in Fig. 19(b) is obtained, in which the airplane object is clearly distinguishable, even though each EI has an SNR much lower than 1. In [87], experimental results demonstrated successful reconstruction of integral images captured with a conventional camera at illumination levels of fewer than four photons per pixel on average.

Fig. 19. (a) Simulated EI obtained with a conventional camera (SNR = 0.18) and (b) the reconstructed image.

C. 3D Tracking of Occluded Objects Using Integral Imaging

As discussed previously, the computational reconstruction method in integral imaging can reduce the effect of occlusion. As a result, tracking occluded objects in 3D space is feasible. In [32], tracking of multiple occluded 3D objects in an integral imaging system was proposed using a region tracking method based on a statistical Bayesian formulation. It is assumed that the background is stationary in each frame and that the reconstructed pixel intensities of both the background and the multiple objects are independent and identically distributed; the background region and the object regions follow Gaussian and Gamma distributions, respectively. Within the Bayesian framework, the parameters of the likelihood functions of the background and the object regions are estimated based on appropriate prior assumptions. At each frame, the 3D scene is reconstructed and represented plane by plane. The 3D locations of the objects are obtained by maximizing the geodesic distance between the log-likelihoods of the reconstructed background and objects across all the 2D reconstructed planes.
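As a rough illustration of this idea (and only that), the sketch below scores a candidate object region on each reconstructed depth plane with a Gamma object model against a Gaussian background model and keeps the best-scoring plane and region. The geodesic-distance criterion of [32] is simplified here to a plain log-likelihood difference, and the helper names (region_score, locate) and moment-based Gamma fit are assumptions.

```python
import numpy as np
from scipy import stats

def region_score(plane, mask):
    """Score how object-like a candidate region is on one reconstructed plane.

    plane : 2D reconstructed image at a given depth.
    mask  : boolean array marking the candidate object region.
    Background pixels are modeled as Gaussian and object pixels as Gamma,
    with parameters fit from the data (moment matching for the Gamma).
    """
    bg = plane[~mask]
    obj = np.clip(plane[mask], 1e-6, None)     # Gamma support requires positive values
    mu, sigma = bg.mean(), bg.std() + 1e-9
    m, v = obj.mean(), obj.var() + 1e-9
    k, theta = m**2 / v, v / m                 # moment-based Gamma parameters
    ll_obj = stats.gamma.logpdf(obj, a=k, scale=theta).sum()
    ll_bg = stats.norm.logpdf(obj, loc=mu, scale=sigma).sum()
    return ll_obj - ll_bg                      # large when the region fits the object model better

def locate(planes, candidate_masks):
    """Pick the (depth plane, region) pair with the highest score."""
    return max(((z, i, region_score(p, m))
                for z, p in enumerate(planes)
                for i, m in enumerate(candidate_masks)),
               key=lambda t: t[2])             # (plane index, mask index, score)
```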

Figure 20 shows a subset of the EIs used in the occluded-object tracking experiments. Figure 21 illustrates the 3D tracking results obtained with varying target orientations and scene illumination. The results show that the tracking method is robust to partial occlusion and an unknown background, and that it works for objects with unknown position, range, rotation, scale, and illumination.

Fig. 20. Subset of EIs (each 2784×1856 pixels) in 3D tracking experiments of occluded objects.
Fig. 21. 3D tracking results of two moving cars on the reconstructed images in integral imaging. (a) Frame 2. (b) Frame 9. Here the scene illumination is reduced to one half and the left car is rotated. (c) Frame 14. Here both cars are rotated. (d) Frame 17. Here the right car is rotated. (e) Frame 27. Here the scene illumination is doubled and the left car is rotated.

D. 3D Microscopy Using Integral Imaging for Visualization and Identification of Cells

Integral imaging has been applied to 3D microscopy [29–31,56] and to the identification of biological micro-organisms [28]. To visualize 3D information of micro-objects, uniformly magnified 2D EIs are obtained from computer-synthesized EIs [29]. Figure 22 illustrates 3D sensing and visualization of micro-organisms using integral imaging microscopy. Cell and micro-organism identification and classification [28] have been performed on the 3D reconstructed images by using statistical pattern recognition.

Fig. 22. Integral imaging 3D microscopy and automated cell identification.

E. 3D Polarimetric Integral Imaging

Light polarization provides an important visual extension compared to intensity-only imagery. Polarimetric imaging measures the polarization states of light coming from the points of a 3D scene and enhances the understanding of object surfaces. A 3D polarimetric integral imaging system using degree of polarization (DoP) images under natural illumination conditions has been proposed [33]. The measured Stokes parameters are used to generate DoP images. Using the DoP information and the original EIs, 3D polarimetric reconstruction is performed by a modified computational reconstruction method: instead of using all the pixels from the shifted EIs, as in the conventional computational reconstruction method (see Section 4.B.1), only the pixels whose DoP is greater than a given threshold p are used to perform the 3D image reconstruction. 3D polarimetric integral imaging may be used to distinguish objects with specular-reflecting surfaces (metal, glass) from objects with diffuse-reflecting surfaces (soil, grass).
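The sketch below illustrates the two steps just described: computing DoP from the Stokes parameters (standard definition, sqrt(S1²+S2²+S3²)/S0) and a shift-and-average reconstruction restricted to pixels with DoP above the threshold p. The integer shifts and function names are illustrative assumptions; this is not the implementation of [33].

```python
import numpy as np

def degree_of_polarization(S0, S1, S2, S3=None):
    """Per-pixel degree of polarization from measured Stokes parameters."""
    S3 = np.zeros_like(S0) if S3 is None else S3   # linear-only measurement case
    return np.sqrt(S1**2 + S2**2 + S3**2) / (S0 + 1e-12)

def polarimetric_reconstruction(eis, dops, shifts, p=0.2):
    """Shift-and-average reconstruction using only pixels with DoP > p.

    eis    : list of 2D elemental images (irradiance).
    dops   : list of matching DoP images.
    shifts : list of (dy, dx) integer shifts for the chosen depth plane
             (hypothetical values; in practice derived from camera geometry).
    """
    num = np.zeros_like(eis[0], dtype=float)
    den = np.zeros_like(eis[0], dtype=float)
    for ei, dop, (dy, dx) in zip(eis, dops, shifts):
        keep = (dop > p).astype(float)                     # DoP threshold mask
        num += np.roll(ei * keep, shift=(dy, dx), axis=(0, 1))
        den += np.roll(keep, shift=(dy, dx), axis=(0, 1))
    return num / np.maximum(den, 1.0)                      # normalize by the overlap count
```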

Figure 23 shows experimental results of the 3D polarimetric integral imaging system. In the scene, two cars (objects) are located approximately 530 mm away, with a tree (450 mm away) in front of them (occlusion) and other objects (720 mm away) behind the cars (background). Figure 23(a) shows three examples from among the 36 EIs. The 3D image reconstruction results for conventional integral imaging and for 3D polarimetric integral imaging are shown in Figs. 23(b) and 23(c), respectively. In Fig. 23(c), only the cars show up, while the occlusion and the background do not appear in the reconstructed images.

Fig. 23. 3D polarimetric integral imaging experimental results. (a) Subset of EIs. (b) Reconstruction results of conventional integral imaging at 450 mm, 530 mm, and 720 mm. (c) Reconstruction results of the 3D polarimetric integral imaging with p=0.2.

F. Multidimensional Optical Sensing and Imaging

A multidimensional optical sensing and imaging system using integral imaging has been proposed [91]. It extends conventional integral imaging to incorporate multiple modalities into sensing and reconstruction. In [91], polarimetric and multispectral imaging are used to reconstruct a fully integrated multidimensional scene, which contains more information than single 2D or 3D images. The multidimensional imaging system may utilize polarimetric imaging, hyperspectral imaging, 3D spatial imaging, and so on to reconstruct the integrated multidimensional scene. The system can be implemented with separate polarimetric and spectral sensors whose multidimensional data are then integrated, or a single polarimetric and spectral sensor can be used for single-exposure multidimensional sensing.
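Purely as an illustration of the data-integration step (not the system of [91]), the following sketch stacks a reconstructed irradiance plane, its DoP map, and its spectral bands into one multidimensional array per depth plane; the function name and the dictionary layout are assumptions.

```python
import numpy as np

def fuse_multidimensional(depth_planes, dop_planes, spectral_planes):
    """Stack reconstructed spatial, polarimetric, and spectral data per depth plane.

    depth_planes    : dict {z: 2D irradiance reconstruction at depth z}
    dop_planes      : dict {z: 2D degree-of-polarization map at depth z}
    spectral_planes : dict {z: 3D array (rows, cols, bands) at depth z}
    Returns a dict {z: 3D array} whose last axis concatenates irradiance,
    DoP, and the spectral bands for that plane.
    """
    fused = {}
    for z in depth_planes:
        layers = [depth_planes[z][..., None],
                  dop_planes[z][..., None],
                  spectral_planes[z]]
        fused[z] = np.concatenate(layers, axis=-1)
    return fused
```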

6. Conclusion

References

1. A. Sokolov, “Autostereoscopy and integral photography by Professor Lippmann’s method,” in Izd. MGU (Moscow State University, 1911).
2. H. E. Ives, “Optical properties of a Lippman lenticulated sheet,” J. Opt. Soc. Am. 21, 171 (1931).
3. C. B. Burckhardt, “Optimum parameters and resolution limitation of integral photography,” J. Opt. Soc. Am. A 58, 71–74 (1968).
4. T. Okoshi, “Three-dimensional displays,” Proc. IEEE 68, 548–564 (1980).
5. T. Okoshi, Three-Dimensional Imaging Techniques (Academic, 1976).
6. B. Javidi, F. Okano, and J. Y. Son, Three-Dimensional Imaging, Visualization, and Display (Springer, 2009).
7. L. Yang, M. McCormick, and N. Davies, “Discussion of the optics of a new 3-D imaging system,” Appl. Opt. 27, 4529–4534 (1988).
8. F. Okano, J. Arai, K. Mitani, and M. Okui, “Real-time integral imaging based on extremely high resolution video system,” Proc. IEEE 94, 490–501 (2006).
9. J. Arai, F. Okano, H. Hoshino, and I. Yuyama, “Gradient-index lens-array method based on real-time integral photography for three-dimensional images,” Appl. Opt. 37, 2034–2045 (1998).
10. H. Hoshino, F. Okano, H. Isono, and I. Yuyama, “Analysis of resolution limitation of integral photography,” J. Opt. Soc. Am. A 15, 2059–2065 (1998).
11. T. Mishina, “3D television system based on integral photography,” in Proceedings of the Picture Coding Symposium (PCS), 2010 (IEEE, 2010), p. 20.
12. J. Arai, F. Okano, M. Kawakita, M. Okui, Y. Haino, M. Yoshimura, M. Furuya, and M. Sato, “Integral three-dimensional television using a 33-megapixel imaging system,” J. Disp. Technol. 6, 422–430 (2010).
13. O. Matoba, E. Tajahuerce, and B. Javidi, “Real-time three-dimensional object recognition with multiple perspectives imaging,” Appl. Opt. 40, 3318–3325 (2001).
14. S. Kishk and B. Javidi, “Improved resolution 3D object sensing and recognition using time multiplexed computational integral imaging,” Opt. Express 11, 3528–3541 (2003).
15. S. H. Hong and B. Javidi, “Distortion-tolerant 3D recognition of occluded objects using computational integral imaging,” Opt. Express 14, 12085–12095 (2006).
16. R. Schulein, C. M. Do, and B. Javidi, “Distortion-tolerant 3D recognition of underwater objects using neural networks,” J. Opt. Soc. Am. A 27, 461–468 (2010).
17. M. DaneshPanah and B. Javidi, “Profilometry and optical slicing by passive three-dimensional imaging,” Opt. Lett. 34, 1105–1107 (2009).
18. J. H. Park and K. M. Jeong, “Frequency domain depth filtering of integral imaging,” Opt. Express 19, 18729–18741 (2011).
19. A. Stern and B. Javidi, “3D image sensing, visualization, and processing using integral imaging,” Proc. IEEE 94, 591–607 (2006).
20. S. Yeom, B. Javidi, and E. Watson, “Three-dimensional distortion-tolerant object recognition using photon-counting integral imaging,” Opt. Express 15, 1513–1533 (2007).
21. B. Tavakoli, B. Javidi, and E. Watson, “Three dimensional visualization by photon counting computational integral imaging,” Opt. Express 16, 4426–4436 (2008).
22. I. Moon and B. Javidi, “Three-dimensional recognition of photon-starved events using computational integral imaging and statistical sampling,” Opt. Lett. 34, 731–733 (2009).
23. M. DaneshPanah, B. Javidi, and E. A. Watson, “Three dimensional object recognition with photon counting imagery in the presence of noise,” Opt. Express 18, 26450–26460 (2010).
24. D. Aloni, A. Stern, and B. Javidi, “Three-dimensional photon counting integral imaging reconstruction using penalized maximum likelihood expectation maximization,” Opt. Express 19, 19681–19687 (2011).
25. S. H. Hong and B. Javidi, “Three-dimensional visualization of partially occluded objects using integral imaging,” J. Disp. Technol. 1, 354–359 (2005).
26. I. Moon and B. Javidi, “Three-dimensional visualization of objects in scattering medium by use of computational integral imaging,” Opt. Express 16, 13080–13089 (2008).
27. M. Cho and B. Javidi, “Three-dimensional visualization of objects in turbid water using integral imaging,” J. Disp. Technol. 6, 544–547 (2010).
28. B. Javidi, I. Moon, and S. Yeom, “Three-dimensional identification of biological microorganism using integral imaging,” Opt. Express 14, 12096–12108 (2006).
29. J. S. Jang and B. Javidi, “Three-dimensional integral imaging of micro-objects,” Opt. Lett. 29, 1230–1232 (2004).
30. M. Levoy, Z. Zhang, and I. McDowall, “Recording and controlling the 4D light field in a microscope using microlens arrays,” J. Microsc. 235, 144–162 (2009).
31. D. Shin, M. Cho, and B. Javidi, “Three-dimensional optical microscopy using axially distributed image sensing,” Opt. Lett. 35, 3646–3648 (2010).
32. Y. Zhao, X. Xiao, M. Cho, and B. Javidi, “Tracking of multiple objects in unknown background using Bayesian estimation in 3D space,” J. Opt. Soc. Am. A 28, 1935–1940 (2011).
33. X. Xiao, B. Javidi, G. Saavedra, M. Eismann, and M. Martinez-Corral, “Three-dimensional polarimetric computational integral imaging,” Opt. Express 20, 15481–15488 (2012).
34. C. Wheatstone, “Contributions to the physiology of vision.—Part the first. On some remarkable, and hitherto unobserved, phenomena of binocular vision,” Philos. Trans. R. Soc. Lond. 128, 371–394 (1838).
35. W. Rollmann, “Zwei neue stereoskopische Methoden,” Ann. Phys. 166, 186–187 (1853).
36. D. S. Kim, S. M. Park, J. H. Jung, and D. C. Hwang, “51.2: new 240 Hz driving method for full HD & high quality 3D LCD TV,” SID Symp. Dig. Tech. Pap. 41, 762–765 (2010).
37. S. S. Kim, B. H. You, H. Choi, B. H. Berkeley, D. G. Kim, and N. D. Kim, “World’s first 240 Hz TFT-LCD technology for full-HD LCD-TV and its application to 3D display,” SID Symp. Dig. Tech. Pap. 40, 424–427 (2009).
38. H. Kang, S. D. Roh, I. S. Baik, H. J. Jung, W. N. Jeong, J. K. Shin, and I. J. Chung, “3.1: a novel polarizer glasses-type 3D displays with a patterned retarder,” SID Symp. Dig. Tech. Pap. 41, 1–4 (2010).
39. C. Slinger, C. Cameron, and M. Stanley, “Computer-generated holography as a generic display technology,” Computer 38, 46–53 (2005).
40. R. B. A. Tanjung, X. Xu, X. Liang, S. Solanki, Y. Pan, F. Farbiz, B. Xu, and T. C. Chong, “Digital holographic three-dimensional display of 50-Mpixel holograms using a two-axis scanning mirror device,” Opt. Eng. 49, 025801 (2010).
41. P. A. Blanche, A. Bablumian, R. Voorakaranam, C. Christenson, W. Lin, T. Gu, D. Flores, P. Wang, W. Y. Hsieh, and M. Kathaperumal, “Holographic three-dimensional telepresence using large-area photorefractive polymer,” Nature 468, 80–83 (2010).
42. M. Holroyd, I. Baran, J. Lawrence, and W. Matusik, “Computing and fabricating multilayer models,” ACM Trans. Graph. 30, 187 (2011).
43. A. Marraud and M. Bonnet, “Restitution of stereoscopic picture by means of a lenticular sheet,” Proc. SPIE 0402, 129–132 (1983).
44. Mashitani, “Autostereoscopic video display with a parallax barrier having oblique apertures,” U.S. patent 7,317,494 (8 January 2008).
45. H. J. Lee, H. Nam, J. D. Lee, H. W. Jang, M. S. Song, B. S. Kim, J. S. Gu, C. Y. Park, and K. H. Choi, “A high resolution autostereoscopic display employing a time division parallax barrier,” SID Symp. Dig. Tech. Pap. 37, 81–84 (2006).
46. G. Hamagishi, “Analysis and improvement of viewing conditions for two-view and multi-view displays,” SID Symp. Dig. Tech. Pap. 40, 340–343 (2009).
47. T. Inoue and H. Ohzu, “Accommodative responses to stereoscopic three-dimensional display,” Appl. Opt. 36, 4509–4515 (1997).
48. F. L. Kooi and A. Toet, “Visual comfort of binocular and 3D displays,” Displays 25, 99–108 (2004).
49. G. Lippmann, “Epreuves reversibles donnant la sensation du relief,” J. Phys. 7, 821–825 (1908).
50. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. 36, 1598–1603 (1997).
51. J. Arai, H. Hoshino, M. Okui, and F. Okano, “Effects of focusing on the resolution characteristics of integral photography,” J. Opt. Soc. Am. A 20, 996–1004 (2003).
52. D. H. Shin, E. S. Kim, and B. Lee, “Computational reconstruction of three-dimensional objects in integral imaging using lenslet array,” Jpn. J. Appl. Phys. 44, 8016–8018 (2005).
53. B. Tavakoli, M. Daneshpanah, B. Javidi, and E. Watson, “Performance of 3D integral imaging with position uncertainty,” Opt. Express 15, 11889–11902 (2007).
54. J. H. Park, G. Baasantseren, N. Kim, G. Park, J. M. Kang, and B. Lee, “View image generation in perspective and orthographic projection geometry based on integral imaging,” Opt. Express 16, 8800–8813 (2008).
55. J. Y. Son, S. H. Kim, D. S. Kim, B. Javidi, and K. D. Kwack, “Image-forming principle of integral photography,” J. Disp. Technol. 4, 324–331 (2008).
56. Y. T. Lim, J. H. Park, K. C. Kwon, and N. Kim, “Resolution-enhanced integral imaging microscopy that uses lens array shifting,” Opt. Express 17, 19253–19263 (2009).
57. M. U. Erdenebat, G. Baasantseren, and J. H. Park, “Full-parallax 360 degrees integral imaging display,” in Proceedings of the International Meeting on Information Display (Korean Information Display Society, 2010), pp. 812–813.
58. H. Navarro, R. Martínez-Cuenca, G. Saavedra, M. Martínez-Corral, and B. Javidi, “3D integral imaging display by smart pseudoscopic-to-orthoscopic conversion (SPOC),” Opt. Express 18, 25573–25583 (2010).
59. H. Geng, Q. H. Wang, L. Li, and D. H. Li, “An integral-imaging three-dimensional display with wide viewing angle,” J. SID 19, 679–684 (2011).
60. M. Cho and B. Javidi, “Optimization of 3D integral imaging system parameters,” IEEE J. Disp. Technol. 8, 357–360 (2012).
61. A. Yöntem and L. Onural, “Integral imaging using phase-only LCoS spatial light modulators as Fresnel lenslet arrays,” J. Opt. Soc. Am. A 28, 2359–2375 (2011).
62. H. Navarro, R. Martínez-Cuenca, A. Molina-Martín, M. Martínez-Corral, G. Saavedra, and B. Javidi, “Method to remedy image degradations due to facet braiding in 3D integral-imaging monitors,” J. Disp. Technol. 6, 404–411 (2010).
63. F. Okano, J. Arai, H. Hoshino, and I. Yuyama, “Three-dimensional video system based on integral photography,” Opt. Eng. 38, 1072–1077 (1999).
64. N. Davies, M. McCormick, and L. Yang, “Three-dimensional imaging systems: a new development,” Appl. Opt. 27, 4520–4528 (1988).
65. E. H. Adelson and J. Y. A. Wang, “Single lens stereo with a plenoptic camera,” IEEE Trans. Pattern Anal. Mach. Intell. 14, 99–106 (1992).
66. M. Levoy, “Light fields and computational imaging,” Computer 39, 46–55 (2006).
67. J. H. Park, K. Hong, and B. Lee, “Recent progress in three-dimensional information processing based on integral imaging,” Appl. Opt. 48, H77–H94 (2009).
68. J. S. Jang and B. Javidi, “Three-dimensional synthetic aperture integral imaging,” Opt. Lett. 27, 1144–1146 (2002).
69. M. DaneshPanah, B. Javidi, and E. A. Watson, “Three dimensional imaging with randomly distributed sensors,” Opt. Express 16, 6368–6377 (2008).
70. R. Schulein, M. DaneshPanah, and B. Javidi, “3D imaging with axially distributed sensing,” Opt. Lett. 34, 2012–2014 (2009).
71. X. Xiao, M. DaneshPanah, M. Cho, and B. Javidi, “3D integral imaging using sparse sensors with unknown positions,” J. Disp. Technol. 6, 614–619 (2010).
72. Y. Igarashi, H. Murata, and M. Ueda, “3D display system using a computer generated integral photography,” Jpn. J. Appl. Phys. 17, 1683–1684 (1978).
73. M. Halle, “Multiple viewpoint rendering,” in Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques (1998), pp. 243–254.
74. R. Yang, X. Huang, S. Li, and C. Jaynes, “Toward the light field display: autostereoscopic rendering via a cluster of projectors,” IEEE Trans. Vis. Comput. Graph. 14, 84–96 (2008).
75. M. Martínez-Corral, B. Javidi, R. Martínez-Cuenca, and G. Saavedra, “Multifacet structure of observed reconstructed integral images,” J. Opt. Soc. Am. A 22, 597–603 (2005).
76. M. Martínez-Corral, H. Navarro, R. Martínez-Cuenca, G. Saavedra, and B. Javidi, “Full parallax 3-D TV with programmable display parameters,” Opt. Photon. News 22(12), 50–50 (2011).
77. R. Martínez-Cuenca, H. Navarro, G. Saavedra, B. Javidi, and M. Martinez-Corral, “Enhanced viewing-angle integral imaging by multiple-axis telecentric relay system,” Opt. Express 15, 16255–16260 (2007).
78. H. Choi, S. W. Min, S. Jung, J. H. Park, and B. Lee, “Multiple-viewing-zone integral imaging using a dynamic barrier array for three-dimensional displays,” Opt. Express 11, 927–932 (2003).
79. M. Miura, J. Arai, T. Mishina, M. Okui, and F. Okano, “Integral imaging system with enlarged horizontal viewing angle,” Proc. SPIE 8384, 83840O (2012).
80. S. H. Hong, J. S. Jang, and B. Javidi, “Three-dimensional volumetric object reconstruction using computational integral imaging,” Opt. Express 12, 483–491 (2004).
81. H. Arimoto and B. Javidi, “Integral three-dimensional imaging with digital reconstruction,” Opt. Lett. 26, 157–159 (2001).
82. V. Vaish, M. Levoy, R. Szeliski, C. L. Zitnick, and S. B. Kang, “Reconstructing occluded surfaces using synthetic apertures: stereo, focus and robust measures,” in Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 2006), pp. 2331–2338.
83. S. Yeom, B. Javidi, and E. Watson, “Photon counting passive 3D image sensing for automatic target recognition,” Opt. Express 13, 9310–9330 (2005).
84. X. Xiao and B. Javidi, “3D photon counting integral imaging with unknown sensor positions,” J. Opt. Soc. Am. A 29, 767–771 (2012).
85. V. Y. Panin, G. L. Zeng, and G. T. Gullberg, “Total variation regulated EM algorithm,” IEEE Trans. Nucl. Sci. 46, 2202–2210 (1999).
86. P. J. Green, “Bayesian reconstructions from emission tomography data using a modified EM algorithm,” IEEE Trans. Med. Imag. 9, 84–93 (1990).
87. A. Stern, D. Aloni, and B. Javidi, “Experiments with three-dimensional integral imaging under low light levels,” IEEE Photonics J. 4, 1188–1195 (2012).
88. D. Shin, M. Daneshpanah, and B. Javidi, “Generalization of three-dimensional N-ocular imaging systems under fixed resource constraints,” Opt. Lett. 37, 19–21 (2012).
89. S. Sinha, D. Steedly, R. Szeliski, M. Agrawala, and M. Pollefeys, “Interactive 3D architectural modeling from unordered photo collections,” ACM Trans. Graph. 27, 1–10 (2008).
90. A. Gotchev, G. Akar, T. Capin, D. Strohmeier, and A. Boev, “Three-dimensional media for mobile devices,” Proc. IEEE 99, 708–741 (2011).
91. B. Javidi, S. H. Hong, and O. Matoba, “Multidimensional optical sensor and imaging system,” Appl. Opt. 45, 2986–2994 (2006).

OCIS Codes
(110.6880) Imaging systems : Three-dimensional image acquisition
(120.2040) Instrumentation, measurement, and metrology : Displays
(150.6910) Machine vision : Three-dimensional sensing

ToC Category:
Imaging Systems

History
Original Manuscript: September 10, 2012
Manuscript Accepted: September 14, 2012
Published: January 24, 2013

Virtual Issues
(2013) Advances in Optics and Photonics
(2014) Advances in Optics and Photonics
Vol. 8, Iss. 3 Virtual Journal for Biomedical Optics

Citation
Xiao Xiao, Bahram Javidi, Manuel Martinez-Corral, and Adrian Stern, "Advances in three-dimensional integral imaging: sensing, display, and applications [Invited]," Appl. Opt. 52, 546-560 (2013)
http://www.opticsinfobase.org/vjbo/abstract.cfm?URI=ao-52-4-546

