A hologram contains three-dimensional (3D) information of an object scene. Optical reconstruction of the hologram presents natural 3D imagery with all human depth cues, making it attractive as a 3D display technique. Optical capture of the hologram, however, is not yet sufficiently practical. The traditional capture technique is based on a coherent interferometric optical system, which requires a well-controlled laboratory environment free from external light and vibration. Laser illumination on the object also limits the maximum size and distance of the object that can be captured. Incoherent techniques which capture the interference pattern without a laser have been proposed, but precise alignment of the optical components, including a spatial light modulator (SLM), is still required [1].
View-based incoherent hologram capture techniques alleviate these limitations [2–4]. Instead of the interference pattern, multiple perspective images or the spatio-angular light ray distribution of the 3D scene are captured under regular incoherent illumination. The captured information is then processed to synthesize the hologram. Since coherent illumination and interferometric optics are not required, the system is robust against external vibration and misalignment, and outdoor capture is also possible. However, the system configuration is not compact: an array of cameras is usually used to capture multiple views, which makes the overall system bulky. A more compact system consisting of a single camera and an external lens array has been reported to capture the spatio-angular light ray distribution and synthesize the corresponding hologram [5–7]. However, this system still requires alignment between the external lens array and the camera, and it is not portable. A portable integral imaging camera, also called a plenoptic camera or a light field camera, which locates a micro lens array inside the camera body, was developed to perform numerical refocusing after scene capture [8], but hologram synthesis of the 3D scene and its optical reconstruction have not been reported.
In this paper, we propose a hologram synthesis method for a real-existing 3D scene using a portable integral imaging camera. A micro lens array is integrated inside a usual digital single lens reflex (DSLR) camera in front of its image sensor, making the system highly compact and portable. The proposed system captures the spatio-angular light ray distribution of the 3D object scene, extracts various views, and finally synthesizes the hologram. In the following sections, we explain the system configuration and the hologram synthesis algorithm along with experimental verifications.
2. System configuration
Fig. 1 Optical configuration of the proposed camera.
Figure 1 shows a schematic configuration of the proposed camera system. A micro lens array consisting of identical elemental lenses is placed in front of the image sensor, e.g. a charge coupled device (CCD) or complementary metal oxide semiconductor (CMOS) sensor. The distance between the micro lens array and the image sensor is set to the focal length of the lens array. The main lens of the camera forms an intermediate image of the 3D scene around the micro lens array. The light from the intermediate image is then captured by the micro lens array to form an image array, also called a set of elemental images, on the image sensor. In order to prevent overlap between neighboring elemental images on the image sensor, the image-side f-number (f/#) of the main lens is matched to that of the micro lens array, i.e. lm/φm = fa/φa, where φm and φa are the aperture sizes of the main lens and the micro lens array, respectively, lm is the distance between the main lens and the micro lens array, and fa is the focal length of the lens array.
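The f-number matching condition can be checked numerically. Below is a minimal sketch using the micro lens array values quoted later in the experimental section (125 μm lens pitch, 2.4 mm focal length); the 48 mm main-lens distance is a hypothetical value chosen for illustration, and all names are ours, not from any real API:

```python
# f-number matching between the main lens and the micro lens array.
# Lens array values follow the experimental setup in this paper;
# the main-lens distance below is a hypothetical illustration value.

lens_pitch_mm = 0.125   # elemental lens aperture (pitch), 125 um
lens_focal_mm = 2.4     # focal length of the micro lens array

# Image-side f/# of the micro lens array: fa / phi_a
f_number_array = lens_focal_mm / lens_pitch_mm  # 19.2

# To avoid overlap of neighboring elemental images, the main lens must
# satisfy lm / phi_m = fa / phi_a, so the required main-lens aperture is:
def required_aperture(l_m_mm: float) -> float:
    """Main-lens aperture phi_m that matches the micro lens array f/#."""
    return l_m_mm / f_number_array

print(f_number_array)                  # 19.2
print(round(required_aperture(48), 3))  # 2.5 (mm), for a 48 mm distance
```

With this matching, each elemental image exactly fills its cell on the sensor, which is why the experiment later adds a 2.5 mm external aperture.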
3. Hologram synthesis
The proposed method synthesizes the hologram of the captured 3D scene using two-step processing: sub-image array synthesis, followed by Fourier transformation with random phase distribution assignment. Figure 2 shows the sub-image array synthesis step.
Fig. 2 Sub-image array synthesis.
In the optical configuration of the proposed system, each pixel in the image sensor captures a light ray bundle of a specific angle, which is determined by the local position of the pixel with respect to the optic axis of the corresponding elemental lens. By collecting the pixels at the same local positions as shown in Fig. 2, an array of sub-images of the 3D scene is synthesized [9]. Since a single pixel is extracted from each elemental image, the pixel count of each sub-image is given by the number of elemental lenses in the array, and the number of sub-images is given by the pixel count of the image under each lens. After generation, the order of the sub-images in the array is reversed, or equivalently, each elemental image is rotated by 180° before the sub-image array synthesis. This inverts the depth of the 3D images, eliminating the pseudoscopic image problem in the final optical reconstruction [10].
Note that each sub-image generated in the proposed system represents a parallel ray bundle in the image space of the 3D scene. Therefore, the sub-image contains an orthographic view of the intermediate image, which is already de-magnified axially and laterally by the imaging of the main lens [11]. The final hologram, which is synthesized from the sub-images, also reconstructs the intermediate image of the 3D scene.
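The pixel-collection step above can be sketched in a few lines of NumPy. This is an illustrative implementation under an assumed 4D array layout, not the authors' code; the 180° rotation that removes the pseudoscopic problem is applied to each elemental image before collection:

```python
import numpy as np

def synthesize_sub_images(elemental: np.ndarray) -> np.ndarray:
    """Convert a captured elemental image array into a sub-image array.

    elemental: shape (Ly, Lx, py, px) -- Ly x Lx elemental lenses, each
    giving a py x px pixel elemental image (assumed layout).
    Returns: shape (py, px, Ly, Lx) -- py x px sub-images, each of
    Ly x Lx pixel count, one sub-image per local pixel position.
    """
    # Rotate each elemental image by 180 deg to invert the depth
    # and avoid the pseudoscopic image problem.
    rotated = elemental[:, :, ::-1, ::-1]
    # Collect pixels at the same local position (u, v) across all lenses:
    # sub_image[u, v] gathers pixel (u, v) from every elemental image.
    return rotated.transpose(2, 3, 0, 1)

# Toy example: 67 x 94 lenses with 25 x 25 pixel elemental images
ei = np.random.rand(67, 94, 25, 25)
subs = synthesize_sub_images(ei)
print(subs.shape)  # (25, 25, 67, 94)
```

Each of the 25 × 25 sub-images here is an orthographic view with 67 × 94 pixel count, matching the counts used in the experiment.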
The second step is the hologram synthesis from the sub-images. In the proposed method, a Fourier holographic stereogram is synthesized. Figure 3 illustrates the hologram synthesis process.
Fig. 3 Fourier hologram synthesis.
Each sub-image is first multiplied with a random phase mask and then Fourier-transformed to form a hologram patch. These hologram patches are stitched together to form the final hologram. When Nx × Ny sub-images of Mx × My pixel count are prepared in the first step, each Fourier-transformed hologram patch has Mx × My pixel count and the final hologram has NxMx × NyMy pixel count after stitching.
The random phase mask has the same pixel count as the sub-image, and each of its pixels has a random phase value in the 2π range. Note that, without the random phase mask, the Fourier transform of the sub-image would have a strong peak around the center of the corresponding hologram patch. The random phase mask multiplied with the sub-image removes this strong peak and distributes the Fourier transform evenly, making more efficient use of the hologram patch area [12].
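The patch synthesis and stitching described above map directly onto a per-sub-image 2D FFT. The following is a minimal sketch, assuming the sub-images arrive as an (Ny, Nx, My, Mx) real-valued array and a uniform random phase in [0, 2π); it is illustrative, not the authors' implementation:

```python
import numpy as np

def synthesize_fourier_stereogram(subs: np.ndarray, seed: int = 0) -> np.ndarray:
    """Stitch Fourier-transformed, random-phase-masked sub-images.

    subs: shape (Ny, Nx, My, Mx) real-valued sub-images.
    Returns a complex hologram of shape (Ny*My, Nx*Mx).
    """
    rng = np.random.default_rng(seed)
    Ny, Nx, My, Mx = subs.shape
    holo = np.zeros((Ny * My, Nx * Mx), dtype=complex)
    for j in range(Ny):
        for i in range(Nx):
            # The random phase mask spreads each sub-image's spectrum
            # over the whole patch area, removing the central DC peak.
            mask = np.exp(1j * rng.uniform(0.0, 2 * np.pi, (My, Mx)))
            patch = np.fft.fftshift(np.fft.fft2(subs[j, i] * mask))
            holo[j * My:(j + 1) * My, i * Mx:(i + 1) * Mx] = patch
    return holo

# 16 x 20 sub-images of 67 x 94 pixels -> 1072 x 1880 hologram (rows x cols)
subs = np.random.rand(16, 20, 67, 94)
print(synthesize_fourier_stereogram(subs).shape)  # (1072, 1880)
```

The toy shapes reproduce the 1880 × 1072 pixel count of the hologram synthesized in the experiment (stored here as rows × columns).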
4.1 Experimental setup
In the experiment, a micro lens array of 95 × 95 elemental lenses was attached on the image sensor of a mirror-less DSLR camera. The pitches of the elemental lens and the camera pixel are 125 μm and 5.1 μm, respectively, so each elemental lens forms an elemental image of approximately 25 × 25 pixel count. The gap between the micro lens array and the image sensor was adjusted to the focal length of the lens array, i.e. 2.4 mm. The specifications of the camera and the micro lens array used in the experiment are listed in Table 1.
Table 1. Experimental Setup
Figure 4 shows a picture of the implemented camera with the micro lens array.
Fig. 4 Implemented camera with the micro lens array.
4.2 Synthetic aperture technique
Fig. 5 Concept of the synthetic aperture technique.
Figure 5 shows the concept of the synthetic aperture technique. For a fixed object scene, multiple sets of elemental images are captured at different aperture positions and then combined to yield a single set of elemental images with a larger field of view (FOV) and reduced effective f/#. In order to obtain a seamless combination, the shift step of the aperture was set equal to the aperture diameter of 2.5 mm in the experiment.
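The combination step can be sketched as a per-lens tiling of the shifted-aperture captures. This sketch assumes one particular data layout (each capture contributes a p × p block whose position inside the enlarged elemental image follows the aperture position); it illustrates the idea rather than the authors' exact code:

```python
import numpy as np

def combine_synthetic_aperture(captures: np.ndarray) -> np.ndarray:
    """Merge shifted-aperture captures into one elemental image set.

    captures: shape (A, A, Ly, Lx, p, p) -- an A x A grid of captures,
    each holding Ly x Lx elemental images of p x p pixels.
    Returns: shape (Ly, Lx, A*p, A*p) -- elemental images whose angular
    extent (and pixel count) is enlarged A times in each direction.
    """
    A, _, Ly, Lx, p, _ = captures.shape
    out = np.zeros((Ly, Lx, A * p, A * p), dtype=captures.dtype)
    for a in range(A):       # vertical aperture position
        for b in range(A):   # horizontal aperture position
            # Seamless tiling relies on the aperture shift step being
            # equal to the aperture diameter, as in the experiment.
            out[:, :, a * p:(a + 1) * p, b * p:(b + 1) * p] = captures[a, b]
    return out

# 5 x 5 captures, 67 x 94 lenses, 25 x 25 pixels each -> 125 x 125 per lens
caps = np.random.rand(5, 5, 67, 94, 25, 25)
print(combine_synthetic_aperture(caps).shape)  # (67, 94, 125, 125)
```

The toy shapes match the experiment: 5 × 5 captures of 25 × 25 pixel elemental images yield 125 × 125 pixel elemental images per lens.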
4.3 Experimental result
Figure 6 shows the experimental setup.
Fig. 6 Experimental setup. (a) Integral imaging camera with the aperture, (b) 3D scene.
In front of the micro-lens-array-implemented camera, an aperture of 2.5 mm diameter was located to match the f/#. For the 3D object scene, two objects, ‘bear’ and ‘INHA’, were placed at 33 cm and 146 cm from the camera, respectively. The lateral size is 4 cm × 3.5 cm for the ‘bear’ object and 26.5 cm × 19 cm for the ‘INHA’ object. For the synthetic aperture technique, 5 × 5 sets of elemental images were captured with appropriate lateral shifts of the aperture.
Figure 7(a) shows one example of the image captured with a single exposure and a fixed aperture position.
Fig. 7 Captured images. (a) Single capture (94 × 67 lens images of 25 × 25 pixel count), (b) Synthetic aperture with 5 × 5 captures (94 × 67 lens images of 125 × 125 pixel count).
In the magnified portion of Fig. 7(a), each circular image (solid yellow box) corresponds to an elemental lens in the array and has 25 × 25 pixel count. Out of the total 95 × 95 elemental images, only the central 94 × 67 elemental images were cropped and used in the subsequent processing to cut the black background region. Figure 7(b) shows the synthetic set of elemental images generated from the 5 × 5 captured images. In the synthetic image shown in Fig. 7(b), the collection of 5 × 5 circular images (dotted yellow box) corresponds to each elemental lens, providing a 5 times enhanced angular FOV for each lens.
Figure 8(a) shows the sub-images generated from the synthetic image shown in Fig. 7(b).
Fig. 8 Sub-images synthesized using Fig. 7(b). (a) All 125 × 125 sub-images of 94 × 67 pixel count; among these, only the 20 × 16 sub-images (yellow boxes) selected regularly across the whole array were used in the hologram synthesis, (b) Selected 20 × 16 sub-images.
125 × 125 sub-images were synthesized with 94 × 67 pixel count each. In our experiment, we did not use all 125 × 125 sub-images; only 20 × 16 sub-images distributed regularly across the full 125 × 125 array were used in the hologram synthesis, giving a (20 × 94) × (16 × 67) = 1880 × 1072 pixel count for the synthesized hologram. Figure 8(b) shows the selected 20 × 16 sub-images. The brightness fluctuation observed in the selected sub-image array of Fig. 8(b) originates from the cosine fourth power brightness falloff, also called natural vignetting, of the individual lenses in the lens array.
Note that the selection of the sub-images needs to be made with a uniform interval (a 5 sub-image interval along the horizontal and vertical directions in our experiment, as indicated by the yellow boxes in Fig. 8(a)) to ensure a constant parallax change in the reconstructed 3D image. Also note that the selection of the sub-images is not unique in the proposed method; any selection can be used for the hologram synthesis as long as the interval between the selected sub-images is uniform. Figure 9 shows the phase distribution of the final hologram synthesized from the selected 20 × 16 sub-images.
Fig. 9 Phase distribution of the synthesized hologram (1880 × 1072 pixel count).
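The uniform-interval selection amounts to strided slicing of the sub-image array. A minimal sketch under the same illustrative layout as before (sub-images indexed on the first two axes); the start offset of 0 is a hypothetical choice, since the paper fixes only the interval:

```python
import numpy as np

# 125 x 125 sub-images of 67 x 94 pixels (rows x cols, assumed layout)
subs = np.random.rand(125, 125, 67, 94)

# Select 16 x 20 sub-images at a uniform interval of 5 in both directions,
# starting from index 0 (the start offset is an arbitrary choice here).
step = 5
selected = subs[:16 * step:step, :20 * step:step]
print(selected.shape)  # (16, 20, 67, 94)

# Stitching the hologram patches of these sub-images gives a hologram of
# (16 * 67) x (20 * 94) pixels, i.e. 1072 x 1880 as in the experiment.
print(16 * 67, 20 * 94)  # 1072 1880
```

Any other start offset works equally well, as the text notes, provided the interval between the selected sub-images stays uniform.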
Figure 10(a) shows the optical reconstruction result captured at different focal planes. It can be observed that the ‘bear’ and ‘INHA’ objects are focused at different axial distances from the camera, exhibiting the 3D nature of the reconstruction. Figure 10(b) shows the motion parallax; the images were captured at 9 different angles. When we captured Fig. 10(b), the depth of focus of the camera was intentionally increased to capture both object images with less blur, which degrades the captured image quality. Nevertheless, the relative shift between the ‘bear’ and ‘INHA’ object images is clearly observed, confirming successful reconstruction of the 3D scene.
Note that the synthetic aperture technique is not an essential part of the proposed method. In our experiment, it was used only to compensate for the too-large f/# (= 19.2) of the implemented micro lens array, not to increase the number of sub-images or the number of pixels in each elemental image. Even though we obtained (5 × 25) × (5 × 25) = 125 × 125 sub-images by the synthetic aperture technique with 5 × 5 captures, only 20 × 16 sub-images were actually used in the hologram synthesis. The successful optical reconstruction shown in Fig. 10 indicates that if the f/# of the micro lens array were around 19.2/5 ≅ 4 or less in our experimental condition, a single capture without the synthetic aperture technique would be sufficient to produce a similar result. The additional aperture used in our experiment would also be eliminated in that case, leaving only a portable single DSLR camera with an integrated micro lens array. Consequently, the successful optical reconstruction shown in Fig. 10 supports the feasibility of the proposed portable hologram camera system based on integral imaging.
In this paper, we proposed a portable integral imaging camera for synthesizing the hologram of a real-existing 3D scene. The four-dimensional spatio-angular light ray distribution of the 3D scene is captured under regular incoherent illumination by the micro lens array implemented on the image sensor plane of a usual mirror-less DSLR camera. The captured spatio-angular light ray distribution is processed to form an array of sub-images and then used to synthesize a Fourier holographic stereogram. Due to the large f/# of the micro lens array available at the time of the experiment, an aperture and the synthetic aperture technique were additionally applied to capture a sufficient angular range. The experimental results show that the hologram of the 3D scene is synthesized and optically reconstructed successfully, exhibiting accommodation and motion parallax of the reconstructed images.
This work was partly supported by the IT R&D program of MSIP/MOTIE/KEIT [10039169, Development of Core Technologies for Digital Holographic 3-D Display and Printing System]. This work was also partly supported by an INHA UNIVERSITY Research Grant (INHA-47294).
References and links
J. Rosen and G. Brooker, “Digital spatially incoherent Fresnel holography,” Opt. Lett. 32(8), 912–914 (2007). [CrossRef] [PubMed]
N. T. Shaked, B. Katz, and J. Rosen, “Review of three-dimensional holographic imaging by multiple-viewpoint-projection based methods,” Appl. Opt. 48(34), H120–H136 (2009). [CrossRef] [PubMed]
Y. Sando, M. Itoh, and T. Yatagai, “Holographic three-dimensional display synthesized from three-dimensional Fourier spectra of real existing objects,” Opt. Lett. 28(24), 2518–2520 (2003). [CrossRef] [PubMed]
Y. Rivenson, A. Stern, and J. Rosen, “Compressive multiple view projection incoherent holography,” Opt. Express 19(7), 6109–6118 (2011). [CrossRef] [PubMed]
J.-H. Park, M.-S. Kim, G. Baasantseren, and N. Kim, “Fresnel and Fourier hologram generation using orthographic projection images,” Opt. Express 17(8), 6320–6334 (2009). [CrossRef] [PubMed]
T. Mishina, M. Okui, and F. Okano, “Calculation of holograms from elemental images captured by integral photography,” Appl. Opt. 45(17), 4026–4036 (2006). [CrossRef] [PubMed]
K. Wakunami and M. Yamaguchi, “Calculation for computer generated hologram using ray-sampling plane,” Opt. Express 19(10), 9086–9101 (2011). [CrossRef] [PubMed]
R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Stanford Tech. Rep. CTSR 2005–02 (Stanford University, 2005).
J.-H. Park, K. Hong, and B. Lee, “Recent progress in three-dimensional information processing based on integral imaging,” Appl. Opt. 48(34), H77–H94 (2009). [CrossRef] [PubMed]
F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. 36(7), 1598–1603 (1997). [CrossRef] [PubMed]
H. Navarro, J. C. Barreiro, G. Saavedra, M. Martínez-Corral, and B. Javidi, “High-resolution far-field integral-imaging camera by double snapshot,” Opt. Express 20(2), 890–895 (2012). [CrossRef] [PubMed]
C. B. Burckhardt, “Use of a random phase mask for the recording of Fourier transform holograms of data masks,” Appl. Opt. 9(3), 695–700 (1970). [CrossRef] [PubMed]