Real-time integral imaging system for light field microscopy

Jonghyun Kim, Jae-Hyun Jung, Youngmo Jeong, Keehoon Hong, and Byoungho Lee


Optics Express, Vol. 22, Issue 9, pp. 10210-10220 (2014)
http://dx.doi.org/10.1364/OE.22.010210


Abstract

We propose a real-time integral imaging system for light field microscopy. To implement a live in-vivo 3D experimental environment for multiple experimenters, we generate elemental images for an integral imaging system from the light field captured by a light field microscope in real-time. We apply an f-number matching method in elemental image generation to reconstruct an undistorted 3D image. The implemented system produces real and orthoscopic 3D images of micro objects at 16 frames per second. We verify the proposed system via experiments using Caenorhabditis elegans.

© 2014 Optical Society of America

1. Introduction

Visualizing a real object in three-dimensional (3D) space has been one of the main issues in the 3D industry [1–15]. It is possible to extract 3D information from objects using a multi-camera system [3], a time-of-flight camera [15], a structured light method [16], or a lens array [17]. Among them, only a few methods actually function in real-time with 3D display systems such as stereoscopy, multi-view, or integral imaging, which is a key technology for 3D broadcasting [3, 6, 11, 15]. Since stereoscopy and multi-view systems provide several view images, their base images can be easily generated by means of a multi-camera method [3, 18]. However, the multi-camera capturing method requires a large space, delicate alignment between the cameras, and a relatively high computational load for post-processing.

For an integral imaging system, a set of elemental images can be obtained with a camera and a lens array, as introduced by Lippmann in 1908 [19]. The lens array capturing method is less bulky and is not constrained by alignment problems [1, 13, 14]. However, if the captured image is used as the set of elemental images without post-processing, the reconstructed 3D image is pseudoscopic [1, 8–14]. In the past decades, several methods have been proposed for solving the pseudoscopic problem, but most cannot satisfy real-time conditions [8], cannot provide a real 3D image [1], or require special optical devices [6, 7]. Recently, a simple pixel mapping algorithm was proposed that can produce real and orthoscopic 3D images in real-time [9–11].

Until now, however, these 3D visualization studies have been limited to real-scale objects. Extracting 3D information from a micro object differs from the capturing methods for 3D display systems described above. Various optical microscopes with high-resolving-power objectives are used to acquire 3D information from micro objects [20–29]. Ordinary optical microscopes provide two-dimensional (2D) orthogonal images with a limited depth of field, and the entire structure of a micro object can only be estimated by moving the stage up and down [20]. Several approaches for acquiring 3D information, including confocal microscopy and near-field scanning optical microscopy, have been developed over the past decades [20, 21]. However, most of these procedures are time-consuming and are not appropriate for observing in-vivo micro objects in real-time.

Light field microscopy (LFM) is a type of single-shot microscopy that reconstructs the 3D structure of micro objects using a micro lens array [22–24]. LFM can provide perspective views and focal stacks in real-time by adding a simple micro lens array to a conventional optical microscope [22]. Furthermore, LFM greatly extends the depth of field, permitting researchers to extract 3D volume information from a micro object in one shot. However, the resolution of the directional view images obtained by LFM is limited by the number of lenses in the micro lens array [22]. A number of studies have been proposed to improve the image quality of LFM by lens array shifting [25], light field illumination [23], 3D deconvolution [24], or fluorescence scanning methods [26]. Until now, however, studies on LFM have mainly dealt with 3D reconstruction in virtual space rather than in real space.

Since LFM has major advantages in one-shot imaging and real-time calculation, it would be natural to build a real-time visualization system or 3D interactive system around LFM. However, to the best of our knowledge, a real-time 3D display system for LFM has not been developed or even discussed. There is a structural symmetry between the LFM system and integral imaging: both use a lens array to acquire and visualize 3D information [12, 22, 27]. Some studies have already applied integral imaging principles to LFM [25, 28], and by using this symmetry between LFM and integral imaging, a micro object can be optically reconstructed in 3D.

In Section 2, the real-time elemental image generation method with f-number matching is introduced, and an image simulation is presented. The optical design and experimental setup are described in Section 3. Experimental results for the proposed system with C. elegans are shown in images and videos in Section 4. Finally, the paper concludes in Section 5.

2. Real-time elemental image generation from captured light field with f-number matching

2.1 Light field microscopy and integral imaging

As mentioned above, it is possible to reconstruct a 3D image using an integral imaging system with the light field captured by LFM. Figure 1 shows a schematic diagram of the proposed method.
Fig. 1 Schematic diagram of the proposed method: (a) light field capturing with LFM and (b) 3D image reconstruction with integral imaging.
The LFM system is composed of an objective lens and a micro lens array located at the image plane of the objective, as shown in Fig. 1(a) [22]. The light field cone from a point of the micro object at the focal plane is recorded at the sensor behind one lens of the micro lens array, while the light field from a point away from the focal plane is imaged onto pixels behind a number of lenses. Each pixel behind each lens records the light field for a different direction, as illustrated by the colors in Fig. 1(a). The aperture of the light field cone is determined by the numerical aperture (NA) of the objective rather than that of the micro lens array. Since it is easier to build one objective lens with high resolving power than the thousands of lenses in a micro lens array, LFM takes advantage of the high resolving power of the objective lens [22].

Figure 1(b) shows the 3D reconstruction of an enlarged micro object obtained with an integral imaging system. The integral imaging system consists of a flat display panel and a lens array, as shown in Fig. 1(b). To reconstruct a 3D image with integral imaging, a set of elemental images should be generated from the captured light field. In this study, we applied the real-time pixel mapping algorithm proposed by Jung et al. in 2013 to solve the pseudoscopic problem [9]. By locating the captured pixels at the proper positions in the elemental image, a real and orthoscopic 3D image can be obtained, as shown in Fig. 1(b). The observer can also instantly adjust the depth plane of the reconstructed 3D image by changing the parameters of the elemental image generation algorithm [9–11].

Since the pitch of the display lens array is usually larger than that of the micro lens array in LFM, the reconstructed 3D image is magnified not only by the magnification of the objective but also by the lens pitch difference. Assuming that the number of sensor pixels equals the number of display pixels, the lateral magnification factor M_xy is the product of the objective magnification and the lens pitch ratio:

$$M_{xy} = M_o \times \frac{p_d}{p_c} \tag{1}$$

where M_o is the magnification of the objective, p_d is the lens pitch of the display lens array, and p_c is the lens pitch of the micro lens array in the capturing stage.

However, the axial magnification factor M_z is determined by the lateral magnification factor and the angular resolution. Since the maximum angle of the light field cone is set by the NA of the objective lens in LFM, the NA of the lenses in the display lens array should equal that of the objective lens in order to reconstruct correct depth information. M_z is derived as follows:

$$M_z = M_o \times \frac{p_d}{p_c} \times \frac{\mathrm{NA}_o}{\mathrm{NA}_d} \tag{2}$$

where NA_d is the NA of the display lens array and NA_o is the NA of the objective lens in LFM. In practice, the NA of an individual lens in a display lens array is much lower than that of the objective lens. Therefore, the depth information of the reconstructed 3D image is distorted unless additional image processing is applied [32].
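As a rough numerical illustration (ours, not the authors'), the sketch below evaluates Eqs. (1) and (2) in Python with the parameters of the implemented system: a 40×/0.65 objective, 125 μm micro lenses, and a 1 mm display lens array with 3.3 mm focal length, whose NA we estimate from Eq. (3) below as p/(2f). The resulting M_z is a back-of-envelope figure, not a value quoted in the paper.

    # Back-of-envelope evaluation of Eqs. (1)-(2); parameters from Table 1 / Section 2.2.
    M_o = 40.0               # objective magnification (40x/0.65 NA)
    p_c = 0.125              # micro lens pitch in mm (125 um)
    p_d = 1.0                # display lens pitch in mm
    NA_o = 0.65              # objective NA
    NA_d = p_d / (2 * 3.3)   # display lens NA from Eq. (3): NA = p / (2f), f = 3.3 mm

    M_xy = M_o * (p_d / p_c)    # Eq. (1): lateral magnification, 320x
    M_z = M_xy * (NA_o / NA_d)  # Eq. (2): axial magnification before f-number matching
    print(f"M_xy = {M_xy:.0f}x, M_z = {M_z:.0f}x")  # 320x and roughly 1370x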

2.2 Real-time elemental image generation method with f-number matching

To reconstruct a 3D image of a micro object without distortion, careful consideration of the f-number is required. The f-number N of a lens is defined as

$$N = \frac{f}{p} = \frac{1}{2\,\mathrm{NA}} \tag{3}$$

where f is the focal length and p is the diameter of the lens. As mentioned above, the NAs of the objective and the display lens array are usually different, so their f-numbers should be matched by image processing. In practice, it is much more difficult to make a high-NA lens array than a high-NA objective, so only a fraction of the captured information can be optically reconstructed as a 3D image. Nevertheless, expressing the light field of a micro object without distortion is important for examining its 3D shape, and the f-number matching method provides correct 3D information to experimenters.
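To make the two mismatches concrete, the following sketch computes the relevant f-numbers from Eq. (3). The image-side f-number of the objective, approximated as M/(2 NA), is a standard paraxial estimate that we introduce here for illustration; it is not a quantity stated in the paper.

    def f_number(focal_length_mm, pitch_mm):
        """Eq. (3): N = f / p for a single lens."""
        return focal_length_mm / pitch_mm

    def image_side_f_number(magnification, na):
        """The image-side NA of an objective is roughly NA / M, so N ~ M / (2 NA)."""
        return magnification / (2.0 * na)

    N_micro = f_number(2.5, 0.125)                # micro lens array: N = 20
    N_display = f_number(3.3, 1.0)                # display lens array: N = 3.3
    N_objective = image_side_f_number(40, 0.65)   # 40x/0.65 objective: N ~ 30.8

    # N_objective > N_micro: each image circle underfills its lens footprint,
    # so the outer sensor region receives no light (Section 2.2, Fig. 2).
    # The further mismatch with the display lens array means only part of each
    # lens image (the sky blue region in Fig. 2) can be optically reconstructed.
    print(N_micro, N_display, N_objective)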

Figure 2 shows an example of the light field of C. elegans captured by the LFM system.
Fig. 2 A part of the captured light field of C. elegans obtained by LFM with a 40×/0.65 NA objective, a Fresnel Tech. 125 μm micro lens array (focal length 2.5 mm), an Olympus BX53T optical microscope, and an AVT Prosilica GX2300C CCD: (red) 2 by 2 micro lens array region, (yellow) objective aperture stop, (sky blue) region that can be expressed with the display lens array (1 mm lens array with 3.3 mm focal length).
We used a 40×/0.65 NA objective, a Fresnel Tech. 125 μm micro lens array with 2.5 mm focal length, an Olympus BX53T optical microscope, and an AVT Prosilica GX2300C charge-coupled device (CCD) camera to build the LFM system. In Fig. 2, the red lines indicate the micro lens array borders, the yellow circles show the circular aperture of the objective, and the sky blue rectangles indicate the region that can be expressed with a typical 1 mm lens array with 3.3 mm focal length used in integral imaging. Detailed specifications of the implemented system are listed in Table 1.

Table 1. Specification of Implemented Real-time Integral Imaging System for LFM

Due to the mismatch between the image-side f-number of the objective and the f-number of the micro lens array, the outer region of the sensor cannot receive a light field signal [22, 33], and the circular aperture stop inside the objective lens forms an array of image circles. However, the expressible region is only a small part of the captured light because of another f-number mismatch, between the objective and the display lens array, as shown in Fig. 2. Fortunately, the resolution of the CCD is usually much greater than that of the display device, so the captured light field carries enough information to generate the elemental images. The resolution of the captured image for a single lens is 31 × 31 pixels, whereas the display panel pixel pitch is 125 μm and the pitch of the display lens array is 1 mm, so the resolution of a single elemental image is 8 × 8; the set of elemental images is therefore generated by undersampling. The resolution of the reconstructed 3D image can be improved by cropping wasted regions, such as the black regions caused by the circular aperture, before the undersampling process. Nevertheless, the captured light field should be stored for full-resolution post-processing, regardless of the elemental image generation method used.
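A minimal sketch of this crop-and-undersample step is shown below, assuming a single-channel light field image. The 31 × 31 per-lens resolution and the 8 × 8 elemental image size are taken from the text; the crop size (crop_px) and the nearest-neighbour sampling are placeholder choices of ours, since the paper does not specify them.

    import numpy as np

    def generate_elemental_images(light_field, lens_px=31, crop_px=16, out_px=8):
        """Crop the expressible (sky blue) region of each per-lens sub-image and
        undersample it to the display's 8 x 8 elemental image resolution."""
        rows = light_field.shape[0] // lens_px
        cols = light_field.shape[1] // lens_px
        out = np.zeros((rows * out_px, cols * out_px), dtype=light_field.dtype)
        margin = (lens_px - crop_px) // 2   # center the crop on the image circle
        step = crop_px // out_px            # nearest-neighbour undersampling factor
        for i in range(rows):
            for j in range(cols):
                sub = light_field[i * lens_px:(i + 1) * lens_px,
                                  j * lens_px:(j + 1) * lens_px]
                crop = sub[margin:margin + crop_px, margin:margin + crop_px]
                out[i * out_px:(i + 1) * out_px,
                    j * out_px:(j + 1) * out_px] = crop[::step, ::step]
        return out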

To generate an accurate elemental image from the captured light field, only the sky blue regions in Fig. 2 should be used; otherwise, the reconstructed 3D image is distorted in depth. Therefore, the sky blue regions are cropped first. Figure 3 shows the principle of the elemental image generation process for one part of the captured light field.
Fig. 3 Method for generating an elemental image from a captured light field with f-number matching: (a) a part of the captured light field with LFM, (b) rearranged image by cropping image regions that can be expressed with the display lens array, and (c) generated elemental image using the pixel mapping algorithm (k = 0).
Figure 3(b) shows the image rearranged from the cropped regions. The pixel mapping algorithm is then applied to the rearranged image to produce a real and orthoscopic 3D image without the pseudoscopic problem. As mentioned above, the depth plane can be adjusted by changing the parameter k in the pixel mapping algorithm [9–11].

In this study, we set the parameter k to zero, which is the simplest way to solve the pseudoscopic problem: rotating each elemental image by 180 degrees. This method was introduced earlier by Okano et al. in conjunction with a real-time display [1]. With a conventional integral imaging pickup system, however, this algorithm provides only virtual orthoscopic images, because such a pickup system can capture 3D objects only behind the lens array [1, 8]. In the LFM system, by contrast, the micro lens array captures the light field relayed by the objective lens, and the experimenter can easily adjust the relayed focal plane by moving the stage up and down. Therefore, setting the algorithm parameter k to zero is best suited to the LFM system, because the depth planes need not be adjusted in post-processing [34]. Orthoscopic 3D images are obtained as both virtual and real images by rotating each elemental image [11, 29]. One can of course use other values of the parameter k in other cases (e.g., fitting the expressible depth range of the display system), but we conclude that this rotation method is optimal for the LFM system.
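For k = 0, the pixel mapping thus reduces to a per-block 180-degree rotation. The sketch below (our illustration of the step just described, not the authors' code) applies it to the elemental image array produced by the cropping sketch above.

    import numpy as np

    def rotate_elemental_images(elemental, out_px=8):
        """k = 0 pixel mapping: rotate every elemental image by 180 degrees,
        turning the pseudoscopic reconstruction into an orthoscopic one."""
        out = np.empty_like(elemental)
        for i in range(0, elemental.shape[0], out_px):
            for j in range(0, elemental.shape[1], out_px):
                block = elemental[i:i + out_px, j:j + out_px]
                out[i:i + out_px, j:j + out_px] = block[::-1, ::-1]  # 180-degree rotation
        return out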

3. Real-time integral imaging system for light field microscopy

Figure 5 shows the implementation of our proposed real-time integral imaging system for LFM.
Fig. 5 Implementation of the proposed real-time integral imaging system for LFM.
An incoherent light source located at the bottom illuminates the micro object, and the transmitted light is imaged by the micro lens array. In practice, a relay lens (Canon EF 100 mm f/2.8 Macro USM) is used to image the light field from the micro lens array onto the CCD sensor, as shown in Fig. 5. The captured light field information is transmitted to the PC at a frame rate of 32 FPS. Half of the captured images are used for elemental image generation, because the implemented pixel mapping algorithm can provide only about 16 FPS. For integral imaging, a high-resolution liquid crystal display (IBM 22 inch, 3840 × 2400) and a 1 mm lens array with a 3.3 mm focal length are used, as listed in Table 1.
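The timing described above amounts to a simple frame-dropping loop. The hypothetical sketch below shows it, reusing the two functions sketched in Section 2; grab_frame and show_on_display are placeholder stubs, since the paper does not describe the CCD or display drivers.

    import numpy as np
    from itertools import count

    def grab_frame():
        """Placeholder for the CCD driver (AVT Prosilica GX2300C, 32 FPS);
        returns a synthetic light field image here."""
        return np.zeros((31 * 8, 31 * 8), dtype=np.uint8)

    def show_on_display(elemental):
        """Placeholder for the display driver (IBM 3840 x 2400 LCD panel)."""
        pass

    for frame_index in count(1):
        frame = grab_frame()
        if frame_index % 2 == 0:   # use every other frame: 32 FPS in, ~16 FPS out
            continue
        show_on_display(rotate_elemental_images(generate_elemental_images(frame)))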

For real-time operation, the alignment of the optical devices is the most important issue; otherwise, image rectification is needed, which usually takes far more time than the pixel mapping algorithm. In the proposed system, an optical jig was manufactured to calibrate the optical elements, as shown in Fig. 5. The tilt angle of the micro lens array is aligned with the display, and the lens border and resolution are manually entered into the elemental image generation code as initial conditions. Once calibrated, the implemented system is robust to external vibrations during an experiment.

4. Experimental results

Using the captured light field, we performed an integral imaging experiment. Figure 7(a) shows perspective views of the 3D images reconstructed from the generated elemental images.
Fig. 7 Experimental results for the proposed real-time integral imaging system for LFM: (a) perspective views of reconstructed 3D images with the generated elemental image (Media 2) and (b) conceptual video of a real-time 3D experiment (Media 3).
As shown in Fig. 7(a), the developed system provides an orthoscopic 3D image in real-time (see Media 2). With this real-time capability, real-time 3D experiments can be performed. Figure 7(b) shows the conceptual setup for the proposed 3D experiment. The experimenter observes a micro object in 3D and in real-time, and instant feedback with the microscope is possible (see Media 3). Due to the multiple viewpoints of integral imaging, multiple experimenters can share in the microscopic experiment. These experimental results validate the proposed real-time system.

5. Conclusion

In this study, we proposed a real-time integral imaging system for use with an LFM system. We generated elemental images for an integral imaging system from the light field captured with LFM in real-time. We applied an f-number matching method in elemental image generation to reconstruct an undistorted 3D image. The implemented system provides real and orthoscopic 3D images of micro objects at 16 FPS. We verified the proposed system with experiments using C. elegans. This system could be used for microscopic experiments shared by multiple experimenters and observers.

Acknowledgments

This research was supported by 'The Cross-Ministry Giga KOREA Project' of The Ministry of Science, ICT and Future Planning, Korea [GK13D0200, Development of Super Multi-View (SMV) Display Providing Real-Time Interaction]. We wish to thank Professor Junho Lee (Department of Biological Sciences, Seoul National University) for the generous donation of the C. elegans samples used in this study.

References and links

1. F. Okano, J. Arai, H. Hoshino, and I. Yuyama, "Three-dimensional video system based on integral photography," Opt. Eng. 38(6), 1072–1077 (1999).
2. B. Javidi, S. Yeom, I. Moon, and M. Daneshpanah, "Real-time automated 3D sensing, detection, and recognition of dynamic biological micro-organic events," Opt. Express 14(9), 3806–3829 (2006).
3. W. J. Matusik and H. Pfister, "3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes," ACM Trans. Graph. 23(3), 814–824 (2004).
4. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, "Real-time pickup method for a three-dimensional image based on integral photography," Appl. Opt. 36(7), 1598–1603 (1997).
5. G. Li, K.-C. Kwon, K.-H. Yoo, S.-G. Gil, and N. Kim, "Real-time display for real-existing three-dimensional objects with computer-generated integral imaging," in Proceedings of the International Meeting on Information Display (IMID), Daegu, Korea, Aug. 2012 (Society for Information Display and Korean Society for Information Display, 2012), pp. 471–472.
6. J. Arai, F. Okano, H. Hoshino, and I. Yuyama, "Gradient-index lens-array method based on real-time integral photography for three-dimensional images," Appl. Opt. 37(11), 2034–2045 (1998).
7. J. Arai, T. Yamashita, M. Miura, H. Hiura, N. Okaichi, F. Okano, and R. Funatsu, "Integral three-dimensional image capture equipment with closely positioned lens array and image sensor," Opt. Lett. 38(12), 2044–2046 (2013).
8. M. Martínez-Corral, B. Javidi, R. Martínez-Cuenca, and G. Saavedra, "Formation of real, orthoscopic integral images by smart pixel mapping," Opt. Express 13(23), 9175–9180 (2005).
9. J.-H. Jung, J. Kim, and B. Lee, "Solution of pseudoscopic problem in integral imaging for real-time processing," Opt. Lett. 38(1), 76–78 (2013).
10. J. Kim, J.-H. Jung, and B. Lee, "Real-time pickup and display integral imaging system without pseudoscopic problem," Proc. SPIE 8643, 864303 (2013).
11. J. Kim, J.-H. Jung, C. Jang, and B. Lee, "Real-time capturing and 3D visualization method based on integral imaging," Opt. Express 21(16), 18742–18753 (2013).
12. B. Lee, "Three-dimensional displays, past and present," Phys. Today 66(4), 36–41 (2013).
13. J.-H. Park, K. Hong, and B. Lee, "Recent progress in three-dimensional information processing based on integral imaging," Appl. Opt. 48(34), H77–H94 (2009).
14. J. Hong, Y. Kim, H.-J. Choi, J. Hahn, J.-H. Park, H. Kim, S.-W. Min, N. Chen, and B. Lee, "Three-dimensional display technologies of recent interest: principles, status, and issues [Invited]," Appl. Opt. 50(34), H87–H115 (2011).
15. M. Kawakita, K. Iizuka, H. Nakamura, I. Mizuno, T. Kurita, T. Aida, Y. Yamanouchi, H. Mitsumine, T. Fukaya, H. Kikuchi, and F. Sato, "High-definition real-time depth-mapping TV camera: HDTV axi-vision camera," Opt. Express 12(12), 2781–2794 (2004).
16. E.-H. Kim, J. Hahn, H. Kim, and B. Lee, "Profilometry without phase unwrapping using multi-frequency and four-step phase-shift sinusoidal fringe projection," Opt. Express 17(10), 7818–7830 (2009).
17. J.-H. Jung, K. Hong, G. Park, I. Chung, J.-H. Park, and B. Lee, "Reconstruction of three-dimensional occluded object using optical flow and triangular mesh reconstruction in integral imaging," Opt. Express 18(25), 26373–26387 (2010).
18. J.-H. Jung, J. Yeom, J. Hong, K. Hong, S. W. Min, and B. Lee, "Effect of fundamental depth resolution and cardboard effect to perceived depth resolution on multi-view display," Opt. Express 19(21), 20468–20482 (2011).
19. G. Lippmann, "La photographie integrale," C. R. Acad. Sci. 146, 446–451 (1908).
20. P. Török and F. J. Kao, eds., Optical Imaging and Microscopy: Techniques and Advanced Systems (Springer, 2003).
21. E. Betzig and R. J. Chichester, "Single molecules observed by near-field scanning optical microscopy," Science 262(5138), 1422–1425 (1993).
22. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, "Light field microscopy," ACM Trans. Graph. 25(3), 924–934 (2006).
23. M. Levoy, Z. Zhang, and I. McDowall, "Recording and controlling the 4D light field in a microscope using microlens arrays," J. Microsc. 235(2), 144–162 (2009).
24. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, "Wave optics theory and 3-D deconvolution for the light field microscope," Opt. Express 21(21), 25418–25439 (2013).
25. Y. T. Lim, J. H. Park, K. C. Kwon, and N. Kim, "Resolution-enhanced integral imaging microscopy that uses lens array shifting," Opt. Express 17(21), 19253–19263 (2009).
26. A. Orth and K. Crozier, "Microscopy with microlens arrays: high throughput, high resolution and light-field imaging," Opt. Express 20(12), 13522–13531 (2012).
27. J.-H. Park, S.-W. Min, S. Jung, and B. Lee, "Analysis of viewing parameters for two display methods based on integral photography," Appl. Opt. 40(29), 5217–5232 (2001).
28. Y.-T. Lim, J.-H. Park, K.-C. Kwon, and N. Kim, "Analysis on enhanced depth of field for integral imaging microscope," Opt. Express 20(21), 23480–23488 (2012).
29. B. Lee and J. Kim, "Real-time 3D capturing-visualization conversion for light field microscopy," Proc. SPIE 8769, 876908 (2013).
30. A. Fire, S. Xu, M. K. Montgomery, S. A. Kostas, S. E. Driver, and C. C. Mello, "Potent and specific genetic interference by double-stranded RNA in Caenorhabditis elegans," Nature 391(6669), 806–811 (1998).
31. H. Lee, M. K. Choi, D. Lee, H. S. Kim, H. Hwang, H. Kim, S. Park, Y. K. Paik, and J. Lee, "Nictation, a dispersal behavior of the nematode Caenorhabditis elegans, is regulated by IL2 neurons," Nat. Neurosci. 15(1), 107–112 (2011).
32. J.-H. Park, H. Choi, Y. Kim, J. Kim, and B. Lee, "Scaling of three-dimensional integral imaging," Jpn. J. Appl. Phys. 44(1A), 216–224 (2005).
33. R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, "Light field photography with a hand-held plenoptic camera," Stanford Tech. Rep. CTSR 2005-02 (Stanford University, 2005).
34. C. Jang, J. Kim, J. Yeom, and B. Lee, "Analysis of color separation reduction through the gap control method in integral imaging," J. Inf. Disp. 15(2) (to be published).

OCIS Codes
(100.6890) Image processing : Three-dimensional image processing
(110.2990) Imaging systems : Image formation theory
(180.6900) Microscopy : Three-dimensional microscopy

ToC Category:
Microscopy

History
Original Manuscript: February 21, 2014
Revised Manuscript: April 14, 2014
Manuscript Accepted: April 15, 2014
Published: April 21, 2014




Supplementary Material


Media 1: MOV (2475 KB)
Media 2: MOV (8392 KB)
Media 3: MOV (5768 KB)
