Applied Optics

APPLICATIONS-CENTERED RESEARCH IN OPTICS

  • Editor: Joseph N. Mait
  • Vol. 50, Iss. 34 — Dec. 1, 2011
  • pp: H278–H284

Digitized holography: modern holography for 3D imaging of virtual and real objects

Kyoji Matsushima, Yasuaki Arima, and Sumio Nakahara


Applied Optics, Vol. 50, Issue 34, pp. H278-H284 (2011)
http://dx.doi.org/10.1364/AO.50.00H278


Abstract

Recent developments in computer algorithms, image sensors, and microfabrication technologies make it possible to digitize the whole process of classical holography. This technique, referred to as digitized holography, allows us to create fine spatial three-dimensional (3D) images composed of virtual and real objects. In the technique, the wave field of a real object is captured over a wide area and at very high resolution using synthetic aperture digital holography. The captured field is incorporated into virtual 3D scenes that include two-dimensional digital images and 3D polygon-mesh objects. The synthetic field is optically reconstructed using the technique of computer-generated holograms. The reconstructed 3D images present all depth cues like classical holograms but, unlike classical holograms, are digitally editable, archivable, and transmittable. The synthetic hologram, printed by a laser lithography system, has a wide viewing zone in full parallax and gives viewers a strong sensation of depth that has never been achieved by conventional 3D systems. An actual hologram is presented along with the details of the technique to verify the proposed approach.

© 2011 Optical Society of America

1. Introduction

In classical holography, the wave field emitted from a real object is recorded on light-sensitive films in the form of a fringe pattern generated by optical interference with a reference wave. The wave field of the object is optically reconstructed by diffraction with the fringe pattern after chemical processing of the film. This is the three-dimensional (3D) spatial image produced by classical holography. Therefore, we need a real object to create a 3D image in classical holography. Classical holography makes it possible to reconstruct brilliant 3D images that provide almost all depth cues, because holograms reconstruct the light of the recorded 3D scene itself. However, these holograms cannot be stored digitally and transmitted through digital networks. It is also almost impossible to edit the 3D scene after recording the interference fringe. These features are useful for some specific purposes such as security but are inconvenient in 3D imaging.

Two types of techniques are continuously being developed to advance classical holography. One technique, commonly referred to as digital holography (DH) [1], captures the interference fringe pattern using digital image sensors. In DH, images are numerically reconstructed by digital processing of the captured fields. However, these are not 3D images but two-dimensional (2D) digital images displayed on a screen or printed on paper. Since the captured data contain the phase information of the light, this technique is mainly used for microscopy or for fields of metrology such as flow measurement.

The other technique is the generation of computer-generated holograms (CGHs) [2]. This technique numerically generates a fringe pattern using a digital computer and reconstructs wave fields of light by diffraction from the printed or displayed fringe pattern. In principle, this technique is capable of producing any light if one knows what light should be produced. In the case of CGHs, a real object is no longer required, but all depth cues are reconstructed just as in classical holograms. This is an ideal feature for modern 3D technology; thus, 3D imaging with CGHs is sometimes referred to as the ultimate 3D technology. However, for a long time it was not possible to create fine CGHs of virtual 3D scenes such as those in modern computer graphics (CG). Instead of being used for 3D imaging, CGHs were treated as optical components such as optical filters. This was mainly due to the gigantic display resolution necessary to create 3D images using CGHs. Once the wave field is provided in the form of numerical data, a CGH can reconstruct it. However, it is extremely difficult, even using modern computers, to compute the high-definition wave field emitted from a virtual 3D scene. Printing or displaying 3D images using CGHs is also very difficult because of the extremely high definition necessary for reconstructing fine 3D images.

However, the recent development of polygon-based computer algorithms [3] allows us to calculate the high-definition wave field of a completely virtual 3D scene whose shape and properties are given by a numerical model. We have reported fully synthetic full-parallax computer holograms [4–9] that were printed with laser lithography equipment developed for fabricating photomasks and available on the market [4]. These synthetic holograms are composed of more than a billion pixels and reconstruct brilliant 3D images of occluded virtual 3D scenes. The reconstructed 3D images are not motion pictures but stills, at least for now. However, the quality of the 3D images is comparable to that of classical holography. The reconstructed spatial 3D images are quite different from those of currently available 3D systems, which provide only binocular disparity; they give viewers a strong sensation of depth that conventional 3D systems have never provided. In this paper, we refer to the technique for creating CGHs as computer holography and to the created hologram as a computer hologram. Figure 1 schematically shows the concept of computer holography.

Fig. 1. Schematic illustration of computer holography.

Conventional CGHs or computer holograms reported so far mainly reconstruct virtual 3D scenes or objects. To reconstruct real objects through computer holography, three approaches can currently be adopted. The easiest is to measure the shape of the 3D object using a laser rangefinder or 3D scanner and texture-map a photograph of the object onto the obtained polygon mesh [9,10]. However, the 3D image obtained with this approach may be regarded as a type of synthetic image rather than a real image. The second approach is to use a technique based on multiple-viewpoint projection [11]. This may be the most promising approach, but as far as we know, no hologram of quality comparable to classical holograms has been reported for this technique.

The third approach is to capture real wave fields using DH. This has been attempted in order to reconstruct real objects through electro-holography [12,13]. However, electro-holography currently cannot reconstruct fine 3D images in the first place; the reconstructed images are not comparable to classical holograms or to high-definition computer holograms. It is theoretically possible to capture any object field using DH, but it is not easy in practice to capture real fields that meet the following two requirements for creating the high-definition computer holograms mentioned above. The first requirement is that the sampling interval of the captured field be small enough to provide a large viewing zone in the optical reconstruction; the interval should not exceed one micron. The second requirement is a capturing area comparable in size to the created computer hologram, the side length of which is, for example, approximately 10 cm. A large capturing area also leads to a larger viewing zone and to the ability to reconstruct objects as large as the hologram itself. Both requirements lead to capturing the field over a very large number of pixels. Unfortunately, image sensors currently available on the market do not meet these requirements.

To resolve this problem, we use lensless-Fourier synthetic aperture digital holography [14,15] (LFSA-DH), a type of DH using a spherical reference wave. LFSA-DH resolves the sensor-related problems of capturing real fields and makes it possible to reconstruct 3D images of a real object through computer holography. This means that the whole process of classical holography is replaced by digital counterparts; thus, we refer to this technique as digitized holography, as in Fig. 1. Digitized holography allows us to digitally edit, archive, transmit, and optically reconstruct the wave fields of real existing 3D objects. In addition, the real wave field can be mixed with virtual 3D scenes composed of digital 2D images and polygon-mesh 3D objects. The details of the technique are presented in this paper, and an actual high-definition computer hologram is demonstrated to verify the reconstruction of a mixed 3D scene including real and virtual objects.

2. Capturing Large-Scale Wave Fields

Capturing the wave fields of real existing objects using an image sensor is simply the digital counterpart of recording in classical holography. However, impressive synthetic holograms commonly need a display resolution of at least a billion pixels and a physical resolution of less than one micron [4]. Wave fields captured by conventional DH do not meet these requirements, because even state-of-the-art sensors have no more than tens of millions of pixels, and their resolution does not reach one micron. To resolve the problem, we use LFSA-DH [15].

2.A. Principle for Reducing Sampling Intervals

In LFSA-DH, the wave field of an object is obtained by Fourier transformation of the field captured by the image sensor using a phase-shifting technique [16], as shown in Fig. 2. The sampling intervals of the Fourier-transformed field in the image plane are [15]
Δx = λdR/(Nxδx),  Δy = λdR/(Nyδy),
(1)
where λ and dR are the wavelength and the distance between the center of the spherical reference wave and the image sensor, respectively. The number of sensor pixels and the sensor pitches are Nx×Ny and δx×δy, respectively. Note that, according to Eq. (1), the sampling intervals of the Fourier-transformed field are not the same as those of the image sensor; they depend directly on the distance dR and the sampling numbers Nx and Ny. This means that the sampling intervals can be controlled through these parameters so as to fit high-definition computer holography.
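Eq. (1) is simple enough to check numerically. The following sketch (the parameter values are illustrative, not the experiment's settings, and the function names are ours) computes the image-plane sampling intervals and inverts the relation to choose dR for a target interval:

```python
# Sketch of Eq. (1): image-plane sampling intervals in lensless-Fourier DH.
# All numeric values used here are illustrative.

def sampling_intervals(wavelength, d_r, n_x, n_y, pitch_x, pitch_y):
    """Return (dx, dy), the sampling intervals of the Fourier-transformed
    field: dx = lambda * d_R / (N_x * delta_x), and likewise for dy."""
    return (wavelength * d_r / (n_x * pitch_x),
            wavelength * d_r / (n_y * pitch_y))

def reference_distance(wavelength, target_dx, n_x, pitch_x):
    """Invert Eq. (1): choose d_R so the interval comes out to target_dx."""
    return target_dx * n_x * pitch_x / wavelength
```

Since dR is a free parameter of the setup, `reference_distance` reflects how the distance is chosen in practice so that the transformed field lands on the desired pixel pitch.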

Fig. 2. Coordinate system and geometry of lensless-Fourier digital holography.
Fig. 3. Capture of large-scale wave fields using the synthetic aperture technique.

2.B. Principle for Increasing the Sampling Cross Section

Since the sampling intervals decrease as the numbers of sensor pixels Nx and Ny increase, the synthetic aperture technique is used to increase the effective number of sensor pixels [14,15]. Here, the lensless-Fourier setup using a spherical reference wave has the advantage that, unlike the case for plane reference waves, the spatial frequency of the fringe does not increase at the edge of the sensor plane.

In synthetic aperture DH, the image sensor is mechanically translated and captures the wave field at different positions. Here, the sensor shift is set smaller than the sensor area so that the fields captured at adjacent positions overlap, as shown in Fig. 3. This overlap area is used to avoid translation errors; i.e., the sensor positions are measured exactly using a correlation function of the captured fields [14]. As a result, all captured fields can be integrated into a single large-scale wave field.
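The correlation-based registration of overlapping tiles can be sketched as follows; this is a minimal FFT cross-correlation illustration of the idea in [14], not the authors' implementation:

```python
import numpy as np

def register_offset(field_a, field_b):
    """Estimate the integer pixel shift of field_a relative to field_b
    from the peak of their FFT-based cross-correlation."""
    cross = np.fft.ifft2(np.fft.fft2(field_a) * np.conj(np.fft.fft2(field_b)))
    peak = np.unravel_index(np.argmax(np.abs(cross)), cross.shape)
    # Wrap peak indices to signed shifts (FFT correlation is circular).
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, cross.shape))
```

Once the offsets between neighboring tiles are known, each captured field can be pasted into a single large array at its measured position, forming the synthetic aperture.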

2.C. Experiment for Capturing the Large-Scale Wave Field through LFSA-DH

The experimental setup for capturing large-scale wave fields through LFSA-DH is shown in Fig. 4. The image sensor with 3000×2200 pixels (Lumenera Lw625) is mechanically translated by a computer-controlled motor stage. The fringe pattern is captured three times at each position to obtain a complex wave field [17], using the phase shift provided by the mirror M3 mounted on a piezo phase-shifter.

Fig. 4. Experimental setup for capturing a large wave field by synthetic aperture DH. M: mirror, BS: beam splitter, RP: retarder plate, SF: spatial filter.
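As an illustration of how a complex field is recovered from three phase-shifted captures, the following sketch assumes a generic three-step scheme with reference phases 0, 2π/3, and 4π/3; the actual phase steps of [17] may differ:

```python
import numpy as np

def complex_field_from_three_steps(i0, i1, i2):
    """Recover O * conj(R) from three interferograms I_k captured with
    reference phase shifts theta_k = 0, 2*pi/3, 4*pi/3. For these equally
    spaced steps, sum_k I_k * exp(i*theta_k) = 3 * O * conj(R)."""
    t = np.exp(1j * np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3]))
    return (i0 * t[0] + i1 * t[1] + i2 * t[2]) / 3.0
```

With a unit-amplitude reference of known phase, dividing out conj(R) yields the object field O itself, which is then Fourier-transformed as in Subsection 2.A.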

Amplitude images of the captured and Fourier-transformed fields are shown in Figs. 5(a) and 5(b), respectively. The total field is obtained by stitching the individual fields captured at 8×12 positions. The total cross section of the captured field is 77×80 mm². The parameters used for capturing are summarized in Table 1. Here, dR is the distance between SF3, which generates the spherical reference wave, and the sensor plane, as in Fig. 4; it is set to 21.5 cm in this experiment. This distance is a free parameter and is determined using Eq. (1) so that the sampling intervals of the field after Fourier transformation are exactly 1.0 μm × 1.0 μm.

Fig. 5. Amplitude images of the captured (a) and Fourier-transformed fields (b).

Table 1. Parameters Used to Capture the Large-Scale Wave Field


3. Editing a 3D Scene in Digitized Holography

Captured large-scale fields are incorporated into a 3D scene that includes virtual objects such as polygon mesh 3D objects and digital 2D images. These elements comprising the 3D scene are referred to as components in this section.

3.A. Configuration of a 3D Scene

The coordinate system used to design 3D scenes is shown in Fig. 6. The center of the hologram is positioned at the origin of the global coordinates (X,Y,Z). All of the real and virtual objects composing the 3D scene are given by their wave fields, i.e., distributions of complex amplitudes sampled in planes parallel to the hologram. This is true even for 3D objects; in that case, the object fields are computed from the CG model and then incorporated into the 3D scene in a given plane. The wave field of component n has its own local coordinates (xn,yn). The origin of the local coordinates, denoted (Xn,Yn,Zn) in the global coordinates, defines the position of the component in the 3D scene and is determined by the designer of the scene. Computation of the whole wave field of the 3D scene begins with the component farthest from the hologram, whose field is u(X,Y,Z0), and ends in the hologram plane; the whole field of the scene is given by u(X,Y,ZN), where Z0 and ZN are the Z-positions of the farthest component and the hologram, respectively, and N is the total number of components. Note that ZN = 0 by the definition of the global coordinates, and the position of the nearest component is given by ZN−1. This sequential calculation is necessary when employing the silhouette method [4,18,19] to shield the light behind each object and prevent individual objects from appearing as see-through images.

Fig. 6. The coordinate system and geometry used to design the 3D scene and compute the whole wave field of the scene.

3.B. Principle of Light Shielding Employing the Silhouette Method

The light behind a real existing object must be shielded to correctly reconstruct the occluded scene. The silhouette method proposed for light shielding in fully synthetic holography is applied to the real object. The principle of light shielding for captured fields is shown in Fig. 7. Since the incident field behind the captured object should be shielded over the cross section of the object, the incident field is multiplied by a binary mask Mn(xn,yn) that corresponds to the silhouette of the object. The captured field on(xn,yn) is then added to the masked background field. This process is exactly the same as in the case of synthetic fields for virtual 2D and 3D objects [18,19]. This sequential light shielding is written as a recurrence formula:
u(X,Y,Zn+1) = PZn+1−Zn{u(X,Y,Zn)Mn(X−Xn,Y−Yn) + on(X−Xn,Y−Yn)},
(2)
where the symbol Pd{·} represents field propagation over the distance d. Note that the angular spectrum method [20] or the band-limited angular spectrum method [21] is used for the numerical propagation if the memory installed in the computer is sufficient to store the whole field; otherwise, segmented propagation [4] employing the off-axis (shifted angular spectrum) propagation method [5,22] is required.

Fig. 7. The principle of the silhouette method for real captured fields. The background field is propagated to the position Zn, where the captured object is arranged as the designer intended.
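One step of the recurrence in Eq. (2) can be sketched as follows, using the plain angular spectrum method [20] for the propagation operator Pd{·}; this is a minimal illustration, not the authors' segmented, band-limited implementation, and the square sampling grid is an assumption:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, distance):
    """Free-space propagation of a sampled complex field by the angular
    spectrum method; evanescent components are discarded."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    fxx, fyy = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength ** 2 - fxx ** 2 - fyy ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    h = np.exp(1j * kz * distance) * (arg > 0)  # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * h)

def shield_and_add(background, mask, object_field, wavelength, pitch, dz):
    """One step of Eq. (2): mask the incoming field with the object's
    silhouette, add the object field, then propagate by dz."""
    return angular_spectrum_propagate(background * mask + object_field,
                                      wavelength, pitch, dz)
```

Iterating `shield_and_add` from the farthest component to the hologram plane reproduces the sequential computation described in Subsection 3.A.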

3.C. Extraction of a Silhouette Mask from the Captured Wave Field

It is expected that the silhouette mask Mn(xn,yn) of a real object can be extracted from the captured field on(xn,yn), because the field retains the shape information of the object. However, in the amplitude image obtained from the captured field, the edge of the object is blurred by heavy defocusing, as shown in Fig. 5(b). This phenomenon is similar to the blurring in macro photography, where the depth of field is commonly small. Thus, an aperture should be used in capturing the field so that the numerically reconstructed amplitude image is sharp. In digital holography, however, this can be achieved simply by clipping a small part of the captured field after capture. Figure 8(b) shows the amplitude image yielded by Fourier transformation of a small part of the captured field in (a); the blurring disappears and the image is sharp. The silhouette mask in (c) is obtained by binarizing and inverting the amplitude image in (b).

Fig. 8. Extraction of a silhouette mask from the captured field. The amplitude image (b) is obtained from a small part of the captured field (a). The silhouette mask (c) obtained from the amplitude image (b).
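The mask-extraction procedure of this subsection (clip a small window of the captured field, Fourier-transform it, then binarize and invert) can be sketched as follows; the window coordinates and the relative threshold are hypothetical parameters:

```python
import numpy as np

def silhouette_mask(captured_field, crop, threshold):
    """Extract a binary silhouette mask from a captured field.
    crop = (y0, y1, x0, x1) selects a small numerical aperture, which
    enlarges the depth of field so the amplitude image is sharp.
    threshold is relative to the peak amplitude."""
    y0, y1, x0, x1 = crop
    window = captured_field[y0:y1, x0:x1]
    amplitude = np.abs(np.fft.fftshift(np.fft.fft2(window)))
    # Inverted binarization: 0 over the (bright) object, 1 elsewhere.
    return (amplitude < threshold * amplitude.max()).astype(float)
```

The resulting array plays the role of Mn(xn,yn) in Eq. (2) after resampling to the component's grid.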

4. Computer Hologram of a Mixed 3D Scene Including Virtual and Real Objects

A computer hologram named “Bear II” is created using the captured field presented in section 2.C.

4.A. Mixed 3D Scene

A real existing object, a toy bear whose wave field is captured through LFSA-DH, is mixed with a virtual 3D scene. The design of the scene is shown in Fig. 9. Here, the bear appears twice in the scene; i.e., the same captured wave field is used twice. Virtual objects such as the 2D wallpaper and the 3D bees are arranged behind or in front of the two bears. The occlusion relations are correctly reconstructed between the two bears, as well as between the bears and the objects behind and in front of them, as if real objects were placed at these positions. This kind of editing of a 3D scene is impossible in classical holography; only digitized holography allows it.

Fig. 9. Mixed 3D scene of “Bear II”. The scene includes the fields of the real object and the CG-modeled virtual objects.

4.B. Fabrication and Reconstruction of “Bear II”

After calculation of the whole wave field of the mixed 3D scene, the fringe pattern is generated by numerical interference with a reference wave and then quantized to produce a binary pattern. Finally, the binary amplitude hologram is fabricated using a laser lithography system. Bear II comprises approximately four billion pixels. Since the pixel pitches are 1.0 μm × 1.0 μm, the viewing angle is 37° in both the horizontal and vertical directions. The parameters used to create Bear II are summarized in Table 2.

Table 2. Summary of Parameters Used to Create Bear II


Photographs and videos of the optical reconstruction of Bear II are shown in Figs. 10 and 11. It is verified that the occlusion of the 3D scene is accurately reconstructed, with the appearance of the scene varying as the viewpoint changes.

Fig. 10. Photograph of the optical reconstruction of Bear II using reflected illumination of an ordinary red LED (Media 1).
Fig. 11. Photographs of the optical reconstruction of Bear II using transmitted illumination of a He-Ne laser (Media 2). Photographs (a)–(c) are taken from different viewpoints.

5. Discussion

Occluded scenes are reconstructed by a silhouette-masking technique that shields the field behind the object. However, silhouette masking is not a universally applicable technique for light shielding. For example, black shadows that are not seen from the in-line viewpoint appear around the object when it is viewed from an off-axis viewpoint, as shown in Fig. 12. This is most likely due to disagreement between the plane in which the real wave field is given and the plane in which the object has its maximum cross section. As shown in Fig. 13, viewers see the silhouette mask itself in this case; the background light cannot be seen even where it is not hidden by the object. In this case, however, the problem is easily resolved by numerically propagating the field a short distance so that the field plane is placed exactly at the maximum cross section of the object.

Fig. 12. Occlusion errors occurring in the case of off-axis viewpoints.
Fig. 13. Origin of the occlusion error in the cases where the field plane is not placed at the maximum cross section of the object.

Unfortunately, silhouette-masking does not work well in some cases where the object has severe self-occlusion or the silhouette shape of the object does not fit with the cross section.

6. Conclusion

We proposed a technique called digitized holography. Using this technique, the wave field of a real object is captured into a personal computer using lensless-Fourier synthetic aperture digital holography. The captured field is incorporated into a virtual 3D scene and optically reconstructed by computer holography. This means that the whole process of classical holography is replaced with modern digital processing of wave fields. As a result, the 3D images reconstructed by holography can be edited, stored, and transmitted by digital technology, unlike in classical holography. The reconstructed 3D image is a spatial image like its counterpart in classical holography and thus conveys a strong impression of depth to viewers.

The authors thank Mr. Nishi for his assistance in designing the 3D scene of Bear II. This work was supported by JSPS KAKENHI (21500114) and the Kansai University Research Grants: Grant-in-Aid for Encouragement of Scientists, 2011–2012.

References

1.

J. W. Goodman and R. W. Lawrence, “Digital image formation from electronically detected holograms,” Appl. Phys. Lett. 11, 77–79 (1967). [CrossRef]

2.

A. W. Lohmann and D. P. Paris, “Binary Fraunhofer holograms, generated by computer,” Appl. Opt. 6, 1739–1748 (1967). [CrossRef]

3.

K. Matsushima, “Computer-generated holograms for three-dimensional surface objects with shade and texture,” Appl. Opt. 44, 4607–4614 (2005). [CrossRef]

4.

K. Matsushima and S. Nakahara, “Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method,” Appl. Opt. 48, H54–H63 (2009). [CrossRef]

5.

K. Matsushima and S. Nakahara, “High-definition full-parallax CGHs created by using the polygon-based method and the shifted angular spectrum method,” Proc. SPIE 7619, 761913 (2010).

6.

K. Matsushima, M. Nakamura, and S. Nakahara, “Novel techniques introduced into polygon-based high-definition CGHs,” in Topical Meeting on Digital Holography and Three-Dimensional Imaging (Optical Society of America, 2010), paper JMA10.

7.

K. Matsushima, M. Nakamura, I. Kanaya, and S. Nakahara, “Computational holography: real 3D by fast wave-field rendering in ultra-high resolution,” in Proceedings of SIGGRAPH Posters 2010 (2010).

8.

K. Matsushima, “Wave-field rendering in computational holography,” in 2010 IEEE/ACIS 9th International Conference on Computer and Information Science (2010), pp. 846–851.

9.

H. Nishi, K. Higashi, Y. Arima, K. Matsushima, and S. Nakahara, “New techniques for wave-field rendering of polygon-based high-definition CGHs,” Proc. SPIE 7957, 79571A (2011).

10.

K. Matsushima, H. Nishi, and S. Nakahara, “Simple wave-field rendering for photorealistic reconstruction in polygon-based high-definition computer holography” (manuscript in preparation).

11.

N. T. Shaked, B. Katz, and J. Rosen, “Review of three-dimensional holographic imaging by multiple-viewpoint-projection based methods,” Appl. Opt. 48, H120–H136 (2009). [CrossRef]

12.

N. Hashimoto, K. Hoshino, and S. Morokawa, “Improved real-time holography system with LCDs,” Proc. SPIE 1667, 2–7 (1992).

13.

K. Sato, “Record and display of color 3-D images by electronic holography,” in Topical Meeting on Digital Holography and Three-Dimensional Imaging (Optical Society of America, 2007), paper DWA2.

14.

R. Binet, J. Colineau, and J.-C. Lehureau, “Short-range synthetic aperture imaging at 633 nm by digital holography,” Appl. Opt. 41, 4775–4782 (2002). [CrossRef]

15.

T. Nakatsuji and K. Matsushima, “Free-viewpoint images captured using phase-shifting synthetic aperture digital holography,” Appl. Opt. 47, D136–D143 (2008). [CrossRef]

16.

I. Yamaguchi and T. Zhang, “Phase-shifting digital holography,” Opt. Lett. 22, 1268–1270 (1997). [CrossRef]

17.

Y. Takaki, H. Kawai, and H. Ohzu, “Hybrid holographic microscopy free of conjugate and zero-order images,” Appl. Opt. 38, 4990–4996 (1999). [CrossRef]

18.

K. Matsushima and A. Kondoh, “A wave optical algorithm for hidden-surface removal in digitally synthetic full-parallax holograms for three-dimensional objects,” Proc. SPIE 5290, 90–97 (2004).

19.

A. Kondoh and K. Matsushima, “Hidden surface removal in full-parallax CGHs by silhouette approximation,” Syst. Comput. Jpn. 38, 53–61 (2007). [CrossRef]

20.

J. W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, 1996), chap. 3.10.

21.

K. Matsushima and T. Shimobaba, “Band-limited angular spectrum method for numerical simulation of free-space propagation in far and near fields,” Opt. Express 17, 19662–19673 (2009). [CrossRef]

22.

K. Matsushima, “Shifted angular spectrum method for off-axis numerical propagation,” Opt. Express 18, 18453–18463 (2010). [CrossRef]

OCIS Codes
(090.1760) Holography : Computer holography
(090.2870) Holography : Holographic display
(110.6880) Imaging systems : Three-dimensional image acquisition
(090.1995) Holography : Digital holography

ToC Category:
3D Imaging and Display

History
Original Manuscript: August 5, 2011
Manuscript Accepted: October 24, 2011
Published: December 5, 2011

Citation
Kyoji Matsushima, Yasuaki Arima, and Sumio Nakahara, "Digitized holography: modern holography for 3D imaging of virtual and real objects," Appl. Opt. 50, H278-H284 (2011)
http://www.opticsinfobase.org/ao/abstract.cfm?URI=ao-50-34-H278


Sort:  Author  |  Year  |  Journal  |  Reset  

References

  1. J. W. Goodman and R. W. Lawrence, “Digital image formation from electronically detected holograms,” Appl. Phys. Lett. 11, 77–79 (1967). [CrossRef]
  2. A. W. Lohmann and D. P. Paris, “Binary Fraunhofer holograms, generated by computer,” Appl. Opt. 6, 1739–1748 (1967). [CrossRef]
  3. K. Matsushima, “Computer-generated holograms for three-dimensional surface objects with shade and texture,” Appl. Opt. 44, 4607–4614 (2005). [CrossRef]
  4. K. Matsushima and S. Nakahara, “Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method,” Appl. Opt. 48, H54–H63 (2009). [CrossRef]
  5. K. Matsushima and S. Nakahara, “High-definition full-parallax CGHs created by using the polygon-based method and the shifted angular spectrum method,” Proc. SPIE 7619, 761913 (2010).
  6. K. Matsushima, M. Nakamura, and S. Nakahara, “Novel techniques introduced into polygon-based high-definition CGHs,” in Topical Meeting on Digital Holography and Three-Dimensional Imaging (Optical Society of America, 2010), paper JMA10.
  7. K. Matsushima, M. Nakamura, I. Kanaya, and S. Nakahara, “Computational holography: Real 3D by fast wave-field rendering in ultra-high resolution,” in Proceedings of SIGGRAPH Posters’ 2010 (2010).
  8. K. Matsushima, “Wave-field rendering in computational holography,” in 2010 IEEE/ACIS 9th International Conference on Computer and Information Science (2010), pp. 846–851.
  9. H. Nishi, K. Higashi, Y. Arima, K. Matsushima, and S. Nakahara, “New techniques for wave-field rendering of polygon-based high-definition CGHs,” Proc. SPIE 7957, 79571A (2011).
  10. K. Matsushima, H. Nishi, and S. Nakahara, “Simple wave-field rendering for photorealistic reconstruction in polygon-based high-definition computer holography” (manuscript in preparation).
  11. N. T. Shaked, B. Katz, and J. Rosen, “Review of three-dimensional holographic imaging by multiple-viewpoint-projection based methods,” Appl. Opt. 48, H120–H136 (2009). [CrossRef]
  12. N. Hashimoto, K. Hoshino, and S. Morokawa, “Improved real-time holography system with LCDs,” Proc. SPIE 1667, 2–7 (1992).
  13. K. Sato, “Record and display of color 3-D images by electronic holography,” in Topical Meeting on Digital Holography and Three-Dimensional Imaging (Optical Society of America, 2007), paper DWA2.
  14. R. Binet, J. Colineau, and J.-C. Lehureau, “Short-range synthetic aperture imaging at 633 nm by digital holography,” Appl. Opt. 41, 4775–4782 (2002). [CrossRef]
  15. T. Nakatsuji and K. Matsushima, “Free-viewpoint images captured using phase-shifting synthetic aperture digital holography,” Appl. Opt. 47, D136–D143 (2008). [CrossRef]
  16. I. Yamaguchi and T. Zhang, “Phase-shifting digital holography,” Opt. Lett. 22, 1268–1270 (1997). [CrossRef]
  17. Y. Takaki, H. Kawai, and H. Ohzu, “Hybrid holographic microscopy free of conjugate and zero-order images,” Appl. Opt. 38, 4990–4996 (1999). [CrossRef]
  18. K. Matsushima and A. Kondoh, “A wave optical algorithm for hidden-surface removal in digitally synthetic full-parallax holograms for three-dimensional objects,” Proc. SPIE 5290, 90–97 (2004).
  19. A. Kondoh and K. Matsushima, “Hidden surface removal in full-parallax CGHs by silhouette approximation,” Syst. Comput. Jpn. 38, 53–61 (2007). [CrossRef]
  20. J. W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, 1996), chap. 3.10.
  21. K. Matsushima and T. Shimobaba, “Band-limited angular spectrum method for numerical simulation of free-space propagation in far and near fields,” Opt. Express 17, 19662–19673 (2009). [CrossRef]
  22. K. Matsushima, “Shifted angular spectrum method for off-axis numerical propagation,” Opt. Express 18, 18453–18463 (2010). [CrossRef]


Supplementary Material


» Media 1: MOV (716 KB)     
» Media 2: MOV (636 KB)     
