Optics Express

  • Editor: Andrew M. Weiner
  • Vol. 21, Iss. 7 — Apr. 8, 2013
  • pp: 9192–9197

Band-limited double-step Fresnel diffraction and its application to computer-generated holograms

Naohisa Okada, Tomoyoshi Shimobaba, Yasuyuki Ichihashi, Ryutaro Oi, Kenji Yamamoto, Minoru Oikawa, Takashi Kakue, Nobuyuki Masuda, and Tomoyoshi Ito  »View Author Affiliations


Optics Express, Vol. 21, Issue 7, pp. 9192-9197 (2013)
http://dx.doi.org/10.1364/OE.21.009192



Abstract

Double-step Fresnel diffraction (DSF) is an efficient diffraction calculation in terms of memory usage and calculation time. This paper describes band-limited DSF, which mitigates the aliasing noise of DSF and will be useful for large computer-generated holograms (CGHs) and gigapixel digital holography. As an application, we demonstrate the generation of a CGH with nearly 8K × 4K pixels from texture and depth maps of a three-dimensional scene captured by a depth camera.

© 2013 OSA

1. Introduction

In state-of-the-art electroholography [1] and digital holography [2], a large number of pixels must be handled to increase the quality of reconstructed images. Electroholography is a promising technique for three-dimensional (3D) displays because it is capable of reconstructing the wavefront of a 3D scene [3, 4]. Practical electroholography requires a high-resolution spatial light modulator (SLM) to display a computer-generated hologram (CGH), because the size of the reconstructed 3D scene is proportional to the size of the CGH and the viewing angle is inversely proportional to the pixel pitch of the CGH. For example, Ref. [3] shows the excellent image quality of 3D scenes reconstructed from sub-gigapixel CGHs. Thus, we need to calculate a large CGH by computing the diffraction from a 3D scene. Unfortunately, the computation required to generate such a CGH takes a long time, preventing the realization of practical electroholography [5]. To solve this problem, acceleration methods aimed at real-time CGH calculation have been proposed [6–8]. Refs. [6] and [7] calculate CGHs from a 3D object composed of point light sources, while Ref. [8] calculates CGHs from a 3D scene acquired by the integral imaging technique.

Digital holography is a hologram-recording technique that uses an electronic device such as a CCD or CMOS camera; the captured hologram is reconstructed on a computer by diffraction calculation. Thanks to its holographic nature, its applications include 3D imaging, 3D microscopy (digital holographic microscopy), and so forth. To widen the field of view and increase the lateral and depth resolution of the reconstructed image, we need to capture a large hologram, for example at the gigapixel scales achieved in recent research [9–11]. The diffraction calculation from such a gigapixel hologram also takes a long time and a large amount of memory.

As mentioned above, both electroholography and digital holography need efficient diffraction calculations to accelerate computation and reduce memory usage. Double-step Fresnel diffraction (DSF) [12] is an efficient calculation method in both respects.

This paper describes band-limited DSF (BL-DSF), which mitigates the aliasing noise of the original DSF. To show its effectiveness, we then demonstrate an efficient calculation of a large CGH, of nearly 8K × 4K pixels, from texture and depth maps of a 3D scene captured by a depth camera, using BL-DSF. Compared with convolution-based diffraction calculations such as the angular spectrum method (ASM) [13], the merits of BL-DSF are its small memory footprint and short calculation time.

In Section 2, we explain BL-DSF. In Section 3, we present the results of the large CGH generation. Section 4 concludes this work.

2. Band-limited double-step Fresnel diffraction

In Fourier optics, diffraction calculations are categorized into two forms: the first is the convolution-based diffraction and the second is Fourier transform-based diffraction. The general expression of convolution-based diffraction is as follows:
u_2(x_2, y_2) = u_1(x_1, y_1) \ast p_z(x_1, y_1) = \mathcal{F}^{-1}\left[ \mathcal{F}[u_1(x_1, y_1)] \, P_z(f_x, f_y) \right],
(1)
where the operators \mathcal{F}[\cdot] and \mathcal{F}^{-1}[\cdot] are the Fourier and inverse Fourier transforms, respectively, u_1(x_1, y_1) and u_2(x_2, y_2) denote the source and destination planes, p_z is the point spread function, and P_z(f_x, f_y) = \mathcal{F}[p_z(x_1, y_1)] is the transfer function for propagation distance z. For example, ASM [13] uses P_z(f_x, f_y) = \exp\left( 2\pi i z \sqrt{1/\lambda^2 - f_x^2 - f_y^2} \right). A merit of convolution-based diffraction is that the sampling rate on the destination plane is the same as that on the source plane; a demerit is that the source and destination planes must be expanded by zero-padding to avoid the aliasing caused by the circular-convolution property of Eq. (1), which costs a large amount of memory and a long calculation time.
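As a concrete illustration of Eq. (1), the following sketch implements convolution-based ASM propagation in NumPy, including the 2× zero-padding described above. The function name `asm_propagate` and its argument names are our own, not from the paper; the handling of evanescent components is one common choice.

```python
import numpy as np

def asm_propagate(u1, wavelength, z, pitch):
    """Angular spectrum method with 2x zero-padding to suppress the
    wrap-around of circular convolution (a sketch; names are ours)."""
    n = u1.shape[0]
    big = 2 * n
    # zero-pad the source field to avoid circular-convolution aliasing
    u = np.zeros((big, big), dtype=complex)
    u[:n, :n] = u1
    # spatial frequencies of the padded grid
    fx = np.fft.fftfreq(big, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    # transfer function P_z; evanescent components (arg < 0) are dropped
    Hz = np.where(arg > 0, np.exp(2j * np.pi * z * np.sqrt(np.abs(arg))), 0)
    u2 = np.fft.ifft2(np.fft.fft2(u) * Hz)
    return u2[:n, :n]   # crop back to the original window
```

Note that the working set is four times the source-plane area, which is the memory overhead discussed in Section 2.1.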

Meanwhile, single-step Fresnel diffraction (SSF) is a Fourier transform-based diffraction [13]. SSF is expressed as follows:
u_2(x_2, y_2) = C_z \iint u_1(x_1, y_1) \exp\left( \frac{i\pi (x_1^2 + y_1^2)}{\lambda z} \right) \exp\left( -\frac{2\pi i (x_1 x_2 + y_1 y_2)}{\lambda z} \right) dx_1 \, dy_1,
(2)
where C_z = \exp(ikz) / (i\lambda z). The numerical version of SSF (Eq. (3)) is obtained by defining (x_1, y_1) = ((m_1 - N_x/2) p_{x1}, (n_1 - N_y/2) p_{y1}) and (x_2, y_2) = ((m_2 - N_x/2) p_{x2}, (n_2 - N_y/2) p_{y2}), where m_1, m_2 \in [0, N_x - 1] and n_1, n_2 \in [0, N_y - 1], the sampling rates on the source plane are p_{x1} and p_{y1}, and those on the destination plane are p_{x2} = \lambda z / (N_x p_{x1}) and p_{y2} = \lambda z / (N_y p_{y1}):
u_2(m_2, n_2) = \mathrm{SSF}_z[u_1(m_1, n_1)] = C_z \, \mathrm{FFT}\left[ u_1(m_1, n_1) \exp\left( \frac{i\pi (x_1^2 + y_1^2)}{\lambda z} \right) \right],
(3)
where the numbers of pixels on the source and destination planes are N_x × N_y.

SSF can calculate the light propagation over distance z with a single fast Fourier transform (FFT), so it does not need the zero-padding required by convolution-based diffraction. It is therefore efficient in both memory and calculation time; however, the sampling rates on the destination plane change with the wavelength and propagation distance.
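Equation (3) and its scaled output sampling can be sketched as follows. The function name `ssf` and its signature are our own, and the FFT centering via `fftshift`/`ifftshift` is one common convention, not prescribed by the paper.

```python
import numpy as np

def ssf(u1, wavelength, z, pitch):
    """Single-step Fresnel diffraction, Eq. (3) (a sketch; names are ours).
    Returns the propagated field and the destination-plane sampling rate."""
    n = u1.shape[0]
    # centered source-plane coordinates x1 = (m1 - N/2) * pitch
    x = (np.arange(n) - n / 2) * pitch
    X, Y = np.meshgrid(x, x)
    # quadratic chirp applied before the single FFT
    chirp = np.exp(1j * np.pi * (X**2 + Y**2) / (wavelength * z))
    cz = np.exp(2j * np.pi * z / wavelength) / (1j * wavelength * z)
    u2 = cz * np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(u1 * chirp)))
    # destination-plane sampling is rescaled: p2 = lambda*z / (N*p1)
    pitch2 = wavelength * abs(z) / (n * pitch)
    return u2, pitch2
```

The returned `pitch2` makes the wavelength and distance dependence of the output sampling explicit.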

To overcome this problem, DSF was proposed [12]. It calculates the light propagation between the source plane and the destination plane by two SSFs via a virtual plane. The first SSF propagates the light from the source plane to the virtual plane over distance z_1; the sampling rates on the virtual plane are p_{xv} = \lambda z_1 / (N_x p_{x1}) and p_{yv} = \lambda z_1 / (N_y p_{y1}). The second SSF propagates the light from the virtual plane to the destination plane over distance z_2; the sampling rates on the destination plane are p_{x2} = \lambda z_2 / (N_x p_{xv}) = |z_2/z_1| p_{x1} and p_{y2} = \lambda z_2 / (N_y p_{yv}) = |z_2/z_1| p_{y1}. The total propagation distance is z = z_1 + z_2, where z_1 and z_2 may be negative. DSF with a rectangular function introduced for band limitation, which is referred to as BL-DSF, is expressed as follows:
u_2(m_2, n_2) = \mathrm{DSF}_z[u_1(m_1, n_1)] = \mathrm{SSF}_{z_2}[\mathrm{SSF}_{z_1}[u_1(m_1, n_1)]]
= C_z \, \mathrm{FFT}^{\mathrm{sgn}(z_2)}\left[ \exp\left( \frac{i\pi z (x_v^2 + y_v^2)}{\lambda z_1 z_2} \right) \mathrm{Rect}\left( \frac{x_v}{x_v^{\max}}, \frac{y_v}{y_v^{\max}} \right) \mathrm{FFT}^{\mathrm{sgn}(z_1)}\left[ u_1(m_1, n_1) \exp\left( \frac{i\pi (x_1^2 + y_1^2)}{\lambda z_1} \right) \right] \right].
(4)
The operator \mathrm{FFT}^{\mathrm{sgn}(z)} denotes a forward FFT when the sign of z is positive and an inverse FFT when it is negative. The rectangular function is introduced to band-limit the chirp function \exp(i\pi z (x_v^2 + y_v^2) / (\lambda z_1 z_2)) = \exp(2\pi i \phi(x_v, y_v)), because the result of the first SSF can be regarded as the frequency domain; aliasing occurs in the absence of the rectangular function. We determine the band-limited area as follows:
1/p_{xv} \geq 2 |f_x^{\max}| = 2 \left| \frac{\partial \phi(x_v, y_v)}{\partial x_v} \right| = \left| \frac{2 z \, x_v^{\max}}{\lambda z_1 z_2} \right|,
(5)
1/p_{yv} \geq 2 |f_y^{\max}| = 2 \left| \frac{\partial \phi(x_v, y_v)}{\partial y_v} \right| = \left| \frac{2 z \, y_v^{\max}}{\lambda z_1 z_2} \right|.
(6)

2.1. Performance

BL-DSF is an efficient method in terms of memory and calculation time. For brevity, we assume the sizes of the source and destination planes are N_x = N_y = N in the following discussion. Convolution-based diffraction must extend the source and destination planes to at least four times their original area to avoid circular convolution, so its calculation time is proportional to 4N^2 \log_2 2N. On the other hand, the calculation time of BL-DSF is proportional only to N^2 \log_2 N.

We estimated the performance of BL-DSF against ASM on a CPU (Intel Core i7-2600S, using a single thread) and a graphics processing unit (GPU) (NVIDIA GeForce GTX 670). Table 1 shows the calculation times: BL-DSF calculates diffraction faster than ASM.

Table 1. Performance of BL-DSF on the CPU and GPU, compared with ASM


When using a single-precision floating-point format, ASM and BL-DSF require 32N^2 bytes and 8N^2 bytes of memory, respectively. For instance, when N = 8,192, ASM needs 2 GBytes while BL-DSF needs only 512 MBytes. We could not compute the N = 8,192 case with ASM on the GPU because the required memory exceeded the GPU's 2 GBytes, whereas BL-DSF handled this case within its 512 MBytes.
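These memory figures follow directly from the 8 bytes per pixel of a single-precision complex value: BL-DSF works on one N × N buffer, while the zero-padded convolution approach needs a 2N × 2N working set (32N^2 bytes). A quick check:

```python
# Memory estimate: 8 bytes per complex64 pixel.
# BL-DSF holds one N x N buffer; convolution-based ASM works on a
# zero-padded 2N x 2N grid, i.e. 4x the pixels -> 32 N^2 bytes.
N = 8192
bl_dsf_bytes = 8 * N**2      # 512 MiB for N = 8192
asm_bytes = 32 * N**2        # 2 GiB for N = 8192
print(bl_dsf_bytes // 2**20, "MiB vs", asm_bytes // 2**30, "GiB")
```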

Figure 1 shows the real part of the complex amplitude calculated by BL-DSF from a source plane containing a single point at its center, with and without the rectangular function in Eq. (4). The calculation conditions are a wavelength of 633 nm, a sampling rate on the source plane of 10 μm, a propagation distance of z = z_1 + z_2 = 0.02 m, and N = 512. Note that the sampling rate on the destination plane is scaled by the ratio of z_2 to z_1. Since we do not want the sampling rates of the source and destination planes to differ, we set z_1 = z/2 + 500 m and z_2 = z/2 − 500 m; the sampling rate on the destination plane is then almost the same as on the source plane, about 9.996 μm. Figure 1(a), the case without the rectangular function, exhibits aliasing noise, while Fig. 1(b) shows its mitigation when the rectangular function is introduced.
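The choice of z_1 and z_2 can be checked in a few lines using the relation p_2 = |z_2/z_1| p_1 from Section 2; the numbers follow the conditions stated above.

```python
# Destination-plane pitch for the large-offset split z1 = z/2 + 500,
# z2 = z/2 - 500 (values from the text): p2 = |z2/z1| * p1.
z = 0.02                  # total propagation distance [m]
z1 = z / 2 + 500.0        # first-step distance [m]
z2 = z / 2 - 500.0        # second-step distance [m] (negative)
p1 = 10e-6                # source-plane pitch [m]
p2 = abs(z2 / z1) * p1    # destination-plane pitch, ~= p1
print(z1 + z2, p2)
```

The printed pitch is within a fraction of a percent of the 10 μm source pitch, as the text states.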

Fig. 1 Band-limited effects. (a) without band-limitation (b) with band-limitation (c) with band-limitation and half-zone plate processing.

3. Application to computer-generated holograms

To show the effectiveness of BL-DSF, we demonstrate the fast calculation of a CGH from a 3D scene composed of texture and depth maps, either captured by a depth camera or created by computer graphics, with both maps held in graphics memory.

In this experiment, Figs. 2(a) and 2(b) show the texture and depth maps of the 3D scene, with about 2K × 1K pixels, captured by an Axi-Vision camera [14]. We used a color electroholography system with nearly 8K × 4K LCD panels [8] developed by the National Institute of Information and Communications Technology (NICT), Japan. This optical system consists of RGB lasers (wavelengths of 640 nm, 532 nm, and 473 nm, respectively) and three 8K × 4K amplitude-modulated LCD panels with a pixel pitch of 4.8 μm to display amplitude CGHs. To eliminate the 0th-order and conjugate lights that inherently arise from amplitude CGHs, the optical system uses half-zone-plate processing and the single-sideband technique [15]. Figure 1(c) shows the half-zone-plate-processed result, obtained by limiting the rectangular function of BL-DSF to its lower half.

Fig. 2 Texture and depth maps of the 3D scene captured by the depth camera. (a) Texture map (b) depth map.

In CGH generation, we first convert the texture map tex(m_1, n_1) and depth map dep(m_1, n_1) to about 8K × 4K pixels. A pixel value i in dep(m_1, n_1) indicates a certain depth, with values ranging from 0 to 255; the corresponding physical distance is therefore z + iΔz (i ∈ [0, 255]), where Δz is the physical spacing between neighboring depth values.

We calculate the complex amplitude on the CGH plane to superimpose each complex amplitude corresponding to each depth by BL-DSF as follows:
u(m_2, n_2) = \sum_{i=0}^{255} \mathrm{DSF}_{z + i\Delta z}\left[ tex(m_1, n_1) \exp\left( 2\pi i \, n(m_1, n_1) \right) \times mask_i(m_1, n_1) \right],
(7)
where tex(m_1, n_1) is the texture map (Fig. 2(a)) and n(m_1, n_1) is a uniform distribution of pseudo-random numbers in [0.0, 1.0]. The function mask_i(m_1, n_1) is defined by
mask_i(m_1, n_1) = \begin{cases} 1 & (\text{if } dep(m_1, n_1) = i) \\ 0 & (\text{otherwise}) \end{cases}
(8)

To obtain the amplitude CGH, we take the real part, I(m_2, n_2), of the complex amplitude u(m_2, n_2). In addition, we obtain the final amplitude CGH by clipping I(m_2, n_2) to ±2σ to increase the brightness of the reconstructed image, where σ is the standard deviation of I(m_2, n_2). Figure 3 shows a 3D scene reconstructed from a nearly 8K × 4K CGH generated with BL-DSF; the left, middle, and right photographs were taken at different focus settings.
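The layer-wise superposition of Eqs. (7)–(8) and the ±2σ clipping can be sketched as below. `layered_cgh` and the injected `propagate` callback are our own names; `propagate(u, z)` stands in for BL-DSF at distance z, so the sketch shows only the layering and clipping logic.

```python
import numpy as np

def layered_cgh(tex, dep, z0, dz, propagate):
    """Layer-based CGH following Eqs. (7)-(8) (a sketch; names are ours).
    propagate(u, z) stands in for BL-DSF at propagation distance z."""
    rng = np.random.default_rng(0)
    # random phase n(m1, n1) attached to the texture, as in Eq. (7)
    phase = np.exp(2j * np.pi * rng.random(tex.shape))
    u = np.zeros(tex.shape, dtype=complex)
    for i in range(256):
        mask = (dep == i)            # mask_i of Eq. (8)
        if not mask.any():
            continue                 # skip depth layers with no pixels
        u += propagate(tex * phase * mask, z0 + i * dz)
    # amplitude CGH: real part, clipped to +/- 2 sigma for brightness
    I = u.real
    s = I.std()
    return np.clip(I, -2.0 * s, 2.0 * s)
```

Masking before propagation keeps each layer's diffraction independent, so the 256 layer propagations can also be batched or parallelized on a GPU.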

Fig. 3 Reconstructed 3D scene from the 8K × 4K CGHs using BL-DSF.

Table 2 shows the calculation times of the CGH using BL-DSF and ASM on the CPU and GPU; the times cover only the generation of a monochrome CGH. When using ASM, the only difference is that the BL-DSF in Eq. (7) is replaced by Eq. (1). Using BL-DSF on the GPU, one CGH corresponding to one wavelength can be calculated in 16.6 seconds.

Table 2. Calculation times of Eq.(7) using BL-DSF and ASM on the CPU and GPU


4. Conclusion

We improved the original DSF by band-limiting the frequency domain to mitigate aliasing noise. The band limitation is also compatible with half-zone-plate processing, a useful technique for eliminating the 0th-order and conjugate lights in electroholography with amplitude CGHs. The memory needed for BL-DSF is a quarter of that for convolution-based diffraction, which makes it well suited to memory-constrained devices such as GPUs. Its calculation time is also shorter than that of convolution-based diffraction. We showed the fast generation of an 8K × 4K CGH using BL-DSF, which will also be useful for gigapixel digital holography.

Acknowledgments

This work was supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI (Young Scientists (B) 23700103) 2011, and the NAKAJIMA FOUNDATION.

References and links

1. S. A. Benton and V. M. Bove Jr., Holographic Imaging (Wiley-Interscience, 2008). [CrossRef]
2. U. Schnars and W. Juptner, “Direct recording of holograms by a CCD target and numerical reconstruction,” Appl. Opt. 33, 179–181 (1994). [CrossRef]
3. C. Slinger, C. Cameron, and M. Stanley, “Computer-generated holography as a generic display technology,” Computer 38, 46–53 (2005). [CrossRef]
4. F. Yaras, H. Kang, and L. Onural, “Circular holographic video display system,” Opt. Express 19, 9147–9156 (2011). [CrossRef] [PubMed]
5. M. Lucente, “Interactive computation of holograms using a look-up table,” J. Electron. Imaging 2, 28–34 (1993). [CrossRef]
6. H. Yoshikawa, T. Yamaguchi, and R. Kitayama, “Real-time generation of full color image hologram with compact distance look-up table,” OSA Topical Meeting on Digital Holography and Three-Dimensional Imaging 2009, DWC4 (2009).
7. T. Shimobaba, N. Masuda, and T. Ito, “Simple and fast calculation algorithm for computer-generated hologram with wavefront recording plane,” Opt. Lett. 34, 3133–3135 (2009). [CrossRef] [PubMed]
8. Y. Ichihashi, R. Oi, T. Senoh, K. Yamamoto, and T. Kurita, “Real-time capture and reconstruction system with multiple GPUs for a 3D live scene by a generation from 4K IP images to 8K holograms,” Opt. Express 20, 21645–21655 (2012). [CrossRef] [PubMed]
9. D. J. Brady and S. Lim, “Gigapixel holography,” 2011 ICO International Conference on Information Photonics (IP), 1–2 (2011). [CrossRef]
10. J. R. Fienup and A. E. Tippie, “Gigapixel synthetic-aperture digital holography,” Proc. SPIE 8122, 812203 (2011). [CrossRef]
11. S. O. Isikman, A. Greenbaum, W. Luo, A. F. Coskun, and A. Ozcan, “Giga-pixel lensfree holographic microscopy and tomography using color image sensors,” PLoS ONE 7, e45044 (2012). [CrossRef]
12. F. Zhang, I. Yamaguchi, and L. P. Yaroslavsky, “Algorithm for reconstruction of digital holograms with adjustable magnification,” Opt. Lett. 29, 1668–1670 (2004). [CrossRef] [PubMed]
13. J. W. Goodman, Introduction to Fourier Optics, 3rd ed. (Roberts & Company, 2005).
14. M. Kawakita, K. Iizuka, T. Aida, H. Kikuchi, H. Fujikake, J. Yonai, and K. Takizawa, “Axi-vision camera (real-time distance-mapping camera),” Appl. Opt. 39, 3931–3939 (2000). [CrossRef]
15. T. Mishina, F. Okano, and I. Yuyama, “Time-alternating method based on single-sideband holography with half-zone-plate processing for the enlargement of viewing zones,” Appl. Opt. 38, 3703–3713 (1999). [CrossRef]

OCIS Codes
(090.1760) Holography : Computer holography
(090.2870) Holography : Holographic display
(090.1995) Holography : Digital holography
(090.5694) Holography : Real-time holography

ToC Category:
Holography

History
Original Manuscript: February 28, 2013
Revised Manuscript: March 23, 2013
Manuscript Accepted: March 23, 2013
Published: April 5, 2013

Citation
Naohisa Okada, Tomoyoshi Shimobaba, Yasuyuki Ichihashi, Ryutaro Oi, Kenji Yamamoto, Minoru Oikawa, Takashi Kakue, Nobuyuki Masuda, and Tomoyoshi Ito, "Band-limited double-step Fresnel diffraction and its application to computer-generated holograms," Opt. Express 21, 9192-9197 (2013)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-21-7-9192


