August 2011
Spotlight Summary by Kedar Khare
Holographic video at 40 frames per second for 4-million object points
The development and commercial availability of specialized high-speed digital processing hardware, together with progress in spatial light modulator (SLM) technology, have brought 3D displays for high-resolution video scenery closer to reality. Computer-generated holography (CGH)—first reported in the early 1960s—is an attractive technology for 3D displays. While a conventional hologram is produced by physically recording the interference pattern between the object and reference beams, a computer-generated hologram is produced by computing the required interference pattern, which is then fed to an SLM device. The diffraction of the readout beam from the SLM, when observed visually, gives 3D or depth perception without any additional hardware. It is clear from the above that both the computation of the interference pattern and the ability to modulate a high-resolution SLM device at video frame rates are critical for a successful 3D display device. For a 3D scene represented by N points located at varying depths, one needs to compute the contribution of the wavefront originating from each point to the object beam at the hologram plane. Then, assuming a prespecified reference beam, the interference pattern with the object beam is computed (amplitude or phase), and this information is fed to the SLM. Typical SLM pixels are a few micrometers in size. For a beginner, it may be surprising that the computational load involved in this process is fairly high; only in recent years have real-time speeds for 3D holographic video display been realized.
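The point-by-point computation described above can be sketched in a few lines of NumPy. This is an illustrative toy example, not the paper's method: the grid size, pixel pitch, wavelength, and object points below are all assumed values chosen for the demonstration. Each object point contributes a spherical wavelet to the object beam at the hologram plane, and the hologram is the intensity of the interference with a plane reference beam.

```python
import numpy as np

# Assumed parameters (for illustration only)
wavelength = 633e-9            # red laser, metres
k = 2 * np.pi / wavelength     # wavenumber
pixel = 8e-6                   # SLM pixel pitch, a few micrometres
M = 256                        # hologram is M x M pixels

# Hologram-plane coordinate grids
x = (np.arange(M) - M // 2) * pixel
X, Y = np.meshgrid(x, x)

# A few hypothetical object points: (x, y, depth z, amplitude)
points = [(0.0, 0.0, 0.05, 1.0),
          (2e-4, -1e-4, 0.06, 0.8),
          (-3e-4, 2e-4, 0.07, 0.5)]

# Superpose spherical wavelets: a(r) = (A / r) * exp(i k r)
obj = np.zeros((M, M), dtype=complex)
for px, py, pz, amp in points:
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
    obj += amp / r * np.exp(1j * k * r)

# Interfere with an on-axis plane reference beam; the intensity
# pattern is what an amplitude-type SLM would display.
ref = np.ones_like(obj)
hologram = np.abs(obj + ref) ** 2
```

Note that the loop runs over every object point for every hologram pixel, so the cost grows as N times the pixel count; this is precisely the load that makes brute-force CGH computation slow for millions of points.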
To develop practical 3D video display devices, one needs to investigate better approaches to approximating and computing the object beam at the hologram plane, rather than simply waiting for computer hardware speeds to improve to the required levels. Because of technical issues related to the sampling of diffracted wavefronts, it is common practice in diffraction pattern computations to first determine the object beam, as above, at an intermediate plane near the object, and then perform a second step of propagating it from this intermediate plane to the hologram plane. The paper by Tsang, Cheung, Poon, and Zhou proposes an algorithm of this type, which they name the interpolated wavefront recording plane (IWRP) method. For given discrete sets of levels of point-object brightness and of distances of these points from the intermediate wavefront recording plane, the method uses precomputed object beam templates stored in computer memory. These templates are simply added appropriately in real time, avoiding the need to determine the object wave at the intermediate plane anew for every video frame. Finally, the propagation of the wave field from the intermediate plane to the hologram plane is implemented with two fast Fourier transform operations, which the authors run on increasingly popular graphics processing units. The end result, 40 frames of 2048 × 2048 pixels per second, takes us into the realm of realistic, high-resolution holographic video.
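The final propagation step mentioned above, from the intermediate plane to the hologram plane, can be illustrated with the angular spectrum method, which costs exactly one forward and one inverse FFT plus a pointwise multiply. This is a generic sketch of two-FFT propagation, not the authors' GPU implementation, and the parameter values are assumptions.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel, distance):
    """Propagate a sampled complex wave field by `distance` metres
    using the angular spectrum method (two FFTs)."""
    M, N = field.shape
    fx = np.fft.fftfreq(N, d=pixel)   # spatial frequencies, cycles/m
    fy = np.fft.fftfreq(M, d=pixel)
    FX, FY = np.meshgrid(fx, fy)
    # Transfer function H = exp(i k z sqrt(1 - (lambda fx)^2 - (lambda fy)^2));
    # evanescent components (arg < 0) are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    phase = (2 * np.pi / wavelength) * distance * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * phase), 0.0)
    # One forward FFT, a pointwise multiply, one inverse FFT.
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: propagate a small Gaussian beam by 5 cm
pixel, wavelength = 8e-6, 633e-9
x = (np.arange(256) - 128) * pixel
X, Y = np.meshgrid(x, x)
field0 = np.exp(-(X ** 2 + Y ** 2) / (50e-6) ** 2).astype(complex)
field1 = angular_spectrum_propagate(field0, wavelength, pixel, 0.05)
```

Because the transfer function has unit modulus for propagating components, the total energy of the field is conserved, which is a convenient sanity check on an implementation.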
Article Information
Holographic video at 40 frames per second for 4-million object points
Peter Tsang, W.-K. Cheung, T.-C. Poon, and C. Zhou
Opt. Express 19(16), 15205-15211 (2011)