OSA's Digital Library

Advances in Optics and Photonics


  • Editor: Bahaa E. A. Saleh
  • Vol. 5, Iss. 4 — Dec. 31, 2013

Three-dimensional display technologies

Jason Geng


Advances in Optics and Photonics, Vol. 5, Issue 4, pp. 456-535 (2013)
http://dx.doi.org/10.1364/AOP.5.000456



Abstract

The physical world around us is three-dimensional (3D), yet traditional display devices can show only two-dimensional (2D) flat images that lack depth (i.e., the third dimension) information. This fundamental restriction greatly limits our ability to perceive and to understand the complexity of real-world objects. Nearly 50% of the capability of the human brain is devoted to processing visual information [Human Anatomy & Physiology (Pearson, 2012)]. Flat images and 2D displays do not harness the brain’s power effectively. With rapid advances in the electronics, optics, laser, and photonics fields, true 3D display technologies are making their way into the marketplace. 3D movies, 3D TV, 3D mobile devices, and 3D games have created increasing demand for true 3D displays that require no eyeglasses (autostereoscopic displays). A systematic review of state-of-the-art 3D display technologies should therefore be of great benefit to readers of this journal.

© 2013 Optical Society of America

1. Fundamentals of Three-Dimensional Display

The physical world around us is three-dimensional (3D), yet traditional display devices can show only two-dimensional (2D) flat images that lack depth (the third dimension) information. This fundamental restriction greatly limits our ability to perceive and to understand the complexity of real-world objects. Nearly 50% of the capability of the human brain is devoted to processing visual information [1]. Flat images and 2D displays do not harness the brain’s power effectively.

1. E. N. Marieb and K. N. Hoehn, Human Anatomy & Physiology (Pearson, 2012).

If a 2D picture is worth a thousand words, then a 3D image is worth a million. This article provides a systematic overview of the state-of-the-art 3D display technologies. We classify the autostereoscopic 3D display technologies into three broad categories: (1) multiview 3D display, (2) volumetric 3D display, and (3) digital hologram display. A detailed description of the 3D display mechanism in each category is provided. For completeness, we also briefly review the binocular stereoscopic 3D displays that require wearing special eyeglasses.

For multiview 3D display technologies, we will review occlusion-based technologies (parallax barrier, time-sequential aperture, moving slit, and cylindrical parallax barrier), refraction-based (lenticular sheet, multiprojector, prism, and integral imaging), reflection-based, diffraction-based, illumination-based, and projection-based 3D display mechanisms. We also briefly discuss recent developments in super-multiview and multiview with eye-tracking technologies.

For volumetric 3D display technologies, we will review static screen (solid-state upconversion, gas medium, voxel array, layered LCD stack, and crystal cube) and swept screen (rotating LED array, cathode ray sphere, varifocal mirror, rotating helix, and rotating flat screen). Both passive screens (no emitter) and active screens (with emitters on the screen) are discussed.

For digital hologram 3D displays, we will review the latest progress in holographic display systems developed by MIT, Zebra Imaging, QinetiQ, SeeReal, IMEC, and the University of Arizona.

Concluding remarks are given with a comparison table, a 3D imaging industry overview, and future trends in technology development. The overview provided in this article should be useful to researchers in the field, since it provides a snapshot of the current state of the art from which subsequent research in meaningful directions is encouraged. It also contributes to research efficiency by helping to prevent unnecessary duplication of work already performed.

1.1. Why Do We Need 3D Display?

There have been few fundamental breakthroughs in display technology since the advent of television in the 1940s. A cliché often used when describing the progress of computer technology goes like this: If cars had followed the same evolutionary curve that computers have, a contemporary automobile would cost a dollar and could circle the Earth in an hour using a few cents worth of gasoline. Applying the same metaphor to information display devices, however, would likely find us at the wheel of a 1940s vintage Buick.

Conventional 2D display devices, such as cathode ray tubes (CRTs), liquid crystal displays (LCDs), or plasma screens, often lead to ambiguity and confusion in high-dimensional data/graphics presentation due to the lack of true depth cues. Even with the help of powerful 3D rendering software, complex data patterns or 3D objects displayed on 2D screens are still unable to convey spatial relationships or depth information correctly and effectively. The lack of true 3D display often jeopardizes our ability to truthfully visualize high-dimensional data that are frequently encountered in advanced scientific computing, computer-aided design (CAD), medical imaging, and many other disciplines. Essentially, a 2D display apparatus must rely on the human viewer’s ability to piece together a 3D representation from images. Despite the impressive capability of the human visual system, its visual perception is not reliable if certain depth cues are missing.

Figure 1 illustrates an optical illusion that demonstrates how easy it is to mislead the human visual system with a 2D flat display. On the left of the figure are bits and pieces of an object; they look like corners and sides of some 3D object. When they are put together, a drawing of a physically impossible object is formed on a 2D screen (right-hand side of Fig. 1). Notice, however, that there is nothing inherently impossible about the collection of 2D lines and angles that make up the 2D drawing. This optical illusion occurs because of the lack of proper depth cues in the 2D display system. To effectively overcome the illusion or confusion that often occurs in visualizing high-dimensional data/images, true volumetric 3D display systems that preserve most of the depth cues in an image are necessary.

Figure 1 An example of optical illusion that shows how easily a 2D display system can mislead or confuse our visual system.

1.2. What Is a “Perfect” 3D Display?

True 3D display is the “holy grail” of visualization technology that can provide efficient tools to visualize and understand complex high-dimensional data and objects. 3D display technologies have been a hot topic of research for over a century [2–27].

2. T. Okoshi, Three-Dimensional Imaging Techniques (Academic, 1976).

27. J. Hong, Y. Kim, H.-J. Choi, J. Hahn, J.-H. Park, H. Kim, S.-W. Min, N. Chen, and B. Lee, “Three-dimensional display technologies of recent interest: principles, status, and issues [Invited],” Appl. Opt. 50, H87–H115 (2011). [CrossRef]

What is a “perfect” 3D display? A perfect 3D display should function as a “window to the world” through which viewers can perceive the same 3D scene as if the 3D display screen were a transparent “window” to the real-world objects. Figure 2 illustrates the “window to the world” concept. In Fig. 2(a), a viewer looks at 3D objects in the world directly. We now place a 3D display screen between the viewer and the 3D scene. The 3D display device should be able to totally duplicate the entire visual sensation received by the viewer. In other words, a perfect 3D display should be able to offer all depth cues to its viewers [Fig. 2(b)].

Figure 2 What is a perfect 3D display? (a) A viewer looks at 3D scene directly. (b) A perfect 3D display should function as a “window to the world” through which viewers can perceive the same 3D scene as if the 3D display screen were a transparent “window” to the real world objects.

1.3. Depth Cues Provided by 3D Display Devices

Computer graphics enhance our 3D sensation in viewing 3D objects. Although an enhanced 3D image appears to have depth or volume, it is still only 2D, due to the nature of the 2D display on a flat screen. The human visual system needs both physical and psychological depth cues to recognize the third dimension. Physical depth cues can be introduced only by true 3D objects; psychological cues can be evoked by 2D images.

There are four major physical depth cues the human brain uses to gain true 3D sensation [2] (Fig. 3):
  • (1) Accommodation is the measurement of the muscle tension used to adjust the focal length of the eyes. In other words, it measures how much the eye muscles force the eyes’ lenses to change shape to obtain a focused image of a specific 3D object in the scene, in order to focus the eyes on the 3D object and perceive its depth.
  • (2) Convergence is a measurement of the angular difference between the viewing directions of a viewer’s two eyes when they look at the same fixation point on a 3D object simultaneously. Based on the triangulation principle, the closer the object, the more the eyes must converge.
  • (3) Motion parallax offers depth cues by comparing the relative motion of different elements in a 3D scene. When a viewer’s head moves, closer 3D objects appear to move faster than those far away from the viewer.
  • (4) Binocular disparity (stereo) refers to differences in images acquired by the left eye and the right eye. The farther away a 3D object is, the farther apart are the two images.

Figure 3 Illustration of four major physical depth cues.
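The triangulation behind the convergence cue can be made concrete with a short calculation. The sketch below is illustrative only; the function name and the 65 mm interocular distance are our own assumptions, not figures from the text:

```python
import math

def convergence_angle_deg(distance_m, interocular_m=0.065):
    """Angle between the two eyes' viewing directions for a fixation
    point straight ahead at the given distance (triangulation cue)."""
    return math.degrees(2.0 * math.atan(interocular_m / (2.0 * distance_m)))

# The closer the object, the larger the convergence angle:
near = convergence_angle_deg(0.5)  # object at 0.5 m
far = convergence_angle_deg(5.0)   # object at 5 m
```

The monotonic fall-off of the angle with distance is exactly the relation the brain exploits: for distant objects the viewing directions become nearly parallel, which is why convergence is only a useful cue at close range.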

Some 3D display devices can provide all of these physical depth cues, while other autostereoscopic 3D display techniques may not be able to provide all of them. For example, 3D movies based on stereo eyeglasses may cause eye fatigue due to the conflict between accommodation and convergence, since the displayed images lie on the screen rather than at their physical distances in 3D space [28].

28. D. M. Hoffman, A. R. Girshick, K. Akeley, and M. S. Banks, “Vergence–accommodation conflicts hinder visual performance and cause visual fatigue,” J. Vis. 8(3):33, 1–30 (2008). [CrossRef]

The human brain can also gain a 3D sensation by extracting psychological depth cues from 2D monocular images [3].

3. B. Blundell and A. Schwarz, Volumetric Three Dimensional Display System (Wiley, 2000).

Examples (Fig. 4) include the following:
  • (1) Linear perspective is the appearance of relative distance among 3D objects, such as the illusion of railroad tracks converging at a distant point on the horizon.
  • (2) Occlusion refers to parts of objects being hidden behind an opaque object. The human brain interprets partially occluded objects as lying farther away than the interposing ones.
  • (3) Shading cast by one object upon another gives strong 3D spatial-relationship clues. Variations in intensity help the human brain to infer the surface shape and orientation of an object.
  • (4) Texture refers to the small-scale structures on an object’s surface that can be used to infer the 3D shape of the object as well as its distance from the viewer.
  • (5) Prior knowledge of familiar sizes and the shapes of common structures—the way light interacts with their surfaces and how they behave when in motion—can be used to infer their 3D shapes and distance from the viewer.

Figure 4 Illustration of psychological depth cues from 2D monocular images.

The human visual system perceives a 3D scene via subconscious analysis with dynamic eye movements for sampling the various features of 3D objects. All visual cues contribute to this dynamic and adaptive visual sensing process.

Figure 5 Dependence of depth cues on viewing distance.

It is often quite difficult for a 3D display device to provide all the physical and psychological depth cues simultaneously. Some volumetric 3D display techniques, for example, may not be able to provide shading or texture due to the inherently transparent nature of the displayed voxels. Some 3D display technologies, such as stereoscopic displays, provide conflicting depth cues about the focusing distance and the eye convergence distance, a phenomenon that is often referred to as the accommodation/convergence breakdown (to be discussed in Section 2.5).

1.4. Plenoptic Function

In 1991, Adelson and Bergen [29] developed the concept of the plenoptic function (Fig. 6) to describe the kinds of visual stimulation that could be perceived by vision systems.

29. E. Adelson and J. Bergen, “The plenoptic function and the elements of early vision,” in Computational Models of Visual Processing (MIT, 1991), pp. 3–20.

The plenoptic function is an observer-based description of light in space and time. Adelson’s most general formulation of the plenoptic function P depends on several variables:
  • the location in space from which the light is viewed or analyzed, described by a 3D coordinate (x, y, z);
  • the direction from which the light approaches this viewing location, given by two angles (θ,ϕ);
  • the wavelength of the light λ; and
  • the time of the observation t.

Figure 6 Plenoptic function for a single viewer: the spherical coordinate system of the plenoptic function is used to describe the lines of sight between an observer and a scene.

The plenoptic function can thus be written in the following way:
P(x,y,z,θ,ϕ,λ,t)
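As a rough illustration, a sampled stand-in for the seven-variable plenoptic function can be held in a multidimensional array, one axis per variable. All array sizes and the random radiance values below are arbitrary toy choices of ours, not anything specified in the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Coarsely discretize each variable of P(x, y, z, theta, phi, lambda, t):
# 3 samples per spatial axis, an 8x4 grid of directions, 3 wavelength
# bands, and 2 time steps. Sizes are illustrative only.
P = rng.random((3, 3, 3, 8, 4, 3, 2))

def plenoptic(ix, iy, iz, itheta, iphi, ilam, it):
    """Radiance seen from sample position (ix, iy, iz), looking in
    direction (itheta, iphi), in wavelength band ilam, at time step it."""
    return P[ix, iy, iz, itheta, iphi, ilam, it]

sample = plenoptic(1, 1, 1, 0, 0, 0, 0)
```

Even this toy discretization makes the dimensionality problem tangible: every added sample along any axis multiplies the storage, which is why practical displays approximate the function rather than reproduce it.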

Note that the plenoptic function and the light field [8], to be discussed in Section 3.1, are similar in describing the visual stimulation that could be perceived by vision systems.

8. M. Levoy and P. Hanrahan, “Light field rendering,” in Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH (1996), pp. 31–42.

1.5. From 2D Pixel to 3D Voxel (or Hogel)

Most 2D display screens produce pixels that are points emitting light of a particular color and brightness. They never take on a different brightness or color hue no matter how or from where they are viewed. This omnidirectional emission behavior prevents 2D display screens from producing a true 3D sensation.

The profound insight offered by plenoptic function and light field theories is that the picture elements forming 3D display images, often called voxels (volumetric picture elements) or hogels (holographic picture elements), must be directional emitters: they appear to emit directionally varying light (Fig. 7). Directional emitters include not only self-illuminating directional light sources but also points on surfaces that reflect, refract, or transmit light from other sources. The emission of these points depends on their surrounding environment.

Figure 7 Each element (voxel or hogel) in a true 3D display should consist of multiple directional emitters: if tiny projectors radiate the captured light, the plenoptic function of the display approximates that of the original scene when seen by an observer.

A 3D display mimics the plenoptic function of the light from a physical object (Fig. 7). The accuracy to which this mimicry is carried out is a direct result of the technology behind the spatial display device. The greater the amount and accuracy of the view information presented to the viewer by the display, the more the display appears like a physical object. On the other hand, greater amounts of information also result in more complicated displays and higher data transmission and processing costs.

1.6. Classification of 3D Display Technology

Figure 8 Classification of 3D display technologies.

In the following sections, we will provide brief discussions on each technique listed in Fig. 8. We try to highlight the key innovative concept(s) in each opto-electro-mechanical design and to provide meaningful graphic illustration, without getting bogged down in too much technical detail. It is our hope that readers with a general background in optics, computer graphics, computer vision, or other various 3D application fields can gain a sense of the landscape in the 3D display field and benefit from this comprehensive yet concise presentation when they carry out their tasks in 3D display system design and applications.

2. Stereoscopic Display (Two Views, Eyeglasses Based)

We now review a number of binocular stereoscopic display techniques. These techniques require viewers to wear special eyeglasses in order to present two slightly different images (stereo pairs) to the two eyes. Having a description of the 3D contents of a scene allows computation of an individual perspective projection for each eye position. The key issue is to properly separate the stereo image pair displayed on the same screen so as to deliver the left image to the left eye and the right image to the right eye. This is also referred to as the stereo-channel separation problem. Numerous techniques have been developed to separate stereo channels based on differences in the spectral, polarization, temporal, and other characteristics of the left and right images (Fig. 9). We now provide a brief survey of these techniques.

Figure 9 Classification of stereoscopic display technology.

2.1. Color-Interlaced (Anaglyph)

In anaglyph displays, the left- and right-eye images are filtered with near-complementary colors (red and green, red and cyan, or green and magenta), and the observer wears corresponding color-filter glasses for separation (Fig. 10). According to tristimulus theory, the eye is sensitive to three primary colors: red, green, and blue. The red filter admits only red, while the cyan filter blocks red, passing blue and green (the combination of blue and green is perceived as cyan). Combining the red component of one eye’s view with the green and blue components of the other view allows some limited color rendition (binocular color mixture). Color rivalry and unpleasant aftereffects (transitory shifts in chromatic adaptation) restrict the use of the anaglyph method.

Figure 10 Color-interlaced anaglyph stereo.
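The red/cyan channel mixing described above can be sketched in a few lines of NumPy. The function name and the tiny synthetic stereo pair are our own illustrations, not part of any standard API:

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Red/cyan anaglyph: take the red channel from the left-eye view and
    the green and blue channels from the right-eye view.
    Inputs are HxWx3 uint8 arrays of identical shape."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]  # red channel from the left eye's image
    return out

# Tiny synthetic stereo pair (2x2 pixels) just to exercise the function.
left = np.full((2, 2, 3), (200, 10, 10), dtype=np.uint8)
right = np.full((2, 2, 3), (10, 150, 150), dtype=np.uint8)
ana = make_anaglyph(left, right)
```

Viewed through glasses with a red filter on the left eye and a cyan filter on the right, each eye recovers (approximately) its own source image, which is the channel-separation principle the section describes.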

ColorCode 3D is a newer, patented stereo viewing system deployed in the 2000s by Sorensen et al. [30] that uses amber and blue filters. Notably, unlike other anaglyph systems, ColorCode 3D is intended to provide nearly full-color viewing (particularly within the RG color space) with existing television and print media. As shown in Fig. 11, one eye (left, amber filter) receives the cross-spectrum color information, while the other eye (right, blue filter) sees a monochrome image designed to give the depth effect. The human brain fuses the two images together.

30. S. E. B. Sorensen, P. S. Hansen, and N. L. Sorensen, “Method for recording and viewing stereoscopic images in color using multichrome filters,” U.S. patent 6,687,003 (February 3, 2004).

Figure 11 Color-interlaced display.

2.2. Polarization-Interlaced Stereoscopic Display

Polarization-interlaced stereoscopic display techniques (Fig. 12) are very well suited for video projection. When using projectors with separate optical systems for the primary colors, the left- and right-view color beams should be arranged in identical order to avoid rivalry. The light flux in liquid crystal (LC) projectors is polarized by the light valves. Commercial LC projectors can be fitted for stereo display by twisting the original polarization direction via half-wave retardation sheets to achieve, e.g., the prevalent V-formation.

Figure 12 Polarization-interlaced stereoscopic display.

Stereo projection screens must preserve polarization. Optimal results have been reported for aluminized surfaces and for translucent opaque acrylic screens. Typical TV rear-projection screens (a sandwiched Fresnel lens and a lenticular raster sheet) depolarize the passing light. LC-based, direct-view displays and overhead panels have recently been marketed [31]. Their front sheet consists of pixel-sized micropolarizers, which are tuned in precise register with the raster of the LCD. The left- and right-eye views are electronically interlaced line by line and separated through a line-by-line change of polarization.

31. E. A. Edirisinghe and J. Jiang, “Stereo imaging, an emerging technology,” in Proceedings of SSGRR, L’Aquila, July 31–August 6, 2000.

2.3. Time-Multiplexed Stereoscopic Display

The human visual system is capable of merging the constituents of a stereo pair across a time lag of up to 50 ms. This “memory effect” (persistence of vision) [32,33] is exploited by time-multiplexed displays (Fig. 13). The left- and right-eye views are shown in rapid alternation and synchronized with an active LC shutter, which opens in turn for one eye while occluding the other. The shutter system is usually integrated into a pair of spectacles and controlled via an infrared link. When the observer turns away from the screen, both shutters are switched to transparent. Time-multiplexed displays are fully compatible with 2D presentation. Both constituent images are reproduced at full spatial resolution by a single monitor or projector, thus avoiding geometrical and color differences. Monitor-type systems have matured into a standard technique for 3D workstations.

32. M. Coltheart, “The persistences of vision,” Phil. Trans. R. Soc. B 290, 57–69 (1980). [CrossRef]

33. “Persistence of vision,” http://en.wikipedia.org/wiki/Persistence_of_vision.

Figure 13 Time-multiplexed stereoscopic display.

2.4. Head-Mounted Display

Figure 14 shows a head-mounted display (HMD) with a separate video source displayed in front of each eye to achieve a stereoscopic effect. The user typically wears a helmet or glasses with two small LCD or organic light-emitting diode (OLED) displays with magnifying lenses, one for each eye [34]. Advanced free-form optical design can improve the performance and reduce the size [35]. The technology can be used to show stereo films, images, or games, and it can also be used to create a virtual display. HMDs may also be coupled with head-tracking devices, allowing the user to “look around” the virtual world by moving his or her head, eliminating the need for a separate controller. Performing this update quickly enough to avoid inducing nausea in the user requires a great amount of computer image processing. If six-axis position sensing (direction and position) is used, the wearer may move about within the limitations of the equipment used. Owing to rapid advancements in computer graphics and the continuing miniaturization of video and other equipment, these devices are becoming available at more reasonable cost.

34. O. Cakmakci and J. Rolland, “Head-worn displays: a review,” J. Disp. Technol. 2, 199–216 (2006). [CrossRef]

35. D. Cheng, Y. Wang, H. Hua, and M. M. Talha, “Design of an optical see-through head-mounted display with a low f-number and large field of view using a free-form prism,” Appl. Opt. 48, 2655–2668 (2009). [CrossRef]

Figure 14 HMD for stereo 3D display.

Head-mounted or wearable glasses may be used to view a see-through image imposed upon the real-world view, creating what is called augmented reality. This is done by reflecting the video images through partially reflective mirrors. The real-world view is seen through the mirrors’ reflective surfaces. Experimental systems have been used for gaming, where virtual opponents may peek from real windows as a player moves about. This type of system is expected to have wide application in the maintenance of complex systems, as it can give a technician what is effectively “x-ray vision” by combining computer graphics rendering of hidden elements with the technician’s natural vision. Additionally, technical data and schematic diagrams may be delivered to this same equipment, eliminating the need to obtain and carry bulky paper documents. Augmented stereoscopic vision is also expected to have applications in surgery, as it allows the combination of radiographic data (computed axial tomography scans and magnetic resonance imaging) with the surgeon’s vision.

2.5. Accommodation–Convergence Conflict

One of the major complaints from users of stereoscopic displays is the inconsistency of depth cues, a phenomenon called accommodation–convergence conflict.

Figure 15 provides an illustration of this phenomenon. When observers view stereoscopic images displayed on a screen, the eye muscles focus the eyes at the distance of the display screen (i.e., the focal distance) in order to see the displayed images clearly. This is due to the accommodation function of the human eyes. On the other hand, the perception of 3D objects provided by the 3D display tells the human brain that the 3D objects are at their “real” distances, so that the viewer’s eyes converge at the “convergence distance.” As shown in Fig. 15, in stereoscopic displays the “focal distance” is not necessarily equal to the “convergence distance.” This type of visual conflict (i.e., the accommodation–convergence conflict) may cause visual confusion and fatigue in the human visual system (Hoffman et al. [28]). For some viewers, the accommodation–convergence conflict may cause discomfort and headaches after prolonged viewing of stereoscopic displays.

Figure 15 Illustration of the accommodation/convergence conflict in stereoscopic displays: convergence and focal distance with real stimuli and stimuli presented on conventional 3D displays.
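The mismatch between focal distance and convergence distance can be quantified with simple triangulation. The following is a hypothetical sketch of ours, assuming crossed disparity measured on the screen and a 65 mm interocular distance; neither the function names nor the numbers come from the text:

```python
def convergence_distance_m(screen_m, disparity_m, interocular_m=0.065):
    """Distance at which the eyes converge for a stereo point whose
    on-screen crossed disparity is disparity_m (positive = perceived in
    front of the screen), while accommodation stays at screen_m."""
    return interocular_m * screen_m / (interocular_m + disparity_m)

def conflict_diopters(screen_m, disparity_m, interocular_m=0.065):
    """Accommodation-convergence mismatch expressed in diopters (1/m)."""
    z = convergence_distance_m(screen_m, disparity_m, interocular_m)
    return abs(1.0 / screen_m - 1.0 / z)

# A point rendered with 30 mm of crossed disparity on a screen 2 m away:
z = convergence_distance_m(2.0, 0.030)  # eyes converge in front of the screen
c = conflict_diopters(2.0, 0.030)       # while focus stays at 2 m
```

With zero disparity the convergence distance equals the screen distance and the conflict vanishes; the larger the rendered depth separation from the screen plane, the larger the dioptric mismatch the visual system must tolerate.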

The accommodation–convergence conflict can be ameliorated by increasing the number of light rays that originate from different views and can be perceived simultaneously through the viewer’s pupil. Super-multiview [36] is one attempt to overcome this restriction. Ultimately, holographic 3D display technologies [4,5,37] can address this issue.

36. T. Honda, Y. Kajiki, K. Susami, T. Hamaguchi, T. Endo, T. Hatada, and T. Fujii, “A display system for natural viewing of 3D images,” in Three-Dimensional Television, Video and Display Technology (Springer, 2010), pp. 461–487.

4. D. Gabor, “Holography 1948–1971,” Proc. IEEE 60, 655–668 (1972). [CrossRef]

5. S. Benton and M. Bove, Holographic Imaging (Wiley Interscience, 2008).

37. M. Lucente, “Computational holographic bandwidth compression,” IBM Syst. J. 35, 349–365 (1996). [CrossRef]

3. Autostereoscopic 3D Display—Multiview 3D Display Techniques

3.1. Approximate the Light Field by Using Multiviews

The plenoptic function discussed in Section 1.4 describes the radiance along all light rays in 3D space and can be used to express the image of a scene from any possible viewing position, at any viewing angle, at any point in time. Equivalently, one can use a light field to represent the radiance at a point in a given direction. The origin of the light field concept can be traced back to Faraday [38], who first proposed that light should be interpreted as a field, much like the magnetic fields on which he had been working for several years. Gershun [39] coined the term “light field.” In free space (space with no light occluders), radiance along light rays can be described as a four-dimensional (4D) light field, as shown in Fig. 16 [8], or lumigraph [40]. Note that the definition of the light field is equivalent to the definition of the plenoptic function.

38. M. Faraday, “Thoughts on ray vibrations,” Philos. Mag. 28, 345–350 (1846).

39. A. Gershun, “The light field,” Moscow, 1936, P. Moon and G. Timoshenko, translators, J. Math. Phys. XVIII, 51–151 (1939).

40. S. J. Gortler, R. Grzeszczuk, R. Szeliski, and M. Cohen, “The lumigraph,” in Proceedings of ACM SIGGRAPH (1996), pp. 43–54.

Figure 16 Two plane representation L(x,y,u,v) of a 4D light field.

Formally, the light field as proposed by Levoy and Hanrahan [8] completely characterizes the radiance flowing through all points in all possible directions. For a given wavelength and time, one can represent a static light field as a five-dimensional (5D) scalar function L(x,y,z,θ,ϕ) that gives radiance as a function of location (x,y,z) in 3D space and the direction (θ,ϕ) in which the light is traveling. In 3D space free of light occluders, the radiance values of light rays do not change along their pathways. Hence, the 5D light field function contains redundant information, and this redundancy allows us to reduce the light field function from 5D to 4D while still completely describing the radiance along rays in free space.
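The two-plane L(x, y, u, v) parameterization of Fig. 16 can be illustrated with a toy 4D array in which each (u, v) index selects one 2D view. The resolutions and the synthetic "scene" below are purely illustrative choices of ours:

```python
import numpy as np

# Two-plane parameterization L(x, y, u, v) of a 4D light field: a ray is
# indexed by where it crosses the (u, v) "camera" plane and the (x, y)
# image plane. Resolutions are illustrative only.
NU, NV, NX, NY = 4, 4, 16, 16       # 4x4 views, each 16x16 pixels
L = np.zeros((NU, NV, NX, NY))

# Fill with a toy scene whose content shifts with the view index,
# mimicking the parallax between adjacent views.
for u in range(NU):
    for v in range(NV):
        xs = (np.arange(NX)[:, None] + u) % NX  # per-view horizontal shift
        L[u, v] = xs / (NX - 1.0)

def view(u, v):
    """Extract the 2D image seen from viewpoint (u, v) on the camera plane."""
    return L[u, v]
```

Adjacent views differ only by the small parallax shift, which is exactly the redundancy that makes the 4D (rather than 5D) representation sufficient in occluder-free space.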

The ultimate goal of 3D display systems is to reproduce truthfully the light field generated by real-world physical objects. This proves to be a very difficult task because the light field function is continuously distributed: a viewer of a 3D object is exposed to a potentially infinite number of different views of the scene. Trying to duplicate a complete light field is practically impossible. A practical implementation strategy for light field 3D display is to subsample the continuously distributed light field function and use a finite number of “views” to approximate it.

Figure 17 illustrates the concept of using a finite number of “views” to approximate the continuously distributed light field, which in theory has an infinite number of views in both directions. This approximation is viable and practical if the number of views is high enough that their angular spacing exceeds the angular resolution of human visual acuity. Furthermore, a “horizontal parallax-only” (HPO) multiview display device can be implemented that generates the parallax effect in the horizontal direction only: the viewer’s left and right eyes see different images, and different sets of images are seen as the viewer’s head moves horizontally. Even with many fewer views, and with the HPO restriction, an autostereoscopic 3D display system can still generate multiple views that evoke the viewer’s stereo parallax and motion parallax depth cues, thus delivering a certain level of 3D sensation.

Figure 17 Use of a finite number of views (multiple views) to approximate the infinite number of views generated by a continuously distributed light field.
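The trade-off between view count and angular resolution can be sketched numerically. The 30 degree viewing zone and the one-arcminute acuity figure used below are assumed illustrative numbers, not specifications from the text:

```python
import math

def views_needed(viewing_zone_deg, view_pitch_arcmin):
    """Number of discrete views required to cover a horizontal viewing
    zone when adjacent views are separated by view_pitch_arcmin
    (angular pitch in arcminutes)."""
    return math.ceil(viewing_zone_deg * 60.0 / view_pitch_arcmin)

# Assumed numbers (illustrative): a 30 degree HPO viewing zone.
coarse = views_needed(30.0, 60.0)  # one-degree view pitch: 30 views
fine = views_needed(30.0, 1.0)     # one-arcminute pitch: 1800 views
```

The two orders of magnitude between the coarse and acuity-limited view counts is why practical multiview systems accept a sparse approximation of the light field rather than matching human angular resolution.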

3.2. Implementation Strategies of Multiview 3D Displays

As shown in Fig. 18, a multiview autostereoscopic 3D display system is able to produce different images in multiple (different) angular positions, thus evoking both stereo parallax and motion parallax depth cues to its viewers. No special eyewear is needed.

Figure 18 Illustration of a multiview HPO autostereoscopic 3D display system.

There are numerous implementation strategies for multiview autostereoscopic 3D displays. Following a general classification approach proposed by Pastoor and Wöpking [18], they can be categorized into the following broad classes (Fig. 19):

18. S. Pastoor and M. Wöpking, “3-D displays: a review of current technologies,” Displays 17, 100–110 (1997). [CrossRef]
  • Occlusion based,
  • Refraction based,
  • Diffraction based,
  • Reflection based,
  • Illumination based,
  • Projection based, and
  • Super-multiview.

Figure 19 Classification of multiview 3D display techniques.

Each of these strategies has its own advantages and disadvantages; we briefly discuss each in turn.

3.3. Occlusion-Based Multiview 3D Display Techniques

The occlusion-based multiview 3D display approaches have one thing in common: they all have blockage(s) in the optical path. Due to parallax effects, parts of the image are hidden from one eye but are visible to the other eye. The technical solutions differ in the number of viewing slits (ranging from a dense grid to a single vertical slit), in presentation mode (time sequential versus stationary), and in whether the opaque barriers are placed in front of or behind the image screen (parallax barrier versus parallax illumination techniques).

3.3a. Parallax Barrier

The parallax stereogram was first introduced by Ives in 1902 [41

41. F. E. Ives, “A novel stereogram,” J. Franklin Inst. 153, 51–52 (1902). [CrossRef]

]. A parallax barrier display uses an aperture mask in front of a raster display to mask out individual screen sections that should not be visible from one particular viewing zone. Because the barrier is placed at a well-chosen distance pb in front of the display screen, a parallax-dependent masking effect is enforced. In the HPO design, all masks are vertical grids. The horizontally separated eyes thus perceive different vertical screen columns: every other column is masked out for one eye and is visible only to the other eye (Fig. 20).

Figure 20 Parallax barrier HPO autostereoscopic 3D display (example with two views).

The design parameters of the parallax barrier must be selected appropriately. Usually, the stand-off distance pe and the pixel pitch p of the screen are predetermined by hardware and viewing conditions. “e” is half of the eye separation distance. Based on similar triangle geometry, the appropriate barrier pitch b and the mask distance pb can be computed with
pb = (p * pe)/(e + p),   b = 2 * p * (pe - pb)/pe.
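The two similar-triangle relations above can be checked numerically; the pixel pitch, viewing distance, and eye-separation values below are illustrative, not from the text:

```python
def barrier_geometry(p, pe, e):
    """Parallax barrier design from similar triangles.
    p: pixel pitch; pe: stand-off (viewing) distance;
    e: half of the eye separation distance."""
    pb = (p * pe) / (e + p)        # mask distance in front of the screen
    b = 2 * p * (pe - pb) / pe     # barrier pitch (slightly under 2 pixels)
    return pb, b

# Illustrative values: 0.1 mm pixels, 600 mm viewing distance, e = 32.5 mm
pb, b = barrier_geometry(0.1, 600.0, 32.5)
```

Note that b comes out just under twice the pixel pitch; this slight reduction makes the slits of all pixel pairs converge on the same pair of viewing zones.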

The barrier mask can be dynamically adjusted to suit the viewer’s position, with a transparent LCD panel rendering the mask slits dynamically [42

42. T. Peterka, R. L. Kooima, D. J. Sandin, A. Johnson, J. Leigh, and T. A. DeFanti, “Advances in the Dynallax solid-state dynamic parallax barrier autostereoscopic visualization display system,” IEEE Trans. Vis. Comput. Graph. 14, 487–499 (2008). [CrossRef]

].

Figure 21 shows a multiview parallax barrier display principle. It is important to produce a smooth transition of motion parallax between adjacent views. Some degree of overlap of the luminance profiles of adjacent views may be necessary to ensure smoothness. Too much overlap, however, may increase the blurriness of the displayed images.

Figure 21 Parallax barrier HPO autostereoscopic 3D display (multiple views).

The parallax-barrier-based displays have several drawbacks:
  • Reduced brightness. Only a small amount of the light emitted from the pixels passes through the parallax barrier. The brightness of the display is thus significantly reduced.
  • Limited resolution. For a display with N views, the resolution of any individual view is essentially 1/N of the original display resolution.
  • Picket fence effect in the monocular image. Since each view sees only one pixel column out of N columns associated with one barrier window, vertical dark lines appear in each view, creating a “picket fence effect” in the monocular image.
  • Image flipping artifact when crossing a viewing zone. When the viewer’s position is not aligned properly with the intended viewing angle, the viewer’s left eye may see the image intended for the right eye and vice versa. This causes a “flip” artifact and confuses the viewer, so depth is not perceived correctly.
  • Limited number of viewing zones. When the number of views is increased, the opaque portion of the barrier widens while the slit width remains the same, further decreasing display brightness and worsening the picket fence effect.
  • Diffraction effects caused by small apertures. As resolution increases, the apertures of the parallax barrier become smaller, which may introduce diffraction effects that spread light rays and degrade image quality.

3.3b. Time-Sequential Aperture Displays

One time-sequential aperture display was developed at Cambridge University [10

10. N. Dodgson, “Autostereoscopic 3D displays,” Computer 38(8), 31–36 (2005). [CrossRef]

]. This time-sequential 3D display uses a high-speed display screen and a time-multiplex scheme to split N image sequences into N synthetic images at different virtual positions, as if there were N image sources at different positions. The system shown in Fig. 22 uses a high-speed CRT (1000 Hz) and an array of ferroelectric LC shutters serving as a fast-moving aperture in the optical system, such that the image generated through each shutter is directed toward a corresponding view zone (the images are labeled in the figure as “views”) (see [10] for details). At any given time, only one shutter is transparent and the rest are opaque. Rapidly changing images shown on the high-speed display screen are synchronized with the ON/OFF status of each ferroelectric LC shutter to ensure that the correct image is projected to its view zone.

Figure 22 Time-sequential aperture 3D display using a high-speed CRT.

This approach has been extended to a more elaborate design, as shown in Fig. 23, by Kanebako and Takaki [44

44. T. Kanebako and Y. Takaki, “Time-multiplexing display module for high-density directional display,” Proc. SPIE 6803, 68030P (2008). [CrossRef]

]. An array of LEDs is used as a set of high-speed switching light sources. Each LED in the array can be controlled and synchronized to switch ON/OFF at a high frame rate. At any given time, only one LED is ON. Owing to the directional nature of the LEDs and their associated illumination optics, the illumination beam is directed toward a high-speed spatial light modulator (SLM), such as a digital light processor, an LCD, or liquid-crystal-on-silicon (LCOS). The light beam modulated by the SLM becomes the image and is projected toward a set of imaging lenses. Because of the directionality of the illumination beam, there is a one-to-one correspondence between the LEDs and the aperture slots in the aperture array. The light emerging from the aperture is projected onto a vertical diffuser screen. Multiview 3D display images can then be seen by viewers over a wide range of viewing angles.

Figure 23 Time-sequential aperture 3D display using a switchable LED array.

The time-sequential display scheme has several advantages: (1) it preserves the full image resolution of the high-speed display screen in each view of the 3D display; (2) the system architecture is simple, with no need for multiple screens or projectors; (3) calibration of the system is relatively easy; and (4) the cost of the overall system should be low if mass produced.

The potential drawbacks of the time-sequential display scheme include the following: (1) the number of views is limited by the speed of the SLM; with a 1000 Hz SLM and a 25 Hz refresh rate for the 3D display, the maximum number of views is N = 1000/25 = 40. (2) The separation distance of the apertures is limited by the optical design, so a wide viewing angle [say, a 100° field of view (FOV)] is difficult to achieve. (3) The brightness of the Cambridge display is limited by the CRT technology and the aperture size: if a small aperture is selected in order to achieve a dense distribution of virtual image sources, the brightness of each view is greatly reduced.
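The view-count limit in drawback (1) is simply integer division of the SLM frame rate by the desired 3D refresh rate; a one-line sketch:

```python
def max_views(slm_rate_hz, refresh_rate_hz):
    """Each view consumes one SLM frame per 3D refresh cycle, so the
    view count is capped by the ratio of the two rates."""
    return slm_rate_hz // refresh_rate_hz

n = max_views(1000, 25)   # 40 views, as in the example above
```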

3.3c. Moving Slit in Front of Display Screen

The so-called Holotron (St. John [45

45. D. S. St. John, “Holographic color television record system,” U.S. patent3,813,685 (May28, 1974).

]) is a moving-slit display without additional lenses in front of the CRT. Corresponding columns of different views are displayed side-by-side behind the slit aperture (Fig. 24). As the slit moves laterally, a new set of multiplexed image columns is displayed on the CRT. This way, a set of N views is displayed as a sequence of partial images, composed of N columns, during a single pass of the slit. The horizontal deflection of the electron beam is restricted to the momentary position of the partial-image area. To reduce the CRT requirements (sampling frequency, phosphor persistence), a multiple electron gun CRT in conjunction with a multiple moving-slit panel has been proposed.

Figure 24 Moving slit in front of a high-speed display.

3.3d. Cylindrical Parallax Barrier Display

The FOV of an autostereoscopic display may be extended to 360° with some clever designs. Figure 25 shows an example of a multiview parallax barrier display, dubbed the SeeLinder (Endo et al. [46

46. T. Endo, Y. Kajiki, T. Honda, and M. Sato, “Cylindrical 3-D video display observable from all directions,” in Proceedings of Pacific Graphics (2000), pp. 300–306.

]), using a rotating cylindrical parallax barrier and LED arrays for covering a 360° HPO viewing area. The physically limited number of LED arrays is overcome by rotating the LED array, while the multiview in the horizontal direction is created by the rotating cylindrical parallax barriers. In each viewing location (zone), a viewer can see an appropriate view of a 3D image corresponding to the viewer’s location. The prototype has a 200 mm diameter and a 256 mm height. Displayed images have a resolution of 1254 circumferential pixels and 256 vertical pixels. The refresh rate is 30 Hz. Each pixel has a viewing angle of 60°, which is divided into over 70 views so that the angular parallax interval of each pixel is less than 1°. The key design parameters include the barrier interval, the aperture width, the width of the LEDs, and the distance between the LEDs and the barriers.
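The quoted SeeLinder numbers are easy to cross-check. The sketch below verifies the sub-1° angular parallax interval and also derives the circumferential pixel pitch implied by the 200 mm diameter; the pitch figure is our own arithmetic, not a quoted specification:

```python
import math

angle_per_pixel_deg = 60.0    # each pixel's viewing angle
views_per_pixel = 70          # "divided into over 70 views"
interval_deg = angle_per_pixel_deg / views_per_pixel   # ~0.86 degrees

diameter_mm = 200.0
circumferential_pixels = 1254
# circumference / pixel count -> roughly 0.5 mm circumferential pitch
pitch_mm = math.pi * diameter_mm / circumferential_pixels
```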

Figure 25 Cylindrical parallax barrier display.

This “outward viewing” design shown in Fig. 25 is for viewers surrounding the outside of the display device. A similar design but for “inward viewing” is described by Yendo et al. [47

47. T. Yendo, N. Kawakami, and S. Tachi, “Seelinder: the cylindrical light field display,” in ACM SIGGRAPH (2005), paper 16.

], where the viewer is located inside a large cylinder. The barrier masks are on the inside and the LEDs on the outside. The barriers and LEDs are fixed relative to each other, remain aligned, and rotate together.

3.3e. Fully Addressable LCD Panel as a Parallax Barrier

A recent advance extending the parallax barrier techniques discussed in Section 3.3a is to use a fully addressable LCD panel with gray-scale transmittance values in each pixel to modulate the light field generated by the display system (Lanman et al. [48

48. D. Lanman, M. Hirsch, Y. Kim, and R. Raskar, “Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization,” ACM Trans. Graph. 29, 163 (2010). [CrossRef]

]). The MIT group also introduced the tensor display technique that utilizes compressive light field optimization to synthesize the transmittance values on each of multiple layers of LCD screens (Wetzstein et al. [49

49. G. Wetzstein, D. Lanman, M. Hirsch, and R. Raskar, “Tensor displays: compressive light field synthesis using multilayer displays with directional backlighting,” ACM Trans. Graph. 31, 80 (2012). [CrossRef]

]). This unified optimization framework, based on nonnegative tensor factorization, allows joint multilayer, multiframe light field decompositions, significantly reducing artifacts. It is also the first optimization method for designs combining multiple layers with directional backlighting. Since the target light field can be defined to have both horizontal and vertical parallax, this advanced barrier technique is able to produce 3D displays with both horizontal and vertical parallax. This is a promising direction for barrier-based 3D display technology development.

3.4. Refraction-Based Multiview 3D Display Techniques

3.4a. Lenticular Sheet

A lenticular lens sheet consists of a linear array of thick plano–convex cylindrical lenses called “lenticules.” The function of a lens sheet is optically analogous to that of a parallax barrier screen. However, it is transparent, and therefore the optical efficiency is much higher than its parallax barrier counterpart. Hess [50

50. W. Hess, “Stereoscopic picture,” U.S. patent1,128,979 (February16, 1915).

] patented a stereoscopic display using a one-dimensional (1D) lenticular array in 1912. In the spatial multiplex design of a multiview 3D display, the resolution of the display screen is split among the multiple views. Figure 26 illustrates an HPO spatial multiplex display with five views using a lenticular lens sheet in front of a 2D LCD screen. The lenticular lenses are aligned with the vertical pixel columns on the 2D LCD screen, and every fifth pixel column is assigned to the same view. If viewers position their eyes in the correct viewing zone, they see stereo images from the spatial multiplex screen; when the head moves, motion parallax is also experienced.
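The spatial multiplexing described above amounts to column interleaving: screen column c carries the corresponding column of view c mod N. A minimal sketch, where the five uniform test images are placeholders:

```python
def interleave_views(views):
    """Column-interleave N view images for an HPO lenticular screen:
    screen column c shows the corresponding column of view (c mod N).
    Each view is a list of rows; each row is a list of pixel values."""
    n = len(views)
    h, w = len(views[0]), len(views[0][0])
    return [[views[c % n][r][c] for c in range(w)] for r in range(h)]

# Five flat test "views" (4 rows x 10 columns), filled with the view index
views = [[[i] * 10 for _ in range(4)] for i in range(5)]
screen = interleave_views(views)
```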

Figure 26 Multiview autostereoscopic 3D display using a spatial multiplex design.

The lenticular display has the advantage of utilizing the existing 2D screen fabrication infrastructure. Its implementation is relatively simple and low cost. Although lenticular-based displays offer better brightness and higher possible resolution than parallax-barrier-based displays, they present their own set of challenges:
  • Limited resolution. For a display with N views, the resolution of an individual view is essentially 1/N of the original display resolution.
  • Alignment. Aligning a lenticular sheet with a screen requires significant effort.
  • Cross talk between views and image flips. This may result in one eye seeing the image intended for the other eye, causing the human brain to perceive the stereo effect incorrectly.
  • Lenticular-based displays also suffer from problems that plague parallax-barrier-based displays, such as the picket fence effect, limited resolution, and a limited number of viewing windows.

3.4b. Slanted Lenticular Layer on a LCD

Because the resolution of a 3D image is reduced in lenticular lens systems, a number of advanced techniques have been developed to compensate for the loss. One of them, developed by van Berkel and co-workers [51

51. C. van Berkel, D. W. Parker, and A. R. Franklin, “Multiview 3D LCD,” Proc. SPIE 2653, 32 (1996). [CrossRef]

,52

52. C. van Berkel and J. A. Clarke, “Characterization and optimization of 3D-LCD module design,” Proc. SPIE 3012, 179 (1997). [CrossRef]

] at Philips Research Labs, is a slanted lenticular system that distributes the loss of resolution between the horizontal and vertical directions by slanting the structure of the lenticular lens or rearranging the color filters of the pixels.

Figure 27 shows the relationship between the pixels and the slanted lenticular sheet for a seven-view display. As the LCD is located in the focal plane of the lenticular sheet, the horizontal position on the LCD corresponds to the viewing angle. Therefore all points on the line A–A direct view 3 in a given direction, and all points on line B–B direct view 4 in another direction. The way in which the effect of flipping is reduced is evident by examining line A–A, where view 3 predominates, but with some contribution from view 2. Similarly, for the angle corresponding to line B–B, view 4 predominates with some contribution from view 3.
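The subpixel-to-view assignment behind Fig. 27 is often written in the form popularized by van Berkel, where the view index shifts with both the subpixel column and the pixel row according to the lens slant. A simplified sketch; the slant of one subpixel per row (tan α = 1/3) and the zero offset are illustrative assumptions, not parameters from the text:

```python
def view_index(k, l, n_views, tan_slant=1/3, k_off=0):
    """Simplified van Berkel-style mapping: view number for RGB subpixel
    column k and pixel row l under a lenticular slanted by tan_slant."""
    return round(k + k_off - 3 * l * tan_slant) % n_views

# With tan_slant = 1/3, stepping one subpixel right and one row down
# lands on the same view, producing the diagonal view bands of Fig. 27.
same = view_index(0, 0, 7) == view_index(1, 1, 7)
```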

Figure 27 Arrangement of a slanted lenticular screen on a LCD array to enhance image quality.

A number of 3D TV products based on the slanted-lenticular-on-LCD technique are currently available in the commercial market from manufacturers such as Vizio, Stream TV Networks, Alioscopy, RealD, Philips, Sharp, Toshiba, and TCL. Some of them are working on 4K UHD and even 8K versions of 3D TVs.

3.4c. LCD Design for Achieving High-Resolution Multiview 3D Displays Using a Lenticular or Parallax Barrier

One of the major drawbacks of conventional lenticular-lens-based multiview 3D displays is the loss of the full resolution of the SLM in each view. In fact, for HPO display systems, the horizontal resolution of each view is only 1/N of the SLM’s native resolution (N is the number of views).

A number of LCD manufacturers are developing new technologies to address this issue. For example, a high-density LCD module has been developed by NLT [53] that splits the conventional square pixel into N portions, as shown in Fig. 28. Special lenticular arrays are designed to cover each portion with different optics, so that each RGB portion is directed toward a different view direction. With this design, each view in the multiview 3D display retains the pixel resolution of the original LCD. As of May 2013, NLT has made two-view and six-view prototypes of autostereoscopic 3D display panels [up to 7.2 in. (18.29 cm)] for industrial applications.

Figure 28 High-resolution multiview 3D display using a specially designed LCD and a lenticular array sheet.

3.4d. Multiview 3D Display Using Multiple Projectors and a Lenticular Sheet

Figure 29 shows a method for creating a multiview 3D display using multiple projectors, as demonstrated by Matusik and Pfister [9

9. W. Matusik and H. Pfister, “3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes,” ACM Trans. Graph. 23, 814–824 (2004). [CrossRef]

]. Each of these projectors creates images for a single corresponding view. The projectors form images on a special lenticular reflective screen. In the vertical direction, the light is diffused in all directions. In the horizontal direction, the light is focused onto the reflective diffuser screen and then projected back to the direction of the projector. Light actually passes through the vertically oriented lenticular screen twice. In the first pass, the lenticular lenses focus the projector pixels onto the diffuse screen. In the second pass, the same lenticular lenses redistribute the pixels back to the same angular direction. A transmissive screen can also be used, together with a double lenticular sheet.

Figure 29 Autostereoscopic 3D display using multiple projectors (frontal projection).

Compared with the spatial multiplex methods, the multiprojector approach can preserve the resolution of projected images for each view. There is no need to split resolution by the number of views. Also, the size of the display screen is scalable without increasing cost and complexity significantly, due to the use of projectors.

The drawbacks of multiprojector-based 3D display include the following:
  • Expense. Using either frontal or rear projection methods, such displays are expensive. The cost of having one projector per view becomes exorbitant for even a reasonable number of views.
  • Difficulty of calibration. These displays also require that the projected images must be aligned precisely with one another. In practical application, maintaining optical calibration for a large number of projectors is a challenging task.
Despite these problems, experimental systems have been produced with more than 100 views.

3.4e. Prism Mask

A different solution developed by Schwerdtner and Heidrich [54

54. A. Schwerdtner and H. Heidrich, “Dresden 3D display (D4D),” Proc. SPIE 3295, 203 (1998). [CrossRef]

] uses a single panel and light source in connection with a prism mask (Fig. 30). Alternating pixel columns (RGB triples) are composed of corresponding columns of the left and right images. The prisms serve to deflect the rays of light to separated viewing zones.

Figure 30 Illustration of a prism mask 3D display screen.

3.4f. Liquid Crystal Lenses

If the shape of the LC material can be controlled, it can serve as an optical lens in front of an SLM, directing light beams in desired directions in real time. This basic idea is illustrated in Fig. 31. When a voltage is applied between the ITO layers, the LC cells form an array of optical lenses equivalent in function to a lenticular lens array. The LC lens can therefore be used to produce a multiview 3D display.

Figure 31 Liquid crystal lens for a 2D/3D switchable display.

The unique advantage of using an LC lens for 3D display is that it can be electronically switched between 2D and 3D display modes. This feature solves a major problem faced by existing lenticular-based multiview 3D displays: when displaying 2D content, their resolution is lower than that of a comparable 2D display. The LC-lens-based switchable display preserves the native resolution of the SLM in 2D display mode.

One of the major issues in current LC lens technology is the irregularity of LC alignment in the boundary region. This causes serious cross talk among views and deterioration of image quality. Considerable effort is underway to solve this problem. Huang et al. [55

55. Y.-P. Huang, C.-W. Chen, T.-C. Shen, and J.-F. Huang, “Autostereoscopic 3D display with scanning multi-electrode driven liquid crystal (MeD-LC) lens,” 3D Res. 1, 39–42 (2010). [CrossRef]

] implemented a multielectrode-driven LC lens (MeD-LC lens). By using multiple electrodes, the shape of the LC lens can be controlled better than in conventional LC lenses that use only two electrodes. The shape of the LC lens can be changed dynamically at the image refresh rate, allowing active scanning of light beams to generate multiview 3D display functions. The high-frame-rate images displayed on the SLM are projected in time-sequential fashion to different viewing directions. The LC lens technique thus enables a high-resolution multiview 3D display, since each view image retains the full resolution of the SLM.

3.4g. Integral 3D Display

Lenticular-based autostereoscopic 3D displays provide only horizontal parallax and generally lack vertical parallax. To achieve parallax in both directions, the integral display technique uses spherical lenses instead of cylindrical ones to present both horizontally and vertically varying directional information, thus producing a full-parallax image. The integral photography concept was proposed in 1908 by Lippmann [56

56. G. Lippmann, “Épreuves réversibles. Photographies intégrales,” C. R. Acad. Sci. 146, 446–451 (1908).

]. Creating 3D integral imagery by digitally interlacing a set of computer generated 2D views was first demonstrated in 1978 at the Tokyo Institute of Technology in Japan [57

57. H. Takahashi, H. Fujinami, and K. Yamada, “Holographic lens array increases the viewing angle of 3D displays,” SPIE Newsroom (June6, 2006).

]. Inexpensive lens arrays were produced by extrusion embossing, but these lenses have severe moiré effects and are not suitable for integral displays. More sophisticated holographic lens arrays have been demonstrated to enhance the viewing angle and depth of field [58

58. A. Stern and B. Javidi, “3D image sensing, visualization, and processing using integral imaging,” Proc. IEEE 94, 591–607 (2006). [CrossRef]

].

Figure 32 shows a typical spherical lens array used by integral 3D display devices. Integral 3D displays are less common than their lenticular-based counterparts mostly because even more of their spatial resolution is sacrificed to gain directional information.

Figure 32 Optical design of an integral 3D display screen.

In the image acquisition stage (pickup step) of an integral imaging system, each individual lens or pinhole records its own microimage of the object, called an elemental image; a large number of small, juxtaposed elemental images are thus formed behind the lens array on the recording device. In the 3D display stage, the display device showing the elemental images is aligned with the lens array, and a spatial reconstruction of the object is created in front of the lens array, which can be observed from arbitrary perspectives within a limited viewing angle. Integral imaging therefore suffers from inherent drawbacks in viewing parameters, such as viewing angle, resolution, and depth range, due to the limited resolution of the 2D SLM and of the lens array itself.
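A 2D cross-section of the pickup step can be sketched with a pinhole model of the lens array; all coordinates and distances below are illustrative:

```python
def elemental_image_points(x, z, lens_centers, gap):
    """Pinhole-model pickup: project a point at lateral position x and
    depth z in front of the array, through each lens center, onto a
    recording plane a distance `gap` behind the array."""
    return [cx + (cx - x) * gap / z for cx in lens_centers]

# A point on axis, 100 mm in front of a 3-lens array, 2 mm film gap:
pts = elemental_image_points(0.0, 100.0, [-5.0, 0.0, 5.0], 2.0)
# Each lens records the point at a slightly different position; these
# disparities are what the display stage replays as parallax.
```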

Recent progress in autostereoscopic displays is focused on the enhancement of 3D resolution as well as smooth parallax [59

59. J.-H. Park, Y. Kim, J. Kim, S.-W. Min, and B. Lee, “Three-dimensional display scheme based on integral imaging with three-dimensional information processing,” Opt. Express 12, 6020–6032 (2004). [CrossRef]

]. Another excellent integral photography display achieves an image depth of 5.7 m, as demonstrated by Liao et al. [60

60. H. Liao, T. Dohi, and K. Nomura, “Autostereoscopic 3D display with long visualization depth using referential viewing area based integral photography,” IEEE Trans. Vis. Comput. Graph. 17, 1690–1701 (2011). [CrossRef]

]. Although integral imaging provides both vertical and horizontal parallax within a limited viewing angle, low resolution resulting from full parallax is still a problem for practical uses.

3.4h. Moving Lenticular Sheet

Multiview display can be produced via moving parts, as proposed by Cossairt et al. [61

61. O. S. Cossairt, M. Thomas, and R. K. Dorval, “Optical scanning assembly,” U.S. patent7,864,419 (June8, 2004).

], Goulanian et al. [62

62. E. Goulanian and A. F. Zerrouk, “Apparatus and system for reproducing 3-dimensional images,” U.S. patent7,944,465 (May17, 2011).

], and Bogaert et al. [63

63. L. Bogaert, Y. Meuret, S. Roelandt, A. Avci, H. De Smet, and H. Thienpont, “Demonstration of a multiview projection display using decentered microlens arrays,” Opt. Express 18, 26092–26106 (2010). [CrossRef]

]. Figure 33 illustrates the concept of moving a lenticular sheet module at high speed (>30 Hz) to steer the viewing direction of images displayed on a high-speed display screen. The image can be generated by a fast LCD or by projecting a high-speed image sequence from a DLP. At each position of the lenticular sheet module, the high-speed display screen produces an image corresponding to a particular viewing direction. With the back-and-forth motion of the lenticular sheet module, the multiview images are scanned through a wide range of viewing angles.

Figure 33 3D display design using a moving lenticular sheet module to steer the viewing direction to a wide angle.

The drawbacks for this design are as follows:
  • (1) Accurate micro-motion of a large lenticular sheet module is difficult to implement and depends on screen size, weight, and design. Different display sizes therefore require different types of motion controllers, leading to high costs in scaled-up production.
  • (2) Since the motion of the lenticular sheet module is back and forth, its speed is not constant: there is significant variation (from zero to maximum speed) during every cycle of motion. The scanning speed of the viewing direction is therefore also not constant, which may affect viewing performance.
  • (3) For large size screens [e.g., 70 in. (177.80 cm)], such a moving screen module design may be very difficult and costly to implement.
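Drawback (2) can be made concrete: for a sinusoidally reciprocating sheet, the scan speed swings between zero at the turnaround points and a peak at mid-stroke. A sketch with illustrative amplitude and frequency values:

```python
import math

def scan_speed(t, amplitude, freq_hz):
    """Speed of a reciprocating module x(t) = A*sin(2*pi*f*t);
    ranges from 0 at the stroke ends to 2*pi*f*A at mid-stroke."""
    w = 2.0 * math.pi * freq_hz
    return abs(amplitude * w * math.cos(w * t))

A, f = 0.5, 30.0                        # illustrative: 0.5 mm, 30 Hz
v_mid = scan_speed(0.0, A, f)           # peak speed at mid-stroke
v_turn = scan_speed(1 / (4 * f), A, f)  # ~0 at the turnaround
```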

3.5. Reflection-Based Multiview 3D Display

3.5a. Beam Splitter (Half Mirror)

A field lens is placed at the focus of a real (aerial) image in order to collimate the rays of light passing through that image without affecting its geometrical properties. Various 3D display concepts use a field lens to project the exit pupils of the left- and right-image illumination systems into the appropriate eyes of the observer. The effect is that the right-view image appears dark to the left eye, and vice versa. This approach generally avoids all the difficulties resulting from small registration tolerances of pixel-sized optical elements.

Figure 34 shows the basic principle of the beam-splitter-based 3D display proposed by Woodgate et al. [64

64. G. J. Woodgate, D. Ezra, J. Harrold, N. S. Holliman, G. R. Jones, and R. R. Moseley, “Observer tracking autostereoscopic 3D display systems,” Proc. SPIE 3012, 187 (1997). [CrossRef]

]. Two LCD panels, the images of which are superimposed by a beam combiner, are used to display the left and right views. Field lenses placed close to the LCD serve to direct the illumination beams into the appropriate eye. For head tracking, the position of the light sources must be movable. The head-tracking illumination system can be implemented, e.g., by monochrome CRTs displaying high-contrast camera images of the left and right halves of the observer’s face. Multiple-user access is possible by using multiple independent illuminators.

Figure 34 Reflection-based autostereoscopic 3D display.

3.6. Diffraction-Based Multiview 3D Display

With the diffractive-optical-element (DOE) approach, corresponding pixels of adjacent perspective views are grouped into “jumbo pixels” (ICVision Display [65

65. M. W. Jones, G. P. Nordin, J. H. Kulick, R. G. Lindquist, and S. T. Kowel, “A liquid crystal display based implementation of a real-time ICVision holographic stereogram display,” Proc. SPIE 2406, 154 (1995). [CrossRef]

] and 3D Grating Image Display [66

66. T. Toda, S. Takahashi, and F. Iwata, “3D video system using grating image,” in Proc. SPIE 2406, 191 (1995). [CrossRef]

,18].) Small diffraction gratings placed in front of each partial pixel direct the incident light to the respective image’s viewing area (first-order diffraction; see Fig. 35). Current prototypes yield images of less than 1.5 in. (3.81 cm) in diameter. Advanced concepts provide for the integration of image modulation and diffraction of light within a single, high-resolution SLM [67

67. E. Schulze, “Synthesis of moving holographic stereograms with high-resolution spatial light modulators,” Proc. SPIE 2406, 124 (1995). [CrossRef]

].

Figure 35 DOE screen-based autostereoscopic 3D display.

3.6a. Directional Backlight Based on Diffractive Optics

Multiview 3D display can also be achieved by using an SLM (such as an LCD) screen with a directional backlight mechanism. Fattal et al. [68

68. D. Fattal, Z. Peng, T. Tran, S. Vo, M. Fiorentino, J. Brug, and R. G. Beausoleil, “A multi-directional backlight for a wide-angle glasses-free three-dimensional display,” Nature 495, 348–351 (2013). [CrossRef]

] from HP Labs introduced an interesting directional backlight design that produces wide-angle, full-parallax views in a low-profile volume suited for mobile devices. Figure 36 illustrates this design concept, as proposed in [68]. Special grating patterns are etched or deposited on the surface of a glass or plastic light-guide substrate. The substrate is illuminated by collimated light, and the diffractive patterns scatter the light out of the backlight light-guide substrate. The directions of the scattered light are determined by the diffractive patterns, which can be carefully designed to implement the multiview directions. Experimental results presented in [68] show a 64-view backlight that produces 3D images with a spatial resolution of 88 pixels per inch, full parallax, and a 90° viewing angle.
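The relationship between grating pitch and exit direction follows the ordinary grating equation. The sketch below assumes light leaving normal to the guide surface, a simplification, since the real design involves guided-mode incidence angles and the substrate's refractive index, but it gives a feel for the pitches involved:

```python
import math

def grating_pitch_nm(wavelength_nm, exit_angle_deg, order=1):
    """First-order grating equation, m * wavelength = pitch * sin(theta),
    solved for the pitch needed to steer light to a given exit angle."""
    return order * wavelength_nm / math.sin(math.radians(exit_angle_deg))

pitch = grating_pitch_nm(550.0, 30.0)   # green light to a 30 degree view
```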

Figure 36 Directional backlight based on diffractive optics.

3.7. Projection-Based Multiview 3D Display

3.7a. USC: Rotating Screen Light Field Display with a High-Speed Projector

The light field display developed by Jones et al. [13] consists of a high-speed image projector, a spinning mirror covered by a holographic diffuser, and electronics to decode specially encoded Digital Visual Interface (DVI) video signals. As shown in Fig. 37, the high-speed image projector and the spinning diffuser-covered mirror together generate a 360° HPO view.

Figure 37 360° multiview 3D display: 2D pictures are generated for 360° surrounding directions, and each is projected by the display device toward its corresponding viewing angle.

Cossairt et al. [69] developed a 198-view HPO 3D display based on the Perspecta spatial 3D display platform [70]. It is able to produce 3D images with viewer-position-dependent effects. This was achieved by replacing the rotating screen with a vertical diffuser and altering the 3D rendering software on the Perspecta.

Light field 3D displays can produce impressive 3D visual effects. There are, however, several inherent problems with existing light field 3D display technologies. For example, some approaches rely on rotating parts and/or a scanning mechanism to produce the light field distribution, and these moving parts place a physical limit on the display volume. Trade-offs between displayed image quality and the complexity/cost of the display system sometimes force designers to sacrifice image quality; thus 3D image resolution and quality are still not comparable with those of high-end 2D displays.

3.7b. Holografika: Multiple Projectors + Vertical Diffuser Screen

A recent development in multiview 3D display technology utilizes multiple projectors and a holographic screen to generate a sufficient number of views to produce a 3D display effect. These displays use a specially arranged array of microdisplays and a holographic screen. One of the elements used for making the holographic screen is the vertical diffuser (Fig. 38). Each point of the holographic screen emits light beams of different color and intensity in various directions in a controlled manner. The light beams are generated by a light modulation system arranged in a specific geometry, and the holographic screen performs the optical transformation needed to compose these beams into a continuous 3D view. With proper software control, the light beams leaving the various pixels can be made to propagate in multiple directions, as if they were emitted from physical objects at fixed spatial locations.

Figure 38 Vertical diffuser screen: the horizontal parallax-only nature of the display requires a diffusing element in the image plane. The function of this diffuser is to scatter light along the vertical axis while leaving the horizontal content of the image unaltered. Such a diffuser can be approximated by a finely pitched lenticular array.

The HoloVizio display developed by Holografika [71,72], as shown in Fig. 39, takes advantage of the 1D diffusion property and uses a number of projectors that illuminate a holographic screen. In the horizontal cross-section view, a viewer can see only one very thin slit of the image from each projector, since the screen diffuses light in the vertical direction only. To generate one viewing perspective, these thin slits from different projectors have to be mosaicked together; therefore, the display requires many projectors working together. The prototype HoloVizio system uses as many as 80 projectors, and with mirrors on both sides these projectors are able to generate as many as 200 views due to the mirror effect. In one prototype, the system is able to produce 80 million voxels [80 views, each with 720p (1280×720) resolution].

Figure 39 Holografika multiview 3D Display: multi-projector + vertical diffuser screen.
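The voxel budget quoted above is simply the product of the view count and the per-view resolution; a quick sanity check in Python using the prototype's numbers:

```python
# Rough voxel budget of a multi-projector light field display:
# each projector contributes one full-resolution view.
def voxel_count(num_views, width, height):
    return num_views * width * height

total = voxel_count(num_views=80, width=1280, height=720)
print(f"{total / 1e6:.1f} million voxels")  # ~74 million, in line with the
                                            # quoted ~80-million-voxel figure
```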

3.7c. Theta-Parallax-Only Display

Favalora and Cossairt [73] proposed an interesting design of a multiview 3D display that exploits the directional light-steering property of a special screen. As shown in Fig. 40, the rotating flat screen consists of three layers. The top and bottom layers are diffusers. The middle layer is a directional light-steering layer, meaning that all light projected from the bottom of the screen is reflected toward one side of the view. As the flat screen rotates, the displayed images are scanned 360° around the display device, forming a 360°-viewable 3D image display.

Figure 40 TPO 3D display.

Similar displays have since been demonstrated in various laboratories; see, e.g., Uchida and Takaki [74].

3.7d. Projector with a Lenticular Mirror Sheet

Another design concept for projection-based multiview 3D displays, proposed by Krah at Apple [75], is to use a projector and a reflective lenticular mirror sheet as the screen. As shown in Fig. 41, the projector projects a multipixel image onto each strip of the lenticular sheet. Due to the curvature of each lenticular strip, the reflected beams are directed toward different directions. By carefully calibrating the projected image and the location of the lenticular screen, one can produce a multiview 3D display system that presents different images to different viewing perspectives.

Figure 41 3D display with a projector and a lenticular mirror sheet.

3.7e. Projector with a Double-Layered Parallax Barrier

The scheme shown in Fig. 20 using a single-layer parallax barrier can also be implemented with a double-layered parallax barrier and a multiprojector array. Figure 42 shows this concept, as proposed by Tao et al. [76]. There are multiple projectors, each generating an image for one view. All images are projected onto the first parallax barrier layer, which directs the light rays from each projector to a specific position on the diffuser screen. The second barrier layer, with the same pitch and position as the first, controls the viewing directions of the multiview images formed on the diffuser screen. Viewers in different locations therefore see different views of the images, and the system becomes a multiview 3D display.

Figure 42 Parallax-based autostereoscopic 3D projector.

3.7f. Frontal Projection with Parallax Barrier

Kim et al. [77] proposed a frontal-projection autostereoscopic 3D display using a parallax barrier and passive polarizing components in front of a reflective screen. The advantages claimed by the authors are that the display is both space saving and cost effective in comparison with conventional rear-projection counterparts. Figure 43 illustrates the basic configuration detailed in [77]. The light from the projector first passes through a polarizer, then through a parallax barrier to form pixelized images on a polarization-preserving screen. A quarter-wave retarding film is placed between the barrier and the screen so that the light reflected from the screen has the correct polarization direction. Viewers are then able to see multiview 3D images coming out through the parallax barrier.

Figure 43 Frontal projection parallax barrier autostereoscopic 3D display.

In this design, the projected light actually has to go through the parallax barrier twice. As a result, the optical efficiency of this design is still quite low.
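As a rough illustration of why the double pass hurts: if an idealized barrier for a given number of views transmits roughly the reciprocal of that number of the incident light (an assumption for illustration; real aperture ratios vary), the losses compound on the second pass:

```python
# Idealized parallax-barrier transmission: a barrier for n_views passes
# roughly a 1/n_views fraction of the light per pass, so projecting
# through it twice (out and back after reflection) compounds the loss.
def barrier_efficiency(n_views, passes=2):
    return (1.0 / n_views) ** passes

for n in (2, 4, 9):
    print(f"{n}-view barrier, two passes: {barrier_efficiency(n):.1%} of the light")
```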

3.8. Super-Multiview 3D Displays

Due to the physical upper limit on how many views a multiview 3D display can generate, there is always a discontinuity in view switching with respect to the viewing direction. The motion parallax is often provided in a stepwise fashion. This fact degrades the effectiveness of 3D displays.

Another problem in conventional multiview 3D displays is the accommodation–convergence conflict. While the human eye’s convergence capability perceives the correct depth of 3D objects, the accommodation function makes the eyes focus on the display screen. Since there is a close interaction between convergence and accommodation, this accommodation–convergence conflict problem often causes visual fatigue in 3D viewers.

In attempts to solve these two problems, researchers developed super-multiview (SMV) techniques that use an extremely large number of views (e.g., 256 or 512 views) to generate more natural 3D visual effects, as proposed by Honda et al. [36], Kajiki et al. [78], and Takaki and Nago [17]. The horizontal interval between views is reduced to a level smaller than the diameter of the human pupil (Fig. 44), so that light from at least two images from slightly different viewpoints enters the pupil simultaneously [36]. This is called the SMV display condition. In bright light, the human pupil diameter is about 1.5 mm, while in dim light it enlarges to about 8 mm. The importance of the SMV condition is that increasing the number of views so that multiple views reach the pupil simultaneously may help to evoke natural accommodation responses, reducing the accommodation–convergence conflict, and may provide smooth motion parallax. Regarding motion parallax, the SMV condition may improve the realism of the perceived 3D images, since the brain unconsciously predicts the image change due to motion.

Figure 44 SMV condition: light from at least two images from slightly different viewpoints enters the pupil simultaneously.
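The SMV condition can be checked with simple arithmetic: the viewing-zone pitch must be smaller than the pupil diameter. A minimal sketch using the 1.31 mm pitch of the 256-view prototype [17] and the pupil diameters quoted above:

```python
# SMV condition check: the horizontal spacing of adjacent views at the
# viewer's position must be smaller than the pupil diameter, so that at
# least two views enter one eye simultaneously.
def views_entering_pupil(view_pitch_mm, pupil_mm):
    return int(pupil_mm // view_pitch_mm) + 1

pitch = 1.31                 # viewing-zone pitch of the 256-view prototype, mm
for pupil in (1.5, 8.0):     # bright-light vs. dim-light pupil diameter, mm
    n = views_entering_pupil(pitch, pupil)
    print(f"pupil {pupil} mm: ~{n} views enter the eye (SMV needs >= 2)")
```

With a 1.31 mm pitch the condition is met even for a bright-light pupil, which is consistent with the accommodation behavior reported for the prototype.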

Since 1995, multiple SMV prototypes have been built and tested. In 2010, a prototype display with 256 views was constructed using 16 LCD panels and 16 projection lenses [17]. The display screen size was 10.3 in. (26.16 cm), and the horizontal pitch of the viewing zones was 1.31 mm. 3D images produced by the prototype display had smooth motion parallax. Moreover, it was possible to focus on the 3D images, which means that the accommodation function might work properly on the 3D images produced by the prototype display, so that the accommodation–convergence conflict might not occur.

3.9. Eye-Tracking (Position Adaptive) Autostereoscopic 3D Displays

It is well known [10] that autostereoscopic 3D display systems have preferred viewer position(s), or "sweet spots," where viewers gain the best 3D sensation. The locations of the sweet spots are usually fixed, determined by the optical and electronic design of the 3D display system. Many researchers, such as Woodgate et al. [64], Hentschke [79], Wang et al. [80], and Si [81], have attempted to make dynamic changes in the optical and electronic design of 3D display systems to adaptively optimize them based on the current position of the viewer's eyes.

For example, in Wang et al. [80], a parallax-barrier-type autostereoscopic display device was designed with multiple parallel strip backlight sources. An eye-tracking technique determines the position of the viewer's eyes, and an adaptive controller then turns on the properly selected light sources to form images at the viewer's left-eye and right-eye positions.

A recent development in this direction is the Free2C 3D Display [82], an autostereoscopic 3D display with UXGA (1200×1600) resolution. The Free2C desktop 3D display is based on a head-tracking lenticular-screen principle, allowing a single viewer reasonable freedom of head movement. A lenticular screen, placed in front of an LCD, is moved in both the x and y directions by voice-coil actuators to align the exit pupils with the viewer's eye locations. No stereo viewing glasses are needed, and the cross talk is kept below 2%.
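A paraxial back-of-the-envelope sketch of the lens-shift idea behind such head-tracking lenticular displays (the lens gap and viewing distance below are assumed illustrative values, not the actual Free2C parameters):

```python
# Paraxial estimate: how far must the lenticular sheet shift laterally to
# keep the exit pupil centered on an eye displaced by eye_offset_mm?
# The required shift scales down by the ratio of the lens-to-panel gap
# to the viewing distance.
def lens_shift_mm(eye_offset_mm, lens_gap_mm, view_dist_mm):
    return eye_offset_mm * lens_gap_mm / view_dist_mm

# Eye moves 50 mm sideways; lens sits 2 mm in front of the panel,
# viewer at 700 mm (all assumed values):
print(f"required lens shift: {lens_shift_mm(50.0, 2.0, 700.0):.3f} mm")
```

The sub-millimeter shifts this implies are well within the range of voice-coil actuators, which is consistent with the actuation approach described above.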

An interesting eye-tracking 3D display design was developed by Surman et al. [83] in the European-Union-funded Multi-User Three-Dimensional Display (MUTED) project. The latest version of the MUTED design (Fig. 45) uses an array of lenses (similar to a lenticular sheet, but with the width of each lenslet greater than the pitch), an LCOS projector, and an LCD panel. The lens array steers the illumination toward the locations of the viewer's eyes, which are determined by an eye-tracking system. The image projected by the LCOS projector consists of a series of dots whose locations are designed so that the lens array focuses them onto the viewer's eye locations. The optical efficiency of this design is low, since only a small portion of the light produced by the LCOS projector is used to generate the directional backlight for the LCD panel.

Figure 45 Eye-tracking-enabled 3D display, with a lens array to steer the direction of light illumination for an LCD panel.

Eye-tracking techniques have found a great fit in mobile 3D display applications. For example, Ju et al. [84] proposed an eye-tracking method using a single camera for mobile devices. Wu et al. [85] proposed an adaptive parallax barrier scheme that uses multiple subbarriers to adjust the viewing-zone location based on the viewer's detected eye positions. For single-user application scenarios, eye-tracking-based 3D display technologies show great promise for providing autostereoscopic 3D display functionality.

3.10. Directional Backlight Designs for Full-Resolution Autostereoscopic 3D Displays

Full-resolution autostereoscopic 3D display can be achieved by using clever directional backlight mechanisms together with high-speed LCD panels. The directional backlight mechanisms generate optically distinct viewing regions for multiple views in a time-multiplexed fashion; in the case of two views, a stereoscopic display results. In general, directional backlight designs are well suited to providing 3D display capability for mobile devices, offering full LCD resolution for each view in a compact package. Additional advantages of directional backlight techniques for 3D displays include avoiding the perception of flicker and eliminating view reversal, a common cause of viewer fatigue [86].

3.10a. Directional Backlight Design by 3M

Schultz and Sykora at 3M [87] proposed a compact directional backlight design for full-resolution autostereoscopic 3D display using two LED sources, a reflective film, a specially designed light guide, a 3D film sheet, and a fast-switching LCD panel (Fig. 46). When the LED light source for the left eye is on and the LED light source for the right eye is off, the light guide, with a simple shallow prism structure (thickness 0.5 mm, made of polymethyl methacrylate), directs most of the light rays toward the predetermined location of the left eye. By synchronizing the image displayed on the LCD panel with the left-view image, the viewer sees the image corresponding to the left view. Switching the LED light sources alternates the display of the left- and right-view images, each directed toward the corresponding eye. The 3D film features nanometer-scale (130 nm) lenticular and prism structures, and there is no requirement to align the 3D film with the LCD panel's pixel structure. To implement a two-view display, an LCD panel with a refresh rate of 120 Hz is used; the refresh rate for the full-resolution 3D display is thus 60 Hz. With optimization of the overall design parameters, the cross talk between the left and right views can be reduced to less than 10%. A 9 in. (22.86 cm) WVGA (800×480) 3D display was demonstrated in [87].

Figure 46 3M directional backlight design, consisting of a specially designed light guide, a sheet of 3D film, a pair of light sources, and a fast switching LCD panel.
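The time multiplexing described above can be sketched as a simple frame schedule (a minimal illustration of the synchronization idea, not 3M's actual driving electronics):

```python
# Time-multiplexed stereo with a switched directional backlight: at a
# 120 Hz panel rate, the left and right LED banks alternate frame by
# frame, giving each eye a full-resolution image at 60 Hz.
def schedule(frames, panel_hz=120):
    t = 0.0
    out = []
    for i in range(frames):
        eye = "L" if i % 2 == 0 else "R"   # active LED bank for this frame
        out.append((round(t * 1000, 2), eye))  # (time in ms, LED bank)
        t += 1.0 / panel_hz
    return out

for t_ms, eye in schedule(4):
    print(f"t={t_ms:6.2f} ms  backlight={eye}")
```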

3.10b. Directional Backlight Design by SONY for a 2D/3D Switchable Display

Minami and co-workers at SONY [88,89] proposed a 2D/3D switchable backlight design, as shown in Fig. 47. The unique light-guide design allows it to function as a parallax barrier as well as a backlight. The 3D illumination light rays from the light sources placed along the side of the light guide bounce between the reflective surfaces inside the light guide. When these rays hit the scattering patterns at locations corresponding to the slits of a parallax barrier, they are reflected toward the LCD panel, producing a multiview 3D display when synchronized with the image displayed on the LCD (3D resolution 960×360 pixels for each view). When the 3D backlights are off and the 2D backlights are on, the system acts as a normal full-resolution 2D display with 1080p (1920×1080 pixels) resolution.

Figure 47 Sony 2D/3D switchable backlight design.

3.10c. Four-Direction Backlight with a 12-View Parallax Barrier for a 48-view 3D Display

Wei and Huang [90] proposed a four-direction backlight combined with a 12-view parallax barrier to build a 48-view 3D display. The design combines four major components: a sequentially switchable LED matrix plate, a dual-directional prism array, a 240 Hz LCD panel, and a multiview parallax barrier. As shown in Fig. 48, the LED matrix is sequentially switched at 240 Hz among four groups of LEDs (indicated by red, green, blue, and yellow in Fig. 48), synchronized with the 240 Hz LCD panel. The directional prism arrays are aligned with the LED matrix grid. Due to the orientation of the prisms and the displacements among the LED groups, the light rays originating from each LED group are directed toward one viewing direction; there are four light directions in the design shown in Fig. 48. The parallax barrier in front of the LCD panel provides 12 views, so the total number of views of the autostereoscopic 3D display becomes 48 (=4×12). The viewing angle is ±40°.

Figure 48 Four-view directional backlight design, consisting of a LED matrix switchable light source, a dual directional prism array, a 240 Hz LCD panel, and a multiview parallax barrier.
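The view-count and timing arithmetic of this design is straightforward: spatial views from the barrier multiply with the time-multiplexed backlight directions, while the panel refresh is divided among the directions.

```python
# View and timing budget of the four-direction backlight design:
directions = 4        # sequentially switched LED groups
barrier_views = 12    # spatial views from the parallax barrier
panel_hz = 240        # LCD refresh rate

total_views = directions * barrier_views     # spatial x temporal views
per_direction_hz = panel_hz / directions     # effective rate per direction

print(f"{total_views} views, {per_direction_hz:.0f} Hz per direction")
```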

3.10d. Multidirectional Backlighting Using Lenslet Arrays

Kwon and Choi [91] implemented a multidirectional backlight using an LCD panel, a lenticular lens array, and a uniform backlight unit. In a structure similar to that of the lenticular-based multiview 3D display (Section 3.4a), the pixel columns in the LCD panel correspond to specific directions of light projection, owing to the optical properties of the lenticular lens array. A five-direction backlight design is shown in Fig. 49, where there are five columns of pixels under each lenslet. Each of these columns produces a collimated light beam in its predetermined direction, shown as different colors in Fig. 49. Time-multiplexing these columns on the LCD panel generates a multidirectional backlight whose direction changes sequentially. Synchronizing the backlight direction with a second LCD panel displaying the image of the respective directional view produces an autostereoscopic 3D display.

Figure 49 Multidirectional backlight design using a LCD panel, a lenticular lens array, and a uniform backlight source.

The main advantages of this design are its thin backlight unit and its compatibility with the lenticular-based 3D display design. Its drawbacks include the need for a high-speed LCD panel (for an N-directional backlight unit, the frame rate of the LCD panel must be N times that of the 3D image to be displayed) and the low brightness of the light output in each direction (1/N of the total output of the uniform light source).
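Both trade-offs scale directly with the number of directions N; a minimal sketch with assumed baseline numbers (60 Hz target image rate, an arbitrary 1000 lm source):

```python
# Trade-offs of an N-directional time-multiplexed backlight: the panel
# must run N times faster, and the source light is split across N views.
def backlight_tradeoff(n_directions, image_hz=60, source_lumens=1000.0):
    panel_hz = n_directions * image_hz            # required LCD frame rate
    per_view_lumens = source_lumens / n_directions  # brightness per view
    return panel_hz, per_view_lumens

for n in (2, 5):
    hz, lm = backlight_tradeoff(n)
    print(f"N={n}: panel at {hz} Hz, {lm:.0f} lm per view")
```

The five-direction design above therefore needs a 300 Hz-class panel to sustain 60 Hz per view, which explains why high-speed LCDs are listed as the main obstacle.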

4. Volumetric 3D Display

In contrast to multiview 3D displays, which present the proper view of a 3D image to viewers at corresponding viewing locations, the volumetric 3D display techniques discussed in this section display volumetric 3D images in true 3D space. Each "voxel" of a 3D image is located physically at the spatial position where it is supposed to be and reflects light from that position in all directions, forming a real image in the eyes of viewers. Such volumetric 3D displays provide both physiological and psychological depth cues to the human visual system and are considered more powerful and desirable human/computer visual interface devices than existing displays.

We provide a brief overview of a number of representative 3D volumetric display technologies (Fig. 50). This list is by no means comprehensive or inclusive of all possible techniques.

Figure 50 Classification of volumetric 3D display technologies.

4.1. Static Screen Volumetric 3D Displays: Static and Passive Screens

4.1a. Solid-State Upconversion

One of the fundamental requirements for a volumetric 3D display system is to have the entire display volume filled with voxels that can be selectively excited at any desired location. To achieve this goal, one can use two independently controlled radiation beams that activate a voxel only where they intersect. While electron beams cannot be used for such a purpose, laser beams can, provided that a suitable material for the display medium can be found. Figure 51 shows a process known as two-photon upconversion that can achieve this objective. Briefly, this process uses the energy of two infrared photons to pump a material into an excited level, from which it makes a visible fluorescence transition to a lower level. For this process to be useful as a display medium, the material must exhibit two-photon absorption at two different wavelengths, so that a voxel is turned on only at the intersection of two independently scanned laser sources. The materials of choice at present are rare-earth ions doped into a glass host known as ZBLAN, a fluorozirconate glass with the chemical composition ZrF4-BaF2-LaF3-AlF3-NaF. The two-photon upconversion concept for 3D volumetric displays was used by Downing [92], Lewis et al. [93], and Langhans et al. [94] in building their prototypes. These volumetric 3D displays show promising features, such as having no moving parts within the display volume. The major difficulties in producing a practically useful 3D display with this approach are scale-up (existing prototypes measured only a few inches) and multicolor display (different colors usually require different lasers and screen materials).

Figure 51 Static screen 3D display based on solid-state upconversion. (a) Energy level diagram of an active ion. (b) Two scanned intersecting laser beams are used to address voxels in transparent glass material doped with such an ion.

4.1b. Gas Medium Upconversion

Another 3D display based on the upconversion concept employs the intersection of two laser beams in an atomic vapor and the subsequent omnidirectional fluorescence from the intersection point [95]. Two lasers are directed via mirrors and x-y scanners toward an enclosure containing an appropriate gaseous species (rubidium vapor, for example), where they intersect at 90°. Either laser by itself causes no visible fluorescence; however, where both lasers are incident on the same gas atoms, two-step excitation results in fluorescence at the intersection point. By scanning the intersection point fast enough, a 3D image can be drawn in the vapor. The eye cannot perceive changes faster than about 15 Hz, so if the image is repeatedly redrawn faster than this rate, it appears steady, even though light may originate from any one point in the volume for only a small fraction of the time.

The advantage of this 3D display concept is its scalability: it can be built in almost any desirable size without significantly increasing the complexity of the system. The technical difficulties in implementing this concept include the requirement for a vacuum chamber, the need to maintain a certain temperature, the limit that scanner speed places on the number of voxels, and the eye-safety concerns presented by the laser beams.
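The scanner-speed limit on voxel count follows from the roughly 15 Hz refresh requirement: every voxel must be revisited within each refresh period. A minimal sketch (the scanner rate below is an assumed figure for illustration):

```python
# Scanner-limited voxel budget of a scanned two-beam upconversion display:
# each voxel must be re-addressed once per refresh period, so the number
# of displayable voxels is the addressing rate divided by the refresh rate.
def max_voxels(points_per_second, refresh_hz=15):
    return points_per_second // refresh_hz

# e.g., scanners that can address 30,000 points per second (assumed):
print(f"{max_voxels(30_000)} voxels per refreshed frame")
```

This is why scanned-beam volumetric displays trade off sharply between voxel count and flicker-free refresh.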

4.1c. Crystal Cube Static Screen Volumetric 3D Display

A static 3D crystal cube (or any other shape) was developed as a 3D display volume by Nayar and Anand [96] and Geng [97]. Within a block of glass, a large number of tiny dots (i.e., voxels) are created using a machining technique called laser subsurface engraving (LSE). LSE can produce a large number of tiny crack points (as small as 0.02 mm in diameter) precisely at desired (x,y,z) locations within a crystal cube. When illuminated by a properly designed light source, these cracks scatter light in all directions and form visible voxels within the glass volume, thus providing a true volumetric 3D display. The voxel locations are strategically chosen so that each can be illuminated by a light ray from a high-resolution SLM, and the collection of voxels occupies the full display volume of the static crystal cube. By controlling the SLM to vary the illumination patterns, different volumetric 3D images can be displayed inside the crystal cube. A solid screen with dimensions of 320 mm×320 mm×90 mm was built and tested [97] (Fig. 52).

Figure 52 Concept of the “3D Cube” volumetric 3D display, which uses a crystal cube as its static screen.
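The addressing scheme can be sketched as a lookup from engraved voxel positions to SLM pixels (a hypothetical toy mapping for illustration; the real display derives this mapping from a calibrated optical design):

```python
# Hypothetical sketch of voxel addressing in the static-cube display:
# each engraved voxel is paired with one SLM pixel, so displaying a 3D
# image reduces to lighting the pixels of the voxels it contains.
voxel_to_pixel = {          # (x, y, z) voxel -> (row, col) SLM pixel (toy data)
    (0, 0, 0): (0, 0),
    (1, 0, 2): (0, 1),
    (2, 1, 1): (1, 0),
}

def slm_pattern(image_voxels, mapping):
    """Return the set of SLM pixels to switch on for a given 3D image."""
    return {mapping[v] for v in image_voxels if v in mapping}

print(slm_pattern({(0, 0, 0), (2, 1, 1)}, voxel_to_pixel))
```

Because every voxel has a dedicated illumination path, all voxels of a frame can be addressed in parallel, which is the basis of the "inherent parallel mechanism" advantage listed above.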

Unique advantages of the 3D static screen display technology include
  • no moving screen;
  • inherent parallel mechanism for 3D voxel addressing;
  • high spatial resolution;
  • easy-to-implement full color display;
  • fine voxel size (at a submillimeter level);
  • no blind spot in the display volume;
  • display volume that can be of arbitrary shape;
  • no need for special viewing glasses or any special eyewear to view the 3D images; and
  • no image jitter that is associated with a moving screen.

4.1d. Laser Scanning to Produce Plasma 3D Display in the Air

Kimura et al. [98] developed a laser-scanning 3D display technique that produces visible voxels in the air from laser-generated plasma. They noticed that, when laser beams are strongly focused, air plasma emission can be induced near the focal point. They first succeeded in displaying 2D images in the air, constructed from dot arrays produced using a combination of a laser and galvanometric scanning mirrors. Later, they extended the scanning and focusing mechanism to 3D space and produced 3D images in the air [99].

4.2. Static Screen Volumetric 3D Displays: Static and Active Screen

4.2a. Voxel Array: 3D Transparent Cube with Optical-Fiber Bundles

MacFarlane [21

21. D. MacFarlane, “Volumetric three dimensional display,” Appl. Opt. 33, 7453–7457 (1994). [CrossRef]

] at the University of Texas, Dallas, developed an optical-fiber-addressed transparent 3D glass cube for displaying true 3D volumetric images (Fig. 53). As a fairly straightforward extension of a 2D LCD screen, this type of 3D display is a 3D array consisting of a stack of 2D pixel elements. The voxels are made from optical resin and are transparent in their quiescent state. Optical fibers embedded in the glass cube are used to address the 3D voxel arrays. The image signal is controlled by an SLM. The collection of all activated 3D voxels thus forms 3D images in true 3D space.

Figure 53 Concept illustration of the optical-fiber-bundle-based static volumetric 3D display.

4.2b. Voxel Array: LED Matrix

Many groups have attempted to build volumetric 3D display devices using 3D matrices of LEDs. For example, an 8×8×8(=512) voxel display prototype was developed by Wyatt at MIT [100

100. D. Wyatt, “A volumetric 3D LED display” (MIT, 2005), http://web.mit.edu/6.111/www/f2005/projects/wyatt_Project_Design_Presentation.pdf.

]. The concept of these LED array displays is straightforward, but implementation is quite challenging if the goal is to develop high-resolution volumetric 3D display systems. In addition to many of the problems faced by the optical-fiber-bundle display, there is an issue of voxel occlusion due to the opaque nature of the LEDs themselves. Cross illumination among LEDs is also a concern.

4.2c. Static Screen Volumetric 3D Display Techniques Based on Multiple Layer LCDs

Figure 54 shows a static screen multilayer LCD 3D volume visualization display proposed by Sadovnik and Rizkin [101

101. L. Sadovnik and A. Rizkin, “3D volume visualization display,” U.S. patent5,764,317 (June9, 1998).

]. The volumetric multilayer screen includes multiple electronically switchable polymer dispersed liquid crystal (PDLC) layers that are stacked. An image projector is used to project sequential sections of a 3D image onto the PDLC screens. The timing of projection is controlled such that activation of a PDLC screen and the projection of the corresponding image section are synchronized. A volumetric 3D image is formed by multiple sections of projected images at different z heights. Note that this 3D display scheme requires the PDLC switching time to be shorter than 0.1 ms.

Figure 54 3D volume visualization display.

Sullivan [20

20. A. Sullivan, “3 Deep: new displays render images you can almost reach out and touch,” IEEE Spectrum42(4), 30–35 (2005).

,102

102. A. Sullivan, “Multi-planar volumetric display system and method of operation using multi-planar interlacing,” U.S. patent6,806,849 (October19, 2004).

] proposed a similar design of a multiplanar volumetric 3D display system. The display volume consists of a stack of switchable LC sheets whose optical transmission is switched by an applied voltage under the control of controller and synchronizer electronics. The LC sheets stay optically clear when no voltage is applied but become scattering when a voltage is applied. By synchronizing the timing of image projection from a high-speed image projector with the ON/OFF state of each LC sheet, 2D sections of a 3D image can be displayed at their proper 3D locations, thus forming a true 3D display. A volumetric 3D display system based on this concept was built by LightSpace Technologies [103

103. LightSpace Technologies, www.lightspacetech.com.

]. As of 2013, EuroLCDs [104

104. EuroLCDs, www.eurolcds.com.

] has received the exclusive license for commercializing this volumetric 3D display technology.

Gold and Freeman [105

105. R. S. Gold and J. E. Freeman, “Layered display system and method for volumetric presentation,” U.S. patent5,813,742 (September29, 1998).

] proposed a layered volumetric 3D display concept that forms a hemispheric-shaped screen. Instead of using a planar stack of switchable LC sheets, this system customized the production of each layer with different sizes and integrated them together to form a hemispherical volume for displaying 3D images.

Leung et al. [106

106. M. S. Leung, N. A. Ives, and G. Eng, “Three-dimensional real-image volumetric display system and method,” U.S. patent5,745,197 (April28, 1998).

] proposed a 3D real-image volumetric display that employs a successive stack of transparent 2D LCD planar panels. Each layer of LCD planar panels is controlled to display a section of a 3D image, and the viewable image combines images displayed on each of the multiple layers of LCDs, thus forming a true 3D volumetric display system.

Koo and Kim [107

107. J.-P. Koo and D.-S. Kim, “Volumetric three-dimensional (3D) display system using transparent flexible display panels,” U.S. patent application2007/0009222 A1 (January11, 2007).

] proposed a multilayered volumetric 3D display system that uses a stack of flexible transparent display elements, such as OLEDs, to form a volumetric 3D display screen. The flexibility of the OLED element allows for various conformal shapes of screen designs, such as cylindrical, spherical, and/or cone-shaped volumes.

Advantages of these multi-planar LC-based static screen volumetric 3D display schemes include
  • (1) Static screen. No moving parts in the 3D display system.
  • (2) Simple design. Systems using LC sheets need only “one-pixel” LC sheets.

Drawbacks of these schemes include
  • (1) Requirement of high-speed switching LC sheets. Assume that a reasonable refresh rate for a high-quality 3D image is 100 Hz and that there are 100 layers of LC sheets. The switching time allowed for each LC sheet is then 1/(100×100)s=100μs=0.1ms. This is about 1 to 2 orders of magnitude faster than any commercially available LC material can achieve, making such switching speeds very difficult for existing technology to handle.
  • (2) Low image brightness due to short exposure time. Brightness perceived by human eyes depends not only on the intensity of the light source but also on the exposure time of the light source. The shorter the exposure time, the dimmer the light appears. With only 50 μs maximum exposure time (considering the ON/OFF transit time of LC sheets), it is very difficult to achieve reasonable image brightness for 3D display.
  • (3) Low brightness due to optical transmission loss of projected light passing through multiple LC sheets. Even with high-quality LC sheets, each with an optimal light transmission efficiency of 97%, the light intensity falls to about 50% of its original strength after 23 layers, and drops further to about 25% after 46 layers.
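The per-sheet switching-time budget and the cumulative transmission loss can be computed directly (a quick numerical check, assuming 97% per-layer transmission as stated):

```python
import math

# Switching-time budget: 100 Hz refresh through 100 stacked LC sheets.
refresh_hz = 100
layers = 100
switch_time_s = 1.0 / (refresh_hz * layers)
print(switch_time_s * 1e6)  # 100 us (= 0.1 ms) allowed per sheet

# Cumulative transmission through n sheets at 97% each: I(n) = 0.97**n.
t = 0.97
n_half = math.log(0.5) / math.log(t)      # layers until intensity halves
n_quarter = math.log(0.25) / math.log(t)  # layers until intensity is 25%
print(round(n_half), round(n_quarter))    # ~23 and ~46 layers
```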

4.3. Swept Screen Volumetric 3D Displays: Passive Sweeping Screen

4.3a. Volumetric 3D Display Using a Sweeping Screen and a CRT

Hirsch [108

108. M. Hirsch, “Three dimensional display apparatus,” U.S. patent2,967,905 (January13, 1958).

] proposed a rotating screen 3D display design in 1958. Aviation Week reported a volumetric 3D display system developed by ITT Labs in 1960 [109

109. “3D Display from ITT Labs,” Aviation Week, 66–67 (October31, 1960).

]. As shown in Fig. 55, this system uses a flat rotating screen and a relay mirror/prism system to deliver the projected image from a high-intensity CRT to the vertical rotating screen.

Figure 55 Sweeping screen volumetric 3D display system using a CRT.

4.3b. Varifocal Mirror and High-Speed Monitor

A clever method of 3D display proposed by Sher [110

110. L. D. Sher, “Three-dimensional display,” U.S. patent4,130,832 (December19, 1978).

] employs the strategy of forming optical virtual 3D images in space in front of the viewer. The varifocal mirror system consists of a vibrating circular mirror along with a high-speed monitor. A flexible circular mirror is attached to the front of a woofer, the monitor is pointed toward the mirror, and the woofer’s vibration is synchronized to the monitor. As the woofer vibrates, the mirror’s focal length changes, and the different points being displayed on the monitor appear at different physical locations in space, giving the appearance of different depths to different objects in the displayed scene. Varifocal-mirror-based 3D display systems are primarily limited by the size of the mirror and the image update rate, since the mirror has to vibrate.
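The depth-sweeping behavior can be illustrated with the thin-mirror equation; the distances and radii of curvature below are illustrative assumptions, not Sher’s actual design parameters.

```python
# Illustrative sketch: a varifocal mirror maps a fixed monitor distance d_o to
# a different apparent depth for each instantaneous radius of curvature R.
# Thin-mirror equation: 1/d_o + 1/d_i = 1/f, with f = R/2.

def image_distance(d_o, R):
    """Image distance for an object at d_o from a mirror of radius R.
    A negative result means a virtual image behind the mirror."""
    f = R / 2.0
    return 1.0 / (1.0 / f - 1.0 / d_o)

d_o = 0.5                     # monitor-to-mirror distance in meters (assumed)
for R in (-1.5, -2.0, -3.0):  # woofer sweeps the mirror curvature
    print(R, round(image_distance(d_o, R), 3))
```

As the curvature sweeps, the same on-screen point is imaged at a different depth, which is exactly how different monitor frames acquire different apparent depths.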

4.3c. Laser-Scanning Rotating Helix 3D Display

Extensive attempts have been made by researchers to develop a 3D display device based on laser scanning and a rotating (helical) surface (Hartwig [111

111. R. Hartwig, “Vorrichtung zur Dreidimensionalen Abbildung in Einem Zylindersymmetrischen Abbildungsraum,” DE patent2622802 C2 (1976).

] and Garcia and Williams at Texas Instruments [112

112. F. Garcia Jr. and R. D. Williams, “Real time three dimensional display with angled rotating screen and method,” U.S. patent5,042,909 (August27, 1991).

]). Laser-scanning 3D displays operate by deflecting a beam of coherent light generated by a laser to a rotating helical surface. Timed modulation of the laser beam controls the height of the light spot that is produced by the laser on the rotating surface. The deflectors include devices such as polygonal mirrors, galvanometers, acousto-optic modulated deflectors, and microdeformable mirrors. There are several problems with this 3D display mechanism that have prevented it from becoming commercially feasible.

The major limitation of the laser-scanning technique is the maximum number of voxels that can be displayed. Due to the vector-scanning nature of lasers, only one spot of light can be displayed at any given moment. All the activated image voxels have to be addressed, one by one, by a single scanning laser beam in time-multiplexed fashion. The time needed to steer the laser beam and to dwell on each voxel position long enough to produce sufficient brightness sets an upper limit on the number of voxels that can be displayed. To increase the number of voxels, multiple laser channels and scanners could be used; however, such attempts to increase spatial resolution have been hampered by high cost and bulky hardware designs.
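The voxel budget implied by this argument is easy to estimate; the dwell and steering times below are illustrative assumptions, not measured values from a particular system.

```python
# Upper bound on displayable voxels for a single vector-scanned laser beam:
# every lit voxel consumes (steer + dwell) time out of each refresh period.
refresh_hz = 30   # volume refresh rate (assumed)
dwell_s = 1e-6    # time on each voxel to build up visible brightness (assumed)
steer_s = 1e-6    # beam steering time between voxels (assumed)
max_voxels = int(1.0 / (refresh_hz * (dwell_s + steer_s)))
print(max_voxels)  # ~16,666 voxels per refresh at these numbers
```

Even with microsecond-scale dwell and steering times, the budget is tens of thousands of voxels, far below what parallel-addressed displays achieve.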

4.3d. NRaD’s Scanning Laser Helix 3D Volumetric Display

4.3e. Actuality’s Perspecta

Favalora and co-workers at Actuality [11

11. G. Favalora, “Volumetric 3D displays and application infrastructure,” Computer 38(8), 37–44 (2005). [CrossRef]

,70

70. Actuality 3D Display, http://actuality-medical.com.

] developed a 100-million-voxel swept screen volumetric display as a turnkey system that incorporates a high-resolution 3D display and a display-agnostic operating environment. The system generates 10 in. (25.4 cm) diameter volume-filling imagery with a full 360° FOV (Fig. 56). To provide the volumetric imagery, the display projects 198 layers of 2D patterns, called slices, onto an optimized diffuser rotating at or above 900 rpm. The display sweeps the entire volume twice for every complete screen revolution, resulting in a visual refresh rate of 30 Hz. The scanning optical modules use a rotating mirror arrangement configured similarly to that reported in [109

109. “3D Display from ITT Labs,” Aviation Week, 66–67 (October31, 1960).

]. However, the system uses a high-speed DLP produced by Texas Instruments that offers significantly higher frame rates and image resolution than those of CRTs. The system uses the OpenGL API to interface with legacy 3D software packages, providing compatibility with a wide range of existing 3D applications. Perspecta’s diverse applications range from medical visualization for interventional planning to battlefield simulation and molecular modeling.
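The timing figures quoted for Perspecta are mutually consistent, as a quick check shows:

```python
# Perspecta timing: 900 rpm diffuser, volume swept twice per revolution,
# 198 slices projected per sweep.
rev_per_s = 900 / 60.0         # 15 revolutions per second
refresh_hz = rev_per_s * 2     # two volume sweeps per revolution
slice_rate = 198 * refresh_hz  # 2D slices the projector must deliver
print(refresh_hz, slice_rate)  # 30.0 Hz refresh, 5940.0 slices per second
```

The nearly 6000 slices per second required is well beyond CRT frame rates, which is why a high-speed DLP engine is essential here.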

Figure 56 Perspecta 3D display developed by Actuality.

4.3f. Volumetric 3D Display Based on a Helical Screen and a DLP Projector

A “multiplanar” volumetric 3D display system was developed using a high-speed DLP projector and a rotating double helix screen [15

15. J. Geng, “Volumetric 3D display for radiation therapy planning,” J. Disp. Technol. 4, 437–450 (2008). [CrossRef]

,114

114. J. Geng, “A volumetric 3D display based on a DLP projection engine,” Displays 34, 39–48 (2013). [CrossRef]

118

118. J. Geng, “Method and apparatus for generating structural pattern illumination,” U.S. patent6,937,348 (August30, 2005).

]. Figure 57 illustrates the principle of our DLP/Helix volumetric 3D display. Light from a source is reflected by a polarizing beam splitter cube toward an SLM, whose image patterns are generated by a host personal computer (PC). The modulated image patterns are projected by the projection optics onto a rotating double helix screen. The DLP projection engine provides high-speed, high-resolution, and high-brightness image generation for this volumetric 3D display.

Figure 57 A “Multi-planar” volumetric 3D display using a high-speed DLP projector and a rotating double helix screen.

As part of the image generation process, the rotating helix screen’s motion is synchronized with the DLP pattern projection’s timing such that the moving screen intercepts high-speed 2D image projections from the DLP at different spatial positions along the z axis. This forms a stack of spatial image layers that viewers perceive as true 3D volumetric images. Viewing such images requires no special eyewear. The 3D images float in true 3D space, just as if the real object had been placed there. The unique features of the DLP/Helix 3D display design include an inherent parallel architecture for voxel addressing, high speed, and high spatial resolution, with no need for viewers to wear special glasses or a helmet. A prototype with a 500 mm diameter screen was built, capable of displaying 3D images with 150 million voxels.
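The projector/screen synchronization can be sketched as follows; the linear double-helix profile and all numeric values are illustrative assumptions, not the prototype’s actual parameters.

```python
# Sketch: at time t the helix surface intercepts the projection axis at height
# z(t); the controller projects the image slice belonging to that height.

def slice_height(t, rev_per_s, height):
    # A double helix covers the full display height twice per revolution,
    # so the intercepted height ramps from 0 to `height` twice per turn.
    phase = (t * rev_per_s * 2.0) % 1.0
    return phase * height

rev_per_s, height_mm, fps = 30.0, 90.0, 6000.0  # assumed screen/projector rates
heights = [round(slice_height(i / fps, rev_per_s, height_mm), 2)
           for i in range(5)]
print(heights)  # [0.0, 0.9, 1.8, 2.7, 3.6] mm for the first five DLP frames
```

Each DLP frame is thus tagged with the z plane it will land on, which is how the stack of spatial image layers is assembled.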

4.4. Swept Screen Volumetric 3D Displays: Sweeping Screen with On-Screen Emitters

4.4a. Rotating LED Array

Figure 58 Principle of rotating LEDs.

4.4b. Cathode Ray Sphere

The Cathode Ray Sphere (CRS) concept was originally developed by Ketchpel in the 1960s [121

121. R. D. Ketchpel, “Three-dimensional display cathode ray tube,” U.S. patent3,140,415 (July7, 1964).

] and later implemented by Blundell [122

122. B. Blundell and A. Schwarz, Volumetric Three-Dimensional Display Systems (Wiley, 2000).

,123

123. B. G. Blundell, “Three dimensional display system,” U.S. patent5,703,606 (December30, 1997).

]. The voxels are created by addressing a rapidly rotating phosphor-coated target screen in a vacuum by electron beams synchronized to the screen’s rotation. The view of this rotating multiplanar surface depends on the clarity of the glass enclosure and the translucency of the rotating screen. Another image quality issue is the interaction between the phosphor decay rate and the speed of the rotation of the screen.

5. Holographic Display

5.1. Introduction

Invented in 1947 by Gabor, holography—from the Greek holos, for “whole,” and grafe, for “drawing”—is a 3D display technique that allows the light wavefronts scattered from an object to be recorded and later reconstructed so that an imaging system (a camera or an eye) can see a 3D image of the object even when the object is no longer present. Gabor was awarded the Nobel Prize in Physics in 1971 “for his invention and development of the holographic method.” The unique virtue of holograms is that they record and replay all the characteristics of the light waves passing through the recording medium, including phase, amplitude, and wavelength. As such, ideally there should be no difference between seeing a natural object or scene and seeing a hologram of it.

Holographic 3D display is thought to be the “holy grail” of 3D display technology because it can faithfully present a virtual window onto a 3D real-world scene with all the characteristics of real-world objects. Practical implementation of holographic display, however, still faces tremendous technical challenges [4

4. D. Gabor, “Holography 1948–1971,” Proc. IEEE 60, 655–668 (1972). [CrossRef]

,5

5. S. Benton and M. Bove, Holographic Imaging (Wiley Interscience, 2008).

,37

37. M. Lucente, “Computational holographic bandwidth compression,” IBM Syst. J. 35, 349–365 (1996). [CrossRef]

]. For example, pure holographic displays require a pixel size smaller than 1 μm, leading to multiple trillions of pixels on a reasonably sized display screen. Such a huge amount of data presents seemingly unfathomable technical challenges to the entire chain of the 3D imaging industry, including 3D image acquisition, processing, transmission, visualization, and display.
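The scale of the data problem follows from a back-of-the-envelope pixel count; the screen size and pitch below are illustrative assumptions.

```python
# Pixel count for a pure holographic screen with sub-micrometer pixels.
width_m, height_m = 0.5, 0.3  # a modest desktop-sized screen (assumed)
pitch_m = 0.5e-6              # sub-wavelength pixel pitch (assumed)
pixels = round(width_m / pitch_m) * round(height_m / pitch_m)
print(f"{pixels:.1e}")        # 6.0e+11 pixels; a finer pitch or larger screen
                              # pushes the count well into the trillions
```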

The SLM is the key component in implementing computer-generated hologram (CGH) systems. Depending on the design, the SLM can be addressed via a variety of mechanisms. Table 1 lists the major types of existing SLMs used in building 3D holographic displays. Current CGH systems achieve only relatively low visualization quality and a narrow viewing angle because the display units are typically based on LCD, LCOS, and/or micromirror MEMS technologies with scaling limits at around 2–4 μm, limiting the projection angle to less than 10°. This trend is summarized by Stahl and Jayapala [124

124. R. Stahl and M. Jayapala, “Holographic displays and smart lenses,” Opt. Photon. 6, 39–42 (2011). [CrossRef]

] and Huang [125

125. Y.-P. Huang, “Auto-stereoscopic 3D display and its future developments,” http://www.cdr.ust.hk/Webinar (SID, 2012).

] as shown in Fig. 59.

Table 1. Examples of Existing Digital Hologram Systems

Figure 59 The diffraction angle of a holographic display system is inversely proportional to the size of its pixels. A pixel size close to or below the wavelength of the visible light used is necessary to achieve high diffraction efficiency and wide viewing angles.
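The trend in Fig. 59 follows from the grating equation; the wavelength and pitches below are typical values chosen for illustration, not data from the figure.

```python
import math

# First-order diffraction half-angle for pixel pitch p: sin(theta) = lambda/(2p).
def half_angle_deg(wavelength_m, pitch_m):
    return math.degrees(math.asin(wavelength_m / (2.0 * pitch_m)))

lam = 532e-9                        # green light
for pitch in (4e-6, 2e-6, 0.5e-6):  # LCOS-scale pitches down to sub-micron
    print(pitch, round(half_angle_deg(lam, pitch), 1))
# 2-4 um pitches give only ~4-8 degrees per side; sub-micron pitches
# are needed to reach tens of degrees.
```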

Many very promising 3D display technologies have been and are being developed that make various compromises on the pure holographic conditions, thus producing practically implementable 3D display systems using today’s technologies. Figure 60 shows a number of efforts to develop a real-time holographic display, most notably by Steve Benton’s Spatial Imaging Group at the MIT Media Laboratory, the MIT spin-off Zebra Imaging, a group at QinetiQ [QinetiQ is a spin-off company from the UK’s Defence Evaluation and Research Agency (DERA)], the German company SeeReal, Belgium’s IMEC, and the University of Arizona. These efforts have produced some promising results. However, a practical large-size, high-resolution, full-color, dynamic holographic display still remains a significant challenge.

Figure 60 Examples of existing digital hologram systems.

5.2. MIT ElectroHolography

MIT’s early efforts resulted in two HPO monochrome display prototypes (Fig. 61) [5

5. S. Benton and M. Bove, Holographic Imaging (Wiley Interscience, 2008).

,37

37. M. Lucente, “Computational holographic bandwidth compression,” IBM Syst. J. 35, 349–365 (1996). [CrossRef]

]. These systems calculate fringe patterns of 3D scenes via simulation, and then display the fringe patterns piecewise in an acousto-optic modulator (AOM). A mechanical scanner performs raster scanning of the AOM’s image to produce a larger display. The early systems’ horizontal view zone is about 30°, and the vertical resolution is 144 lines.

Figure 61 MIT’s electroHolography systems: Mark II configuration.

Due to the huge amount of data and limited available bandwidth, many design trade-offs have to be made to minimize the total data bandwidth to about 36 megabytes per frame. The display has a frame rate of 2–3 fps with prestored data. Since complex physical simulation is needed to generate holographic fringe patterns, higher frame rates are difficult to achieve at this time.

5.2a. MIT Mark I

The first-generation MIT display (“Mark I”) had a 50 MHz bandwidth TeO2 AOM driven by a 32,768 × 192 raster; the video signal was multiplied by a 100 MHz sinusoid and low-pass filtered to retain the lower sideband. The size of display volume was 25 mm × 25 mm × 25 mm, and the viewing angle was 15°. The vertical scanner was a galvanometer and the horizontal scanner a polygonal mirror. Computation was performed on a Thinking Machines CM2.

5.2b. MIT Mark II

To increase the size of the display image to get both eyes into the viewing zone, Mark II was built using 18 AOM channels in parallel; thus it can simultaneously generate 18 adjacent scan lines (Fig. 61; see [5

5. S. Benton and M. Bove, Holographic Imaging (Wiley Interscience, 2008).

] for details). The vertical scanner then moved in 18-line steps to scan out 144 lines, each having 262,144 samples. The display volume was 150mm×75mm×150mm and the viewing angle was 30°. Mark II used a synchronized linear array of galvanometric scanners. The 18 video channels were initially generated by Cheops, and in later work the display was driven by three dual-output PC video cards. Employing parallel AOMs and a segmented horizontal scanner provided Mark II scale-up capability at the expense of increased complexity—more video input channels and more synchronized mirror drive circuitry.
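The raster figures quoted here reproduce the roughly 36-megabytes-per-frame number mentioned earlier:

```python
# Mark II frame size: 144 scan lines, 262,144 samples per line.
lines, samples_per_line = 144, 262_144
samples = lines * samples_per_line
print(samples)           # 37,748,736 samples per frame
print(samples / 2**20)   # exactly 36.0 MiB at one byte per sample
```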

5.2c. MIT Mark III Based on an Anisotropic Leaky-Mode Modulator

The idea behind the development of Mark III was to further improve the 3D image quality while reducing the size and cost of the display system to the point that it would be practical to think of it as a PC “holo-video monitor.” The new system processes 3D images on a standard graphics processor rather than on specialized hardware. The original specifications for the Mark III system were [126

126. D. E. Smalley, Q. Y. J. Smithwick, and V. M. Bove, “Holographic video display based on guided-wave acousto-optic devices,” in Proc. SPIE 6488, 64880L (2007). [CrossRef]

]
  • 440 scan lines, 30 Hz;
  • 24° view angle;
  • 80mm×60mm×80mm (W×H×D) view volume; and
  • 1.5m total optical path length, folded to fit into a relatively shallow box.

Further generations of this design will increase the view volume and view angle and will add full color. Smalley and co-workers [127

127. D. E. Smalley, Q. Y. Smithwick, V. M. Bove, J. Barabas, and S. Jolly, “Anisotropic leaky-mode modulator for holographic video displays,” Nature 498, 313–317 (2013). [CrossRef]

] in the MIT group recently developed a novel SLM design that could significantly improve the state of the art in key aspects of holographic display performance, such as bandwidth, viewing angle, image quality, color multiplexing, and cost. The new holographic scanner design, also referred to as the guided wave scanner (GWS), is based on the principle of anisotropic leaky-mode coupling—a proton-exchanged channel waveguide on a lithium niobate (LiNbO3) substrate with a transducer at one end. The GWS consists of two sets of acoustic transducers that create surface acoustic waves: the first uses Bragg diffraction to deflect light horizontally, and the second uses mode conversion to deflect light vertically. The acoustic energy produced by the transducers interacts collinearly with the light and “bumps” the light into a leaky mode via polarization-rotating mode conversion. This leaky-mode light passes through the waveguide interface and exits from the edge of the LiNbO3 substrate. This interaction can be used to scan light vertically over a range of angles. According to [127

127. D. E. Smalley, Q. Y. Smithwick, V. M. Bove, J. Barabas, and S. Jolly, “Anisotropic leaky-mode modulator for holographic video displays,” Nature 498, 313–317 (2013). [CrossRef]

], MIT’s new SLM supports a bandwidth of more than 50 billion pixels per second, a 10× improvement over the current state of the art, and can be constructed for less than $500.

5.3. Zebra “Holographic” Dynamic Hogel Light Field Display

Zebra Imaging has developed a high-resolution light field display system [128

128. M. Klug, T. Burnett, A. Fancello, A. Heath, K. Gardner, S. O’Connell, C. Newswanger, “A scalable, collaborative, interactive light-field display system,” in SID Symposium Digest of Technical Papers (2013), Vol. 44, Issue 1, pp. 412–415.

] that, according to the company’s website [129

129. Zebra Imaging, www.zebraimaging.com.

], was the world’s first real-time, full-parallax, self-contained, interactive, color holographic display. The display is scalable from 6 in. (15.24 cm) up to 6 ft (1.83 m) diagonally, supports simultaneous collaboration of up to 20 participants, and supports viewing of streaming sensor data in near real time without the need for 3D goggles or other visual aids. Zebra’s holographic display demonstrates full-parallax functionality and 360° viewing of the displayed imagery, and it has been demonstrated with near-real-time streaming data feeds representing tactical information. The system also gives operators the ability to interact with the display: users can zoom, rotate, and reach in to select and manipulate any part of the image they are viewing.

This “holographic” 3D display is not based on traditional holographic principles (Lucente [130

130. M. Lucente, “The first 20 years of holographic video—and the next 20,” in SMPTE 2nd Annual International Conference on Stereoscopic 3D for Media and Entertainment, New York, June21–23, 2011.

]) that rely upon interference of light to recreate viewable 3D objects. Instead, this full parallax 3D display is based on the full parallax light field principle (Klug [128

128. M. Klug, T. Burnett, A. Fancello, A. Heath, K. Gardner, S. O’Connell, C. Newswanger, “A scalable, collaborative, interactive light-field display system,” in SID Symposium Digest of Technical Papers (2013), Vol. 44, Issue 1, pp. 412–415.

]). Figure 62 illustrates the architecture of the overall system design modules. The input to the display system can be synthesized 3D models or 3D surface profiles of objects in the scene captured by a 3D camera. The input 3D data is sent to a data broadcasting module that distributes the computational tasks to the hogel generation module. The hogel generation module consists of multiple parallel processing units/boards. It computes the entire set of light field rays for all visible points on the surface of 3D objects. In their first generation (GEN 1) prototype, there are 84×72 light field rays computed from each surface point (Fig. 63). The results of hogel computation are sent via the “hogel distribution” module to the SLMs. Twenty-four (=4×6) SLMs are used to construct a building block (a tile) of the screen. Each tile is supported by six FPGA boards and three CPU/GPU backplane computational system modules. About 150 tiles are used to build a dynamic hogel light field display screen with a size of 300mm×300mm. The entire prototype system uses 216 units of 1080p SLMs with about 500 million pixels in total.
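The pixel counts quoted for the GEN 1 prototype can be cross-checked (assuming 1920×1080 for “1080p”):

```python
# Zebra GEN 1 bookkeeping from the figures quoted above.
rays_per_point = 84 * 72          # light field rays per visible surface point
total_pixels = 216 * 1920 * 1080  # 216 units of 1080p SLMs
print(rays_per_point)             # 6048 ray directions per point
print(f"{total_pixels:,}")        # 447,897,600 -- "about 500 million" pixels
```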

Figure 62 Holographic dynamic hogel light field display.
Figure 63 Module of the dynamic hogel light field display screen and hogel generation optics.

The key features of Zebra’s display include
  • full (horizontal and vertical) parallax;
  • minutes to milliseconds update rates;
  • collaboration-friendliness;
  • 3D software compatibility;
  • modularity and scalability;
  • physically accessible imagery; and
  • multi-touch interaction compatibility.

5.4. QinetiQ System

Slinger et al. [131

131. C. Slinger, C. Cameron, and M. Stanley, “Computer-generated holography as a generic display technology,” Computer 38(8), 46–53 (2005). [CrossRef]

] at QinetiQ [132

132. QinetiQ, www.qinetiq.com.

] developed an approach to calculating and displaying holograms of over 5 billion pixels. This approach utilizes the high frame rate of medium-complexity electrically addressed spatial light modulators (EASLMs) and the high resolution of optically addressed spatial light modulators (OASLMs). The resulting system can display 3D images with significantly higher pixel counts than previously possible.

The active tiling SLM system as shown in Fig. 64 divides up large sets of CGH data into segments [131

131. C. Slinger, C. Cameron, and M. Stanley, “Computer-generated holography as a generic display technology,” Computer 38(8), 46–53 (2005). [CrossRef]

,132

132. QinetiQ, www.qinetiq.com.

]. These segments are displayed sequentially on an EASLM (a binary, 1024×1024 pixel ferroelectric liquid crystal on silicon device operated at 2500 Hz). Replication optics (5×5) are then used to project these segments onto an LC OASLM. The OASLM consists of an amorphous silicon photosensor, light blocking layers, a dielectric mirror, and a ferroelectric LC output layer. An image segment can be stored on one OASLM tile by switching the LC at that tile. The output of each tile has 26 million pixels. A new data segment is then loaded onto the EASLM and transferred to an adjacent tile on the OASLM until an entire holographic pattern is written, which can then be read out by coherent illumination to replay the hologram.

Figure 64 Holographic display prototype developed by QinetiQ. Active tiling modular system uses an electrically addressed SLM as an “image engine” that can display the CGH image elements quickly. Replication optics project multiple demagnified images of the EASLM onto an OASLM, which stores and displays the computer-generated pattern. Readout optics form the holographic image. This modulator system allows multiple channels to be assembled to produce a large screen 3D display.

The active tiling system achieves a pixel areal density of over 2.2 million pixels per square centimeter at a pixel spacing of 6.6 μm, and the compact display volume has a density of 2.4 billion pixels per cubic meter. The system can display both monochrome and color (via frame-sequential color) images, with full parallax and depth cues.
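A quick check of the pixel bookkeeping in this description:

```python
# Active tiling: one EASLM frame replicated 5x5 onto an OASLM tile.
easlm_pixels = 1024 * 1024
tile_pixels = 25 * easlm_pixels
print(f"{tile_pixels:,}")        # 26,214,400 -- the "26 million" per tile

pitch_um = 6.6
per_cm2 = (1e4 / pitch_um) ** 2  # pixels per square centimeter at 6.6 um pitch
print(f"{per_cm2:.2e}")          # ~2.30e+06 -- "over 2.2 million" per cm^2
```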

5.5. SeeReal’s Eye-Tracking Holographic Display

In the patent portfolio of SeeReal Technologies (15 issued patents and 80 pending applications) [133], there are several different versions of 3D display technologies. Some of them are not really holographic displays but are, rather, time-sequential autostereoscopic multiview displays.

The fundamental difference between conventional holographic displays and SeeReal’s approach, claimed by SeeReal [134

134. S. Reichelt, R. Häussler, N. Leister, G. Fütterer, H. Stolle, and A. Schwerdtner, “Holographic 3-D displays—electro-holography within the grasp of commercialization,” in Advances in Lasers and Electro Optics, N. Costa and A. Cartaxo, eds. (INTECH, 2012), Chap. 29.

,135

135. S. Reichelt, R. Häussler, G. Fütterer, and N. Leister, “Depth cues in human visual perception and their realization in 3D displays,” Proc. SPIE 7690, 76900B (2010). [CrossRef]

], is in the primary goal of the holographic reconstruction. In typical holographic 3D displays, the primary goal is to reconstruct an entire 3D scene that can be seen from a large viewing zone. In contrast, SeeReal’s approach is to reconstruct the wavefront that would be generated by a real existing 3D scene at just the eyes’ positions. The reconstructed 3D scene can be seen if each observer eye is positioned at a virtual viewing window (VW). A VW is the Fourier transform of the hologram and is located in the Fourier plane of the hologram. The size of the VW is limited to one diffraction order of the Fourier transform of the hologram.

Using eye-tracking technologies, the locations of a viewer’s eyes can be determined in real time, and the viewer can move freely in front of the screen, as illustrated in Fig. 65. The setup consists of a Fourier transforming lens, an SLM, and the eyes of one viewer. Coherent light transmitted by the lens illuminates the SLM. The SLM is encoded with a hologram that reconstructs an object point of a 3D scene. The modulated light reconstructs the object point, which is visible from a region that is much larger than the eye pupil. In a conventional setup, most of this reconstruction is wasted, since the parts of the light that never reach the eyes are never seen.

Figure 65 Eye tracking and reconstruction volume.

The essential idea of SeeReal’s approach is to reconstruct only the wavefront at the eye positions. This approach significantly reduces the amount of information that needs to be processed, and thus promises high-quality holographic display with reasonable hardware/software system costs.

SeeReal’s display differs from prior art not only in hologram encoding but also in the optical setup. Prior-art displays reconstruct the 3D scene around the Fourier plane and provide viewing regions behind the Fourier plane. SeeReal’s displays instead provide VWs for observers in the Fourier plane; the 3D scene is located between the Fourier plane and the SLM, or behind the SLM. SeeReal Technologies uses a 30×30 pixel array for each 3D scene point, the so-called subhologram approach.

Required pitch of the spatial light modulator (SLM). For typical holographic displays, the diffraction angle of the SLM determines the size of the reconstructed 3D scene, and hence a small pixel pitch is needed (Fig. 54). A reasonably sized 3D scene (e.g., 500 mm) would require a pixel size of about 1 μm, which is difficult to achieve with existing technology. SeeReal’s approach significantly relaxes this requirement: a moderate pixel size (50 μm) can generate a VW 20 mm wide at a 2 m viewing distance. This means that SeeReal’s approach can be practically implemented using existing technologies, making large-size holographic displays feasible.
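The VW geometry above can be checked with a quick calculation: the width of one diffraction order at viewing distance d from an SLM of pixel pitch p is roughly w = λd/p. A minimal sketch (the 500 nm wavelength is an assumed illustrative value, not a figure from the text):

```python
# Sketch: width of SeeReal's virtual viewing window (VW).
# The VW spans one diffraction order of the hologram's Fourier
# transform; at viewing distance d, an SLM with pixel pitch p gives
# a window of width w = wavelength * d / p.
def viewing_window_width(wavelength_m, distance_m, pixel_pitch_m):
    return wavelength_m * distance_m / pixel_pitch_m

# 50 um pitch at 2 m, green light (500 nm assumed):
w = viewing_window_width(500e-9, 2.0, 50e-6)
print(f"VW width: {w * 1e3:.0f} mm")  # about 20 mm, as quoted in the text
```

The same relation shows why a large direct viewing zone is so demanding: shrinking p to 1 μm widens the diffraction order by a factor of 50.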

5.6. IMEC Holographic Display

To achieve a wide viewing angle, the light-diffracting elements of a holographic display must be sized close to the wavelength of visible light. Belgium’s Interuniversity Microelectronics Centre (IMEC) developed a nanoscale SiGe microelectromechanical systems (MEMS) chip that offers over 1 billion diffractive nanodevices (DNDs) on a single chip, with individual DND sizes ranging from 250 to 2400 nm [136] (Fig. 66). Since such a small pixel size is close to the wavelength of visible light, each pixel on the nano-MEMS chip becomes a visible-light-diffracting element that can be used for CGH 3D displays. IMEC has developed a working HoloDis prototype demonstrating holographic video display with an approximately 38° viewing angle (19° off axis). Figure 66 shows a sample picture of the 3D holographic display generated by IMEC’s prototype.

Figure 66 3D holographic visualization realized by the holographic display with subwavelength diffractive pixels. A viewing angle of 38° is achieved using 500 nm pixel pitch and 635 nm illumination wavelength.

5.7. University of Arizona Holographic Video Display

Traditionally, holographic 3D display technologies fall into two major categories: holographic printing of static 3D images, and CGH with dynamic 3D images. The designs of these two types of systems are usually quite different. However, Blanche et al. [137] at the University of Arizona developed an exotic material, called a sensitized photorefractive polymer (PRP), with remarkable holographic properties, including the ability to refresh images every 2 s. This blurred the boundary between “printed” holography and CGH fringe systems.

As shown in Fig. 67 and in [137], the system’s light source is a coherent laser beam (λ=532 nm), which is split into an object beam and a reference beam by a beam splitter. The object beam is modulated by an SLM with computer-generated holographic information of a 3D scene. After a Fourier transform is performed, the object beam interferes with the reference beam within the volume of the photorefractive polymer, which is mostly transparent in the visible region of the spectrum. The holographic information is recorded onto the polymer with a pulsed laser system, and a spatially multiplexed raster-scanning method addresses the entire display area. Each hogel, containing 3D information from various perspectives, is written with a single nanosecond laser pulse (6 ns pulse, 200 mJ, at 50 Hz). The hogel resolution is 1 mm, and the entire recording of a 4 in.×4 in. holographic screen takes 2 s. The 3D image is read out with an incoherent color LED incident at the Bragg angle and is clearly visible under ambient room light. The hologram fades away after a couple of minutes by natural dark decay, or it can be erased by recording a new 3D image.

Figure 67 High-level optical configuration of the PRP holographic imaging system.

To achieve full-color 3D display, an angular multiplexing scheme is used: up to three different holograms are written into the material at different angles and read out with different color LEDs. To ensure precise alignment of the diffracted beams of different colors, the holograms carrying red, green, and blue information are recorded simultaneously, with a separation angle of 5° between object beams. HPO requires only 100 hogels, while full parallax would require 100×100=10,000 hogels. To achieve fast recording of full-parallax holograms, an array of pulsed lasers can be used. Although this was an important step toward dynamic holography, the system’s refresh rate is still slow compared with the standard video rate of 30 frames per second.
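The recording-time figures quoted above follow directly from the laser pulse rate. A minimal sketch, assuming exactly one hogel is written per pulse:

```python
# Sketch: hogel-writing time for the PRP holographic display,
# assuming one hogel is written per laser pulse at the 50 Hz
# repetition rate quoted in the text.
PULSE_RATE_HZ = 50
HOGELS_HPO = 100          # HPO: ~100 one-millimeter hogels across a 4 in. screen
HOGELS_FULL = 100 * 100   # full parallax: 10,000 hogels

t_hpo = HOGELS_HPO / PULSE_RATE_HZ    # 2.0 s, matching the reported recording time
t_full = HOGELS_FULL / PULSE_RATE_HZ  # 200.0 s with a single pulsed laser
print(t_hpo, t_full)
```

The two-orders-of-magnitude gap between HPO and full parallax is what motivates the laser-array approach mentioned above.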

6. Pseudo 3D Display Techniques

As 3D display technologies become attractions for the media and the public, many new information display formats are given the name “hologram.” Examples include CNN’s “election-night hologram” of a telepresent reporter interview during the 2008 U.S. presidential election [138], projection displays on specially designed semitransparent “hologram” screens/films [139], a 360° viewable display based on an optical illusion [140], images projected onto fog, and graphic patterns formed by water flow. In reality, however, these “holograms” are not true 3D displays based on the principles of light fields (multiview), volumetrics, or the reconstruction of light wavefronts by diffractive interference (holographic).

In this section, we briefly discuss a few examples of these types of displays, which we have termed “pseudo 3D displays”.

6.1. “On-Stage Holographic Telepresence”

“Telepresence” refers to display technologies that present the appearance of a person or a scene at a remote location, making the audience in that location feel as if they see the real person or scene. Several commercial technologies are available, such as those described in [141–143]. For example, the “on-stage holographic telepresence” system developed by Musion Systems [144] projects 2D images onto a semitransparent thin-film screen. Under a special illumination design, the semitransparent screen is not visible to the viewers, so that the object/person in the image appears to float in thin air. This type of display generates the “Pepper’s ghost” effect [145]. Although such systems often produce very impressive visual effects, they are not holographic, volumetric, or multiview 3D displays. The 2D images projected on the semitransparent screen cannot evoke physical depth cues. The transparent screen merely triggers the human brain to gain a certain level of 3D sensation via psychological depth cues from the “floating” 2D monocular images.

By clever design, the semitransparent screen can also be used to construct a display device with 360° viewable floating images [142,143].

6.2. Fog Screen and Particle Cloud Displays

A unique display medium is a sheet of water vapor (fog) or a particle cloud onto which 2D images can be projected to form a floating display. Commercial products and patents include [146–150]. In the case of a fog screen, one or more thin layers of a steady flow of fog are generated by a specially designed machine as the screen medium, and images are projected onto it by one or more 2D projectors. The unique attraction of a fog screen display is its interactivity: there is no solid material forming the screen, so hands, bodies, and other objects can pass through the screen without destroying it. The screen can also be made quite large.

In the case of a particle cloud [150], the display presents full-color, high-resolution video or still images in free space and enables viewers to interact directly with the visual images. The system generates a dynamic, nonsolid particle cloud by ejecting atomized condensate, drawn from the surrounding air, into a nearly invisible particle cloud that acts as a projection screen.

In general, fog screen and particle cloud technologies cannot provide physical depth cues and are therefore not true 3D displays.

6.3. Graphic Waterfall Display

A novel control mechanism for an array of water valves can create a dynamic display of graphic patterns [151,152]. This serves as an unconventional and entertaining way to display messages and graphical information. Using a row of closely spaced, pneumatically controlled valves to precisely time the ON/OFF switching of the water flow from each valve, the graphic waterfall produces dynamic graphical information on a “sheet” of free-falling water. The size of such 2D displays varies, ranging from 5 to 10 m in height and 3 to 15 m in width. The “resolution” of these displays depends upon the maximum switching speed of the valves and the precision of their control systems. The image contrast (i.e., the visual difference between open air and the water sheet) is usually enhanced by proper lighting.

MIT architects and engineers designed a building with such a setup, and it was unveiled at the 2008 Zaragoza World Expo in Spain [153].

]. The “water walls” that make up the structure consist of a row of closely spaced solenoid valves along a pipe suspended in the air. The valves can be opened and closed, at high frequency, via computer control. This produces a curtain of falling water with gaps at specified locations—a pattern of pixels created from air and water instead of illuminated points on a screen. The entire surface becomes a 1-bit-deep digital display that continuously scrolls downward.
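The attainable “resolution” of such a water display can be estimated from free-fall kinematics: a pixel can be no shorter than the distance the water falls during one valve ON/OFF cycle. A rough sketch (the 50 Hz valve frequency and the fall height are illustrative assumptions, not figures from the installations described above):

```python
import math

# Sketch: vertical "pixel" size of a graphic waterfall. A pixel can be
# no shorter than the distance the water falls during one valve ON/OFF
# cycle; after falling a height h the stream moves at v = sqrt(2*g*h).
# The 50 Hz valve frequency and 5 m fall height are assumed values.
G = 9.81  # m/s^2

def pixel_height_m(fall_height_m, valve_freq_hz):
    v = math.sqrt(2.0 * G * fall_height_m)  # stream speed at this height
    return v / valve_freq_hz                # distance fallen per cycle

print(f"{pixel_height_m(5.0, 50.0) * 100:.0f} cm")  # ~20 cm pixels halfway down a 10 m wall
```

Because the stream accelerates as it falls, the effective pixel height grows toward the bottom of the display, which is one reason the achievable resolution is tied to valve switching speed.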

6.4. Microsoft Vermeer

By combining the optical illusion of the mirascope with a virtual image display, a team of Microsoft researchers developed a display system, called Vermeer, that shows moving images floating in midair at 15 frames per second while emulating 192 views. A virtual image of a 3D scene is rendered through a half-silvered mirror and spatially aligned with the real world for the viewer [154]. Since the displayed images are virtual images, viewers can reach out and “touch” them with the aid of a depth-tracking camera, such as Microsoft’s Kinect. This allows users to literally put their hands into the virtual display and to interact directly with a spatially aligned 3D virtual world, without any specialized head-worn hardware or input device [140].

7. Comparisons of Some Existing 3D Display Technologies

Historically, very few studies, if any, have compared various 3D display technologies, because of the challenge of analyzing and matching a large number of vastly different optical and electromechanical designs and evaluating their impact on viewers’ depth perception, comfort, and performance in visual tasks. Limited depth perception comparisons have been reported for a small number of displays [155]. We realize that it is virtually impossible to perform rigorous quantitative or empirical comparisons of optoelectromechanical design details and performance across such a large variety of display techniques and modalities without falling into the dilemma of comparing apples to oranges. We therefore provide a depth-cue comparison of various 3D display technologies in Table 2, itemizing a few key properties. The table is meant to provide some guidance in differentiating the key behavior of each technique in major categories of performance.

Table 2. Comparison of Various 3D Display Technologies


Although holographic 3D display is considered the “holy grail” of 3D display technology, practical implementation of holographic displays still faces tremendous technical challenges because of the enormous amount of data required. For example, pure holographic displays require a pixel size smaller than 1 μm, leading to multiple trillions of pixels on a reasonably sized display screen. Such a huge amount of data presents seemingly unfathomable technical challenges to the entire chain of the 3D imaging industry, including 3D image acquisition, processing, transmission, visualization, and display.
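The scale of this data problem is easy to illustrate: at a 1 μm pitch, even a modest screen implies trillions of pixels per frame. A back-of-the-envelope sketch (the screen size, bit depth, and frame rate are illustrative assumptions; only the ~1 μm pitch comes from the text above):

```python
# Sketch: raw data budget of a "pure" holographic display with
# sub-micrometer pixels. The 2 m x 1 m screen size, 8-bit pixels,
# and 30 frames/s are illustrative assumptions.
def hologram_pixels(width_m, height_m, pitch_m):
    return (width_m / pitch_m) * (height_m / pitch_m)

pixels = hologram_pixels(2.0, 1.0, 1e-6)  # ~2 trillion pixels per frame
bytes_per_second = pixels * 1 * 30        # 1 byte/pixel at 30 frames/s
print(f"{pixels:.1e} pixels/frame, {bytes_per_second / 1e12:.0f} TB/s raw")
```

Even before compression, such rates dwarf the bandwidth of today’s display interfaces, which is why the compromises discussed next are attractive.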

Many very promising 3D display technologies are being developed that make various compromises on the pure holographic conditions, thus producing 3D display systems that are practically implementable with today’s technologies. These compromises and approximations lead to different characteristics among the resulting 3D display technologies. In Table 2, we list the major categories of 3D display technical approaches, namely, binocular stereoscopic, multiview (light field) autostereoscopic (HPO, integral imaging, super-multiview, and multiview with eye tracking), and volumetric (static and moving screen) 3D displays.

In Section 6, we described some pseudo 3D display technologies that are often mistakenly called holographic or true 3D displays. These pseudo 3D displays cannot provide any physical depth cues of a real-world 3D scene.

We compare these display technologies in terms of the various depth cues they can provide (e.g., motion parallax, accommodation–convergence conflict, voxels in 3D space, hidden edge removal), the typical display performance they can achieve [spatial resolution (total voxels, total views, resolution of each view), screen size, viewing angle], and user-friendliness (overall system dimensions, need for eye tracking, moving parts, eye safety). Many other characteristics could be added to Table 2, but we do not intend to provide an exhaustive comparison, which could hardly satisfy readers from all backgrounds; we leave such extensions to readers.

Several notes regarding the comparison tables (Tables 2 and 3):
  • (1) In Table 2 we use four physiological depth cues (motion parallax, binocular disparity, convergence, and accommodation) and five psychological depth cues (linear perspective, occlusion, shading, texture, and prior knowledge) to compare the displayed 3D images generated from each technique.
  • (2) As the single most important characteristic of 3D displays, almost all of the 3D display technologies discussed in this article offer motion parallax and binocular disparity depth cues. However, only volumetric and holographic 3D displays offer convergence and accommodation depth cues. Most multiview 3D displays offer only limited degrees of these cues, because the images are displayed on mostly flat or curved screen surfaces. The inherent accommodation–convergence conflict matters primarily for close-range viewing (<2 m) [155]. For scenarios such as 3D cinemas or large-screen 3D TVs, the viewing distance is generally greater than 2 m, and the depth of field of human eyes at such distances is much more tolerant of the accommodation–convergence conflict.
  • (3) Multiview 3D display techniques are able to provide motion parallax, but usually at a coarse level. Super-multiview displays and CGHs can in principle generate much smoother motion parallax, although many existing CGH systems still cannot provide this capability because of their greatly reduced data bandwidth.
  • (4) If a 3D display is able to generate voxels in 3D space instead of on a flat/curved screen surface, it is able to offer its viewers the convergence and accommodation depth cues. Therefore, we use “voxels in 3D space” as a key criterion to compare different types of 3D displays. Volumetric and holographic 3D displays have voxels in 3D space, while most multiview 3D displays fail to generate volumetric voxels.
  • (5) “Hidden line removal” is an important characteristic of real-world 3D objects. However, some 3D displays, such as most volumetric 3D displays, do not have such viewing-angle-dependent behavior. In these displays, all the voxels are always on display no matter which direction viewers look from. The displayed 3D objects appear transparent. For certain applications, such as some specific medical display applications for volumetric data sets, the transparent display mode may be useful. For the majority of display applications, 3D displays without hidden line removal may hinder the accuracy, efficiency, and effectiveness of 3D visualization tasks.
  • (6) Viewable angle (or FOV) is an important performance measurement of usability for 3D displays. It is very challenging to generate 3D images for a large range of viewing angles due to the inherent requirement on the amount of information and some hardware limitations. The ultimate goal for the viewable angle for 3D displays should be surrounding 360° (or 4π space angle). However, none of the existing 3D display techniques is yet able to achieve this goal.
  • (7) Scalability is an important consideration when selecting a 3D display technology. For various reasons, some 3D display schemes are almost impossible to scale to a large display volume or screen size. For example, because of the rotating screen, swept-volume 3D volumetric displays have physical limitations on screen size. Projection-based display techniques usually offer flexibility in screen/volume size, although the pixel count per unit area of the screen decreases proportionally as the screen size increases, unless the pixel resolution of the SLMs changes accordingly. For holographic 3D displays, the amount of information required is proportional to the size of the display volume.
  • (8) Some 3D display techniques discussed in this article may exhibit 3D depth cues with a small number of voxels but fail when the voxel count increases to a 2D-compatible level. Viewers are used to seeing high-quality 2D images; therefore, the image quality of 3D displays should be at least comparable with that of existing high-end 2D displays (e.g., equivalent to 1080p or 4K pixel resolution). Film-based static holographic 3D displays can generate very high-quality images, but computer-generated dynamic holographic 3D display systems have yet to deliver comparable performance. In general, it is very difficult for volumetric 3D displays to produce 2D-equivalent image quality. Time-multiplexed multiview 3D displays usually provide 2D-equivalent image quality for a limited number of views. Spatial-multiplexing multiview schemes divide the pixel resolution of the SLM by the number of views and, therefore, may not provide full-resolution, 2D-comparable display quality.
  • (9) If a 3D display system requires moving parts for its operation, the 3D display system may have issues with durability, cost, and maintenance. Preferred designs for 3D displays are those that have no moving parts.
  • (10) When using the light field theory to describe 3D displays, a fundamental requirement for 3D displays is for each voxel to have the ability to generate view-dependent light emittance. Viewers can thus see different perspectives of 3D objects from different viewing angles. Multiview and holographic 3D displays are primarily based on this design principle. Volumetric 3D displays, on the other hand, may not have the built-in capability to produce view-dependent light emission.
  • (11) A 3D display is 2D–3D compatible if it can also display 2D content with the same quality as a high-end 2D display. The 2D–3D compatibility allows the 3D display system to be used all the time, with or without 3D contents. Multiview 3D displays usually are 2D–3D compatible, while volumetric or holographic 3D displays require totally different data formats.
  • (12) Another usability issue is eye safety. Some 3D displays employ strong laser light that may damage human eyes if viewed directly.

Table 3. Pros and Cons of Some Existing 3D Display Systems


Table 3 provides more detailed descriptions of key features (system parameters, performance, etc.) of some existing 3D displays and their pros and cons.

The selection of these properties is by no means exhaustive. Depending on the specific application and design objectives, one can easily come up with a different set of criteria. There is a reason why so many 3D display techniques have been developed to date: no single technique can serve every application scenario. Each 3D display technique has its own set of advantages and disadvantages. When selecting a 3D display technique for a specific application, readers are encouraged to make careful trade-offs among their specific application requirements and to consider key performance issues.

8. Remarks

8.1. Entire Chain of the 3D Imaging Industry

3D imaging is an interdisciplinary technology that draws contributions from optical design, structural design, sensor technology, electronics, packaging, and hardware and software. 3D display is only one element in the entire chain of the 3D imaging industry, which spans 3D image acquisition, processing, transmission, visualization, and display.

Figure 68 shows a flowchart of the major building blocks of the 3D imaging industry. The “3D contents” must be generated either from computer-generated 3D models or from 3D image acquisition systems. The acquired 3D contents must then be processed efficiently by sophisticated 3D image processing algorithms. To facilitate distribution of 3D contents to remote locations, 3D image transmission techniques must be developed (e.g., 3D data compression to reduce transmission bandwidth). Various aspects of 3D visualization, such as 3D user interaction and effective visualization protocols, must be developed before the 3D display system hardware and software technologies can be applied. Finally, an important aspect of the 3D imaging chain is technology that facilitates natural interaction between viewers and 3D images; true 3D interaction would make 3D display more effective, efficient, and interesting.

Figure 68 Entire chain of the 3D imaging industry.

There is another important aspect of the 3D imaging industry worth mentioning: we need to establish rigorous standards for 3D display performance assessment/evaluation, and develop technologies to facilitate these standards.

The 3D imaging industry as a whole needs to promote the entire industrial chain, with advances in various component technologies, in order to bring 3D images into users’ hands and views.

8.2. Moving Forward with 3D Displays

If a 2D picture is worth a thousand words, then a 3D image is worth a million.

Various promising 3D display technologies have been discussed in this article. Most of them are still in the stages of prototypes and pre-market testing. To move the 3D display technologies forward to mass market, we need to find a few “killer applications” and develop suitable technologies/products for them.

High-end market segments for 3D display technologies include defense, medical, space, and scientific high-dimensional data visualization applications, to name a few.

The mass market segments of 3D display technologies include 3D TV, 3D movies, 3D mobile devices, and 3D games. Any 3D display technology that can break through these markets may find widespread adoption and high-volume production opportunities.

The overview provided in this article should be useful to researchers in the field since it provides a snapshot of the current state of the art, from which subsequent research in meaningful directions is encouraged. This overview also contributes to the efficiency of research by preventing unnecessary duplication of already performed research.

The field of 3D display technology is still quite young compared with its 2D counterpart, which has developed over several decades with multibillion-dollar investments. It is our hope that this work on developing and applying 3D display technologies to a variety of applications can stimulate and attract more talented researchers, from both theoretical and applied backgrounds, to this fascinating field of research and development.


56.

G. Lippmann, “Épreuves réversibles. Photographies intégrales,” C. R. Acad. Sci. 146, 446–451 (1908).

57.

H. Takahashi, H. Fujinami, and K. Yamada, “Holographic lens array increases the viewing angle of 3D displays,” SPIE Newsroom (June6, 2006).

58.

A. Stern and B. Javidi, “3D image sensing, visualization, and processing using integral imaging,” Proc. IEEE 94, 591–607 (2006). [CrossRef]

59. J.-H. Park, Y. Kim, J. Kim, S.-W. Min, and B. Lee, “Three-dimensional display scheme based on integral imaging with three-dimensional information processing,” Opt. Express 12, 6020–6032 (2004). [CrossRef]
60. H. Liao, T. Dohi, and K. Nomura, “Autostereoscopic 3D display with long visualization depth using referential viewing area based integral photography,” IEEE Trans. Vis. Comput. Graph. 17, 1690–1701 (2011). [CrossRef]
61. O. S. Cossairt, M. Thomas, and R. K. Dorval, “Optical scanning assembly,” U.S. patent 7,864,419 (June 8, 2004).
62. E. Goulanian and A. F. Zerrouk, “Apparatus and system for reproducing 3-dimensional images,” U.S. patent 7,944,465 (May 17, 2011).
63. L. Bogaert, Y. Meuret, S. Roelandt, A. Avci, H. De Smet, and H. Thienpont, “Demonstration of a multiview projection display using decentered microlens arrays,” Opt. Express 18, 26092–26106 (2010). [CrossRef]
64. G. J. Woodgate, D. Ezra, J. Harrold, N. S. Holliman, G. R. Jones, and R. R. Moseley, “Observer tracking autostereoscopic 3D display systems,” Proc. SPIE 3012, 187 (1997). [CrossRef]
65. M. W. Jones, G. P. Nordin, J. H. Kulick, R. G. Lindquist, and S. T. Kowel, “A liquid crystal display based implementation of a real-time ICVision holographic stereogram display,” Proc. SPIE 2406, 154 (1995). [CrossRef]
66. T. Toda, S. Takahashi, and F. Iwata, “3D video system using grating image,” Proc. SPIE 2406, 191 (1995). [CrossRef]
67. E. Schulze, “Synthesis of moving holographic stereograms with high-resolution spatial light modulators,” Proc. SPIE 2406, 124 (1995). [CrossRef]
68. D. Fattal, Z. Peng, T. Tran, S. Vo, M. Fiorentino, J. Brug, and R. G. Beausoleil, “A multi-directional backlight for a wide-angle glasses-free three-dimensional display,” Nature 495, 348–351 (2013). [CrossRef]
69. O. S. Cossairt, J. Napoli, S. L. Hill, R. K. Dorval, and G. E. Favalora, “Occlusion-capable multiview volumetric three-dimensional display,” Appl. Opt. 46, 1244–1250 (2007). [CrossRef]
70. Actuality 3D Display, http://actuality-medical.com.
71. Holografika, www.holografika.com.
72. T. Balogh and P. T. Kovács, “Real-time 3D light field transmission,” Proc. SPIE 7724, 772406 (2010). [CrossRef]
73. G. Favalora and O. Cossairt, “Theta-parallax-only (TPO) displays,” U.S. patent 7,364,300 B2 (April 24, 2008).
74. S. Uchida and Y. Takaki, “360-degree, three-dimensional table-screen display using small array of high-speed projectors,” Proc. SPIE 8288, 82880D (2012). [CrossRef]
75. C. H. Krah, “Three-dimensional display system,” U.S. patent 7,843,449 (November 30, 2010).
76. Y.-H. Tao, Q.-H. Wang, J. Gu, and W.-X. Zhao, “Autostereoscopic three-dimensional projector based on two parallax barriers,” Opt. Lett. 34, 3220–3222 (2009). [CrossRef]
77. Y. Kim, K. Hong, J. Yeom, J. Hong, J.-H. Jung, Y. W. Lee, J.-H. Park, and B. Lee, “A frontal projection-type three-dimensional display,” Opt. Express 20, 20130–20138 (2012). [CrossRef]
78. Y. Kajiki, H. Yoshikawa, and T. Honda, “Hologram-like video images by 45-view stereoscopic display,” Proc. SPIE 3012, 154 (1997). [CrossRef]
79. S. Hentschke, “Autostereoscopic reproduction system for 3-D displays,” U.S. patent 7,839,430 (November 23, 2010).
80. N.-Y. Wang, H.-J. Lee, and C.-H. Tsai, “Parallax barrier type autostereoscopic display device,” U.S. patent 6,727,866 (April 27, 2004).
81. B. Si, “Stereoscopic image display system and method of controlling the same,” U.S. patent 8,427,746 B2 (April 23, 2013).
82. Fraunhofer HHI, “Free2C desktop display,” http://www.hhi.fraunhofer.de/fields-of-competence/interactive-media-human-factors/products-services/stereoscopic-displays/free2c-desktop-display.html.
83. P. Surman, R. S. Brar, I. Sexton, and K. Hopf, “MUTED and HELIUM3D autostereoscopic displays,” in IEEE International Conference on Multimedia and Expo (ICME) (2010), pp. 1594–1599.
84. S. H. Ju, M.-D. Kim, M.-S. Park, K.-T. Kim, J.-H. Park, and K.-M. Lim, “Viewer’s eye position estimation using single camera,” in SID Symposium Digest of Technical Papers (2013), pp. 671–674.
85. H. Y. Wu, C. H. Chang, and C. L. Lin, “Dead-zone-free 2D/3D switchable barrier type 3D display,” in SID Symposium Digest of Technical Papers (2013), pp. 675–677.
86. J. C. Schultz, R. Brott, M. Sykora, W. Bryan, and T. Fukami, “Full resolution autostereoscopic 3D display for mobile applications,” in SID Symposium Digest of Technical Papers (2009), Vol. 40, pp. 127–130.
87. J. C. Schultz and M. J. Sykora, “Directional backlight with reduced crosstalk,” U.S. patent application 2011/0285927 A1 (May 24, 2010).
88. M. Minami, K. Yokomizo, and Y. Shimpuku, “Glasses-free 2D/3D switchable display,” in SID Symposium Digest of Technical Papers (2011), pp. 468–471.
89. M. Minami, “Light source device and display,” U.S. patent application 2012/0195072 A1 (August 2, 2012).
90. C. W. Wei and Y. P. Huang, “240 Hz 4-zones sequential backlight,” in SID Symposium Digest (2010), p. 863.
91. H. Kwon and H. J. Choi, “A time-sequential multiview autostereoscopic display without resolution loss using a multidirectional backlight unit and a LCD panel,” Proc. SPIE 8288, 82881Y (2012). [CrossRef]
92. E. A. Downing, “Method and system for three-dimensional display of information based on two photon upconversion,” U.S. patent 5,684,621 (November 4, 1997).
93. J. D. Lewis, C. M. Verber, and R. B. McGhee, “A true three-dimensional display,” IEEE Trans. Electron Devices 18, 724–732 (1971). [CrossRef]
94. K. Langhans, C. Guill, E. Rieper, K. Oltmann, and D. Bahr, “Solid Felix: a static volume 3D-laser display,” IS&T Reporter 18(1), 1–9 (2003).
95. E. J. Korevaar and B. Spiver, “Three dimensional display apparatus,” U.S. patent 4,881,068 (November 14, 1989).
96. S. K. Nayar and V. N. Anand, “3D display using passive optical scatterers,” Computer 40(7), 54–63 (2007). [CrossRef]
97. J. Geng, “Volumetric 3D display system with static screen,” NASA Tech Briefs (NASA, 2011), Vol. 35, p. 40, http://www.techbriefs.com/component/content/article/9432.

98. H. Kimura, T. Uchiyama, and H. Yoshikawa, “Laser produced 3D display in the air,” in ACM SIGGRAPH (2006), p. 20.
99. M. Momiuchi and H. Kimura, “Device for forming visible image in air,” U.S. patent 7,533,995 (May 19, 2009).
100. D. Wyatt, “A volumetric 3D LED display” (MIT, 2005), http://web.mit.edu/6.111/www/f2005/projects/wyatt_Project_Design_Presentation.pdf.
101. L. Sadovnik and A. Rizkin, “3D volume visualization display,” U.S. patent 5,764,317 (June 9, 1998).
102. A. Sullivan, “Multi-planar volumetric display system and method of operation using multi-planar interlacing,” U.S. patent 6,806,849 (October 19, 2004).
103. LightSpace Technologies, www.lightspacetech.com.
104. EuroLCDs, www.eurolcds.com.
105. R. S. Gold and J. E. Freeman, “Layered display system and method for volumetric presentation,” U.S. patent 5,813,742 (September 29, 1998).
106. M. S. Leung, N. A. Ives, and G. Eng, “Three-dimensional real-image volumetric display system and method,” U.S. patent 5,745,197 (April 28, 1998).
107. J.-P. Koo and D.-S. Kim, “Volumetric three-dimensional (3D) display system using transparent flexible display panels,” U.S. patent application 2007/0009222 A1 (January 11, 2007).
108. M. Hirsch, “Three dimensional display apparatus,” U.S. patent 2,967,905 (January 13, 1958).
109. “3D Display from ITT Labs,” Aviation Week, 66–67 (October 31, 1960).
110. L. D. Sher, “Three-dimensional display,” U.S. patent 4,130,832 (December 19, 1978).
111. R. Hartwig, “Vorrichtung zur Dreidimensionalen Abbildung in Einem Zylindersymmetrischen Abbildungsraum,” DE patent 2622802 C2 (1976).
112. F. Garcia Jr. and R. D. Williams, “Real time three dimensional display with angled rotating screen and method,” U.S. patent 5,042,909 (August 27, 1991).
113. M. Lasher, P. Soltan, W. Dahlke, and N. Acantilado, “Laser projected 3D volumetric displays,” Proc. SPIE 2650, 285 (1996). [CrossRef]
114. J. Geng, “A volumetric 3D display based on a DLP projection engine,” Displays 34, 39–48 (2013). [CrossRef]
115. J. Geng, “Method and apparatus for high resolution three dimensional display,” U.S. patent 6,064,423 (May 16, 2000).
116. J. Geng, “Method and apparatus for an interactive volumetric three dimensional display,” U.S. patent 7,098,872 (August 29, 2006).
117. J. Geng, “Method and apparatus for an interactive volumetric three dimensional display,” U.S. patent 6,900,779 (May 31, 2005).
118. J. Geng, “Method and apparatus for generating structural pattern illumination,” U.S. patent 6,937,348 (August 30, 2005).
119. R. J. Schipper, “Three-dimensional display,” U.S. patent 3,097,261 (July 9, 1963).
120. E. P. Berlin Jr., “Three-dimensional display,” U.S. patent 4,160,973 (July 10, 1979).
121. R. D. Ketchpel, “Three-dimensional display cathode ray tube,” U.S. patent 3,140,415 (July 7, 1964).
122. B. Blundell and A. Schwarz, Volumetric Three-Dimensional Display Systems (Wiley, 2000).
123. B. G. Blundell, “Three dimensional display system,” U.S. patent 5,703,606 (December 30, 1997).
124. R. Stahl and M. Jayapala, “Holographic displays and smart lenses,” Opt. Photon. 6, 39–42 (2011). [CrossRef]
125. Y.-P. Huang, “Auto-stereoscopic 3D display and its future developments,” http://www.cdr.ust.hk/Webinar (SID, 2012).
126. D. E. Smalley, Q. Y. J. Smithwick, and V. M. Bove, “Holographic video display based on guided-wave acousto-optic devices,” Proc. SPIE 6488, 64880L (2007). [CrossRef]
127. D. E. Smalley, Q. Y. Smithwick, V. M. Bove, J. Barabas, and S. Jolly, “Anisotropic leaky-mode modulator for holographic video displays,” Nature 498, 313–317 (2013). [CrossRef]
128. M. Klug, T. Burnett, A. Fancello, A. Heath, K. Gardner, S. O’Connell, and C. Newswanger, “A scalable, collaborative, interactive light-field display system,” in SID Symposium Digest of Technical Papers (2013), Vol. 44, Issue 1, pp. 412–415.
129. Zebra Imaging, www.zebraimaging.com.
130. M. Lucente, “The first 20 years of holographic video—and the next 20,” in SMPTE 2nd Annual International Conference on Stereoscopic 3D for Media and Entertainment, New York, June 21–23, 2011.

131. C. Slinger, C. Cameron, and M. Stanley, “Computer-generated holography as a generic display technology,” Computer 38(8), 46–53 (2005). [CrossRef]
132. QinetiQ, www.qinetiq.com.
133. SeeReal, http://www.seereal.com/.
134. S. Reichelt, R. Häussler, N. Leister, G. Fütterer, H. Stolle, and A. Schwerdtner, “Holographic 3-D displays—electro-holography within the grasp of commercialization,” in Advances in Lasers and Electro Optics, N. Costa and A. Cartaxo, eds. (INTECH, 2012), Chap. 29.
135. S. Reichelt, R. Häussler, G. Fütterer, and N. Leister, “Depth cues in human visual perception and their realization in 3D displays,” Proc. SPIE 7690, 76900B (2010). [CrossRef]
136. IMEC Holographic Display, http://www.imec.be/ScientificReport/SR2010/2010/1159126.html.
137. P.-A. Blanche, A. Bablumian, R. Voorakaranam, C. Christenson, W. Lin, T. Gu, D. Flores, P. Wang, W.-Y. Hsieh, M. Kathaperumal, B. Rachwal, O. Siddiqui, J. Thomas, R. A. Norwood, M. Yamamoto, and N. Peyghambarian, “Holographic three-dimensional telepresence using large-area photorefractive polymer,” Nature 468, 80–83 (2010). [CrossRef]
138. CNN, www.cnn.com.
139. Holographic screen, http://en.wikipedia.org/wiki/Holographic_screen.
140. Vermeer, http://research.microsoft.com/en-us/projects/vermeer/.
141. Musion Eyeliner, http://www.eyeliner3d.com/.
142. ViZoo, http://www.vizoo.com.
143. P. Simonson and M. Corell, “Method and arrangement for projecting images,” U.S. patent 7,184,209 (February 27, 2007).
144. Musion Systems Ltd, http://www.musion.co.uk.
145. “Pepper’s ghost,” http://en.wikipedia.org/wiki/Pepper%27s_ghost.
146. FogScreen, http://www.fogscreen.com/.
147. UK FogScreen, http://ukfogscreen.com/.
148. A. Kataoka and Y. Kasahara, “Method and apparatus for a fog screen and image-forming method using the same,” U.S. patent 5,270,752 (December 14, 1993).
149. H. Hasegawa, A. Yamamoto, T. Fujimori, and N. Uchibori, “Image display system and method, and screen device,” U.S. patent 8,157,382 (April 17, 2012).
150. C. D. Dyner, “Method and system for free-space imaging display and interface,” U.S. patent 6,857,746 (February 22, 2005).
151. S. H. Pevnick, “Water supply method and apparatus for a fountain,” U.S. patent 6,557,777 (May 6, 2003).
152. Graphical Waterfalls, http://pevnickdesign.com/.
153. P. Richards, “MIT architects design building with digital water walls,” MIT News Office (July 12, 2007).
154. O. Hilliges, D. Kim, S. Izadi, M. Weiss, and A. Wilson, “HoloDesk: direct 3D interactions with a situated see-through display,” in Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems (2012), pp. 2421–2430.
155. N. Holliman, Three-Dimensional Display Systems (Taylor and Francis, 2006).

Jason Geng received his doctoral degree in Electrical Engineering from George Washington University in 1990. Since then, he has led a variety of research, development, and commercialization efforts on 3D imaging technologies. He founded a high-tech company that specialized in developing 3D imaging technologies and products. He has served as a principal investigator and major contributor for over $35 million in research and product efforts. He has published 142 academic papers and one book, and is an inventor of 33 issued patents. He has received prestigious national honors, including the Tibbetts Award from the U.S. Small Business Administration and the “Scientist Helping America” award from the Defense Advanced Research Projects Agency, and was ranked 257 in INC. magazine’s “INC. 500 List.” Dr. Geng currently serves as the vice president of the IEEE Intelligent Transportation Systems Society (ITSS). He also leads Intelligent Transportation System standards efforts as chairman of the ITSS standards committee. His recent publication on 3D imaging technology (“Structured light 3D surface imaging: a tutorial,” www.opticsinfobase.org/aop/abstract.cfm?uri=aop-3-2-128) was among the top downloaded articles in the OSA Advances in Optics and Photonics (AOP) journal.

OCIS Codes
(090.2870) Holography : Holographic display
(110.6880) Imaging systems : Three-dimensional image acquisition
(120.2040) Instrumentation, measurement, and metrology : Displays

ToC Category:
Imaging Systems

History
Original Manuscript: May 28, 2013
Revised Manuscript: September 17, 2013
Manuscript Accepted: September 30, 2013
Published: November 22, 2013

Virtual Issues
(2013) Advances in Optics and Photonics

Citation
Jason Geng, "Three-dimensional display technologies," Adv. Opt. Photon. 5, 456-535 (2013)
http://www.opticsinfobase.org/aop/abstract.cfm?URI=aop-5-4-456



References

  1. E. N. Marieb and K. N. Hoehn, Human Anatomy & Physiology (Pearson, 2012).
  2. T. Okoshi, Three-Dimensional Imaging Techniques (Academic, 1976).
  3. B. Blundell and A. Schwarz, Volumetric Three-Dimensional Display Systems (Wiley, 2000).
  4. D. Gabor, “Holography 1948–1971,” Proc. IEEE 60, 655–668 (1972). [CrossRef]
  5. S. Benton and M. Bove, Holographic Imaging (Wiley Interscience, 2008).
  6. E. Lueder, 3D Displays (Wiley, 2012).
  7. R. Hainich and O. Bimber, Displays: Fundamentals & Applications (Peters/CRC Press, 2011).
  8. M. Levoy and P. Hanrahan, “Light field rendering,” in Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH (1996), pp. 31–42.
  9. W. Matusik and H. Pfister, “3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes,” ACM Trans. Graph. 23, 814–824 (2004). [CrossRef]
  10. N. Dodgson, “Autostereoscopic 3D displays,” Computer 38(8), 31–36 (2005). [CrossRef]
  11. G. Favalora, “Volumetric 3D displays and application infrastructure,” Computer 38(8), 37–44 (2005). [CrossRef]
  12. E. Downing, L. Hesselink, J. Ralston, and R. Macfarlane, “A three color, solid-state three dimensional display,” Science 273, 1185–1189 (1996). [CrossRef]
  13. A. Jones, I. McDowall, H. Yamada, M. Bolas, and P. Debevec, “Rendering for an interactive 360° light field display,” in SIGGRAPH 2007 Papers (2007), paper 40.
  14. B. Javidi and F. Okano, Three Dimensional Television, Video, and Display Technologies (Springer, 2011).
  15. J. Geng, “Volumetric 3D display for radiation therapy planning,” J. Disp. Technol. 4, 437–450 (2008). [CrossRef]
  16. J. Geng, “Structured-light 3D surface imaging: a tutorial,” Adv. Opt. Photon. 3, 128–160 (2011). [CrossRef]
  17. Y. Takaki and N. Nago, “Multi-projection of lenticular displays to construct a 256-view super multi-view display,” Opt. Express 18, 8824–8835 (2010). [CrossRef]
  18. S. Pastoor and M. Wöpking, “3-D displays: a review of current technologies,” Displays 17, 100–110 (1997). [CrossRef]
  19. J. Geng, “Multiview three-dimensional display using single projector,” Displays (submitted).
  20. A. Sullivan, “3 Deep: new displays render images you can almost reach out and touch,” IEEE Spectrum 42(4), 30–35 (2005).
  21. D. MacFarlane, “Volumetric three dimensional display,” Appl. Opt. 33, 7453–7457 (1994). [CrossRef]
  22. J.-Y. Son, B. Javidi, and K.-D. Kwack, “Methods for displaying three-dimensional images,” Proc. IEEE 94, 502–523 (2006). [CrossRef]
  23. J.-Y. Son, B. Javidi, S. Yano, and K.-H. Choi, “Recent developments in 3-D imaging technologies,” J. Disp. Technol. 6, 394–403 (2010). [CrossRef]
  24. H. Urey, K. V. Chellappan, E. Erden, and P. Surman, “State of the art in stereoscopic and autostereoscopic displays,” Proc. IEEE 99, 540–555 (2011). [CrossRef]
  25. B. Lee, “Three-dimensional displays, past and present,” Phys. Today 66(4), 36–41 (2013). [CrossRef]
  26. N. S. Holliman, N. A. Dodgson, G. E. Favalora, and L. Pockett, “Three-dimensional displays: a review and applications analysis,” IEEE Trans Broadcast. 57, 362–371 (2011). [CrossRef]
  27. J. Hong, Y. Kim, H.-J. Choi, J. Hahn, J.-H. Park, H. Kim, S.-W. Min, N. Chen, and B. Lee, “Three-dimensional display technologies of recent interest: principles, status, and issues [Invited],” Appl. Opt. 50, H87–H115 (2011). [CrossRef]
  28. D. M. Hoffman, A. R. Girshick, K. Akeley, and M. S. Banks, “Vergence–accommodation conflicts hinder visual performance and cause visual fatigue,” J. Vis. 8(3):33, 1–30 (2008). [CrossRef]
  29. E. Adelson and J. Bergen, “The plenoptic function and the elements of early vision,” in Computational Models of Visual Processing (MIT, 1991), pp. 3–20.
  30. S. E. B. Sorensen, P. S. Hansen, and N. L. Sorensen, “Method for recording and viewing stereoscopic images in color using multichrome filters,” U.S. patent6,687,003 (February3, 2004).
  31. E. A. Edirisinghe and J. Jiang, “Stereo imaging, an emerging technology,” in Proceedings of SSGRR, L’Aquila, July31–August 6, 2000.
  32. M. Coltheart, “The persistences of vision,” Phil. Trans. R. Soc. B 290, 57–69 (1980). [CrossRef]
  33. “Persistence of vision,” http://en.wikipedia.org/wiki/Persistence_of_vision .
  34. O. Cakmakci and J. Rolland, “Head-worn displays: a review,” J. Disp. Technol. 2, 199–216 (2006). [CrossRef]
  35. D. Cheng, Y. Wang, H. Hua, and M. M. Talha, “Design of an optical see-through headmounted display with a low f-number and large field of view using a free-form prism,” Appl. Opt. 48, 2655–2668 (2009). [CrossRef]
  36. T. Honda, Y. Kajiki, K. Susami, T. Hamaguchi, T. Endo, T. Hatada, and T. Fujii, “A display system for natural viewing of 3D images,” in Three-Dimensional Television, Video and Display Technology (Springer, 2010), pp. 461–487.
  37. M. Lucente, “Computational holographic bandwidth compression,” IBM Syst. J. 35, 349–365 (1996). [CrossRef]
  38. M. Faraday, “Thoughts on ray vibrations,” Philos. Mag. 28, 345–350 (1846).
  39. A. Gershun, “The light field,” Moscow, 1936, P. Moon and G. Timoshenko, translators, J. Math. Phys. XVIII, 51–151 (1939).
  40. S. J. Gortler, R. Grzeszczuk, R. Szeliski, and M. Cohen, “The lumigraph,” in Proceedings of ACM SIGGRAPH (1996), pp. 43–54.
  41. F. E. Ives, “A novel stereogram,” J. Franklin Inst. 153, 51–52 (1902). [CrossRef]
  42. T. Peterka, R. L. Kooima, D. J. Sandin, A. Johnson, J. Leigh, and T. A. DeFanti, “Advances in the Dynallax solid-state dynamic parallax barrier autostereoscopic visualization display system,” IEEE Trans. Vis. Comput. Graph. 14, 487–499 (2008). [CrossRef]
  43. “Nintendo 3DS,” Nintendo, http://www.nintendo.com/3ds/features/ .
  44. T. Kanebako and Y. Takaki, “Time-multiplexing display module for high-density directional display,” Proc. SPIE 6803, 68030P (2008). [CrossRef]
  45. D. S. St. John, “Holographic color television record system,” U.S. patent3,813,685 (May28, 1974).
  46. T. Endo, Y. Kajiki, T. Honda, and M. Sato, “Cylindrical 3-D video display observable from all directions,” in Proceedings of Pacific Graphics (2000), pp. 300–306.
  47. T. Yendo, N. Kawakami, and S. Tachi, “Seelinder: the cylindrical light field display,” in ACM SIGGRAPH (2005), paper 16.
  48. D. Lanman, M. Hirsch, Y. Kim, and R. Raskar, “Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization,” ACM Trans. Graph. 29, 163 (2010). [CrossRef]
  49. G. Wetzstein, D. Lanman, M. Hirsch, and R. Raskar, “Tensor displays: compressive light field synthesis using multilayer displays with directional backlighting,” ACM Trans. Graph. 31, 80 (2012). [CrossRef]
  50. W. Hess, “Stereoscopic picture,” U.S. patent1,128,979 (February16, 1915).
  51. C. van Berkel, D. W. Parker, and A. R. Franklin, “Multiview 3D LCD,” Proc. SPIE 2653, 32 (1996). [CrossRef]
  52. C. van Berkel and J. A. Clarke, “Characterization and optimization of 3D-LCD module design,” Proc. SPIE 3012, 179 (1997). [CrossRef]
  53. NLT, www.nlt-technologies.co.jp/en/ .
  54. A. Schwerdtner and H. Heidrich, “Dresden 3D display (D4D),” Proc. SPIE 3295, 203 (1998). [CrossRef]
  55. Y.-P. Huang, C.-W. Chen, T.-C. Shen, and J.-F. Huang, “Autostereoscopic 3D display with scanning multi-electrode driven liquid crystal (MeD-LC) lens,” 3D Res. 1, 39–42 (2010). [CrossRef]
  56. G. Lippmann, “Épreuves réversibles. Photographies intégrales,” C. R. Acad. Sci. 146, 446–451 (1908).
  57. H. Takahashi, H. Fujinami, and K. Yamada, “Holographic lens array increases the viewing angle of 3D displays,” SPIE Newsroom (June6, 2006).
  58. A. Stern and B. Javidi, “3D image sensing, visualization, and processing using integral imaging,” Proc. IEEE 94, 591–607 (2006). [CrossRef]
  59. J.-H. Park, Y. Kim, J. Kim, S.-W. Min, and B. Lee, “Three-dimensional display scheme based on integral imaging with three-dimensional information processing,” Opt. Express 12, 6020–6032 (2004). [CrossRef]
  60. H. Liao, T. Dohi, and K. Nomura, “Autostereoscopic 3D display with long visualization depth using referential viewing area based integral photography,” IEEE Trans. Vis. Comput. Graph. 17, 1690–1701 (2011). [CrossRef]
  61. O. S. Cossairt, M. Thomas, and R. K. Dorval, “Optical scanning assembly,” U.S. patent7,864,419 (June8, 2004).
  62. E. Goulanian and A. F. Zerrouk, “Apparatus and system for reproducing 3-dimensional images,” U.S. patent7,944,465 (May17, 2011).
  63. L. Bogaert, Y. Meuret, S. Roelandt, A. Avci, H. De Smet, and H. Thienpont, “Demonstration of a multiview projection display using decentered microlens arrays,” Opt. Express 18, 26092–26106 (2010). [CrossRef]
  64. G. J. Woodgate, D. Ezra, J. Harrold, N. S. Holliman, G. R. Jones, and R. R. Moseley, “Observer tracking autostereoscopic 3D display systems,” Proc. SPIE 3012, 187 (1997). [CrossRef]
  65. M. W. Jones, G. P. Nordin, J. H. Kulick, R. G. Lindquist, and S. T. Kowel, “A liquid crystal display based implementation of a real-time ICVision holographic stereogram display,” Proc. SPIE 2406, 154 (1995). [CrossRef]
  66. T. Toda, S. Takahashi, and F. Iwata, “3D video system using grating image,” in Proc. SPIE 2406, 191 (1995). [CrossRef]
  67. E. Schulze, “Synthesis of moving holographic stereograms with high-resolution spatial light modulators,” Proc. SPIE 2406, 124 (1995). [CrossRef]
  68. D. Fattal, Z. Peng, T. Tran, S. Vo, M. Fiorentino, J. Brug, and R. G. Beausoleil, “A multi-directional backlight for a wide-angle glasses-free three-dimensional display,” Nature 495, 348–351 (2013). [CrossRef]
  69. O. S. Cossairt, J. Napoli, S. L. Hill, R. K. Dorval, and G. E. Favalora, “Occlusion-capable multiview volumetric three-dimensional display,” Appl. Opt. 46, 1244–1250 (2007). [CrossRef]
  70. Actuality 3D Display, http://actuality-medical.com .
  71. Holografika, www.holografika.com .
  72. T. Balogh and P. T. Kovács, “Real-time 3D light field transmission,” Proc. SPIE 7724, 772406 (2010). [CrossRef]
  73. G. Favalora and O. Cossairt, “Theta-parallax-only (TPO) displays,” U.S. patent7,364,300 B2 (April24, 2008).
  74. S. Uchida and Y. Takaki, “360-degree, three-dimensional table-screen display using small array of high-speed projectors,” Proc. SPIE 8288, 82880D (2012). [CrossRef]
  75. C. H. Krah, “Three-dimensional display system,” U.S. patent7,843,449 (November30, 2010).
  76. Y.-H. Tao, Q.-H. Wang, J. Gu, and W.-X. Zhao, “Autostereoscopic three-dimensional projector based on two parallax barriers,” Opt. Lett. 34, 3220–3222 (2009). [CrossRef]
  77. Y. Kim, K. Hong, J. Yeom, J. Hong, J.-H. Jung, Y. W. Lee, J.-H. Park, and B. Lee, “A frontal projection-type three-dimensional display,” Opt. Express 20, 20130–20138 (2012). [CrossRef]
  78. Y. Kajiki, H. Yoshikawa, and T. Honda, “Hologram-like video images by 45-view stereoscopic display,” Proc. SPIE 3012, 154 (1997). [CrossRef]
  79. S. Hentschke, “Autostereoscopic reproduction system for 3-D displays,” U.S. patent7,839,430 (November232010).
  80. N.-Y. Wang, H.-J. Lee, and C.-H. Tsai, “Parallax barrier type autostereoscopic display device,” U.S. patent6,727,866 (April27, 2004).
  81. B. Si, “Stereoscopic image display system and method of controlling the same,” U. S. patent8,427,746 B2 (April23, 2013).
  82. http://www.hhi.fraunhofer.de/fields-of-competence/interactive-media-human-factors/products-services/stereoscopic-displays/free2c-desktop-display.html
  83. P. Surman, R. S. Brar, I. Sexton, and K. Hopf, “MUTED and HELIUM3D autostereoscopic displays,” in IEEE International Conference on Multimedia and Expo (ICME) (2010), pp. 1594–1599.
  84. S. H. Ju, M.-D. Kim, M.-S. Park, K.-T. Kim, J.-H. Park, and K.-M. Lim, “Viewer’s eye position estimation using single camera,” in SID Syposium Digest of Technical Papers (2013), pp. 671–674.
  85. H. Y. Wu, C. H. Chang, and C. L. Lin, “Dead-zone-free 2D/3D switchable barrier type 3D display,” in SID Syposium Digest of Technical Papers (2013), pp. 675–677.
  86. J. C. Schultz, R. Brott, M. Sykora, W. Bryan, and T. Fukami, “Full resolution autostereoscopic 3D display for mobile applications,” in SID Symposium Digest of Technical Papers (2009), Vol. 40, pp. 127–130.
  87. J. C. Schultz and M. J. Sykora, “Directional backlight with reduced crosstalk,” U.S. patent application2011/0285927 A1 (May24, 2010).
  88. M. Minami, K. Yokomizo, and Y. Shimpuku, “Glasses-free 2D/3D switchable display,” in SID Symposium Digest of Technical Papers (2011), pp. 468–471.
  89. M. Minami, “Light source device and display,” U.S. patent application2012/0195072 A1 (August2, 2012).
  90. C. W. Wei and Y. P. Huang, “240  Hz 4-zones sequential backlight,” in SID Symposium Digest (2010), p. 863.
  91. H. Kwon and H. J. Choi, “A time-sequential multiview autostereoscopic display without resolution loss using a multidirectional backlight unit and a LCD panel,” Proc. SPIE 8288, 82881Y (2012). [CrossRef]
  92. E. A. Downing, “Method and system for three-dimensional display of information based on two photon upconversion,” U.S. patent5,684,621 (November4, 1997).
  93. J. D. Lewis, C. M. Verber, and R. B. McGhee, “A true three-dimensional display,” IEEE Trans. Electron Devices 18, 724–732 (1971). [CrossRef]
  94. K. Langhans, C. Guill, E. Rieper, K. Oltmann, and D. Bahr, “Solid Felix: a static volume 3D-laser display,” IS&T Reporter 18(1), 1–9 (2003).
  95. E. J. Korevaar and B. Spiver, “Three dimensional display apparatus,” U.S. patent4,881,068 (November14, 1989).
  96. S. K. Nayar and V. N. Anand, “3D display using passive optical scatterers,” Computer 40(7), 54–63 (2007). [CrossRef]
  97. J. Geng, “Volumetric 3D display system with static screen,” NASA Tech Briefs (NASA, 2011), Vol. 35, p. 40, http://www.techbriefs.com/component/content/article/9432 .
  98. H. Kimura, T. Uchiyama, and H. Yoshikawa, “Laser produced 3D display in the air,” in ACM SIGGRAPH (2006), p. 20.
  99. M. Momiuchi and H. Kimura, “Device for forming visible image in air,” U.S. patent 7,533,995 (May 19, 2009).
  100. D. Wyatt, “A volumetric 3D LED display” (MIT, 2005), http://web.mit.edu/6.111/www/f2005/projects/wyatt_Project_Design_Presentation.pdf.
  101. L. Sadovnik and A. Rizkin, “3D volume visualization display,” U.S. patent 5,764,317 (June 9, 1998).
  102. A. Sullivan, “Multi-planar volumetric display system and method of operation using multi-planar interlacing,” U.S. patent 6,806,849 (October 19, 2004).
  103. LightSpace Technologies, www.lightspacetech.com.
  104. EuroLCDs, www.eurolcds.com.
  105. R. S. Gold and J. E. Freeman, “Layered display system and method for volumetric presentation,” U.S. patent 5,813,742 (September 29, 1998).
  106. M. S. Leung, N. A. Ives, and G. Eng, “Three-dimensional real-image volumetric display system and method,” U.S. patent 5,745,197 (April 28, 1998).
  107. J.-P. Koo and D.-S. Kim, “Volumetric three-dimensional (3D) display system using transparent flexible display panels,” U.S. patent application 2007/0009222 A1 (January 11, 2007).
  108. M. Hirsch, “Three dimensional display apparatus,” U.S. patent 2,967,905 (January 13, 1958).
  109. “3D Display from ITT Labs,” Aviation Week, 66–67 (October 31, 1960).
  110. L. D. Sher, “Three-dimensional display,” U.S. patent 4,130,832 (December 19, 1978).
  111. R. Hartwig, “Vorrichtung zur Dreidimensionalen Abbildung in Einem Zylindersymmetrischen Abbildungsraum” [Apparatus for three-dimensional imaging in a cylindrically symmetric imaging space], DE patent 2622802 C2 (1976).
  112. F. Garcia and R. D. Williams, “Real time three dimensional display with angled rotating screen and method,” U.S. patent 5,042,909 (August 27, 1991).
  113. M. Lasher, P. Soltan, W. Dahlke, and N. Acantilado, “Laser projected 3D volumetric displays,” Proc. SPIE 2650, 285 (1996). [CrossRef]
  114. J. Geng, “A volumetric 3D display based on a DLP projection engine,” Displays 34, 39–48 (2013). [CrossRef]
  115. J. Geng, “Method and apparatus for high resolution three dimensional display,” U.S. patent 6,064,423 (May 16, 2000).
  116. J. Geng, “Method and apparatus for an interactive volumetric three dimensional display,” U.S. patent 7,098,872 (August 29, 2006).
  117. J. Geng, “Method and apparatus for an interactive volumetric three dimensional display,” U.S. patent 6,900,779 (May 31, 2005).
  118. J. Geng, “Method and apparatus for generating structural pattern illumination,” U.S. patent 6,937,348 (August 30, 2005).
  119. R. J. Schipper, “Three-dimensional display,” U.S. patent 3,097,261 (July 9, 1963).
  120. E. P. Berlin, “Three-dimensional display,” U.S. patent 4,160,973 (July 10, 1979).
  121. R. D. Ketchpel, “Three-dimensional display cathode ray tube,” U.S. patent 3,140,415 (July 7, 1964).
  122. B. Blundell and A. Schwarz, Volumetric Three-Dimensional Display Systems (Wiley, 2000).
  123. B. G. Blundell, “Three dimensional display system,” U.S. patent 5,703,606 (December 30, 1997).
  124. R. Stahl and M. Jayapala, “Holographic displays and smart lenses,” Opt. Photon. 6, 39–42 (2011). [CrossRef]
  125. Y.-P. Huang, “Auto-stereoscopic 3D display and its future developments,” http://www.cdr.ust.hk/Webinar (SID, 2012).
  126. D. E. Smalley, Q. Y. J. Smithwick, and V. M. Bove, “Holographic video display based on guided-wave acousto-optic devices,” Proc. SPIE 6488, 64880L (2007). [CrossRef]
  127. D. E. Smalley, Q. Y. Smithwick, V. M. Bove, J. Barabas, and S. Jolly, “Anisotropic leaky-mode modulator for holographic video displays,” Nature 498, 313–317 (2013). [CrossRef]
  128. M. Klug, T. Burnett, A. Fancello, A. Heath, K. Gardner, S. O’Connell, and C. Newswanger, “A scalable, collaborative, interactive light-field display system,” in SID Symposium Digest of Technical Papers (2013), Vol. 44, Issue 1, pp. 412–415.
  129. Zebra Imaging, www.zebraimaging.com.
  130. M. Lucente, “The first 20 years of holographic video—and the next 20,” in SMPTE 2nd Annual International Conference on Stereoscopic 3D for Media and Entertainment, New York, June 21–23, 2011.
  131. C. Slinger, C. Cameron, and M. Stanley, “Computer-generated holography as a generic display technology,” Computer 38(8), 46–53 (2005). [CrossRef]
  132. QinetiQ, www.qinetiq.com.
  133. SeeReal, http://www.seereal.com/.
  134. S. Reichelt, R. Häussler, N. Leister, G. Fütterer, H. Stolle, and A. Schwerdtner, “Holographic 3-D displays—electro-holography within the grasp of commercialization,” in Advances in Lasers and Electro Optics, N. Costa and A. Cartaxo, eds. (INTECH, 2012), Chap. 29.
  135. S. Reichelt, R. Häussler, G. Fütterer, and N. Leister, “Depth cues in human visual perception and their realization in 3D displays,” Proc. SPIE 7690, 76900B (2010). [CrossRef]
  136. IMEC Holographic Display, http://www.imec.be/ScientificReport/SR2010/2010/1159126.html.
  137. P.-A. Blanche, A. Bablumian, R. Voorakaranam, C. Christenson, W. Lin, T. Gu, D. Flores, P. Wang, W.-Y. Hsieh, M. Kathaperumal, B. Rachwal, O. Siddiqui, J. Thomas, R. A. Norwood, M. Yamamoto, and N. Peyghambarian, “Holographic three-dimensional telepresence using large-area photorefractive polymer,” Nature 468, 80–83 (2010). [CrossRef]
  138. CNN, www.cnn.com.
  139. Holographic screen, http://en.wikipedia.org/wiki/Holographic_screen.
  140. Vermeer, http://research.microsoft.com/en-us/projects/vermeer/.
  141. Musion Eyeliner, http://www.eyeliner3d.com/.
  142. ViZoo, http://www.vizoo.com.
  143. P. Simonson and M. Corell, “Method and arrangement for projecting images,” U.S. patent 7,184,209 (February 27, 2007).
  144. Musion Systems Ltd, http://www.musion.co.uk.
  145. “Pepper’s ghost,” http://en.wikipedia.org/wiki/Pepper%27s_ghost.
  146. FogScreen, http://www.fogscreen.com/.
  147. UK FogScreen, http://ukfogscreen.com/.
  148. A. Kataoka and Y. Kasahara, “Method and apparatus for a fog screen and image-forming method using the same,” U.S. patent 5,270,752 (December 14, 1993).
  149. H. Hasegawa, A. Yamamoto, T. Fujimori, and N. Uchibori, “Image display system and method, and screen device,” U.S. patent 8,157,382 (April 17, 2012).
  150. C. D. Dyner, “Method and system for free-space imaging display and interface,” U.S. patent 6,857,746 (February 22, 2005).
  151. S. H. Pevnick, “Water supply method and apparatus for a fountain,” U.S. patent 6,557,777 (May 6, 2003).
  152. Graphical Waterfalls, http://pevnickdesign.com/.
  153. P. Richards, “MIT architects design building with digital water walls,” MIT News Office (July 12, 2007).
  154. O. Hilliges, D. Kim, S. Izadi, M. Weiss, and A. Wilson, “HoloDesk: direct 3D interactions with a situated see-through display,” in Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems (2012), pp. 2421–2430.
  155. N. Holliman, Three-Dimensional Display Systems (Taylor and Francis, 2006).