## Analysis of image distortion based on light ray field by multi-view and horizontal parallax only integral imaging display

Optics Express, Vol. 20, Issue 21, pp. 23755-23768 (2012)

http://dx.doi.org/10.1364/OE.20.023755


### Abstract

Three-dimensional image distortion caused by a mismatch between autostereoscopic displays and contents is analyzed. For a given three-dimensional object scene, the original light ray field in the contents and the deformed one produced by the autostereoscopic display are calculated. From the deformation of the light ray field, the distortion of the resultant three-dimensional image is finally deduced. The light ray field approach enables a generalized distortion analysis across different autostereoscopic display techniques. The analysis result is verified experimentally when multi-view contents are applied to a multi-view display of non-matching parameters and to a horizontal parallax only integral imaging display.

© 2012 OSA

## 1. Introduction


## 2. Light ray field

### 2.1 Concept

In this paper, the light ray field is represented by *L*(*x*, *y*, *θ*<sub>x</sub>, *θ*<sub>y</sub>; *z*<sub>r</sub>) as shown in Fig. 1, where (*x*, *y*) represents the spatial position in a reference plane at *z* = *z*<sub>r</sub> and (*θ*<sub>x</sub>, *θ*<sub>y</sub>) represents the propagation angle measured with respect to the *x* and *y* axes, respectively [22]. Only the (*x*, *θ*<sub>x</sub>) distribution is considered, without (*y*, *θ*<sub>y</sub>), for simplicity. The reference plane is also assumed to be located at *z*<sub>r</sub> = 0 without loss of generality. Figure 2 shows an example of the light ray field of a single object point at a position (*x*<sub>1</sub>, *z*<sub>1</sub>). As shown in Fig. 2, the corresponding light ray field can be represented by a straight line intersecting the *x*-axis at *x*<sub>1</sub> with a slanting angle of −1/*z*<sub>1</sub> in the *x*-*θ*<sub>x</sub> plot. For a general 3D object composed of a 3D object point cloud, the light ray field is given by a collection of such slanted lines.
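As a concrete illustration (a minimal sketch with assumed example numbers, not taken from the paper), the line traced by a single object point in the *x*-*θ*<sub>x</sub> plot can be sampled directly:

```python
# Minimal sketch: light ray field line of a single object point (x1, z1).
# A ray that leaves the point with paraxial angle theta_x crosses the
# z = 0 reference plane at x = x1 - theta_x * z1, i.e. a straight line
# with x-intercept x1 and slope -1/z1 in the x-theta_x plot.

def point_line(x1, z1, thetas):
    """(x, theta_x) samples of the light ray field line of point (x1, z1)."""
    return [(x1 - t * z1, t) for t in thetas]

samples = point_line(x1=10.0, z1=50.0, thetas=[-0.1, 0.0, 0.1])
(xa, ta), _, (xb, tb) = samples
print(round((tb - ta) / (xb - xa), 6))  # slope of the line: -1/z1 = -0.02
```

At *θ*<sub>x</sub> = 0 the sample sits exactly at the x-intercept *x*<sub>1</sub>, matching the description of Fig. 2.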

### 2.2 Light ray field captured in MV contents

MV contents are captured by multiple cameras located at lateral positions *x*<sub>c</sub> at a specific distance from the 3D object scene. Figure 3 shows the captured rays in the MV contents, where *x*<sub>cn</sub> represents the lateral position of the *n*-th camera and *z*<sub>c</sub> is the camera distance from the reference plane. The *x*-*θ*<sub>x</sub> plot of the corresponding light ray field at the reference plane is shown in Fig. 4. As shown in Fig. 4, a single view image corresponds to a single line with a slanting angle −1/*z*<sub>c</sub> in the light ray field. Therefore MV contents with *N* views can be represented by *N* slanted lines in the light ray field. Note that only a small linear portion of the entire light ray field information is included in the MV contents.
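The sampling described above can be sketched as follows (assumed example numbers; the camera distance and spacing are placeholders, not the paper's experimental values):

```python
# Sketch (assumed numbers): each camera n at (x_cn, z_c) records the rays
# satisfying theta_x = (x_cn - x) / z_c at the reference plane, so N-view
# MV contents sample N parallel slanted lines of slope -1/z_c in the
# x-theta_x plot.

def captured_line(x_cn, z_c, xs):
    """(x, theta_x) samples of the line recorded by the camera at x_cn."""
    return [(x, (x_cn - x) / z_c) for x in xs]

z_c, dx_c = 600.0, 29.0                  # assumed camera distance and spacing
for n in range(-1, 2):                   # three of the N cameras
    (x0, t0), (x1, t1) = captured_line(n * dx_c, z_c, xs=[0.0, 10.0])
    print(f"camera {n:+d}: x-intercept {x0 + t0 * z_c:.0f} mm, "
          f"slope {(t1 - t0) / (x1 - x0):.5f} /mm")
```

Each line's x-intercept is the camera position *x*<sub>cn</sub>, and all lines share the slope −1/*z*<sub>c</sub>.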

### 2.3 Light ray field reconstructed by MV and HPO InIm display

Both the MV display and the HPO InIm display reconstruct light rays only at a limited set of propagation angles *θ*<sub>x</sub>. Figure 6 shows the *x*-*θ*<sub>x</sub> plot of the reconstructed light ray field at the ray guiding optics plane for the two display methods. In the case of the MV display shown in Fig. 6(a), the reconstructed light ray field is represented as a collection of slanted lines, each of which corresponds to the light rays converging at a viewpoint. The slanting angle and the *x*-axis intersection of the lines are given by −1/*z*<sub>v</sub> and *x*<sub>vn</sub>, respectively, where *z*<sub>v</sub> and *x*<sub>vn</sub> are the distance and the lateral position of the *n*-th viewpoint. On the other hand, in the case of the HPO InIm display, the reconstructed light ray field is represented as a collection of horizontal lines, reflecting its parallel ray reconstruction characteristics as shown in Fig. 6(b). Note that an ideal 3D display would reconstruct the whole plane in the *x*-*θ*<sub>x</sub> plot. Figure 6 shows that the MV display and the HPO InIm display reconstruct only a few linear portions of the entire light ray field. Which portion of the light ray field is reconstructed depends on the display method and the system specifications.
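The contrast between the two reconstruction patterns can be sketched numerically (assumed example numbers, not the paper's system parameters):

```python
# Sketch (assumed numbers): angles theta_x reconstructed at a reference-plane
# position x by the two display types.  The MV display emits rays converging
# to viewpoints (slanted lines, slope -1/z_v in the x-theta_x plot); the
# HPO InIm display emits parallel ray groups (horizontal lines, theta_x
# independent of x).

def mv_angles(x, z_v, dx_v, views):
    """MV display: theta_x = (x_vn - x) / z_v for viewpoint x_vn = n * dx_v."""
    return [(n * dx_v - x) / z_v for n in views]

def hpo_inim_angles(x, dtheta, views):
    """HPO InIm display: constant theta_x = n * dtheta, independent of x."""
    return [n * dtheta for n in views]

views = range(-1, 2)
for x in (0.0, 10.0):
    print("MV       at x =", x, ":", [round(t, 4) for t in mv_angles(x, 600.0, 29.0, views)])
    print("HPO InIm at x =", x, ":", [round(t, 4) for t in hpo_inim_angles(x, 0.0489, views)])
```

Moving across the panel changes the MV angles but leaves the HPO InIm angles fixed, which is exactly the slanted-vs-horizontal line structure of Fig. 6.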

## 3. Analysis of image distortion

### 3.1 Light ray field and image distortion

Consider 4-view image contents captured with a camera distance *z*<sub>c</sub> and spacing Δ*x*<sub>c</sub>. When the 4-view image contents in Fig. 7(a) are supplied to a 4-view MV display with viewing distance *z*<sub>v</sub> = *z*<sub>c</sub> and viewpoint spacing Δ*x*<sub>v</sub> = Δ*x*<sub>c</sub>, the light ray field is reconstructed correctly as shown in Fig. 7(b), and the 3D image can be observed without distortion at the designed viewpoints. On the other hand, when the specifications of the display deviate from this optimal condition, the light ray field is reconstructed with deformation as shown in Fig. 7(c), and thus the 3D image is distorted accordingly, even when the observer is located at the designated viewpoint of the display. In the following, the distortions are analyzed in detail.

Representing the original 3D scene by an intensity distribution *f*(*x*; *z*) and the displayed one by *f'*(*x*; *z*), the original and deformed light ray fields *l* and *l'* are related to them by

$$l(x,\theta_x)=\int f(x+\theta_x z;\,z)\,dz, \quad (1)$$

$$l'(x,\theta_x)=\int f'(x+\theta_x z';\,z')\,dz'. \quad (2)$$

### 3.2 Image distortion caused by applying MV image contents to a mismatched MV display panel

When the viewpoint positions of the display coincide with the camera positions of the capture system, the reconstructed light ray field is identical to the original one, i.e. *l'*(*x*, *θ*<sub>x</sub>) = *l*(*x*, *θ*<sub>x</sub>), and thus the 3D image is displayed without distortion, i.e. *f'*(*x*; *z*) = *f*(*x*; *z*).

Suppose now that every viewpoint of the display is laterally shifted from the corresponding camera position by a distance *a*, as shown in Fig. 9(b). In this case, the light rays collected at a camera position (*x*<sub>cn</sub>, *z*<sub>c</sub>) are actually reconstructed such that they converge at a viewpoint located at a different lateral position (*x*<sub>vn</sub>, *z*<sub>v</sub>) = (*x*<sub>cn</sub> + *a*, *z*<sub>c</sub>). An original light ray propagating from a position *x* in the reference plane to a camera position (*x*<sub>cn</sub>, *z*<sub>c</sub>) with an angle (*x*<sub>cn</sub> − *x*)/*z*<sub>c</sub> is reconstructed as a light ray propagating from the same position *x* in the reference plane but to a different position (*x*<sub>vn</sub>, *z*<sub>v</sub>) = (*x*<sub>cn</sub> + *a*, *z*<sub>c</sub>) with an angle (*x*<sub>vn</sub> − *x*)/*z*<sub>v</sub> = (*x*<sub>cn</sub> − *x* + *a*)/*z*<sub>c</sub>. Therefore, in the reference plane, the direction of each light ray is rotated by an angle *a*/*z*<sub>c</sub>. In the *x*-*θ*<sub>x</sub> plot of the light ray field, this can be represented by shifting each slanted line by *a*/*z*<sub>c</sub> along the *θ*<sub>x</sub>-axis, resulting in *l'*(*x*, *θ*<sub>x</sub>) = *l*(*x*, *θ*<sub>x</sub> − (*a*/*z*<sub>c</sub>)). From Eqs. (1) and (2), this leads to

$$\int f'(x+\theta_x z';\,z')\,dz' = \int f\!\left(x+\left(\theta_x-\frac{a}{z_c}\right)z;\,z\right)dz. \quad (3)$$

Letting *u'* = *x* + *θ*<sub>x</sub>*z'*, we have

$$\int f'(u';\,z')\,dz' = \int f\!\left(u'-\theta_x z'+\theta_x z-\frac{a}{z_c}z;\,z\right)dz. \quad (4)$$

Since Eq. (4) should hold for all *θ*<sub>x</sub>, we have *z* = *z*′ and

$$f'(x;\,z) = f\!\left(x-\frac{a}{z_c}z;\,z\right), \quad (5)$$

where *u*′ is replaced by *x* for better clarity. Equation (5) indicates that each depth slice at *z* of the original 3D object is reconstructed at the same distance *z* but with a lateral shift of (*a*/*z*<sub>c</sub>)*z*, distorting the reconstructed 3D image along the lateral direction. The amount of the distortion is proportional to the object depth *z* and the position mismatch *a* between the camera position and the viewpoint. The distortion is visualized in the right inset of Fig. 9(b) for a hexahedron object.

Suppose next that the viewing distance *z*<sub>v</sub> of the display is shorter than the camera distance *z*<sub>c</sub> of the capture system by a distance *b*. In this case, a light ray propagating from a position *x* in the reference plane to a camera position (*x*<sub>cn</sub>, *z*<sub>c</sub>) with an angle (*x*<sub>cn</sub> − *x*)/*z*<sub>c</sub> is reconstructed as a light ray propagating from the same position *x* in the reference plane but to a different position (*x*<sub>vn</sub>, *z*<sub>v</sub>) = (*x*<sub>cn</sub>, *z*<sub>c</sub> − *b*) with an angle (*x*<sub>vn</sub> − *x*)/*z*<sub>v</sub> = (*x*<sub>cn</sub> − *x*)/(*z*<sub>c</sub> − *b*). Therefore the light ray field is deformed as

$$l'(x,\theta_x) = l\!\left(x,\ \frac{z_c-b}{z_c}\,\theta_x\right). \quad (6)$$

Applying Eqs. (1) and (2) and substituting *u'* = *x* + *θ*<sub>x</sub>*z'* into Eq. (6) gives

$$\int f'(u';\,z')\,dz' = \int f\!\left(u'-\theta_x z'+\frac{z_c-b}{z_c}\,\theta_x z;\,z\right)dz. \quad (7)$$

In order to make Eq. (7) true for all *θ*<sub>x</sub>, we have *z*′ = (*z*<sub>c</sub> − *b*)*z*/*z*<sub>c</sub>, and thus Eq. (7) reduces to

$$f'(x;\,z) = f\!\left(x;\ \frac{z_c}{z_c-b}\,z\right), \quad (8)$$

where *u*′ and *z*′ are replaced by *x* and *z*. Equation (8) indicates that each depth slice at *z* is reconstructed at a different distance (*z*<sub>c</sub> − *b*)*z*/*z*<sub>c</sub>, compressing the 3D image longitudinally.
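The longitudinal compression can be sketched directly from the depth mapping (assumed example numbers; *z*<sub>c</sub> = 1200 mm and *b* = 600 mm happen to match the experimental setting discussed later, the slice depths are placeholders):

```python
# Sketch of the longitudinal compression in Eq. (8): with the display viewing
# distance shortened by b, a depth slice at z moves to
# z' = (z_c - b) * z / z_c, while its lateral position is unchanged.

def depth_mismatch_map(x, z, b, z_c):
    """(x, z) -> (x', z') under the viewing-distance mismatch b."""
    return x, (z_c - b) * z / z_c

for z in (10.0, 20.0, 40.0):
    x_new, z_new = depth_mismatch_map(5.0, z, b=600.0, z_c=1200.0)
    print(f"slice z = {z:4.0f} mm -> z' = {z_new:5.1f} mm")  # halved depth
```

For *b* = *z*<sub>c</sub>/2 every depth is halved, so the whole object is compressed toward the panel by a factor of two.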

Finally, suppose that the viewpoint spacing Δ*x*<sub>v</sub> of the display is wider than the camera spacing Δ*x*<sub>c</sub> by a factor *k* (Δ*x*<sub>v</sub> = *k*Δ*x*<sub>c</sub>). A light ray propagating from a position *x* in the reference plane to a camera position (*x*<sub>cn</sub>, *z*<sub>c</sub>) = (*n*Δ*x*<sub>c</sub>, *z*<sub>c</sub>) with an angle (*n*Δ*x*<sub>c</sub> − *x*)/*z*<sub>c</sub> is reconstructed as a light ray propagating from the same position *x* in the reference plane but to a different position (*x*<sub>vn</sub>, *z*<sub>v</sub>) = (*n*Δ*x*<sub>v</sub>, *z*<sub>c</sub>) = (*nk*Δ*x*<sub>c</sub>, *z*<sub>c</sub>) with an angle (*nk*Δ*x*<sub>c</sub> − *x*)/*z*<sub>c</sub>, giving the relation

$$l'(x,\theta_x) = l\!\left(x,\ \frac{\theta_x}{k}-\frac{(k-1)\,x}{k\,z_c}\right). \quad (11)$$

Using *u'* = *x* + *θ*<sub>x</sub>*z'* together with Eqs. (1) and (2), Eq. (11) gives

$$\int f'(u';\,z')\,dz' = \int f\!\left((u'-\theta_x z')\!\left(1-\frac{(k-1)z}{k z_c}\right)+\frac{\theta_x z}{k};\,z\right)dz. \quad (12)$$

Equation (12) is true for all *θ*<sub>x</sub> when

$$z' = \frac{z_c\,z}{k z_c-(k-1)z}. \quad (13)$$

By substituting Eq. (13) into Eq. (12), finally we have

$$f'(x;\,z) = f\!\left(\frac{z_c\,x}{z_c+(k-1)z};\ \frac{k z_c\,z}{z_c+(k-1)z}\right), \quad (14)$$

where *u*′ and *z*′ are replaced by *x* and *z*. Equation (14) indicates that the 3D image of a hexahedron object is distorted into a trapezoidal shape with both longitudinal and lateral deformations.
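The trapezoidal (keystone) character of Eqs. (13) and (14) can be sketched by mapping the corners of a square cross-section (corner coordinates are assumed example numbers):

```python
# Sketch of the trapezoidal distortion in Eqs. (13)-(14): with viewpoint
# spacing widened by the factor k, a point (x, z) is reconstructed at
#   x' = k * z_c * x / (k*z_c - (k-1)*z),
#   z' =     z_c * z / (k*z_c - (k-1)*z),
# so the lateral magnification grows with depth while depth is compressed.

def spacing_mismatch_map(x, z, k, z_c):
    d = k * z_c - (k - 1) * z
    return k * z_c * x / d, z_c * z / d

# corners of a 20 mm x 40 mm square cross-section (assumed numbers)
for x, z in [(-10.0, 0.0), (10.0, 0.0), (-10.0, 40.0), (10.0, 40.0)]:
    x_new, z_new = spacing_mismatch_map(x, z, k=2.0, z_c=600.0)
    print(f"({x:+.0f}, {z:2.0f}) -> ({x_new:+6.2f}, {z_new:5.2f})")
```

The front face (*z* = 0) is unchanged while the rear face is laterally magnified and pushed to roughly half its depth, producing the keystone shape described for the hexahedron object.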

The distortion analysis results above are consistent with the earlier analysis reported in Ref. [14], which confirms the validity of our light ray field based analysis. The light ray field analysis, however, is a more general framework, as it can be applied to autostereoscopic displays of different modalities. The next section shows the distortion analysis when MV image contents are applied to an HPO InIm display panel.

### 3.3 Image distortion caused by applying MV image contents to an HPO InIm display panel

In an HPO InIm display, a light ray propagating from a position *x* in the reference plane to a camera position (*x*<sub>cn</sub>, *z*<sub>c</sub>) with an angle (*x*<sub>cn</sub> − *x*)/*z*<sub>c</sub> is reconstructed as a light ray emanating from the same position *x* in the reference plane but with a different, constant angle *θ*<sub>xn</sub>, where *θ*<sub>xn</sub> is the *n*-th parallel light ray reconstruction angle of the HPO InIm display. This can be represented by the relation

$$l'(x,\theta_{xn}) = l\!\left(x,\ \frac{x_{cn}-x}{z_c}\right). \quad (15)$$

Assuming *θ*<sub>xn</sub> = *n*Δ*θ*<sub>x</sub> and *x*<sub>cn</sub> = *n*Δ*x*<sub>c</sub>, where Δ*θ*<sub>x</sub> is the angular ray spacing of the HPO InIm display, and treating the sampled angle *θ*<sub>xn</sub> as a continuous variable *θ*<sub>x</sub> (i.e. *θ*<sub>xn</sub> = *n*Δ*θ*<sub>x</sub> = *θ*<sub>x</sub>, *x*<sub>cn</sub> = *n*Δ*x*<sub>c</sub> = (*θ*<sub>x</sub>/Δ*θ*<sub>x</sub>)Δ*x*<sub>c</sub>), Eq. (15) becomes

$$l'(x,\theta_x) = l\!\left(x,\ \frac{(\theta_x/\Delta\theta_x)\,\Delta x_c-x}{z_c}\right). \quad (16)$$

Again, from Eqs. (1) and (2), we have

$$\int f'(x+\theta_x z';\,z')\,dz' = \int f\!\left(x+\frac{(\theta_x/\Delta\theta_x)\,\Delta x_c-x}{z_c}\,z;\,z\right)dz, \quad (17)$$

which reduces to

$$f'(x;\,z) = f\!\left(\frac{\Delta x_c\,x}{\Delta x_c+\Delta\theta_x z};\ \frac{\Delta\theta_x z_c\,z}{\Delta x_c+\Delta\theta_x z}\right). \quad (18)$$

Equation (18) indicates that when MV contents are applied to an HPO InIm display panel, the 3D images are distorted with a depth scaling and a depth dependent lateral magnification. Note that Eq. (18) agrees with T. Saishu et al.'s result [21].

## 4. Experimental verification

In the experiment, the ray guiding optics of the 6-view MV display panel was slanted by tan<sup>−1</sup>(2/3) measured from the vertical axis. The pixel pitch of the panel was 294 μm. The designed viewpoint distance *z*<sub>v</sub> and spacing Δ*x*<sub>v</sub> of the MV display were 600 mm and 29 mm, respectively. For the 6-view MV contents, two contents were prepared. One was synthesized with parameters *z*<sub>c</sub> = 2*z*<sub>v</sub> = 1200 mm (i.e. *b* = 600 mm) and Δ*x*<sub>c</sub> = Δ*x*<sub>v</sub> = 29 mm (i.e. *k* = 1), and the other one with *z*<sub>c</sub> = *z*<sub>v</sub> = 600 mm (i.e. *b* = 0 mm) and Δ*x*<sub>c</sub> = 0.5Δ*x*<sub>v</sub> = 14.5 mm (i.e. *k* = 2). For these parameters, Eqs. (8) and (14) indicate that in both cases the depth of the displayed ‘apple’ image will be reduced to around 10 mm.

The HPO InIm display panel had the same slant angle of tan<sup>−1</sup>(2/3) as the MV display panel used in the previous experiment. The horizontal spacing between the apertures, however, was adjusted so that the display panel has 6 parallel ray directions with Δ*θ*<sub>x</sub> = 2.8 degree spacing. For the 6-view MV contents, an image content was synthesized with a camera spacing Δ*x*<sub>c</sub> = 65/5 mm. The contents capturing distance *z*<sub>c</sub> was set to 600 mm from the panel. For these parameters, Eq. (18) indicates that the depth of the displayed ‘apple’ image is reduced to 9 mm.

Noticeable relative motion between the displayed ‘apple’ image and the real cylindrical object is observed in the MV image contents of Δ*x*<sub>c</sub> = 65/5 mm case, which indicates that the displayed ‘apple’ object is reconstructed with a reduced depth. On the contrary, in the ideal HPO InIm image contents case, the relative motion is negligible, confirming that the ‘apple’ image has a similar depth to the cylindrical object. In Fig. 15, the real cylindrical object was located at 9 mm. As expected, the relative motion becomes negligible in the MV image contents of Δ*x*<sub>c</sub> = 65/5 mm case, indicating that the depth of the ‘apple’ image is reduced to around 9 mm.
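The 9 mm figure follows from the depth scaling of Eq. (18) with the stated parameters (Δ*x*<sub>c</sub> = 65/5 mm = 13 mm, Δ*θ*<sub>x</sub> = 2.8°, *z*<sub>c</sub> = 600 mm); the 20 mm nominal 'apple' depth below is again an assumed illustrative value:

```python
# Sketch: depth prediction of Eq. (18) for the HPO InIm experiment,
# assuming (hypothetically) a nominal 'apple' depth of 20 mm.

import math

def hpo_depth(z, dx_c, dtheta, z_c):   # z' = dx_c * z / (dtheta * (z_c - z))
    return dx_c * z / (dtheta * (z_c - z))

z_new = hpo_depth(20.0, dx_c=13.0, dtheta=math.radians(2.8), z_c=600.0)
print(round(z_new, 1))                  # 9.2 -> close to the observed 9 mm
```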

## 5. Conclusion

## Acknowledgment


**OCIS Codes**

(100.2960) Image processing : Image analysis

(100.6890) Image processing : Three-dimensional image processing

**ToC Category:**

Image Processing

**History**

Original Manuscript: August 24, 2012

Revised Manuscript: September 21, 2012

Manuscript Accepted: September 22, 2012

Published: October 2, 2012

**Citation**

Hee-Seung Kim, Kyeong-Min Jeong, Sung-In Hong, Na-Young Jo, and Jae-Hyeung Park, "Analysis of image distortion based on light ray field by multi-view and horizontal parallax only integral imaging display," Opt. Express **20**, 23755-23768 (2012)

http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-20-21-23755

## References and links

1. T. Okoshi, “Three-dimensional displays,” Proc. IEEE **68**(5), 548–564 (1980). [CrossRef]
2. P. Benzie, J. Watson, P. Surman, I. Rakkolainen, K. Hopf, H. Urey, V. Sainov, and C. von Kopylow, “A survey of 3DTV display: techniques and technologies,” IEEE Trans. Circ. Syst. Video Tech. **17**(11), 1647–1658 (2007). [CrossRef]
3. J. Hong, Y. Kim, H.-J. Choi, J. Hahn, J.-H. Park, H. Kim, S.-W. Min, N. Chen, and B. Lee, “Three-dimensional display technologies of recent interest: principles, status, and issues [Invited],” Appl. Opt. **50**(34), H87–H115 (2011). [CrossRef] [PubMed]
4. A. Stern and B. Javidi, “Three-dimensional image sensing, visualization, and processing using integral imaging,” Proc. IEEE **94**(3), 591–607 (2006). [CrossRef]
5. J.-H. Jung, S.- Park, Y. Kim, and B. Lee, “Integral imaging using a color filter pinhole array on a display panel,” Opt. Express **20**(17), 18744–18756 (2012). [CrossRef]
6. J.-H. Park, K. Hong, and B. Lee, “Recent progress in three-dimensional information processing based on integral imaging,” Appl. Opt. **48**(34), H77–H94 (2009). [CrossRef] [PubMed]
7. K. Yamamoto, T. Mishina, R. Oi, T. Senoh, and M. Okui, “Cross talk elimination using an aperture for recording elemental images of integral photography,” J. Opt. Soc. Am. A **26**(3), 680–690 (2009). [CrossRef] [PubMed]
8. D. B. Diner and D. H. Fender, *Human Engineering in Stereoscopic Viewing Devices* (Plenum, 1994).
9. A. Woods, T. Docherty, and R. Koch, “Image distortions in stereoscopic video systems,” Proc. SPIE **1915**, 36–48 (1993). [CrossRef]
10. L. M. J. Meesters, W. A. IJsselsteijn, and P. J. H. Seuntiens, “A survey of perceptual evaluations and requirements of three-dimensional TV,” IEEE Trans. Circ. Syst. Video Tech. **14**(3), 381–391 (2004). [CrossRef]
11. K.-H. Lee, M.-J. Lee, Y.-S. Yoon, and S.-K. Kim, “Incorrect depth sense due to focused object distance,” Appl. Opt. **50**(18), 2931–2939 (2011). [CrossRef] [PubMed]
12. C. Ricolfe-Viala, A.-J. Sanchez-Salmeron, and E. Martinez-Berti, “Calibration of a wide angle stereoscopic system,” Opt. Lett. **36**(16), 3064–3066 (2011). [CrossRef] [PubMed]
13. J.-Y. Son, Y. N. Gruts, K.-D. Kwack, K.-H. Cha, and S.-K. Kim, “Stereoscopic image distortion in radial camera and projector configurations,” J. Opt. Soc. Am. A **24**(3), 643–650 (2007). [CrossRef] [PubMed]
14. V. Saveljev, “Image and observer regions in 3D displays,” J. Inform. Disp. **11**(2), 68–75 (2010). [CrossRef]
15. B.-R. Lee, J.-J. Hwang, and J.-Y. Son, “Characteristics of composite images in multiview imaging and integral photography,” Appl. Opt. **51**(21), 5236–5243 (2012). [CrossRef] [PubMed]
16. T. Horikoshi, S.-I. Uehara, T. Koike, C. Kato, K. Taira, G. Hamagishi, K. Mashitani, T. Nomura, A. Yuuki, N. Watanabe, Y. Hisatake, and H. Ujike, “Characterization of 3D image quality on autostereoscopic displays: proposal of interocular 3D purity,” SID Tech. Dig. **41**(1), 331–334 (2010). [CrossRef]
17. S.-I. Uehara, T. Horikoshi, C. Kato, T. Koike, G. Hamagishi, K. Mashitani, T. Nomura, K. Taira, A. Yuuki, N. Umezu, N. Watanabe, Y. Hisatake, and H. Ujike, “Characterization of motion parallax on multi-view/integral-imaging displays,” SID Tech. Dig. **41**(1), 661–664 (2010). [CrossRef]
18. H. Hoshino, F. Okano, H. Isono, and I. Yuyama, “Analysis of resolution limitation of integral photography,” J. Opt. Soc. Am. A **15**(8), 2059–2065 (1998). [CrossRef]
19. J.-H. Park, Y. Kim, J. Kim, S.-W. Min, and B. Lee, “Three-dimensional display scheme based on integral imaging with three-dimensional information processing,” Opt. Express **12**(24), 6020 (2004). [CrossRef] [PubMed]
20. M. Kawakita, H. Sasaki, J. Arai, F. Okano, K. Suehiro, Y. Haino, M. Yoshimura, and M. Sato, “Geometric analysis of spatial distortion in projection-type integral imaging,” Opt. Lett. **33**(7), 684–686 (2008). [CrossRef] [PubMed]
21. T. Saishu, K. Taira, R. Fukushima, and Y. Hirayama, “Distortion control in a one-dimensional integral imaging autostereoscopic display system with parallel optical beam groups,” SID Tech. Dig. **35**(1), 1438–1441 (2004). [CrossRef]
22. A. Stern and B. Javidi, “Ray phase space approach for 3-D imaging and 3-D optical data representation,” J. Display Technol. **1**(1), 141–150 (2005). [CrossRef]
23. J.-H. Park and K.-M. Jeong, “Frequency domain depth filtering of integral imaging,” Opt. Express **19**(19), 18729–18741 (2011). [CrossRef] [PubMed]
24. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” ACM Trans. on Graphics (Proc. SIGGRAPH) **25**, 924–934 (2006).


