Optics Express

  • Editor: C. Martijn de Sterke
  • Vol. 19, Iss. 19 — Sep. 12, 2011
  • pp. 18458–18469
Phase-stepped fringe projection by rotation about the camera’s perspective center

Y. R. Huddart, J. D. Valera, N. J. Weston, T. C. Featherstone, and A. J. Moore


Optics Express, Vol. 19, Issue 19, pp. 18458-18469 (2011)
http://dx.doi.org/10.1364/OE.19.018458



Abstract

A technique to produce phase steps in a fringe projection system for shape measurement is presented. Phase steps are produced by introducing relative rotation between the object and the fringe projection probe (comprising a projector and camera) about the camera’s perspective center. Relative motion of the object in the camera image can be compensated, because it is independent of the distance of the object from the camera, whilst the phase of the projected fringes is stepped due to the motion of the projector with respect to the object. The technique was validated with a static fringe projection system by moving an object on a coordinate measuring machine (CMM). The alternative approach, of rotating a lightweight and robust CMM-mounted fringe projection probe, is discussed. An experimental accuracy of approximately 1.5% of the projected fringe pitch was achieved, limited by the standard phase-stepping algorithms used rather than by the accuracy of the phase steps produced by the new technique.

© 2011 OSA

Introduction

Precision measurement of manufactured parts commonly uses contact measurement methods. A probe mounted to a coordinate measuring machine (CMM) touches the surface of the part, recording the probe’s tip position at each contact. Devices have been developed recently that continuously scan the probe tip across the surface, enabling quicker measurement for shapes that are easily parameterized such as a sphere or a plane. However, contact measurement remains slow and requires considerable user input for more general objects such as those with free-form surfaces. There is a requirement for a CMM-mounted optical probe that can provide the preliminary shape of a free-form surface to enable faster subsequent measurements with a contact probe. Full-field optical measurement is preferred to single point measurement (such as laser spot triangulation or time-of-flight) for speed of data acquisition. The optical probe should be compact, lightweight and contain no moving or delicate parts due to the high accelerations (up to 20 m s⁻²) experienced in a CMM.

Fringe projection produces high resolution, full-field shape measurements. In phase stepping fringe projection, a series of sinusoidal patterns, with a phase step between each, is projected on to an object. A camera records a corresponding sequence of images. The phase of the pattern at each imaged point is calculated and converted to a 3D representation of the object’s surface. The phase step is often produced by a relative motion of components within the projector, such as the light source and a mirror, or a slide and a projection lens [1]. Programmable data projectors can project different patterns to resolve step-height ambiguities, but are too heavy to be attached to a CMM and require considerable time to reach the thermal stability needed for accurate shape measurement [2]. Miniature data projectors do not currently have the resolution or brightness required, and the high CMM accelerations could affect their internal moving parts (digital micro-mirror device array) [3].

Kranz et al. presented an approach that avoided internal moving projector parts by translating the object with respect to a fixed camera and fringe projector [4]. The object was moved in a direction parallel to the camera image plane and perpendicular to the projected fringes by a distance equivalent to an integral number of pixels at the camera image plane. The image translation was compensated so that each pixel imaged the same point on the object before and after the motion, but the phase of the projected fringes at that point was stepped. This system relied on telecentric imaging optics, or a long standoff compared to the depth of field, so that the magnification remained constant throughout the measurement volume.

Theory

Fig. 1: Schematic of fringe projection system.

Figure 1 shows a schematic of the fringe projection system. The camera and projector are described using the central perspective projection model [5]. The model consists of a bounded image plane that is normal to the optic axis and a “pinhole” or perspective center through which light is considered to pass in a straight line. Lens distortions are incorporated into the model through subsequent calibration of the system [6,7]. The global coordinate system $(\hat{\underline{X}}, \hat{\underline{Y}}, \hat{\underline{Z}})$ has its origin $O$ at the perspective center of the camera. The perspective center of the projector is at $\underline{X}_{OP} = (X_{OP}, Y_{OP}, Z_{OP})$ relative to the camera, and the projector coordinate system is denoted $(\hat{\underline{X}}_P, \hat{\underline{Y}}_P, \hat{\underline{Z}}_P)$. A point $\underline{X}$ in the global or camera coordinate system is referred to in projector coordinates as $\underline{X}_P$, where $\underline{X} = R\,\underline{X}_P + \underline{X}_{OP}$ and $R$ is the rotation matrix between the two coordinate systems. Although Fig. 1 shows the camera and projector coordinate Y-axes parallel to each other (i.e. coplanar optic axes), this is not a requirement: the mathematical development below does not assume it, and experimentally the axes are determined by the system calibration. The projector and camera optic axes intersect with included angle $\alpha$, at standoff distance $S$ from the camera’s perspective center.

A point $\underline{x}_p$ in the projector image plane is projected to $\underline{X}_P = (X_P, Y_P, Z_P)$, where

$$\underline{x}_p = \frac{c_p}{Z_P}\,\underline{X}_P = \begin{bmatrix} x_P \\ y_P \\ c_p \end{bmatrix} \qquad (1)$$

and $c_p$ is the principal distance of the projector (the distance between the perspective center and the image plane along the optic axis). Without loss of generality, the projector is assumed to project fringes extending in the $\hat{\underline{Y}}_P$ direction with period $p$ in its ‘image’ plane. The phase at any point $(x, y, c_P)$ in the projector’s pixel array is $2\pi x/p$, so the phase at the point $\underline{X}_P$ due to the projected fringes is

$$\phi = \frac{2\pi c_p}{p}\,\frac{X_P}{Z_P}. \qquad (2)$$

The period of the fringes at this point, normal to the optical axis of the projector, is $Z_P\,p/c_P$. Substituting for $X_P$ and $Z_P$ gives an expression for the phase at a point $\underline{X}$ in the camera coordinate system:

$$\phi = \frac{2\pi c_p}{p}\,\frac{\hat{\underline{X}}_P\cdot(\underline{X}-\underline{X}_{OP})}{\hat{\underline{Z}}_P\cdot(\underline{X}-\underline{X}_{OP})} \qquad (3)$$

Consider the rotation of the camera about its own perspective center. An object point that is imaged at the camera image plane at $\underline{x} = (x, y, c)$ (in camera coordinates) prior to the rotation will be imaged after the rotation at $\underline{x}_1 = (c\,x_R/z_R,\ c\,y_R/z_R,\ c)$ (in the coordinate system of the rotated camera), where $\underline{x}_R = (x_R, y_R, z_R)$ represents the original image point $\underline{x}$ in the coordinate system of the rotated camera and $c$ is the principal distance of the camera. That is, the image point after rotation is the projection of the original image point on to the image plane of the rotated camera. Importantly, the relationship between new and old image coordinates for a given point does not depend on the position of the point in object space. Therefore the image point from the rotated camera can be translated by $\underline{x} - \underline{x}_1$ to return it to its original position in the image plane, independent of the distance to the object point, i.e. of the object’s shape.
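The depth-independence of this remapping is easy to check numerically. The sketch below (with hypothetical values, not the experimental calibration) rotates a camera about its perspective center and verifies that object points at different distances along the same viewing ray map to the same corrected image point:

```python
import numpy as np

def rotation_y(theta):
    """Rotation matrix about the camera Y-axis by angle theta (rad)."""
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([[ct, 0.0, st], [0.0, 1.0, 0.0], [-st, 0.0, ct]])

def remap_image_point(x, c, R):
    """Image point x = (x, y, c) of the original camera, re-expressed in the
    image plane of the camera rotated by R about its perspective center."""
    xr = R.T @ x            # the same ray direction in rotated-camera coords
    return (c / xr[2]) * xr # project onto the rotated image plane z = c

c = 8.0                           # hypothetical principal distance, mm
R = rotation_y(np.deg2rad(0.25))  # a 0.25 deg rotation, as in the experiment
x = np.array([1.2, -0.7, c])      # an image point on the sensor, mm
x1 = remap_image_point(x, c, R)   # corrected pixel, independent of depth
```

Because the remap acts on the ray direction only, any object point along the ray through `x` (at any depth) lands on `x1` after the rotation, which is exactly why the compensation needs no knowledge of the object's shape.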

The phase change due to the relative motion of the projector with respect to the object can be calculated using a small-angle approximation. A point $\underline{X}$ may be expressed after the rotation, in the rotated camera coordinate system relative to the camera perspective center, as

$$\underline{X}_R = \underline{X} - |\underline{X}|\,\omega\,\hat{\underline{V}} \qquad (4)$$

where $\omega$ is the small rotation angle and $\hat{\underline{V}}$ is the unit vector in the direction of the effective motion of the object point relative to the camera. From Eq. (3), the phase change at $\underline{X}$ due to the motion of the projector is

$$\Delta\phi = \frac{2\pi c_P}{p}\;\frac{|\underline{X}|\,\omega\left(\hat{\underline{X}}_P - \dfrac{X_P}{Z_P}\,\hat{\underline{Z}}_P\right)\cdot\hat{\underline{V}}}{Z_P - |\underline{X}|\,\hat{\underline{Z}}_P\cdot\hat{\underline{V}}\,\omega} \qquad (5)$$

Provided that the projector-to-object distance is large compared to the motion, $|\underline{X}|\,\omega \ll Z_P$, Eq. (5) simplifies to

$$\Delta\phi \approx \frac{2\pi c_P}{p\,Z_P}\,|\underline{X}|\,\omega\left(\hat{\underline{X}}_P - \frac{X_P}{Z_P}\,\hat{\underline{Z}}_P\right)\cdot\hat{\underline{V}} \qquad (6)$$

Equation (6) separates into a number of factors: $p\,Z_P/c_P$ is the period of the fringes at distance $Z_P$ from the projector; $\left(\hat{\underline{X}}_P - \frac{X_P}{Z_P}\,\hat{\underline{Z}}_P\right)\cdot\hat{\underline{V}}$ is the component of the motion contributing to the phase change; and $|\underline{X}|\,\omega$ is the effective motion of the object relative to the camera.

Equation (6) shows that, to a first-order approximation, the phase change varies linearly with the angle of rotation of the fringe projection probe. Errors introduced by higher-order terms, i.e. non-linear phase steps, are discussed later. The magnitude of the phase change for a given angle of rotation varies with distance from the projector and is therefore not constant with position in the image. Standard algorithms that are valid for a range of phase step sizes, e.g. Carré’s algorithm [8] and the algorithms described by Novak [9], should therefore be used.
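As a concrete example, a standard form of Carré's four-frame evaluation can be sketched as follows (a minimal sketch, not the authors' implementation; it assumes the four frames sample the phase at $\phi - 3\alpha/2$, $\phi - \alpha/2$, $\phi + \alpha/2$, $\phi + 3\alpha/2$ with an equal but unknown step $\alpha$ between frames):

```python
import numpy as np

def carre_phase(I1, I2, I3, I4):
    """Wrapped phase from four frames with an equal but unknown phase step,
    assuming frames at phi - 3a/2, phi - a/2, phi + a/2, phi + 3a/2."""
    d14, d23 = I1 - I4, I2 - I3
    num = (d14 + d23) * (3.0 * d23 - d14)   # proportional to sin^2(phi), >= 0
    den = (I2 + I3) - (I1 + I4)             # proportional to cos(phi)
    # sign(d23) restores the sign of sin(phi) lost in the square root
    return np.arctan2(np.sign(d23) * np.sqrt(np.maximum(num, 0.0)), den)
```

The same intensity differences also recover the step itself through $\tan^2(\alpha/2) = [3(I_2{-}I_3) - (I_1{-}I_4)]\,/\,[(I_2{-}I_3) + (I_1{-}I_4)]$, which is what makes the algorithm tolerant of the position-dependent step magnitude produced by the rotation.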

The phase change is maximized if the rotation produces a motion that is perpendicular to the fringes, i.e. the camera is rotated about the Y-axis in Fig. 1. For the particular case where the camera and projector Y-axes are parallel, the phase change at the intersection of the optical axes from Eq. (6) simplifies to

$$\Delta\phi \approx \frac{2\pi c_P}{p\,Z_P}\,S\,\omega\cos\alpha \qquad (7)$$

The term $p\,Z_P/c_P$ represents the period of the fringes at distance $Z_P$ from the projector, and $S\,\omega\cos\alpha$ is the component of the object motion that contributes to the phase change. This simple expression allows the rotation angle required to achieve a given nominal phase step, at distance $S$ from the camera and $Z_P$ from the projector, to be calculated.
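Equation (7) can be evaluated directly. A quick numerical check with the experimental values quoted below (4 mm fringe period at the crossing point, $S \approx 230$ mm, $\alpha \approx 27°$; the exact geometry of the experiment differs slightly) reproduces the order of the 0.25° rotation reported:

```python
import numpy as np

def rotation_for_phase_step(dphi, fringe_period, S, alpha):
    """Rotation (rad) about the camera perspective center giving a nominal
    phase step dphi at the intersection of the optic axes, from Eq. (7).
    fringe_period is the projected period p*Z_P/c_P at the object (mm),
    S the camera standoff (mm), alpha the included axis angle (rad)."""
    return dphi * fringe_period / (2.0 * np.pi * S * np.cos(alpha))

# Values quoted in the experimental section of this paper.
omega = rotation_for_phase_step(np.pi / 2, 4.0, 230.0, np.deg2rad(27.0))
omega_deg = np.rad2deg(omega)   # same order as the quoted 0.25 deg rotation
```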

Experimental demonstration

A shape measurement system was constructed, comprising a PointGrey FLEA-HIBW camera and a Hewlett-Packard VP6311 digital video projector. Calibration targets and test objects were fixed to a two-axis Renishaw Revo articulating head mounted on a three-axis Mitutoyo Crysta Apex 9106 CMM. The CMM was driven using a Renishaw UCC2 controller. Objects were moved in an arc about the perspective center of the camera to simulate rotation of the camera and projector about the camera’s perspective center. There are several advantages in choosing to validate the new technique with a static fringe projection probe (fringe projector and camera) and by rotating the object about the camera’s perspective center. Firstly, the technique could be demonstrated with a readily available data projector and camera prior to the construction of a suitably compact and light CMM-mounted probe. Secondly, the data projector could project phase-stepped fringes in the traditional way for a direct comparison with the new technique. Clearly, the data projector chosen would not be suitable for a compact and light CMM-mounted probe. The design and implementation of such a light probe is beyond the scope of the present paper, although a prototype system is discussed briefly later.

The optic axes of the projector and camera were nominally coplanar and intersected at an angle α of ~27°. The axes intersected approximately 230 mm from the perspective center of the camera. The exact angles and positions (including the off-axis projection typically found in a commercial data projector) do not need to be known for accurate shape measurement, due to the calibration process described below. Straight parallel fringes with a sinusoidal intensity profile were projected into the 100×100×100 mm³ measurement volume. The period of the projected fringes at the intersection of the camera and projector axes was approximately 4 mm (approximately 30 pixels in the camera image). For the system geometry used, a nominal phase step of 90° in the projected fringe pattern required a rotation about the camera’s perspective centre of 0.25°. The motion required by the CMM to effect this rotation about the camera’s perspective centre comprised a rotation about the Revo head's centre of rotation of 0.25° and a translation in an arc of 300 µm. The head and CMM motors are capable of positioning correct to 0.5 arc sec (1.4×10⁻⁴ degrees) and 5 µm, respectively. From Eq. (6), the phase step throughout the measurement volume varied between 0.8 and 1.4 times the nominal value for a given rotation about the camera perspective center.

The set-up procedure first required the CMM to be calibrated. The standard ISO 10360 procedure has various stages and resulted in the accurate and traceable position of the tip of a calibrated touch probe, attached to the two-axis head on the three-axis CMM, to the order of 1 µm in the measurement volume. Next, the intrinsic calibration parameters of the camera were calculated by taking multiple images of a calibration target (Edmund Optics part number NT46-250) [10]. Intrinsic parameters include the camera’s principal distance and terms describing lens distortions. Finally, the extrinsic parameters (position and orientation of the camera) were found in CMM coordinates by recording images of a custom-made calibrated touch probe (comprising a white spherical stylus ball and black shaft) placed at different positions throughout the camera’s field of view. The image coordinates of the center of the probe tip and the known position of the probe tip in CMM coordinates were input to a least-squares fit to determine the position of the perspective center and the orientation of the camera axes relative to the CMM coordinate system. The root-mean-square (rms) error between the measured position of the probe tip in the camera image and the position calculated from its CMM coordinates and the camera calibration parameters was 0.4 pixels over the measurement volume.

Rotation about the camera perspective center

The accuracy with which the position of points in the camera image could be calculated for a known rotation about the camera’s perspective center was determined experimentally. Pairs of images were recorded, between which the CMM touch probe was moved along a circular arc centered at the camera’s perspective center. The expected position of the tip in the second image of each pair was calculated from the initial tip position in the first image and the applied rotation about the camera’s perspective center. Differences between the calculated and measured positions in the second image are indicated in Fig. 2 for a rotation of 1 degree (nominally equivalent to one fringe period) at one plane in the measurement volume.

Fig. 2: Error in the calculated probe tip image position after rotation about the perspective center of the camera at one plane in the measurement volume. Arrow lengths are scaled to make them visible: maximum arrow length 0.5 pixels.

The arrows indicating the difference at each point are scaled to make them visible. The measurement was repeated at various planes within the measurement volume and an rms error of 0.1 pixels (maximum error of 0.5 pixels) was achieved.

Phase to height calibration

Fringe projection shape measurement systems commonly use a calibration between the measured optical phase and the object height, in order to accommodate the variation of the fringe period through the measurement volume (due to the relative geometry of the camera and projector) and the lens distortions in the projector [11]. A matte white plane calibration surface was attached to the CMM and placed in the measurement volume of the fringe projection system, approximately normal to the camera’s optical axis. The optical phase on the plane was measured at each of 14 distances from the camera, equally spaced through the measurement volume. The phase steps were introduced by rotating the plane in an arc about the camera Y-axis (i.e. about the camera perspective center) using the 5-axis CMM.

At each distance of the calibration plane, data sets were recorded with four images (with a nominal phase step of 110° between each) and with five images (nominal phase step of 90°). Wrapped phase maps were calculated using Carré’s algorithm [8] and Novak’s algorithm A1 [9] for the four- and five-frame data sets, respectively. As mentioned previously, these algorithms were chosen because they are valid for a range of linear phase step values. The phase maps for each distance of the calibration plane were unwrapped from the ‘zero order’ projected fringe, in order to provide an absolute phase-to-height correspondence. It is straightforward to identify a zero-order fringe, for example by projecting a spot or stripe with the same data projector. A cubic polynomial relating phase to height above a reference plane was fitted at each pixel.
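The per-pixel fit can be sketched as follows, with synthetic phase–height data standing in for the 14 measured planes (the phase model and its coefficients are illustrative only, not the experimental calibration):

```python
import numpy as np

# Hypothetical calibration data for ONE pixel: unwrapped phase measured on
# the plane at 14 known heights (mm) through the measurement volume.
heights = np.linspace(0.0, 100.0, 14)
phase = 0.3 + 1.5 * heights + 1e-4 * heights**2   # mildly non-linear model

# Cubic polynomial from phase to height, stored per pixel and later
# evaluated on measured, unwrapped phase maps to give height directly.
coeffs = np.polyfit(phase, heights, 3)
height_from_phase = np.polyval(coeffs, phase)
```

In a full system one such coefficient set is stored for every pixel, absorbing both the fringe-period variation through the volume and the projector lens distortions.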

The shape measurement error for a plane surface positioned in the measurement volume is shown in Fig. 3.

Fig. 3: Effect of phase stepping algorithm on the shape measurement errors for a plane surface. Graphs show a single line across the plane. Error for phase-stepped images recorded with rotation about the camera perspective center for (a) four frames (Carré’s algorithm, rms 3.1% of the projected fringe period for the entire plane) and (b) five frames (Novak’s algorithm, rms 1.6%). Error for standard phase-stepped images for (c) Novak’s algorithm (rms 1.5%) and (d) Bruning’s algorithm (rms 0.6%).

The plots show the error for a typical line across the surface, and the caption gives the rms error for the whole plane (expressed as a fraction of the projected fringe period of 4 mm at the intersection of the projector and camera axes). Figures 3(a) and 3(b) show the result for the new technique for the four-frame (error 3.1% rms) and five-frame (error 1.6% rms) data sets, respectively. The error is due to Carré’s and Novak’s linear phase step invariant algorithms, rather than to the accuracy of the phase steps produced by the new technique. To demonstrate this claim, a data set was also recorded using phase-stepped fringe patterns directly from the data projector (i.e. standard phase stepping with no rotation about the camera perspective center). Using Novak’s algorithm to process these images produced an error of 1.5% rms, Fig. 3(c), which is comparable to the new technique. Using Bruning’s five-frame algorithm [12] to process the same images produced an error of 0.6% rms, Fig. 3(d), which is comparable to the performance of other phase-stepped fringe projection systems reported in the literature [13,14]. Hence the accuracy achieved is limited by the linear phase step invariant algorithms, rather than by errors in the phase steps produced by the new technique.

Object measurement

An object with a free-form surface shape, with an overall depth of approximately 40 mm, was mounted to the CMM and measured using the new phase step technique. Five frames were recorded and used to calculate a wrapped phase map, Fig. 4(a), which was unwrapped from the zero-order fringe.

Fig. 4: (a) Wrapped phase map for a free-form object using the new phase step technique and (b) calibrated height measurement along the line indicated. Inset shows a comparison to a standard phase step measurement.

The unwrapped phase was converted to height above the reference plane using the phase-to-height calibration described above. The height along the horizontal line marked in Fig. 4(a) is shown in Fig. 4(b). Figure 4(b) also shows the height recorded for a data set using phase-stepped fringe patterns directly from the data projector (i.e. standard phase stepping with no rotation about the camera perspective center) and processed using Bruning's algorithm. This second measurement was used as the baseline to evaluate the performance of the new technique. The rms difference over an object area of approximately 100×100 mm² was 60 µm, equivalent to approximately 1.5% of a fringe period.

Discussion

The new technique to produce phase steps in a shape measurement system, by relative rotation between the fringe projection probe and the object about the perspective center of the camera, has been successfully demonstrated. The experiments used a static fringe projection probe and rotated the object on a coordinate measuring machine (CMM) about the camera’s perspective center. Validating the new technique with a static fringe projector probe enabled a standard data projector to be used and the results to be compared directly with phase-stepped fringes projected in the traditional way. The intended implementation of the new technique is in, for example, a compact and light CMM-mounted fringe projection probe (projector and camera) that is rotated about the camera’s perspective centre with the object stationary. Clearly a data projector such as the one used in our experiments would not be suitable for a CMM-mounted probe, for the reasons described in the Introduction. It is in a CMM-mounted probe, where traditional techniques to introduce phase steps are impractical, that the new technique will find application.

The new technique is well suited for use in a compact, lightweight and robust fringe projection probe with no moving internal parts, mounted on a coordinate measuring machine [15]. A CMM-mounted probe demonstrator has been developed, comprising a commercially available camera and imaging lens, and a bespoke LED projector containing a fixed amplitude mask with a sinusoidal pattern. The probe is compact (less than 100 mm in each dimension) and weighs less than 300 g. It mounts directly to the Renishaw Revo articulating head using the standard connector used for interchangeable touch probes, and is powered through the same connector. Measurements from the optical probe, which is interchangeable with standard touch probes and is able to withstand the high accelerations encountered in a CMM, will enable fast preliminary measurement of a free-form surface to guide subsequent touch probe measurements. The optical probe projects a single fringe pattern, which means that the zero-order fringe for phase unwrapping cannot easily be identified by projecting a spot or stripe; alternative approaches to resolve step-height and hidden-fringe ambiguities can be used.

The height error for the plane calibration surface was 1.6% of the projected fringe period and the phase difference for the free-form surface was 1.5%. It was shown that these errors arise due to the linear phase step invariant algorithms used. The errors do not originate from the size of the phase steps produced by the new technique, and so could be reduced by using an alternative phase step algorithm (probably involving more phase stepped images). However, the small increase in error introduced by the linear phase step invariant algorithms with only four or five phase steps is acceptable for our intended application.

The calibration plane and the free-form surface were both matte white. Metallic or non-Lambertian surfaces could result in a less accurate measurement from the new technique compared to standard phase stepping. In particular, speckle noise, inter-frame intensity variations and non-linear phase step errors could potentially affect the new technique in a different way to standard phase stepping. The effect of these error sources will be discussed in turn.

Speckle noise

Speckle, which arises from the roughness of the surface being measured, produces a multiplicative intensity fluctuation in the image of the projected fringe pattern. In standard phase stepping, speckle reduces the modulation amplitude at individual pixels in the camera image, and therefore increases the sensitivity of the measured phase to camera intensity noise [15,17]. However, with the phase step created by relative motion of the object and camera, the speckle noise could vary between phase-stepped images. The effect of speckle noise on the Carré and Novak algorithms was simulated for zero-mean, Gaussian-distributed multiplicative intensity noise; the resulting rms errors for a range of contrast values are shown in Fig. 5(a).

Fig. 5: Simulated errors from different error sources: (a) rms error from speckle noise; (b) mean error from linear variation in intensity between images; (c) mean error due to non-linear variation in phase step.

The rms value was obtained over one period of phase for 1000 repetitions of the simulation. The mean phase error was zero.
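A simulation in the same spirit can be sketched as below (a sketch only: Carré's algorithm with frame-to-frame multiplicative Gaussian noise; the background, modulation and trial counts are assumed values, not the authors' simulation parameters):

```python
import numpy as np

rng = np.random.default_rng(1)

def carre_phase(I1, I2, I3, I4):
    """Four-frame Carré evaluation (frames at phi - 3a/2 ... phi + 3a/2)."""
    d14, d23 = I1 - I4, I2 - I3
    num = (d14 + d23) * (3.0 * d23 - d14)   # proportional to sin^2(phi)
    den = (I2 + I3) - (I1 + I4)             # proportional to cos(phi)
    return np.arctan2(np.sign(d23) * np.sqrt(np.maximum(num, 0.0)), den)

def rms_speckle_error(contrast, step=np.pi / 2, trials=100):
    """Monte-Carlo rms phase error for multiplicative (speckle-like)
    zero-mean Gaussian intensity noise of the given contrast."""
    phis = np.linspace(-np.pi, np.pi, 200, endpoint=False)
    errs = []
    for _ in range(trials):
        frames = [(1.0 + 0.5 * np.cos(phis + k * step))
                  * (1.0 + contrast * rng.standard_normal(phis.shape))
                  for k in (-1.5, -0.5, 0.5, 1.5)]
        # wrap the phase difference to (-pi, pi] before accumulating
        errs.append(np.angle(np.exp(1j * (carre_phase(*frames) - phis))))
    return np.sqrt(np.mean(np.square(errs)))

err_5pc = rms_speckle_error(0.05)   # rms phase error at 5% speckle contrast
```

The error grows with contrast, and independent noise per frame is the feature that distinguishes this configuration from standard phase stepping, where the speckle pattern is common to all frames.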

For the experimental results presented, the speckle contrast for the surfaces and illumination used was less than 1% of the mean intensity, and the effect on the measurements presented was negligible.

Inter-frame intensity variation

For a non-Lambertian surface, the reflectance properties vary with viewing and illumination direction, which could cause a change in the mean intensity and fringe modulation amplitude between successive images for the proposed technique [18,19]. The reflectance changes rapidly with illumination and observation angles close to the specular direction, but more slowly at other angles. It is therefore expected that phase measurement would be less accurate close to the specular direction, but that for diffusely reflecting angles away from the specular direction the reflectance would not change significantly over the small rotation required for the phase steps. The response of Carré’s and Novak’s algorithms to a linear variation in background intensity and fringe amplitude across the set of four or five phase-stepped images was simulated. The mean error over one fringe period dominated the rms error, and is shown in Fig. 5(b). A straightforward way to reduce such errors with a CMM-mounted probe would be to choose a measurement position away from the specular direction for the surface patch under inspection.

For the experimental results presented, the object surfaces were diffuse with no significant specular component, and the effect on the measurements was negligible.

Non-linear phase step error

Equation (5) showed that the phase step produced by the new technique is proportional to the angle of rotation (to a first-order approximation) and that its magnitude varies with position in the measurement volume. In order to accommodate the variation in magnitude across the image, Carré’s and Novak’s linear phase step invariant algorithms were used. Non-linearity in the phase step will result in an error in the calculated phase for these two algorithms. By considering the second-order approximation of Eq. (5), the non-linearity resulting from the rotation about the perspective center can be estimated. The second-order approximation gives

$$\Delta\phi' = \frac{2\pi c_P}{p\,Z_P}\,|\underline{X}|\left(\hat{\underline{X}}_P - \frac{X_P}{Z_P}\,\hat{\underline{Z}}_P\right)\cdot\hat{\underline{V}}\;\omega\left(1 + \frac{|\underline{X}|\,\hat{\underline{Z}}_P\cdot\hat{\underline{V}}}{Z_P}\,\omega\right) \qquad (8)$$

Expressing this as $\Delta\phi' = \Delta\phi\,(1 + \varepsilon\,\Delta\phi)$, after Creath [20], gives the factor of non-linearity as

$$\varepsilon = \frac{p\,Z_P}{2\pi c_p}\;\frac{\hat{\underline{Z}}_P\cdot\hat{\underline{V}}}{\left(Z_P\,\hat{\underline{X}}_P - X_P\,\hat{\underline{Z}}_P\right)\cdot\hat{\underline{V}}} \qquad (9)$$

The non-linearity in the phase step at a point $\underline{X}$ is proportional to the fringe width at that point. The factor of proportionality is at most of order $1/|\underline{X}|$ for a practical system and varies with position within the measurement volume. Therefore the non-linear phase step error is small provided that the fringe pitch is small compared to the standoff distance. This condition is in any case desirable for fringe projection systems, because height resolution increases with decreasing fringe period. The response of Carré’s and Novak’s algorithms to non-linear phase steps was simulated; the mean error over one fringe period, which again dominated the rms error, is shown in Fig. 5(c).
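An order-of-magnitude evaluation of Eq. (9) for the geometry used in the experiments is consistent with the sub-1% figure quoted below. The numbers are representative assumptions (on-axis point, rotation perpendicular to the fringes, geometry factor taken as order one), not an exact evaluation of the dot products:

```python
import numpy as np

fringe_period = 4.0        # p*Z_P/c_P at the object, mm
standoff = 230.0           # |X|, distance of the object from the camera, mm
dphi = np.pi / 2           # nominal 90 degree phase step

# Eq. (9): eps ~ (p*Z_P / (2*pi*c_p)) * O(1/|X|) for a practical geometry,
# i.e. fringe width at the object divided by 2*pi times the standoff.
eps = fringe_period / (2.0 * np.pi * standoff)
nonlinearity = eps * dphi  # fractional deviation of the step from linearity
```

This gives a deviation of a few tenths of a percent for a 90° step, well inside the "less than 1%" bound stated for the measurements.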

For the measurements presented, the non-linear phase step error was calculated to be less than 1% and its contribution to the phase error was therefore insignificant. This was experimentally verified by the insignificant mean error in either the phase to height calibration (Fig. 3) or the object measurement (Fig. 4). If non-linearity were more significant, a calibration procedure or an alternative phase step algorithm could be considered.

Conclusions

A new technique to introduce phase steps in a fringe projection system for shape measurement has been demonstrated. The technique requires relative rotation between the fringe projection probe (projector and camera) and the object about the camera's perspective center. Rotation about the perspective center allows the motion of the object in the image to be compensated without knowing the distance to the object, whilst the phase of the projected fringes is stepped by the motion of the projector with respect to the object. The technique was validated with a static fringe projection probe by rotating the object about the camera's perspective center on a coordinate measuring machine (CMM). It will enable full-field shape measurement with a lightweight and robust CMM-mounted fringe projection probe that is rotated about the camera's perspective center while the object remains stationary. Errors of approximately 1.5% of the projected fringe period were achieved in practice, compared to 0.6% with a standard phase-stepping approach (typical of values reported in the literature). The error was shown to arise from the phase-stepping algorithms used (chosen for their insensitivity to linear phase step errors) rather than from the accuracy of the phase steps produced by the new technique; it could therefore be reduced with an alternative phase-stepping algorithm, probably requiring more phase-stepped images. Errors associated with speckle noise, inter-frame intensity variations and non-linearity of the phase steps were quantified, and were shown not to influence the accuracy of the experimental measurements presented.
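The depth independence that underpins the compensation can be checked numerically. In a pinhole camera model, rotating the camera about its perspective center maps image points by the homography x' ∝ K Rᵀ K⁻¹ x, which contains no depth term. The sketch below, with a hypothetical intrinsic matrix K and rotation angle, shows that two object points at different distances along the same viewing ray move to the same new pixel, so the image motion can be undone without knowing the object distance.

```python
import numpy as np

# Hypothetical pinhole intrinsics: focal length in pixels, principal point.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(Xc):
    # Perspective projection of a point in camera coordinates.
    x = K @ Xc
    return x[:2] / x[2]

theta = np.deg2rad(2.0)              # small rotation about the camera y-axis
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

# Two object points at different depths along the same viewing ray.
u, v = 400.0, 300.0
ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
X_near, X_far = 0.5 * ray, 5.0 * ray

# After the camera rotates by R, camera coordinates become R.T @ X.
p_near = project(R.T @ X_near)
p_far = project(R.T @ X_far)

# The same motion, predicted by the depth-free homography K R.T K^-1.
h = (K @ R.T @ np.linalg.inv(K)) @ np.array([u, v, 1.0])
p_h = h[:2] / h[2]
print(p_near, p_far, p_h)
```

Because the depth factor cancels in the perspective division, p_near, p_far and the homography prediction p_h coincide, which is why the image of the object can be re-registered for any (unknown) object distance.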

Acknowledgments

This project was part-funded by the Engineering and Physical Sciences Research Council [grant numbers GR/T11289/01 and GR/S12395/01]. Andrew Moore acknowledges the support of AWE through its William Penney Fellowship scheme.

References and links

1. K. Creath, “Comparison of phase-measurement algorithms,” Proc. SPIE 680, 19–28 (1986).
2. G. Sansoni, M. Carocci, and R. Rodella, “Three-dimensional vision based on a combination of gray-code and phase-shift light projection: analysis and compensation of the systematic errors,” Appl. Opt. 38(31), 6565–6573 (1999).
3. 3M, “3M MPro 110 Micro projector,” http://www.3mselect.co.uk/p-1783-3m-mpro-110-micro-projector-uk-model.aspx (accessed 5 February 2010).
4. D. M. Kranz, E. P. Rudd, D. Fishbaine, and C. E. Haugan, “Phase profilometry system with telecentric projector,” International Patent, Publication Number WO01/51887 (2001).
5. M. A. R. Cooper with S. Robson, “Theory of close-range photogrammetry,” in Close Range Photogrammetry and Machine Vision, K. B. Atkinson, ed. (Whittles Publishing, Caithness, UK, 2001).
6. J. G. Fryer, “Camera calibration,” in Close Range Photogrammetry and Machine Vision, K. B. Atkinson, ed. (Whittles Publishing, Caithness, UK, 2001).
7. J. Heikkila and O. Silven, “A four-step camera calibration procedure with implicit image correction,” in Proceedings of the 1997 Conference on Computer Vision and Pattern Recognition (CVPR ’97) (IEEE Computer Society, Washington, DC, 1997), pp. 1106–1112.
8. P. Carré, “Installation et utilisation du comparateur photoélectrique et interférentiel du Bureau International des Poids et Mesures,” Metrologia 2(1), 13–23 (1966).
9. J. Novak, “Five-step phase-shifting algorithms with unknown values of phase shift,” Optik (Stuttg.) 114(2), 63–68 (2003).
10. J.-Y. Bouguet, “Camera calibration toolbox for Matlab,” http://www.vision.caltech.edu/bouguetj/calib_doc/index.html (accessed 5 February 2010).
11. M. Reeves, A. J. Moore, D. P. Hand, and J. D. C. Jones, “Dynamic shape measurement system for laser materials processing,” Opt. Eng. 42(10), 2923–2929 (2003).
12. J. H. Bruning, D. R. Herriott, J. E. Gallagher, D. P. Rosenfeld, A. D. White, and D. J. Brangaccio, “Digital wavefront measuring interferometer for testing optical surfaces and lenses,” Appl. Opt. 13(11), 2693–2703 (1974).
13. G. Sansoni, S. Corini, S. Lazzari, R. Rodella, and F. Docchio, “Three-dimensional imaging based on Gray-code light projection: characterization of the measuring algorithm and development of a measuring system for industrial applications,” Appl. Opt. 36(19), 4463–4472 (1997).
14. S. Zhang and S.-T. Yau, “Generic nonsinusoidal phase error correction for three-dimensional shape measurement using a digital video projector,” Appl. Opt. 46(1), 36–43 (2007).
15. N. J. Weston, Y. R. Huddart, A. J. Moore, and T. C. Featherstone, “Phase analysis measurement apparatus and method,” International patent pending WO2009/024757 (A1) (2008).
16. H. Liu, G. Lu, S. Wu, S. Yin, and F. T. S. Yu, “Speckle-induced phase error in laser-based phase-shifting projected fringe profilometry,” J. Opt. Soc. Am. A 16(6), 1484–1495 (1999).
17. A. J. Moore, R. McBride, J. S. Barton, and J. D. C. Jones, “Closed-loop phase stepping in a calibrated fiber-optic fringe projector for shape measurement,” Appl. Opt. 41(16), 3348–3354 (2002).
18. H. Ragheb and E. R. Hancock, “Surface radiance: empirical data against model predictions,” in Proceedings of the 2004 International Conference on Image Processing (ICIP) (Institute of Electrical and Electronics Engineers, 2005), pp. 2689–2692.
19. K. E. Torrance and E. M. Sparrow, “Theory of off-specular reflection from roughened surfaces,” J. Opt. Soc. Am. 57(9), 1105–1114 (1967).
20. K. Creath, “Temporal phase measurement methods,” in Interferogram Analysis, D. W. Robinson and G. T. Reid, eds. (Institute of Physics, 1993), pp. 94–140.

OCIS Codes
(120.3940) Instrumentation, measurement, and metrology : Metrology
(120.4630) Instrumentation, measurement, and metrology : Optical inspection
(120.5050) Instrumentation, measurement, and metrology : Phase measurement
(110.2650) Imaging systems : Fringe analysis

ToC Category:
Instrumentation, Measurement, and Metrology

History
Original Manuscript: July 13, 2011
Revised Manuscript: August 13, 2011
Manuscript Accepted: August 15, 2011
Published: September 6, 2011

Citation
Y. R. Huddart, J. D. Valera, N. J. Weston, T. C. Featherstone, and A. J. Moore, "Phase-stepped fringe projection by rotation about the camera’s perspective center," Opt. Express 19, 18458-18469 (2011)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-19-19-18458



