Optics Express

  • Editor: C. Martijn de Sterke
  • Vol. 17, Iss. 23 — Nov. 9, 2009
  • pp: 20735–20746
A generalized reference-plane-based calibration method in optical triangular profilometry

Suochao Cui and Xiao Zhu


Optics Express, Vol. 17, Issue 23, pp. 20735-20746 (2009)
http://dx.doi.org/10.1364/OE.17.020735



Abstract

In this paper, a generalized reference-plane-based calibration method for optical triangular profilometry is proposed, covering both the projection ray tracing and the image ray tracing approaches. The pin-hole camera model is used to model the camera and the projector, and a parallel-plane model is used to model the reference and test planes. The camera, projector, and planes can be at arbitrary positions and in arbitrary directions. The reciprocal of the height and the reciprocal of the phase shift (or of the pixel position vertical distance) are linearly related. Experiments are conducted to verify the proposed method.

© 2009 OSA

1. Introduction

Recently used calibration methods in optical triangular profilometry of diffuse surfaces can be classified into two categories according to whether a reference plane is used: the reference-plane approach and the no-reference-plane approach. A key feature of the reference-plane approaches is that the calibration equations are deduced from the geometry of the system. For example, Zhou et al. proposed a phase-to-height method based on the geometry of optical triangular profilometry in 1994 [1], where the heights (distances) between the test and reference planes are used in the calibration procedure. They solved the co-planarity problem of the camera axis and the projector axis and gave the phase-to-height expression in two dimensions, 1/h(x,y) = p(x,y)/a(x,y) + b(x,y), where a(x,y) and b(x,y) are the coefficients at the object position (x,y). In their system, the camera must be in normal view of the reference plane. Chen et al. used a least-squares approach to evaluate the carrier phases [2,3], and Guo et al. used a similar approach in their calibration methods [4,5]; the cameras in their setups are also in normal view, which is a rigorous condition difficult to satisfy in practice. Rajoub et al. proposed a method supporting rotation of the camera [6], but they did not remove the normal-view requirement either. Wang et al. proposed a method in which the camera can be in any view of the reference plane [7–9], but the expression of their result is complicated and has many coefficients. All the above algorithms use the image ray tracing method, which cannot be used in non-phase-shift measuring profilometry. Asundi et al. proposed a unified calibration technique using the projection ray tracing method [10], but their system setup is restricted to rotating work-table systems and requires normal incidence of the projected line stripes. The restrictions of the above-mentioned systems mainly stem from the use of geometry in their deduction, which also makes the process inefficient and tedious.

Instead of deducing the calibration equations from the geometry of the system, the no-reference-plane approach sets up a world coordinate system in which all points in the measuring volume are represented by three-dimensional (3D) coordinates. The projector and the camera are usually modeled as pin-hole cameras [11–15], and the intrinsic and extrinsic parameters can be calibrated by single-device calibration techniques [11–14] or unified calibration techniques [15]. The process of obtaining the calibration equations is significantly simplified and generalized, as the systems are described by matrix transformations and the rotation and translation effects are taken into account.

In this work, we use matrices to describe the measuring system under the reference-plane approach as an attempt to simplify and generalize the process of deducing the calibration equations. The camera and projector are modeled as pin-hole cameras, and the reference (test) planes are modeled as parallel planes. For generality, the camera, projector, and reference (test) planes can be at arbitrary positions and in arbitrary directions in the world coordinate system. When the projection point and image point positions are known, the world coordinates of the point can be obtained by solving the equations formed by the system parameters, so the relationship among the world coordinate position, the projection position, and the image position can be acquired. From this relationship, for both projection ray tracing and image ray tracing systems, a linear relationship between the reciprocal of the height and the reciprocal of the phase shift (or pixel position vertical distance) can be derived.

The rest of the paper is organized as follows. In Section 2, after introducing the setup of the measuring system, the calibration method for projection ray tracing line stripe profilometry is deduced; it is then extended to phase shift measuring profilometry. The calibration method for image ray tracing phase shift measuring profilometry is then deduced to obtain a simple calibration equation. The distortion effects of the camera and projector lenses are considered at the end of Section 2. In Section 3, experiments are conducted to verify the proposed method. The paper is concluded in Section 4.

2. Methods

2.1 Projection ray tracing line stripe system calibration

Figure 1 gives the schematics of the system setup.

Fig. 1 Schematics of the system setup.

The reference plane S_ref is an arbitrary plane in the world coordinate system {o_w; x_w, y_w, z_w}, given by Eq. (1):

a x + b y + c z + d_ref = 0.
(1)

The plane S_i, which is parallel to the reference plane, is given by Eq. (2):

a x + b y + c z + d_i = 0.
(2)

The common normal vector of the planes is given by Eq. (3):

v_s = [a, b, c]^T.
(3)

The height between surfaces S_i and S_ref is then given by Eq. (4):

h_i = (d_i - d_ref) / sqrt(a^2 + b^2 + c^2).
(4)

The camera and the projector are described by the pin-hole camera model [11–13]. Point o_c is the optical lens center of the camera, and point o_p is the optical lens center of the projector. The camera image coordinate system {o_ic; u_c, v_c} is constructed in the camera image sensor plane with its origin at the top left, and the principal point is P_c0(u_c0, v_c0). The camera device coordinate system {o_c; x_c, y_c, z_c} is constructed with its origin at point o_c, with axis o_c x_c parallel to axis o_ic u_c and axis o_c y_c parallel to axis o_ic v_c. Axis o_c z_c is the main optical axis of the camera imaging lens. In a similar way, the projector image coordinate system {o_ip; u_p, v_p} and the projector device coordinate system {o_p; x_p, y_p, z_p} are constructed; the principal point of the projector image plane is P_p0(u_p0, v_p0). The distances between the image plane and the origin are f_c (camera focal length) and f_p (projector focal length) for the camera and projector, respectively.

First, we deduce the relationship among the height d_i, the projection position, and the image position. Assume that P_p(u_p, v_p) is an arbitrary point in the projector image plane, which is projected onto the plane S_1 at point P_1 with coordinate X_1 = [x_1, y_1, z_1]^T and onto the reference plane S_ref at point P_ref. This projection procedure can be described by the following sequence of transformations. The world coordinate X_1 is transformed to the projector device coordinate X_p by Eq. (5),

X_p = R_p X_1 + t_p.
(5)

where R_p is a 3 × 3 orthogonal rotation matrix between the two coordinate systems; it is determined by the rotation vector r_p = [r_px, r_py, r_pz]^T, which is parallel to the rotation axis and whose magnitude equals the rotation angle. The relationship between R_p and r_p is given by the Rodrigues formula [16]. t_p = [t_px, t_py, t_pz]^T is a translation vector. The projector device coordinate X_p is transformed to the projector image plane by Eq. (6),

s_p [u_p, v_p, 1]^T = P_p X_p.
(6)

where

P_p = [[f_pu, 0, u_p0], [0, f_pv, v_p0], [0, 0, 1]],

with f_pu = f_p/d_pu and f_pv = f_p/d_pv; the parameters d_pu and d_pv are the pixel sizes along the horizontal and vertical directions of the projector image plane, respectively, which are known a priori, and s_p is a scale factor. By substituting Eq. (5) into Eq. (6), Eq. (7) is obtained:

s_p [u_p, v_p, 1]^T = P_p (R_p X_1 + t_p).
(7)
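The transformation chain of Eqs. (5)–(7) can be sketched in code. The following is a minimal pure-Python illustration (the function names `rodrigues` and `project` are ours, not from the paper) that converts a rotation vector to a rotation matrix via the Rodrigues formula and applies the pin-hole projection:

```python
import math

def rodrigues(r):
    """Rotation vector r (axis * angle) -> 3x3 rotation matrix (Rodrigues formula)."""
    theta = math.sqrt(sum(c * c for c in r))
    if theta < 1e-12:
        return [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    kx, ky, kz = (c / theta for c in r)
    c_, s_ = math.cos(theta), math.sin(theta)
    v = 1.0 - c_
    return [
        [c_ + kx * kx * v, kx * ky * v - kz * s_, kx * kz * v + ky * s_],
        [ky * kx * v + kz * s_, c_ + ky * ky * v, ky * kz * v - kx * s_],
        [kz * kx * v - ky * s_, kz * ky * v + kx * s_, c_ + kz * kz * v],
    ]

def project(X, r, t, fu, fv, u0, v0):
    """Pin-hole projection s*[u, v, 1]^T = P(R X + t): world point -> pixel (u, v)."""
    R = rodrigues(r)
    Xd = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    s = Xd[2]  # scale factor: depth along the device's optical axis
    return (fu * Xd[0] / s + u0, fv * Xd[1] / s + v0)
```

The same `project` sketch applies to both the projector (Eq. (7)) and the camera (Eq. (8)), with the corresponding focal and principal-point parameters.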

Similarly, X_1 can be transformed to the camera image coordinate (u_c, v_c) by Eq. (8),

s_c [u_c, v_c, 1]^T = P_c (R_c X_1 + t_c).
(8)

where

P_c = [[f_cu, 0, u_c0], [0, f_cv, v_c0], [0, 0, 1]],

with f_cu = f_c/d_cu and f_cv = f_c/d_cv; the parameters d_cu and d_cv are the pixel sizes along the horizontal and vertical directions of the camera image plane, respectively, which are known a priori, and s_c is a scale factor. R_c is the 3 × 3 orthogonal rotation matrix between the camera device coordinate system and the world coordinate system, and t_c = [t_cx, t_cy, t_cz]^T is the translation vector. By defining M_p = P_p R_p, t_bp = P_p t_p, M_c = P_c R_c, and t_bc = P_c t_c, Eq. (7) can be written as Eq. (9),

s_p [u_p, v_p, 1]^T = M_p X_1 + t_bp.
(9)

and Eq. (8) can be written as Eq. (10):

s_c [u_c, v_c, 1]^T = M_c X_1 + t_bc.
(10)

M_c and M_p are 3 × 3 matrices; for convenience of the deduction, let M_c = [m_c1, m_c2, m_c3]^T and M_p = [m_p1, m_p2, m_p3]^T, where m_ci and m_pi are the row vectors of M_c and M_p. Similarly, let t_bc = [t_bcx, t_bcy, t_bcz]^T and t_bp = [t_bpx, t_bpy, t_bpz]^T. Once the system is set up, the parameters R_c, R_p, t_p, t_c, P_p, and P_c are fixed, so the parameters M_c, M_p, t_bp, and t_bc are fixed too. From Eq. (9) and Eq. (10), Eq. (11) can be constructed to acquire the world coordinate X_1 when u_c, v_c, and v_p are known:

A X_1 = B.
(11)

where

A = [m_c1 - u_c m_c3; m_c2 - v_c m_c3; m_p2 - v_p m_p3],  B = [u_c t_bcz - t_bcx; v_c t_bcz - t_bcy; v_p t_bpz - t_bpy].

The coordinate X_1 lies in the plane S_1; substituting X_1 and Eq. (3) into Eq. (2) gives Eq. (12).
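Eq. (11) is a 3 × 3 linear system, so X_1 can be recovered, for instance, by Cramer's rule once the rows of A and B have been assembled. A minimal sketch (the helper name `solve3` is ours; in practice a numerical library solver would be used):

```python
def solve3(A, B):
    """Solve the 3x3 linear system A X = B (e.g. Eq. (11)) by Cramer's rule."""
    def det3(M):
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
                - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
                + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
    d = det3(A)
    if abs(d) < 1e-12:
        raise ValueError("degenerate geometry: rays do not define a unique point")
    X = []
    for k in range(3):
        # Replace column k of A with B and take the determinant ratio.
        Ak = [[B[i] if j == k else A[i][j] for j in range(3)] for i in range(3)]
        X.append(det3(Ak) / d)
    return X
```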
v_s^T X_1 + d_i = 0.
(12)

d_i is given by Eq. (13):

d_i = -v_s^T X_1.
(13)

By substituting the solution of Eq. (11) into Eq. (13), Eq. (14) can be obtained. The explicit expression of Eq. (14) is calculated by symbolic computation in MATLAB. In the calculation procedure, the parameters M_c, M_p, t_bp, t_bc, and v_s are constant, and u_c, v_c, v_p are variables. From the calculation result it can be found that d_i is the sum of three fractions whose denominators are the same, and both the numerator and denominator contain the variable terms v_c, u_c, v_p v_c, v_p u_c, v_p and a constant term. Therefore, Eq. (14) can be obtained:

d_i = (a_1 v_c + a_2 u_c + a_3 + a_4 v_p v_c + a_5 v_p u_c + a_6 v_p) / (b_1 v_c + b_2 u_c + b_3 + b_4 v_p v_c + b_5 v_p u_c + b_6 v_p).
(14)

where a_1, a_2, a_3, a_4, a_5, a_6, b_1, b_2, b_3, b_4, b_5, b_6 are constant parameters determined by the system parameters M_p, M_c, t_bp, t_bc and the reference plane parameters a, b, c. To simplify the description, define c_1 = (a_1 + a_4 v_p)/(b_1 + b_4 v_p), c_2 = (a_2 + a_5 v_p)/(a_1 + a_4 v_p), c_3 = (a_3 + a_6 v_p)/(a_1 + a_4 v_p), c_4 = (b_2 + b_5 v_p)/(b_1 + b_4 v_p), and c_5 = (b_3 + b_6 v_p)/(b_1 + b_4 v_p); then Eq. (14) turns into Eq. (15),

d_i = (k_c1 u_c + k_c2) / (v_c + c_4 u_c + c_5) + c_1.
(15)

where k_c1 = c_1 (c_2 - c_4) and k_c2 = c_1 (c_3 - c_5).

Next, we investigate, starting from Eq. (15), the relationship between the height and the image pixel vertical distance of the same projection points for projection ray tracing line stripe pattern systems. A line stripe pattern v_p = v_pgiven in the projector image, shown in Fig. 2(a), is projected onto the reference plane, and the image acquired by the camera is shown in Fig. 2(b).

Fig. 2 Line stripe pattern of the projector image and camera image. (a) A line stripe where v_p = v_pgiven on the projector image. (b) An image of the reference plane. (c) An image of the test plane. (d) The combination of the two images.

When the surface plane S_i is placed in front of the reference plane, the image acquired by the camera is as shown in Fig. 2(c). The two images are combined for illustration in Fig. 2(d). For the two lines, the values of v_p are the same, so the parameters c_1, c_2, c_3, c_4, c_5, k_c1, k_c2 are the same. By substituting v_cref into Eq. (15), the parameter d_ref can be represented by Eq. (16):

d_ref = (k_c1 u_c + k_c2) / (v_cref + c_4 u_c + c_5) + c_1.
(16)

Substituting Eq. (15) and Eq. (16) into Eq. (4) gives Eq. (17):

h_i = (k_n1 u_c + k_n2) / (v_c + c_4 u_c + c_5) - (k_n1 u_c + k_n2) / (v_cref + c_4 u_c + c_5).
(17)

where k_n1 = k_c1 / sqrt(a^2 + b^2 + c^2) and k_n2 = k_c2 / sqrt(a^2 + b^2 + c^2). By taking the reciprocal of Eq. (17), Eq. (18) can be obtained,

1/h_i = -[(v_cref + c_4 u_c + c_5)^2 / (k_n1 u_c + k_n2)] · 1/(v_c - v_cref) - (v_cref + c_4 u_c + c_5) / (k_n1 u_c + k_n2).
(18)

From Eq. (16), v_cref + c_4 u_c + c_5 = (k_c1 u_c + k_c2)/(d_ref - c_1); substituting this into Eq. (18) gives Eq. (19):

1/h_i = -[(k_c1 u_c + k_c2) sqrt(a^2 + b^2 + c^2) / (d_ref - c_1)^2] · 1/(v_c - v_cref) - sqrt(a^2 + b^2 + c^2) / (d_ref - c_1).
(19)

Define y = h_i^(-1) and x = (v_c - v_cref)^(-1), where v_c - v_cref is the vertical distance between the reference line and the test line, shown in Fig. 2(d) and represented by the double-row line. Eq. (19) then turns into Eq. (20), with the parameter p_1 given by Eq. (21),

y = p_1(u_c, v_c) x + p_2.
(20)

p_1(u_c, v_c) = k_b1 u_c + k_b2.
(21)

where k_b1 = -k_c1 (d_ref - c_1)^(-2) (a^2 + b^2 + c^2)^(0.5), k_b2 = -k_c2 (d_ref - c_1)^(-2) (a^2 + b^2 + c^2)^(0.5), and p_2 = -(d_ref - c_1)^(-1) (a^2 + b^2 + c^2)^(0.5) are constant for the stripe line. p_1(u_c, v_c) is the coefficient at the camera image pixel position (u_c, v_c); note that it does not contain the term v_c. This equation can be used conveniently in optical stripe pattern measuring systems because of the projection ray tracing.

After the deduction of the calibration equation, the calibration procedure for projection ray tracing line stripe profilometry can be described by the following steps:

  • step 1. Project the line stripe pattern onto the reference plane and acquire the image (denoted the reference image) with the camera.
  • step 2. Place N (N ≥ 2) parallel planes on top of the reference plane (denoted test plane 1, …, test plane N) and acquire an image of each (denoted test image 1, …, test image N).
  • step 3. Extract the line positions of the reference image and test images with an algorithm such as Steger's method [17].
  • step 4. For a horizontal position u_c of the test images, calculate v_dif = v_c - v_cref to form [v_dif1, v_dif2, ..., v_difN]; the heights between the test planes and the reference plane are [h_1, h_2, ..., h_N]. The parameters p_1, p_2 at horizontal position u_c can then be determined by linear fitting of Eq. (20).
  • step 5. Repeat step 4 for every u_c value to form a coefficient table.
  • step 6. Fit the (u_c, p_1) values acquired in step 5 by the linear relation of Eq. (21) to acquire the coefficients k_b1, k_b2; this reduces the noise influence (relative to a single point).
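Steps 4–6 amount to ordinary least-squares line fits. A minimal sketch of step 4 (the function names `fit_line` and `calibrate_column` are illustrative, not from the paper):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = p1 * x + p2 (Eq. (20)); returns (p1, p2)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    p1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    p2 = (sy - p1 * sx) / n
    return p1, p2

def calibrate_column(v_difs, heights):
    """Step 4: at one u_c, fit 1/h against 1/(v_c - v_cref) over the N test planes."""
    xs = [1.0 / d for d in v_difs]   # x = (v_c - v_cref)^(-1)
    ys = [1.0 / h for h in heights]  # y = h^(-1)
    return fit_line(xs, ys)
```

Step 6 reuses `fit_line` directly on the (u_c, p_1) pairs to obtain k_b1 and k_b2.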

After the system is calibrated and the parameters k_b1, k_b2, p_2 are determined, the image of an arbitrary specimen is acquired by the camera. After extraction of the line position, the height can be calculated by Eq. (22):

h(u_c, v_c) = {(k_b1 u_c + k_b2)(v_c - v_cref)^(-1) + p_2}^(-1).
(22)
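Eq. (22) itself is a one-line computation per pixel. A small sketch (illustrative names; the guard for the degenerate case v_c = v_cref, where the point lies on the reference plane, is our addition):

```python
def height(u_c, v_c, v_cref, kb1, kb2, p2):
    """Height from the calibrated line-stripe model, Eq. (22):
    h = 1 / ((kb1*u_c + kb2) / (v_c - v_cref) + p2)."""
    dv = v_c - v_cref            # vertical line displacement in pixels
    if dv == 0:
        return 0.0               # no displacement: point lies on the reference plane
    return 1.0 / ((kb1 * u_c + kb2) / dv + p2)
```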

2.2 Projection ray tracing phase shift measuring profilometry calibration

The above method can also be extended to phase shift measuring profilometry [18], where v_p is a variable. Since the parameters k_c1, k_c2, c_1 are then not constant, the coefficients of Eq. (20) at pixel position (u_c, v_c) turn into Eq. (23) and Eq. (24),

p_1(u_c, v_c, v_p) = [(k_p1a v_p + k_p1b)/(v_p + k_p0)^2] u_c + (k_p2a v_p + k_p2b)/(v_p + k_p0)^2.
(23)

p_2(u_c, v_c, v_p) = (k_pa v_p + k_pb)/(v_p + k_p0).
(24)

v_p can be obtained from the phase by Eq. (25),

v_p = φ/(2π).
(25)

The parameters k_p0, k_p1a, k_p1b, k_p2a, k_p2b, k_pa, k_pb are functions of a_1, a_2, a_3, a_4, a_5, a_6, b_1, b_2, b_3, b_4, b_5, b_6. The key point of the algorithm is to determine the reference line of a given phase value φ. For a given pixel position (u_c1, v_c1) of the test plane image (denoted image 1) with extracted phase φ_1, to calculate the height by Eq. (22), the pixel position (u_c1ref, v_c1ref) of the same phase φ_1 on the reference plane image (denoted image 2) must be found. We therefore discuss the relationship between v_c1 and v_p1. Define

D = [m_c1 - u_c1 m_c3; m_p2 - v_p1 m_p3; [a, b, c]],  E = [u_c1 t_bcz - t_bcx; v_p1 t_bpz - t_bpy; -d_1],

so the coordinates of point P_i can be obtained from the equation D X_i = E, which is constructed from Eq. (2), Eq. (9), and Eq. (10). The image point of X_i in the camera can then be calculated from Eq. (10). By substituting the solution of D X_i = E into Eq. (10), Eq. (26) can be obtained. The explicit expression of Eq. (26) is obtained by symbolic computation in MATLAB. In the calculation procedure, the parameters M_p, M_c, t_bp, t_bc, a, b, c, d_1 and u_c1 are constant; the only variable is v_p1. After simplification, the result is a fraction whose numerator and denominator each contain the single variable v_p1, so it can be written in the form of Eq. (26):

v_c1 = k_rc1/(v_p1 + k_rc0) + k_rc2.
(26)

The parameters k_rc1, k_rc2, k_rc0 are determined by the constant system parameters M_p, M_c, t_bp, t_bc, a, b, c, d_i and the known value u_c.

After the deduction of the calibration equation, the calibration procedure for projection ray tracing phase shift measuring profilometry can be described by the following steps:

  • step 1. Project the phase shift images onto the reference plane and test planes, and acquire the images with the camera.
  • step 2. Calculate and unwrap the phase for each plane (denoted reference phase, test phase 1, …, test phase M) [19].
  • step 3. For each pixel column of the reference phase, the variables v_c1ref and v_p1 are known, so a nonlinear fitting algorithm can be used to determine the parameters k_rc1, k_rc2, k_rc0 in Eq. (26).
  • step 4. Repeat step 3 for each test phase.
  • step 5. For one point (u_ck, v_cki) in test plane i, its phase is φ_t and the corresponding projection image position is v_p1. Using the parameters obtained in steps 3 and 4, the reference pixel position (u_ck, v_ckref) and the corresponding pixel positions (u_ck, v_ck1), …, (u_ck, v_ckN) of the same phase on the other test planes can be determined through Eq. (26), and the parameters p_1, p_2 can be determined by linear fitting of Eq. (20).
  • step 6. Repeat step 5 for each point of test plane i. The parameters p_1(u_c1, v_c1, v_p1), p_1(u_c2, v_c2, v_p2), …, p_1(u_cM, v_cM, v_pM) and p_2(u_c1, v_c1, v_p1), p_2(u_c2, v_c2, v_p2), …, p_2(u_cM, v_cM, v_pM) are acquired, and u_c1, v_c1, v_p1, …, u_cM, v_cM, v_pM are known, so least-squares methods can be used to determine the parameters k_p0, k_p1a, k_p1b, k_p2a, k_p2b, k_pa, k_pb in Eq. (23) and Eq. (24).
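The fit of Eq. (26) in step 3 can be linearized: multiplying v_c = k_rc1/(v_p + k_rc0) + k_rc2 through by (v_p + k_rc0) gives v_c v_p = -k_rc0 v_c + k_rc2 v_p + (k_rc1 + k_rc2 k_rc0), which is linear in three unknown combinations. A sketch using three exact samples (this linearization trick is ours, not stated in the paper, which only calls for a generic nonlinear fit; the function name is illustrative):

```python
def fit_reference_line(samples):
    """Fit v_c = k_rc1/(v_p + k_rc0) + k_rc2 (Eq. (26)) to three (v_p, v_c) samples.
    Linearized form: v_c*v_p = -k_rc0*v_c + k_rc2*v_p + (k_rc1 + k_rc2*k_rc0)."""
    A = [[vc, vp, 1.0] for vp, vc in samples]
    z = [vc * vp for vp, vc in samples]
    def det3(M):
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
                - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
                + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
    d = det3(A)
    # Cramer's rule for the three linear coefficients alpha, beta, gamma.
    alpha, beta, gamma = (
        det3([[z[i] if j == k else A[i][j] for j in range(3)] for i in range(3)]) / d
        for k in range(3))
    k_rc0 = -alpha
    k_rc2 = beta
    k_rc1 = gamma - k_rc2 * k_rc0
    return k_rc1, k_rc0, k_rc2
```

With more than three samples, the same linearized model can be fitted by least squares instead of solved exactly.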

After the calibration is done, the parameters can be used in the measuring procedure. The phase image of the specimen is computed. For each pixel (u_c, v_c), its phase φ is converted to v_p by Eq. (25), and the reference pixel position is acquired by Eq. (26). p_1 and p_2 are determined by Eq. (23) and Eq. (24), so the height can be determined by Eq. (27):

h = {p_1(u_c, v_c, v_p)(v_c - v_cref)^(-1) + p_2(u_c, v_c, v_p)}^(-1).
(27)

2.3 Image ray tracing phase shift measuring profilometry calibration

From the description in Section 2.2, it can be seen that the calibration and calculation procedure for phase shift measuring profilometry is complicated and inconvenient, particularly in finding the reference position of a given phase. If the image ray tracing method is used instead, the phase shift information at the same pixel position can be used for calibration, which is convenient to compute. Rewrite Eq. (14) as Eq. (28),

d_i = t_1 + t_2/(v_p + t_3).
(28)

where t_1 = (a_4 v_c + a_5 u_c + a_6)(b_4 v_c + b_5 u_c + b_6)^(-1), t_2 = {(a_1 v_c + a_2 u_c + a_3)(b_4 v_c + b_5 u_c + b_6) - (b_1 v_c + b_2 u_c + b_3)(a_4 v_c + a_5 u_c + a_6)}(b_4 v_c + b_5 u_c + b_6)^(-2), and t_3 = (b_1 v_c + b_2 u_c + b_3)(b_4 v_c + b_5 u_c + b_6)^(-1). For the same pixel position (u_c, v_c) of the camera image plane, the parameters t_1, t_2, t_3 are constant, so Eq. (19) turns into Eq. (29):

1/h_i = -sqrt(a^2 + b^2 + c^2)/(d_ref - t_1) - [sqrt(a^2 + b^2 + c^2) t_2/(d_ref - t_1)^2] · 1/(v_p - v_pref).
(29)

This equation is equivalent to Eq. (20) with p_1 = -sqrt(a^2 + b^2 + c^2) t_2 (d_ref - t_1)^(-2) and p_2 = -sqrt(a^2 + b^2 + c^2)(d_ref - t_1)^(-1). v_p and v_pref can be obtained from the phase by Eq. (25). For each point, the reciprocal of the height and the reciprocal of the phase shift satisfy Eq. (29).

The calibration procedure of image ray tracing phase shift measuring profilometry can then be described by the following steps:

  • step 1. Project the phase shift images onto the reference plane and test planes, and acquire the images with the camera.
  • step 2. Calculate and unwrap the phase for each plane (denoted reference phase, test phase 1, …, test phase M).
  • step 3. For pixel position (u_ck, v_ck), the reference phase is φ_ref and its corresponding v_p value is v_pref; for the test planes, the v_p values are v_p1, v_p2, …, v_pM. The heights h_i are known and (v_pi - v_pref)^(-1) is computed, so the parameters p_1, p_2 are determined by linear fitting of Eq. (20).
  • step 4. Repeat step 3 for each pixel position of the reference image to form the coefficient table for p_1, p_2.

After calibration, the height can be calculated by Eq. (30):

h_i = {p_1(u_c, v_c)(v_pi - v_pref)^(-1) + p_2(u_c, v_c)}^(-1).
(30)
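A sketch of the resulting per-pixel measurement, combining Eq. (25) and Eq. (30) (function names are illustrative; the guard for the degenerate zero-shift case is our addition, not discussed in the paper):

```python
import math

def phase_to_vp(phi):
    """Eq. (25): map an unwrapped phase to the projector coordinate v_p."""
    return phi / (2.0 * math.pi)

def height_from_phase(phi, phi_ref, p1, p2):
    """Eq. (30): per-pixel height from the phase shift, given the calibrated
    coefficients p1, p2 stored for this pixel in the coefficient table."""
    dvp = phase_to_vp(phi) - phase_to_vp(phi_ref)
    if dvp == 0:
        return 0.0               # no phase shift: the point is on the reference plane
    return 1.0 / (p1 / dvp + p2)
```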

2.4 Lens distortion compensation

All the above calculations are based on ideal models of the projector and camera lenses, but in practical systems the lens distortions of the camera and the projector should be considered to achieve high measurement accuracy. Lens distortion usually consists of radial and tangential components, modeled by Eq. (31) [11]:

u_d = u + 2 P_1 u v + P_2 (r^2 + 2 u^2) + K_1 u r^2 + K_2 u r^4 + K_3 u r^6,
v_d = v + 2 P_2 u v + P_1 (r^2 + 2 v^2) + K_1 v r^2 + K_2 v r^4 + K_3 v r^6.
(31)

where (u, v) is the ideal point, (u_d, v_d) is the distorted point, r = sqrt(u^2 + v^2), K_1, K_2, K_3 are the radial distortion coefficients, and P_1, P_2 are the tangential distortion coefficients. Camera calibration procedures [11–13] can be used to acquire the distortion coefficients of the camera lens, and the projector lens distortion can be calibrated by the modified calibration procedure of Zhang et al. [14].

After the distortion coefficients have been calibrated, an iterative method is used to calculate the ideal point (u, v) from (u_d, v_d) for the camera image [15]. For the projector image, the ideal point (u, v) is the known value, and the distorted point (u_d, v_d) is calculated by Eq. (31); when (u_d, v_d) is projected onto the test object, the distortion of the projector lens approximately cancels, so a nearly ideal point is obtained on the test object. In the method proposed in this work, all images acquired by the camera are compensated before being used in the calibration procedure, and all projector images are compensated before being projected.

3. Experiments

We conduct experiments to verify the proposed method. The line stripe pattern optical triangular measuring system is set up as shown in Fig. 1. The reference plane, the camera, and the projector are fixed after setup. Blocks of fixed thickness are placed on the reference plane to form the parallel test planes. The image of the reference plane acquired by the camera is shown in Fig. 3(a), one of the test images is shown in Fig. 3(b), and Fig. 3(c) shows the combination of the reference and test line stripes extracted from the images by the Steger line position extraction algorithm.

Fig. 3 Experimental results for the line stripe pattern profilometry. (a) Camera image of the reference plane. (b) Camera image of one of the test planes. (c) The combination image of the extracted test and reference lines. (d) Plot of parameter p_1 versus u_c. (e) Measurement result of a given block. (f) Error of (e).

By following the procedure described in Section 2.1, the parameters p_1, p_2 of each point can be determined. Figure 3(d) shows the plot of p_1 versus u_c. From the figure, it can be seen that p_1 is linear in u_c, so the experimental result agrees with Eq. (21). After the calibration, a block with a thickness of 162.3 μm is measured; the results are shown in Fig. 3(e) and the errors in Fig. 3(f). The mean error is 0.03 μm and the root mean square error is 0.21 μm. In this experiment, the camera and projector are in arbitrary positions and directions, and the reference plane is in an arbitrary direction too.

A phase shift measuring profilometry experiment is conducted to verify the proposed image ray tracing calibration method. The three-step phase shift algorithm is used [18]; the projector is an NEC VT491+ digital projector and the camera is a SONY A-300. Five test planes parallel to the reference plane are used, at distances of 200 mm, 400 mm, 600 mm, 800 mm, and 1000 mm from the reference plane with a distance accuracy of ±0.05 mm; the camera and projector are focused at a distance of 500 mm from the reference plane. The system is calibrated by the procedure described in Section 2.3, and a coefficient table for each pixel position (u_c, v_c) is acquired. These parameters are used to calculate the height of a given plane whose distance from the reference plane is 500 mm. The measured distance is shown in Fig. 4(a) and the errors in Fig. 4(b); the mean distance of the measured plane is 499.92 mm, and the root mean square error is 0.12 mm.

Fig. 4 The measured result of the given plane. (a) Measured result. (b) The error of the measured result.

A face experiment is also conducted to verify the method on complicated objects. Figure 5(a) shows the 2D form of the measured face, with the height shown as gray values, and Fig. 5(b) shows the 3D mesh grid of the measured face.

Fig. 5 Measured result of a face. (a) 2D view of the face; the gray value represents the height, indicated by the color bar on the right side of Fig. 5(b). (b) 3D mesh grid of the measured face.

4. Conclusion

In this work, we propose a generalized calibration method for optical triangular profilometry by modeling the projector and camera as pin-hole cameras and the reference (test) planes as parallel planes. For projection ray tracing line stripe measuring systems, the reciprocal of the height and the reciprocal of the pixel position vertical distance are linearly related; for image ray tracing phase shift measuring profilometry, the reciprocal of the height and the reciprocal of the phase shift (change) at the same pixel position are linearly related. So for both the projection and image ray tracing systems, a simple linear calibration equation with few coefficients is obtained, which makes the measurement computation fast. The distortion effects of the images should be compensated before the images are used in the proposed method. Experiments on line stripe pattern and phase shift measuring profilometry are conducted to verify the proposed methods.

The calibration equations we obtained are consistent with the results of the geometry approaches. While the accuracy of the calibration is not significantly improved in our work compared with the geometry approaches [1–10], our work is the first attempt to use matrices in the reference-plane approaches. With the use of matrices, our model has several important advantages over the traditional models derived from geometry: first, the deduction of the calibration is more convenient and simple, giving results that are easy to understand; second, it supports complicated rotations and translations of the system setup; third, the same linear calibration equation is acquired for both projection and image ray tracing systems.

There are some extended results of this work: for image ray tracing phase shift measuring profilometry, the coefficients can also be given by expressions in the parameters a_1, a_2, a_3, a_4, a_5, a_6, b_1, b_2, b_3, b_4, b_5, b_6, which are determined by the system setup. The coefficient table and the (u_c, v_c) values can be used to determine these parameters by least-squares methods [7–9], but this is complicated and unnecessary in practice.

Acknowledgement

The authors wish to express their thanks to Mr. Ho Simon Wang at the HUST Academic Writing Center for improving the structure and presentation of the paper. We also wish to thank Changhong Zhu at HUST for his helpful discussions. We also extend our thanks to the reviewers for their helpful and constructive comments.

References and links

1. W. S. Zhou and X. Y. Su, “A direct mapping algorithm for phase-measuring profilometry,” J. Mod. Opt. 41(1), 89–94 (1994). [CrossRef]

2. L. Chen and C. Quan, “Fringe projection profilometry with nonparallel illumination: a least-squares approach,” Opt. Lett. 30(16), 2101–2103 (2005). [CrossRef] [PubMed]

3. L. Chen and C. J. Tay, “Carrier phase component removal: a generalized least-square approach,” J. Opt. Soc. Am. A 23(2), 435–443 (2006). [CrossRef]

4. H. Guo, H. He, Y. Yu, and M. Chen, “Least-squares calibration method for fringe projection profilometry,” Opt. Eng. 44(3), 033603 (2005). [CrossRef]

5. H. Guo, M. Chen, and P. Zheng, “Least-squares fitting of carrier phase distribution by using a rational function in fringe projection profilometry,” Opt. Lett. 31(24), 3588–3590 (2006). [CrossRef] [PubMed]

6. B. A. Rajoub, M. J. Lalor, D. R. Burton, and S. A. Karout, “A new model for measuring object shape using non-collimated fringe-pattern projections,” J. Opt. A, Pure Appl. Opt. 9(6), S66–S75 (2007). [CrossRef]

7. Z. Wang, H. Du, and H. Bi, “Out-of-plane shape determination in generalized fringe projection profilometry,” Opt. Express 14(25), 12122–12133 (2006). [CrossRef] [PubMed]

8. H. Du and Z. Wang, “Three-dimensional shape measurement with an arbitrarily arranged fringe projection profilometry system,” Opt. Lett. 32(16), 2438–2440 (2007). [CrossRef] [PubMed]

9. Z. Wang, H. Du, S. Park, and H. Xie, “Three-dimensional shape measurement with a fast and accurate approach,” Appl. Opt. 48(6), 1052–1061 (2009). [CrossRef]

10. A. Asundi and Z. Wensen, “Unified calibration technique and its applications in optical triangular profilometry,” Appl. Opt. 38(16), 3556–3561 (1999). [CrossRef] [PubMed]

11. J. Heikkila and O. Silven, “Calibration procedure for short focal length off-the-shelf CCD cameras,” in Proceedings of the 13th International Conference on Pattern Recognition (Vienna, Austria, 1996), pp. 166–170.

12. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000). [CrossRef]

13. S. Cui, X. Zhu, W. Wang, and Y. Xie, “Calibration of a laser galvanometric scanning system by adapting a camera model,” Appl. Opt. 48(14), 2632–2637 (2009). [CrossRef] [PubMed]

14. S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006). [CrossRef]

15. R. L. Saenz, T. Bothe, and W. P. Juptner, “Accurate procedure for the calibration of a structured light system,” Opt. Eng. 43, 467–471 (2004).

16. O. Faugeras, Three-Dimensional Computer Vision: A Geometric Viewpoint (MIT Press, 1993).

17. C. Steger, “An unbiased detector of curvilinear structures,” IEEE Trans. Pattern Anal. Mach. Intell. 20(2), 113–125 (1998). [CrossRef]

18. P. S. Huang and S. Zhang, “Fast three-step phase-shifting algorithm,” Appl. Opt. 45(21), 5086–5091 (2006). [CrossRef] [PubMed]

19. J. Meneses, T. Gharbi, and P. Humbert, “Phase-unwrapping algorithm for images with high noise content based on a local histogram,” Appl. Opt. 44(7), 1207–1215 (2005). [CrossRef] [PubMed]

OCIS Codes
(120.2830) Instrumentation, measurement, and metrology : Height measurements
(120.5050) Instrumentation, measurement, and metrology : Phase measurement
(120.6650) Instrumentation, measurement, and metrology : Surface measurements, figure

ToC Category:
Instrumentation, Measurement, and Metrology

History
Original Manuscript: September 1, 2009
Revised Manuscript: October 21, 2009
Manuscript Accepted: October 22, 2009
Published: October 28, 2009

Citation
Suochao Cui and Xiao Zhu, "A generalized reference-plane-based calibration method in optical triangular profilometry," Opt. Express 17, 20735-20746 (2009)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-17-23-20735




