
Hyper-accurate flexible calibration technique for fringe-projection-based three-dimensional imaging

Minh Vo, Zhaoyang Wang, Bing Pan, and Tongyan Pan


Optics Express, Vol. 20, Issue 15, pp. 16926-16941 (2012)
http://dx.doi.org/10.1364/OE.20.016926



Abstract

The fringe-projection-based (FPB) three-dimensional (3D) imaging technique has become one of the most prevalent methods for 3D shape measurement and 3D image acquisition, and an essential component of the technique is the calibration process. This paper presents a framework for hyper-accurate system calibration with a flexible setup and inexpensive hardware. Owing to a crucial improvement in the camera calibration technique, an enhanced governing equation for 3D shape determination, and an advanced flexible system calibration technique, together with some practical considerations on accurate fringe phase retrieval, the novel FPB 3D imaging technique can achieve a relative measurement accuracy of 0.010%. The validity and practicality are verified by both simulation and experiments.

© 2012 OSA

1. Introduction

The fringe-projection-based (FPB) three-dimensional (3D) imaging technique has emerged as one of the most reliable methods for acquiring the 3D images of objects in real applications because of its considerable advantages, such as low cost, easy implementation, high accuracy, and full-field imaging. The fundamental principle of the technique is to project a known illumination pattern onto the objects of interest; the 3D shape information can then be extracted from the observed deformation of the pattern.

In order to achieve accurate 3D imaging, the FPB system must be calibrated prior to its application. The notable challenges faced by the technique involve the flexibility in the scale of the field of view and the ability to accurately determine the 3D information of the objects of interest. In recent years, calibration approaches based on a governing equation that relates the height or depth of the object surface to the phase map of the projection fringes at each point have been carefully investigated [1–3]. These methods can be easily implemented by employing a number of gage blocks of different heights to calibrate the system and can yield very accurate results. The challenge lies in manufacturing high-precision gage blocks for various applications, because the block sizes must be changed according to the field size of imaging, which makes such calibration techniques impractical. A different type of calibration approach is to treat the projector as a reversed camera and use the camera calibration scheme to calibrate both camera and projector [4]. With projection fringes as a tool to establish the correspondence, the 3D coordinates of points can be determined by mapping the point locations in the camera plane to those in the projector plane. A drawback of this technique is that the calibration of a projector is error-prone and the result of stereo vision is inherently noisy [5].

The key to our approach is to take advantage of the flexibility and high accuracy of the classical planar camera calibration technique [6]. As technology evolves, the existing camera calibration methods typically fail to satisfy the ever-increasing demand for higher imaging accuracy. The relevant literature has identified two major sources of error that affect camera calibration results: the imperfection of the calibration target board and the uncertainty in locating its target control points [7,8]. This paper presents a crucial improvement in geometric camera calibration that overcomes these two problems without losing the advantages of the conventional techniques. The proposed technique uses a sophisticated lens distortion model that takes radial, tangential, and prism distortion into account, and achieves precise localization of the target control points with a novel refinement process based on the frontal image correlation concept; in addition, the defects of the calibration board can be compensated. The proposed camera calibration technique yields high accuracy in practice, with a re-projection error smaller than 0.01 pixels.

Along with the two aforementioned improvements, a novel governing equation for 3D shape determination is proposed. The theoretically derived governing equation for height or depth determination [1] is modified algebraically to take account of both the camera and projection lens distortion, without the need for projector calibration, and to eliminate other nuisance effects. The associated system calibration procedure can be performed with a process similar to the commonly used camera calibration [6], except that phase-shifted fringe patterns are projected onto the calibration board. The proposed FPB 3D imaging technique is capable of providing a relative accuracy of 0.010% over the entire field of view using low-cost, off-the-shelf hardware, where the relative accuracy is defined as the ratio of the out-of-plane imaging error to the in-plane dimension.

2. Camera calibration

2.1. Camera model and traditional calibration technique

The camera is typically described by a pinhole model. With such a model, the relation between the 3D world coordinates of a control point M = {Xw, Yw, Zw}T and its corresponding location m = {u, v}T in the image plane is given by:
$$ s \begin{Bmatrix} \mathbf{m} \\ 1 \end{Bmatrix} = \mathbf{A}\,[\mathbf{R}\ \ \mathbf{T}] \begin{Bmatrix} \mathbf{M} \\ 1 \end{Bmatrix}, \qquad \mathbf{A} = \begin{bmatrix} \alpha & \gamma & u_0 \\ 0 & \beta & v_0 \\ 0 & 0 & 1 \end{bmatrix} $$
(1)
where s is a scale factor; A is the intrinsic matrix, with α and β the horizontal and vertical focal lengths in pixel units, γ the skew factor, and (u0, v0) the coordinates of the principal point; R and T are the extrinsic parameters that denote the rotation and translation relating the world coordinate system to the camera coordinate system. Because the camera lens usually exhibits nonlinear optical distortion, Eq. (1) is insufficient for accurate camera calibration. Although some very complex models exist [14], in practice they induce instability rather than accuracy because of the high-order distortion components. In this paper, the lens distortion is modeled by
$$ \begin{aligned} x'_{cn} &= (1 + a_0 r^2 + a_1 r^4 + a_2 r^6)\,x_{cn} + (p_0 + r^2 p_2)(r^2 + 2x_{cn}^2) + 2(p_1 + r^2 p_3)\,w + s_0 r^2 + s_2 r^4 \\ y'_{cn} &= (1 + a_0 r^2 + a_1 r^4 + a_2 r^6)\,y_{cn} + (p_1 + r^2 p_3)(r^2 + 2y_{cn}^2) + 2(p_0 + r^2 p_2)\,w + s_1 r^2 + s_3 r^4 \\ r^2 &= x_{cn}^2 + y_{cn}^2, \qquad w = x_{cn} y_{cn} \end{aligned} $$
(2)
where (a0, a1, a2), (p0, p1, p2, p3), and (s0, s1, s2, s3) represent the radial, tangential, and prism distortion coefficients, respectively. In Eq. (2), (xcn, ycn) denotes the normalized location of a distortion-free point (u, v), and (x′cn, y′cn) is the normalized location of the corresponding distorted point (u′, v′). Their relations are as follows:
$$ \begin{Bmatrix} x_{cn} \\ y_{cn} \\ 1 \end{Bmatrix} = \mathbf{A}^{-1} \begin{Bmatrix} u \\ v \\ 1 \end{Bmatrix}, \qquad \begin{Bmatrix} x'_{cn} \\ y'_{cn} \\ 1 \end{Bmatrix} = \mathbf{A}^{-1} \begin{Bmatrix} u' \\ v' \\ 1 \end{Bmatrix} $$
(3)
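
For concreteness, Eq. (2) can be evaluated directly; the following numpy sketch mirrors the model term by term (the function name and array layout are illustrative, not from the paper):

```python
import numpy as np

def distort(xcn, ycn, a, p, s):
    """Apply the lens distortion model of Eq. (2) to normalized points.

    a = (a0, a1, a2): radial; p = (p0, p1, p2, p3): tangential;
    s = (s0, s1, s2, s3): prism distortion coefficients.
    """
    r2 = xcn**2 + ycn**2                       # r^2 of Eq. (2)
    w = xcn * ycn
    radial = 1 + a[0]*r2 + a[1]*r2**2 + a[2]*r2**3
    x_d = (radial*xcn + (p[0] + r2*p[2])*(r2 + 2*xcn**2)
           + 2*(p[1] + r2*p[3])*w + s[0]*r2 + s[2]*r2**2)
    y_d = (radial*ycn + (p[1] + r2*p[3])*(r2 + 2*ycn**2)
           + 2*(p[0] + r2*p[2])*w + s[1]*r2 + s[3]*r2**2)
    return x_d, y_d
```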

Given the locations of the control points in the world coordinate system as Mj and in the image plane as mij, where i denotes the ith of the k images (i = 1, 2,..., k) and j denotes the jth of the l control points (j = 1, 2,..., l), the calibration process involves a nonlinear optimization with the cost function defined as:
$$ S = \sum_{i=1}^{k} \sum_{j=1}^{l} \left\| \mathbf{m}_{ij} - P(\mathbf{A}, \varphi, \rho_i, \mathbf{T}_i, \mathbf{M}_j) \right\|^2 $$
(4)
where φ = (a0, a1, a2, p0, p1, p2, p3, s0, s1, s2, s3), ρ is the Rodrigues vector representation of R [6], and P denotes the projection of the control points onto the image planes according to Eqs. (1) and (2). The optimization is performed using the Levenberg-Marquardt (L-M) algorithm.
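
As a sketch of how this optimization can be set up, the residual of Eq. (4) can be fed to scipy's Levenberg-Marquardt solver. Here `distort` is the helper sketched after Eq. (3), while `unpack`, which splits the flat parameter vector, is a hypothetical, layout-dependent helper:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(A, dist, rho, T, M):
    """Project world points M (n, 3) through Eqs. (1)-(2); dist packs the
    (a, p, s) coefficients consumed by the distort() sketch above."""
    Xc = Rotation.from_rotvec(rho).apply(M) + T        # world -> camera frame
    xcn, ycn = Xc[:, 0]/Xc[:, 2], Xc[:, 1]/Xc[:, 2]    # normalized coordinates
    xd, yd = distort(xcn, ycn, dist[:3], dist[3:7], dist[7:])
    u = A[0, 0]*xd + A[0, 1]*yd + A[0, 2]              # alpha*x + gamma*y + u0
    v = A[1, 1]*yd + A[1, 2]                           # beta*y + v0
    return np.column_stack([u, v])

def residuals(x, m_obs, M, unpack):
    """Stacked reprojection residuals of Eq. (4); unpack() splits the flat
    parameter vector into (A, dist, rhos, Ts) and is layout-dependent."""
    A, dist, rhos, Ts = unpack(x)
    return np.concatenate([(project(A, dist, r, t, M) - m).ravel()
                           for r, t, m in zip(rhos, Ts, m_obs)])

# Levenberg-Marquardt refinement from an initial guess x0:
# sol = least_squares(residuals, x0, method='lm', args=(m_obs, M, unpack))
```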

2.2. Position estimation and refinement of the calibration target control points

Because a planar calibration target board is employed, the world coordinate system is placed on the board with its surface as the XY plane for simplicity. Intrinsically, the aforementioned camera calibration method requires the control points to be perfectly positioned on the board. However, the actual positions of these points always contain certain errors due to the inevitable inaccuracy and imprecision of the calibration board fabrication [7,14]. To cope with this problem, the world coordinates of the control points are treated as unknowns, and they are determined together with the camera parameters using the so-called bundle adjustment process [15]. With this scheme, the optimization requires a geometric constraint in which three non-collinear control points are selected to form a plane. Specifically, the planarity constraint sets the world coordinate Z = 0 for each of the three points and requires the distance between any two of the three points to be accurately known to provide the scale information. Although the three constraint points can be selected at random, placing them at different corners of the calibration target board is helpful for determining the orientation of the target.

Fig. 1 The conversion from raw image (left) to frontal image (middle) enables the correlation with the ring templates (right).

The basic principle of the new approach is that the circle centers can be detected more accurately in the frontal images than in the oblique raw images. The hyper-accurate detection of the control points directly leads to greater accuracy in the recovery of the camera parameters and the actual positions of these points. The procedure of the camera calibration can be summarized as follows (a sketch of the frontal-view conversion used in step 3 is given after the list):
  1. Detect the control points in the raw images using the edge detection and ellipse fitting method.
  2. Optimize the camera parameters and world coordinates of the control points using the L-M algorithm.
  3. Obtain the frontal images and detect the control points using the digital image correlation (DIC) method.
  4. Project the detected control points back onto the raw images.
  5. Re-optimize the camera parameters together with the world coordinates of the control points.
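
A minimal sketch of the frontal-view conversion in step 3, assuming OpenCV and a homography H built from the current estimates of A, R, and T; lens distortion is omitted here for brevity, and the subsequent ring-template correlation of the DIC step is not shown:

```python
import cv2
import numpy as np

def frontal_image(raw_img, H, board_size_px):
    """Warp a raw view to a frontal view of the calibration board.

    H is the 3x3 homography mapping frontal-board pixel coordinates to
    raw-image pixels; WARP_INVERSE_MAP makes OpenCV pull raw pixels back
    onto the frontal board grid.
    """
    return cv2.warpPerspective(raw_img, H, board_size_px,
                               flags=cv2.INTER_CUBIC | cv2.WARP_INVERSE_MAP)
```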

3. Projection fringes

3.1. Gamma correction

In the FPB 3D imaging technique, because the 3D shape information is encoded into the fringe patterns projected onto the object surfaces, the full-field fringe information must be retrieved from captured images in order to get the 3D coordinate data. This can be achieved by using sinusoidal fringe patterns as well as the phase-shifting algorithm. The initial fringe pattern (e.g., vertical fringes) is typically generated with the following sinusoidal function:
$$ I_0^i(x, y) = I_c \left[ 1 + \cos\!\left( 2\pi f x / W + \delta_i \right) \right] $$
(7)
where I0 is the intensity of the pattern at pixel coordinate (x, y), i indicates the ith image, W is the width of the pattern, f is the fringe frequency (i.e., the number of fringes in the image), δ is the phase-shifting amount, and Ic is a constant denoting the intensity modulation amplitude. The corresponding fringe pattern in the captured image can usually be expressed as
$$ I_i(u, v) = a(u, v) + \sum_{j=1}^{p} b_j(u, v) \cos\{ j[\phi(u, v) + \delta_i] \} $$
(8)
where (u, v) denotes a pixel point in the captured image, a is the background intensity, bj is the intensity modulation amplitude of the jth-order harmonic term, ϕ is the fringe phase, and p is the highest significant harmonic order of the captured fringes. In the equation, the high-order harmonics are incorporated to account for the nonlinear luminance distortion existing in practice. When an N-step uniform phase-shifting scheme is employed (N ≥ p + 2), the full-field wrapped phase distribution can be easily determined by:
$$ \phi^{w}(u, v) = \arctan \frac{\sum_{i=1}^{N} \sin(\delta_i)\, I_i(u, v)}{\sum_{i=1}^{N} \cos(\delta_i)\, I_i(u, v)} $$
(9)
where $\delta_i = \frac{2\pi(i-1)}{N}$ and the superscript w denotes the wrapped phase. In reality, p can be large and very difficult to determine. However, it can be deduced from Eq. (9) that the phase determination process benefits from a larger number of phase-shifting steps, as higher harmonic orders can then be handled.
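
A direct numpy evaluation of Eq. (9) might look as follows (the array shapes are assumptions for illustration):

```python
import numpy as np

def wrapped_phase(images):
    """Wrapped phase of Eq. (9) from N uniformly shifted captures.

    images : array of shape (N, H, W); delta_i = 2*pi*(i-1)/N.
    """
    N = len(images)
    deltas = 2*np.pi*np.arange(N)/N
    num = np.tensordot(np.sin(deltas), images, axes=1)   # sum_i sin(d_i) I_i
    den = np.tensordot(np.cos(deltas), images, axes=1)   # sum_i cos(d_i) I_i
    return np.arctan2(num, den)                          # wrapped into (-pi, pi]
```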

The nonlinear intensity distortion mainly originates from the gamma encoding and decoding process applied by the camera and projector system. Generally, this effect can be described as:
$$ \hat{I} = \hat{I}_0^{\,\gamma_0} $$
(10)
where γ0 is the gamma value of the system, and Î and Î0 are the normalized intensities of the captured and computer-generated initial images, respectively. From the equation, it is evident that an effective yet simple method to cope with the gamma problem is to pre-encode the initial computer-generated image. Specifically, by applying an appropriate gamma encoding, 1/γp, to the generation of the ideal image I0, the gamma effect can be attenuated to achieve higher accuracy in the phase detection and the eventual 3D imaging. With this handling, Eq. (10) is rewritten as:
$$ \hat{I} = \left( \hat{I}_0^{\,1/\gamma_p} \right)^{\gamma_0} = \hat{I}_0^{\,\gamma_0/\gamma_p} $$
(11)
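
A short sketch of how a pre-encoded pattern per Eqs. (7) and (11) could be generated (the function and parameter names are illustrative):

```python
import numpy as np

def fringe_pattern(W, H, f, delta, gamma_p):
    """Normalized vertical fringe pattern of Eq. (7), pre-encoded by 1/gamma_p."""
    x = np.arange(W)
    row = 0.5*(1 + np.cos(2*np.pi*f*x/W + delta))   # normalized Eq. (7)
    return np.tile(row**(1.0/gamma_p), (H, 1))      # gamma pre-encoding, Eq. (11)
```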

Considering the various uncertainties encountered in real measurements, the best γp can be determined by the following approach:
  1. With a plate as the target, use a relatively large-step phase-shifting scheme, e.g., 20 steps, to determine the phase distribution ϕrw(u,v) without gamma pre-encoding (γp = 1). Experimental evidence shows that the highest influential harmonic order in typical measurements is generally smaller than six [9], so the large number of phase-shifted images (i.e., 20) yields a reference phase distribution free of nonlinear distortion.
  2. Apply a series of different gamma values, such as γp = {1.5, 1.7,..., 3.3, 3.5}, to the computer-generated fringe patterns, and use the three-step phase-shifting method to determine the phase distributions ϕdw(u,v). Then calculate the sum of squared phase errors $e = \sum_{u,v} \left[ \phi_d^w(u,v) - \phi_r^w(u,v) \right]^2$. The three-step phase-shifting algorithm is adopted here because it is the most sensitive to nonlinear intensity distortion.
  3. Use curve fitting to find the γp that gives the smallest phase detection error, as sketched below.
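
A sketch of steps 2 and 3 in numpy; `three_step_phase(gamma_p)` stands in for the actual capture-and-demodulate routine and `phi_ref` for the 20-step reference phase, both hypothetical placeholders:

```python
import numpy as np

gammas = np.arange(1.5, 3.6, 0.2)                     # candidate gamma_p values
errors = [np.sum((three_step_phase(g) - phi_ref)**2)  # e of step 2
          for g in gammas]
c2, c1, c0 = np.polyfit(gammas, errors, 2)            # quadratic fit (step 3)
gamma_best = -c1 / (2*c2)                             # vertex = minimum-error gamma
```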

3.2. Multi-frequency fringes

In reality, because Eq. (11) is a simple model and the true γp is hard to determine exactly, the implementation of FPB 3D imaging can involve both the presented gamma correction scheme and a multi-step phase-shifting technique. In practice, the widely used four-step phase-shifting technique can be employed in the FPB 3D imaging; however, a larger number of phase-shifting steps, such as eight, should be adopted for the highest-frequency (i.e., the working frequency) fringe patterns to eliminate the nonlinear intensity distortion, reduce noise, and obtain phase distributions with high accuracy.
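
The multi-frequency phases are combined by temporal phase unwrapping; since the paper's Eq. (12) is not reproduced in this excerpt, the rule below is the standard two-frequency version rather than necessarily the authors' exact form:

```python
import numpy as np

def unwrap_with_lower_frequency(phi_high_w, phi_low_u, ratio):
    """Remove the 2*pi ambiguity of a wrapped high-frequency phase using an
    already-unwrapped lower-frequency phase; ratio = f_high / f_low."""
    k = np.round((ratio*phi_low_u - phi_high_w) / (2*np.pi))  # fringe order
    return phi_high_w + 2*np.pi*k
```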

4. System calibration

In 3D imaging, the primary task is to obtain the out-of-plane height or depth information of an object or object system. For a generalized FPB system, where the positions and orientations of the projector and camera can be arbitrarily arranged as long as the regions of interest are illuminated and captured, the governing equation of 3D height determination is [1]:
$$ Z = \frac{1 + c_1\phi + (c_2 + c_3\phi)u + (c_4 + c_5\phi)v}{d_0 + d_1\phi + (d_2 + d_3\phi)u + (d_4 + d_5\phi)v} $$
(13)
where Z is the out-of-reference-plane height or depth at point (X, Y) corresponding to the pixel (u, v) in the captured image, c1–c5 and d0–d5 are constant coefficients associated with the geometric and other relevant parameters, and ϕ is the unwrapped phase determined by Eq. (12). Considering that Eq. (13) is for the ideal case, where there are no uncertainties such as lens distortion, the actual application can include extra terms of second order in u and v to enhance the imaging accuracy. With this scheme, Eq. (13) becomes:
$$ \begin{aligned} Z &= \frac{F_c}{F_d} \\ F_c &= 1 + c_1\phi + (c_2 + c_3\phi)u + (c_4 + c_5\phi)v + (c_6 + c_7\phi)u^2 + (c_8 + c_9\phi)v^2 \\ &\quad + (c_{10} + c_{11}\phi)uv + (c_{12} + c_{13}\phi)u^2 v + (c_{14} + c_{15}\phi)u v^2 + (c_{16} + c_{17}\phi)u^2 v^2 \\ F_d &= d_0 + d_1\phi + (d_2 + d_3\phi)u + (d_4 + d_5\phi)v + (d_6 + d_7\phi)u^2 + (d_8 + d_9\phi)v^2 \\ &\quad + (d_{10} + d_{11}\phi)uv + (d_{12} + d_{13}\phi)u^2 v + (d_{14} + d_{15}\phi)u v^2 + (d_{16} + d_{17}\phi)u^2 v^2 \end{aligned} $$
(14)
It is noted that Huang et al. [2] proposed to take the radial distortion of the camera lens into account by introducing 16 extra terms, ranging from the third to the fifth orders of u and v, into Eq. (13); however, that method suffers from divergence and instability in the numerical computation.
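
Evaluating Eq. (14) over a full image is a pointwise rational polynomial; a minimal numpy sketch follows (the coefficient packing is an assumption for illustration):

```python
import numpy as np

def height_map(phi, u, v, c, d):
    """Evaluate Z = Fc/Fd of Eq. (14) over a full unwrapped-phase map.

    phi, u, v : arrays of the same shape (phase map and pixel grids)
    c, d      : length-18 vectors; c[0] is fixed to 1 so c[1:] holds c1..c17,
                while d holds d0..d17.
    """
    basis = [np.ones_like(phi), u, v, u*u, v*v, u*v,
             u*u*v, u*v*v, u*u*v*v]                       # the 9 monomials
    Fc = sum((c[2*k] + c[2*k + 1]*phi)*b for k, b in enumerate(basis))
    Fd = sum((d[2*k] + d[2*k + 1]*phi)*b for k, b in enumerate(basis))
    return Fc / Fd
```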

The calibration of the FPB system usually relies on a number of accurate and precise gage blocks of different heights [1–3], and the sizes of the gage blocks must be selected according to the size of the field of imaging. This requires manufacturing a large number of accurate and precise gage blocks, which makes the calibration technique impractical for broad applications. To cope with this problem, this paper advocates the use of the flexible camera calibration board, as shown in Fig. 1, for FPB system calibration. The rationale is that once the 3D coordinates of the calibration target control points are accurately determined during camera calibration, they can serve as gage points for the calibration of the FPB system.

The flexible FPB calibration technique requires capturing a series of images of the calibration board at different positions with phase-shifted fringes projected on it. A clear calibration board image can be obtained at each position by averaging the captured fringe images. The camera calibration process is applied to these clear board images at all the positions in order to find the intrinsic and extrinsic parameters of the camera as well as the 3D coordinates of each control point on the calibration board. An implicit benefit of using the averaged clear image from phase-shifted images is the significant reduction of noise level, which substantially helps improve the accuracies of the control point detection and the camera calibration.
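
For a uniform N-step shift, averaging the captures cancels the fundamental cosine term of Eq. (8), which is all this step needs; a one-line sketch with an assumed (N, H, W) image stack:

```python
import numpy as np

# fringe_images: (N, H, W) stack of phase-shifted captures at one board position
clear_board = np.mean(fringe_images, axis=0)   # fringe-free, noise-averaged board image
```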

From the calibrated camera parameters, the height of each calibration control point with respect to a reference plane can be determined at every imaging position. Specifically, for the jth control point in the image of the ith board position, its physical coordinates on the calibration board, refined by the L-M optimization during camera calibration, are first transformed to the corresponding point (Xc,ij, Yc,ij, Zc,ij) in the camera coordinate system by employing the camera extrinsic parameters. A virtual reference plane is then created by fitting a planar equation to all the points (Xc,1j, Yc,1j, Zc,1j) in the first board position. Subsequently, the height of every point (Xc,ij, Yc,ij, Zc,ij) with respect to the virtual reference plane can be calculated as:
$$ Z_{ij} = \frac{A X_{c,ij} + B Y_{c,ij} + C Z_{c,ij} + 1}{\sqrt{A^2 + B^2 + C^2}} $$
(15)
where A, B, and C are the planar coefficients of the reference plane and Zij is the height of the jth target control point on the calibration board captured at the ith position. Figure 2 shows a representative image of the concentric-circle calibration board with fringes projected onto it, the corresponding phase map, and the out-of-reference-plane height map. A cubic B-spline interpolation process has been applied to obtain the height distributions in the bright regions.
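
The reference-plane fit and the point-to-plane height of Eq. (15) reduce to a small least-squares problem; a numpy sketch, assuming the control points are stacked as (n, 3) arrays in the camera frame:

```python
import numpy as np

def reference_plane(P1):
    """Fit A*X + B*Y + C*Z + 1 = 0 to the first-position points P1 (n, 3)."""
    abc, *_ = np.linalg.lstsq(P1, -np.ones(len(P1)), rcond=None)
    return abc                                   # (A, B, C)

def plane_heights(P, abc):
    """Signed point-to-plane distances of Eq. (15) for points P (n, 3)."""
    return (P @ abc + 1.0) / np.linalg.norm(abc)
```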

Fig. 2 Representative images of system calibration: calibration board with projection fringes, unwrapped phase map, and out-of-reference-plane height map.

After the unwrapped phase ϕ and the height Z of each control point at every calibration board position are obtained, the system calibration can be carried out to determine the coefficients c1–c17 and d0–d17 by minimizing a nonlinear least-squares error defined as:
$$ S = \sum_{i=1}^{k} \sum_{j=1}^{l} \left( \frac{F_c}{F_d} - Z_{ij} \right)^2 $$
(16)
where Zij denotes the absolute out-of-reference-plane heights of the control points on the calibration boards at various positions, k is the total number of board positions, and l is the number of control points on the calibration board.

The coefficients c1–c17 and d0–d17 can be determined using the L-M algorithm, where an initial guess can be obtained by minimizing a linear least-squares error of the form $S' = \sum_{i=1}^{k} \sum_{j=1}^{l} \left( F_c - F_d Z_{ij} \right)^2$; a sketch of this two-stage fit is given below. It should be noted that at least three different positions of the calibration board must be utilized to determine the coefficients correctly because of the complexity of the governing equation and the camera calibration process. In practice, more than 20 positions are usually adopted. Moreover, the various positions of the calibration board should cover the volume of the field of imaging to ensure accurate imaging over the entire field.
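
A minimal numpy/scipy sketch of this two-stage fit, assuming the per-point unwrapped phases phi, pixel coordinates u, v, and heights Z have been flattened into 1-D arrays, and reusing the height_map() helper sketched after Eq. (14) (all names illustrative):

```python
import numpy as np
from scipy.optimize import least_squares

def basis_row(phi, u, v):
    """The 18 terms (b_k, b_k*phi) pairing the 9 monomials of Eq. (14)."""
    b = np.array([1.0, u, v, u*u, v*v, u*v, u*u*v, u*v*v, u*u*v*v])
    return np.column_stack([b, b*phi]).ravel()   # [b0, b0*phi, b1, b1*phi, ...]

def linear_init(phi, u, v, Z):
    """Initial c, d from the linearized error (Fc - Fd*Z)^2 with c0 fixed to 1."""
    T = np.array([basis_row(p, a, b) for p, a, b in zip(phi, u, v)])
    # Unknowns x = [c1..c17, d0..d17]; the fixed c0*b0 = 1 moves to the RHS.
    G = np.hstack([T[:, 1:], -T * Z[:, None]])
    x, *_ = np.linalg.lstsq(G, -np.ones(len(Z)), rcond=None)
    return np.concatenate([[1.0], x[:17]]), x[17:]     # c (18,), d (18,)

# Nonlinear L-M refinement of Eq. (16):
# res = lambda x: height_map(phi, u, v, np.r_[1.0, x[:17]], x[17:]) - Z
# c, d = linear_init(phi, u, v, Z)
# sol = least_squares(res, np.concatenate([c[1:], d]), method='lm')
```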

5. Experiments

5.1. Camera calibration

To demonstrate the validity of the proposed technique, computer simulations along with a real experiment have been conducted. These experiments use a flat calibration panel with 10 × 7 concentric-circle patterns whose grid distance is 25.4mm (as illustrated in Fig. 1).

5.1.1. Synthesized images

In the simulation, the images are synthesized with camera parameters obtained by rounding a real calibration result, where the radial, tangential, and prism lens distortions are considered. Gaussian noise with a standard deviation of 0.2% of the 25.4mm grid distance is added to the position of each circular pattern in the horizontal and vertical directions. To better approximate real images, the synthetic images are blurred by a 5 × 5 Gaussian filter and their intensity values are perturbed with additive white Gaussian noise (σ = 2.0). The images are in bitmap format with a size of 2048 × 1536 pixels. The quartic O-MOMS (optimal maximal-order-minimal-support) B-spline algorithm [17] is used for the interpolation involved in the calibration process.

Figure 3 shows the errors in detecting the positions of the control points against their true values. With Heikkila’s technique [16], the control points are directly detected from the raw images, and the relatively large errors are mainly due to the perspective distortion, under which the ellipse centers are usually not the actual centers of the circles. In contrast, with the frontal image correlation approach, the distortion effects are effectively removed, which leads to a 30-fold improvement in the detection of control points in terms of the root-mean-square error (RMSE) in both the vertical and horizontal directions.

Fig. 3 Localization errors of the control points obtained by: (a) Heikkila’s method, and (b) the frontal image correlation method.

Figure 4 presents the performance of the camera calibration process. In the first step, the conventional camera calibration method cannot provide accurate results. In the second step, the control point adjustment alleviates the defects of the calibration target board and subsequently yields much smaller camera calibration errors. In the third and last step, the frontal image correlation process refines the positions of the control points with substantially higher accuracy; consequently, high-accuracy camera calibration can be achieved. The residual of the calibration, named the reprojection error and defined as the RMSE between the projections of the control points onto the image planes and their detected locations, is 0.000834 pixels. It is noted that the sharp jump at the beginning of the third step is due to the hyper-accurate locations of the control points detected by the frontal image correlation: these new locations are not associated with the calibration parameters previously obtained in the second step, so larger reprojection errors are produced at first. However, the errors are reduced significantly and quickly as the iteration continues. Table 1 summarizes the results of the calibration, where the physical rotation angles instead of the Rodrigues rotation vectors are presented for easy interpretation. It can be seen that the camera parameters are accurately retrieved with the proposed camera calibration scheme.

Fig. 4 Convergence of the proposed camera calibration scheme.

Table 1. Retrieved calibration parameters and their accuracy assessment


5.1.2. Robustness to noise

Figure 5 shows the detection errors of the control points when the images are contaminated with Gaussian noise whose standard deviation varies from 0 to 3 gray levels. A 5 × 5 pixel Gaussian filter has been applied to the relevant images. Since interpolation is critical for refining the control points with sub-pixel accuracy, different interpolation methods [17] have been employed to assess their robustness. For clarity, the data have been separated into two subfigures, with the same cubic B-spline results shown in both. The results indicate that the family of B-spline interpolations yields much higher accuracy than the widely used bicubic interpolation for the detection of control points. In particular, the quartic O-MOMS interpolation provides the highest accuracy.

Fig. 5 Errors of control point detection with different interpolation methods.

The accuracy of control point detection in the captured images directly affects the recovery accuracy of the true camera parameters. Figure 6 shows the calibration reprojection error as a function of the standard deviation of the noise. It can be seen that the B-spline interpolation methods typically provide an accuracy two to three times higher than the bicubic interpolation method with respect to the RMSE of the reprojection.

Fig. 6 Reprojection error of camera calibration with different interpolation methods.

5.1.3. Real experimental images

Like many other techniques, although the proposed camera calibration technique has been demonstrated to be robust against noise and position uncertainties of the control points, the real experimental results are not as good as the simulation ones. This issue can be found in existing techniques as well. For instance, in the work reported by Douxchamps et al. [14], where a similar approach was used, the simulation yielded a reprojection RMSE of 0.002 pixels whereas the real experiment gave 0.045 pixels. Nevertheless, the novel scheme proposed in this paper is still able to provide a remarkable improvement over the existing techniques.

In the experiment, a CMOS camera is used to capture a set of 20 images where the calibration board is located at different positions and oriented in various directions. The images are in bitmap format and each image has a size of 2048 × 1536 pixels.

Figure 7 shows the position error vectors between the detected and projected positions at each control point of the calibration target. With the conventional method, the reprojection errors are noticeably polarized because of the insufficient accuracy of the control point detection and the imperfection of the calibration target, along with the effects of lens and perspective distortions. In contrast, with the proposed frontal image correlation method, the errors behave more isotropically, indicating that the distortions and the imprecision of the calibration target have been successfully compensated. For the overall reprojection error, the conventional method yielded 0.3839 pixels, whereas the frontal image correlation method yielded 0.0093 pixels.

Fig. 7 Reprojection errors at the control points obtained by: (a) the conventional method, and (b) the frontal image correlation method. The vector scales are different in the figures for clear illustration purpose.

5.2. Gamma correction

The validity of the gamma correction technique with the large phase-shifting step scheme and the gamma pre-encoding scheme has been verified by real experiments. To examine the effect of the fringe frequency on the accuracy of phase extraction, three different projection patterns with fringe frequencies of 100, 25, and 8 pixels per fringe are used in the experiments.

Figure 8(a) shows the RMSE of the extracted phase for different numbers of phase-shifting steps without gamma pre-encoding, where the reference phase is acquired by a 30-step phase-shifting measurement. As expected, the phase error decreases as the number of phase-shifting steps increases. Reasonably good accuracy can be achieved with as few as four steps, and high accuracy can be obtained if the number of steps is no smaller than six. The result also reveals that a higher fringe frequency gives slightly higher accuracy. In theory, the highest fringe frequency is two pixels per fringe, but fringe interference can occur in this case. Moreover, a further experiment indicates that no perceptible difference can be seen between the cases of four and eight pixels per fringe.

Fig. 8 Phase retrieval error as a function of: (a) number of images, and (b) gamma pre-encoded values.

Figure 8(b) shows the phase RMSE with different gamma values pre-encoded in the projection patterns. In the experiment, the three-step phase-shifting method is used because it is sensitive to the gamma distortion and thus helpful for detecting the best pre-encoding gamma value. The experimental result gives the best pre-encoding gamma as 2.65, as illustrated in the figure. In addition, the gamma problem can also be well solved by using a high fringe frequency.

To further demonstrate the effect of the gamma pre-encoding scheme, Fig. 9 shows two captured fringe patterns without and with gamma pre-encoded. The line plots shown in the figure are the intensity distributions along the lines highlighted in the captured patterns. It is evident that the gamma correction method can remarkably reduce the nonlinear intensity distortion.

Fig. 9 The effect of pre-encoded gamma values on the captured fringe patterns.

Based on the experimental data, the following approach is recommended for practical 3D imaging: a high fringe frequency such as eight pixels per fringe, gamma pre-encoding, and a large number of phase-shifting steps (e.g., eight).

5.3. FPB calibration and experiment

The first experiment aims at testing the imaging accuracy, where a flat plate of 457.2mm × 304.8mm with eight high-precision gage blocks and a concentric-circle array board with a grid size of 25.4mm are selected as the object of interest and the calibration board, respectively. Figure 10 shows the testing plate together with its experimentally obtained 2D and 3D plots. The results summarized in Table 2 indicate that the maximum error of the mean measured heights over the entire field is 0.048mm, which yields a relative accuracy (defined as the ratio of out-of-plane measurement accuracy to the in-plane dimension) of 0.010%. This confirms the validity and reliability of the 3D imaging technique.

Fig. 10 3D imaging results of a plate with eight gage blocks.

Table 2. Actual and measured heights of gage blocks


For comparison purposes, the captured images of the first three frequencies are also used for the 3D imaging analysis. This means that the working frequency is 40 pixels per fringe, or 20 fringes across the 800-pixel width, and the four-step phase-shifting algorithm is used. The mean measured heights are 25.729mm, 18.903mm, 6.340mm, 6.302mm, 12.332mm, 15.622mm, 9.522mm, and 50.855mm for the eight gage blocks, respectively. The largest error is −0.368mm and the largest standard deviation is 0.449mm. The results clearly show the effectiveness of using high-frequency fringes and a large number of phase-shifting steps. In practice, it is favorable to apply all of the following: high-frequency fringes, gamma pre-encoding, and a large phase-shifting step.

The other four experiments are intended to verify the flexibility of the proposed technique for 3D imaging at various scales. Figure 11 illustrates the full 360° 3D imaging result of a conch shell obtained by combining the 3D images captured from multiple views. In this case, the conch shell is 221mm long and the grid distance of the calibration target pattern is scaled down to 16mm. Visual inspection shows that the full 360° image matches the actual object very well, and the surface structure can be easily seen. The high-quality full 360° 3D registration clearly benefits from the accurate 3D measurements.

Fig. 11 A conch shell and its 3D images observed from five different views.

Another experiment is carried out to validate the performance of the 3D imaging at small scales, where a small calibration board with a pattern grid distance of 6.0mm is employed to calibrate the system. In this experiment, an optical lens is utilized to focus the projection fringes into a small region to cover the object of interest. Figure 12 shows the imaging result of a 48.0mm × 56.0mm printed circuit board with many components on it. It can be seen that except for the shadow and shiny regions, the result indicates a good imaging accuracy.

Fig. 12 A printed circuit board, the 2D height map, and the 3D rendered surface.

Fig. 13 A horse sculpture, a 2D height map in color, and the 3D images.

Fig. 14 A lion toy and its 3D images (Media 1).

6. Conclusion

A series of new approaches toward hyper-accurate calibration and application of the FPB 3D imaging system has been presented. The contributions of this work can be summarized as three major improvements. First, an advanced geometric camera calibration technique allows low-cost calibration hardware to yield accurate camera calibration results by utilizing the bundle adjustment technique and the frontal image correlation scheme. Second, an effective gamma pre-encoding method, along with high-frequency fringes and large phase-shifting step schemes, is proposed to ensure accurate retrieval of the fringe phase. Third, an enhanced governing equation capable of effectively handling various uncertainties and lens distortion issues is proposed for the 3D shape determination, and an advanced flexible calibration technique using the control points of the calibration target as gage points is introduced. Because only one calibration board of appropriate size is required for each imaging scenario, and the calibration pattern can be printed by a regular printer, the calibration technique is remarkably flexible and convenient to use. Both simulation and real experimental results verify the validity, reliability, and practicality of the proposed technique.

Acknowledgments

This work was supported by the U.S. Army Research Office (ARO) under Grant No. W911NF-10-1-0502 and the National Science Foundation (NSF) under Grant No. CMMI-0825806.

References and links

1. H. Du and Z. Wang, “Three-dimensional shape measurement with an arbitrarily arranged fringe projection profilometry system,” Opt. Lett. 32(16), 2438–2440 (2007).
2. L. Huang, P. Chua, and A. Asundi, “Least-squares calibration method for fringe projection profilometry considering camera lens distortion,” Appl. Opt. 49(9), 1539–1548 (2010).
3. Z. Wang, D. Nguyen, and J. Barnes, “Some practical considerations in fringe projection profilometry,” Opt. Lasers Eng. 48(2), 218–225 (2010).
4. Z. Li, Y. Shi, C. Wang, and Y. Wang, “Accurate calibration method for a structured light system,” Opt. Eng. 47(5), 053604 (2008).
5. X. Zhang and L. Zhu, “Projector calibration from the camera image point of view,” Opt. Eng. 48(11), 117208 (2009).
6. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).
7. J. Lavest, M. Viala, and M. Dhome, “Do we really need an accurate calibration pattern to achieve a reliable camera calibration?” in Proceedings of the European Conference on Computer Vision (1998), pp. 158–174.
8. A. Datta, J. Kim, and T. Kanade, “Accurate camera calibration using iterative refinement of control points,” in Proceedings of the IEEE International Conference on Computer Vision Workshops (IEEE, 2009), pp. 1201–1208.
9. B. Pan, Q. Kemao, L. Huang, and A. Asundi, “Phase error analysis and compensation for nonsinusoidal waveforms in phase-shifting digital fringe projection profilometry,” Opt. Lett. 34(4), 416–418 (2009).
10. J. Zhong and J. Weng, “Phase retrieval of optical fringe patterns from the ridge of a wavelet transform,” Opt. Lett. 30(19), 2560–2562 (2005).
11. L. Xiong and S. Jia, “Phase-error analysis and elimination for nonsinusoidal waveforms in Hilbert transform digital-fringe projection profilometry,” Opt. Lett. 34(15), 2363–2365 (2009).
12. J. Salvi, S. Fernandez, T. Pribanic, and X. Llado, “A state of the art in structured light patterns for surface profilometry,” Pattern Recogn. 43(8), 2666–2680 (2010).
13. T. Hoang, B. Pan, D. Nguyen, and Z. Wang, “Generic gamma correction for accuracy enhancement in fringe-projection profilometry,” Opt. Lett. 35(12), 1992–1994 (2010).
14. D. Douxchamps and K. Chihara, “High-accuracy and robust localization of large control markers for geometric camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 31(2), 376–383 (2009).
15. C. Engels, H. Stewenius, and D. Nister, “Bundle adjustment rules,” in Proceedings on Photogrammetric Computer Vision (2006), pp. 266–271.
16. J. Heikkila, “Moment and curvature preserving technique for accurate ellipse boundary detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 1998), pp. 734–737.
17. L. Luu, Z. Wang, M. Vo, T. Hoang, and J. Ma, “Accuracy enhancement of digital image correlation with B-spline interpolation,” Opt. Lett. 36(16), 3070–3072 (2011).
18. B. Pan, K. Qian, H. Xie, and A. Asundi, “Two-dimensional digital image correlation for in-plane displacement and strain measurement: a review,” Meas. Sci. Technol. 20(6), 062001 (2009).
19. B. Pan, H. Xie, and Z. Wang, “Equivalence of digital image correlation criteria for pattern matching,” Appl. Opt. 49(28), 5501–5509 (2010).

OCIS Codes
(110.6880) Imaging systems : Three-dimensional image acquisition
(150.6910) Machine vision : Three-dimensional sensing
(150.0155) Machine vision : Machine vision optics
(150.1135) Machine vision : Algorithms
(150.1488) Machine vision : Calibration

ToC Category:
Imaging Systems

History
Original Manuscript: April 9, 2012
Revised Manuscript: June 21, 2012
Manuscript Accepted: June 29, 2012
Published: July 11, 2012

Citation
Minh Vo, Zhaoyang Wang, Bing Pan, and Tongyan Pan, "Hyper-accurate flexible calibration technique for fringe-projection-based three-dimensional imaging," Opt. Express 20, 16926-16941 (2012)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-20-15-16926




Supplementary Material


Media 1: MOV (3987 KB)
