
3-D shape measurement by composite pattern projection and hybrid processing

H. J. Chen, J. Zhang, D. J. Lv, and J. Fang


Optics Express, Vol. 15, Issue 19, pp. 12318-12330 (2007)
http://dx.doi.org/10.1364/OE.15.012318


Abstract

This article presents a projection system with a novel composite pattern for one-shot acquisition of 3-D surface shape. The pattern is composed of color encoded stripes and cosinoidal intensity fringes in a parallel arrangement. The stripe edges provide absolute height phases with high accuracy, while the cosinoidal fringes provide abundant relative phases carried by the intensity distribution. A wavelet transform is used to obtain the relative phase distribution of the fringe pattern, and the absolute height phases measured by triangulation are combined to calibrate the phase data during unwrapping, eliminating the initial and noise errors and reducing the accumulation and approximation errors. Numerical simulations are performed to verify the new unwrapping algorithm, and experiments are carried out to show the validity of the proposed technique for accurate 3-D shape measurement.

© 2007 Optical Society of America

1. Introduction

In this paper, a novel pattern is proposed as a composite projection of color encoded stripes and cosinoidal intensity fringes aligned in parallel, to acquire 3-D shapes by a hybrid solution combining triangulation with wavelet processing. The color stripes form an identifiable pattern, so that the height phases on the stripe edges can be related to the surface height by triangular geometry. The relative phases distributed inside the cosinoidal fringes of the intensity pattern, obtained here by wavelet transform (WT) processing, can therefore be calibrated by those absolute height phases for each fringe. In this way, not only can the approximation errors in the WT processing be compensated, but the accumulated errors in the unwrapping process can also be much reduced over the whole pattern, so that the surface topography can be obtained with high accuracy. The method to generate this composite pattern is presented in Section 2, and the hybrid processing of the pattern is given in Section 3, describing the phase measurement and calibration. In Section 4, simulations and experimental tests are presented to prove the validity of the new method for 3-D surface evaluation.

2. Pattern acquisition

2.1 Optical system for pattern projection and triangular measurement

Figure 1 presents a layout of the optical system that projects the composite pattern onto the object surface with one-shot illumination. In this configuration, the line connecting the projector lens center $O_p$ and the camera lens center $O_c$, with a distance $D$ between them, is parallel to the reference plane. The triangle $\triangle ABC$ is similar to $\triangle O_cO_pC$, and the side from point B to A on the reference plane, corresponding to the side $\overline{O_pO_c}$, results in a shift $d$ from $\chi_r$ to $\chi_o$ on the image plane when the object intersects the projected pattern at point C. The shift $d$ can be measured by comparing the pattern image distorted by the object shape with that projected on the reference plane, so as to obtain the object height $h$ by triangulation [13], given by

$$h = \frac{H \times d \times k}{d \times k + D},$$
(1)

Fig. 1. Optical system for structured pattern projection and triangulation.
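
As a minimal numeric sketch of Eq. (1), the helper below maps measured image shifts to heights. The interpretations of H and k (reference-plane distance and image-to-object scale factor) are assumptions here, since the paper defines them only through Fig. 1.

```python
import numpy as np

def height_from_shift(d, H, D, k):
    """Eq. (1): object height from the fringe shift d measured on the image plane.
    H, D, k follow Fig. 1; their exact meanings (reference distance, lens
    separation, magnification) are assumptions, as the paper defines them
    only graphically."""
    d = np.asarray(d, dtype=float)
    return (H * d * k) / (d * k + D)

# Example: a row of per-pixel shifts mapped to heights at once
heights = height_from_shift(d=[0.0, 1.5, 3.2], H=1000.0, D=250.0, k=0.05)
```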

2.2 Composite pattern of color stripes and intensity fringes

An optical pattern of structured light is composed of color encoded stripes and cosinoidal intensity fringes in a parallel arrangement, with the stripe edges coinciding with the extreme locations of the fringe intensity. The color encoding is embedded in the hue channel and the cosinoidal intensity in the value channel of the HSV color space, respectively. In comparison with other structured light patterns [13-15], this composition includes complementary information carried by the two kinds of optical patterns. The borderlines of the color stripes, which can be found using edge detectors, provide the absolute height phases determined by triangulation. The relative phases of the intensity distributed inside each fringe, moreover, can be solved by wavelet processing and then unwrapped based on the correction of the absolute phases on each stripe edge, to obtain the whole map of the surface shape. In comparison with the composite pattern proposed by Fong et al. [15], which combined a group of color stripes and intensity fringes in an orthogonal arrangement, the difficulty in edge detection can be avoided by this new composition. The key points are that the two patterns are aligned in parallel and the stripe edges are located at the brightest positions of the intensity fringes, which makes the stripe edges easy to detect in the composite pattern. Moreover, the phase solution can be much improved by using the wavelet transform without low pass filtering for those fringes with non-uniform frequencies.

A group of brightness fringes, as presented in Fig. 2(b), is superimposed on the color encoded stripes, as shown in Fig. 2(c). The brightness along the x-axis follows a cosinoidal distribution in the value channel, given by

$$V(x) = \frac{3}{8}\cos(2\pi x f_v) + \frac{5}{8},$$
(2)

where $V(x) \in [0,1]$ and $f_v$ is the frequency of the cosinoidal fringes. With the spatial frequency of the value channel identical to that of the hue channel, i.e., $f_v = 20$, this intensity distribution ensures that the edge positions of the color stripes fall at the extreme positions of the cosinoidal fringes when the two groups of patterns are parallel, making the stripe edges easily recognizable and the intensity distribution detectable in the processing.

Fig. 2. The structured light pattern: color encoded stripes (a) and cosinoidal intensity fringes (b) combine to form the composite pattern (c).
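
A sketch of how such a composite could be generated follows. The stripe order is read off the 19 edge codes listed in Section 3.2; the paper fixes code 1 as yellow and code 5 as magenta (see the example in Section 3.2), while the hues for codes 2-4 are placeholders.

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb

def composite_pattern(width=1000, height=600, fv=20):
    """Composite of color stripes (hue) and cosinoidal fringes (value), Sec. 2.2.
    Stripe sequence recovered from the 19 edge codes of Sec. 3.2; codes 1 and 5
    are yellow and magenta per the paper, hues for 2-4 are assumed placeholders."""
    stripes = [1,2,1,3,1,4,1,5,2,3,2,4,2,5,3,4,3,5,4,5]   # 20 stripes, 19 edges
    hue = {1: 1/6, 2: 1/3, 3: 1/2, 4: 2/3, 5: 5/6}        # 1=yellow, 5=magenta
    x = np.arange(width) / width
    V = 3/8 * np.cos(2*np.pi*fv*x) + 5/8                  # Eq. (2): maxima at stripe edges
    idx = np.minimum((x * fv).astype(int), fv - 1)        # stripe index per column
    H = np.array([hue[stripes[i]] for i in idx])
    S = np.ones(width)
    hsv = np.dstack([c[None, :].repeat(height, axis=0) for c in (H, S, V)])
    return hsv_to_rgb(hsv)                                # RGB image for the projector
```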

3. Image processing

After the composite pattern described above is projected onto a scene and captured by a digital camera, image processing is performed to obtain the surface shape from the recorded pattern. First, a color space conversion is made to extract the color stripes and intensity fringes from the image. Second, the edges of the color stripes are searched to obtain their absolute phases. Third, the wavelet transform is carried out on the intensity fringes to solve for the relative phase data. Fourth, phase unwrapping is performed to calibrate the phase map based on the absolute height phases. These four steps are described in detail below.

3.1 Extraction of color stripes and intensity fringes

The pattern combining color stripes and intensity fringes is composed in the HSV color space. After projecting it onto an object surface, we convert the captured image from the RGB color space back to the HSV color space, separating the two components for their respective processing, with the algorithm (MATLAB v7.0)

$$H_t = \begin{cases} \dfrac{G-B}{\max(R,G,B)-\min(R,G,B)}, & R = \max(R,G,B) \\[2mm] 2 + \dfrac{B-R}{\max(R,G,B)-\min(R,G,B)}, & G = \max(R,G,B) \\[2mm] 4 + \dfrac{R-G}{\max(R,G,B)-\min(R,G,B)}, & B = \max(R,G,B) \end{cases}$$

$$H = \begin{cases} H_t/6, & H_t > 0 \\ H_t/6 + 1, & H_t < 0 \end{cases}$$

$$S = \frac{\max(R,G,B) - \min(R,G,B)}{\max(R,G,B)}, \qquad V = \max(R,G,B),$$
(3)

where $R, G, B \in [0,1]$ and $H, S, V \in [0,1]$; $\max(R,G,B)$ and $\min(R,G,B)$ represent the maximum and minimum values among the R, G and B channels, respectively; and $H_t$ is a temporary variable in the algorithm. A captured image of the reference plane is presented in Fig. 3(a). The extracted color stripes in the hue channel and the extracted intensity fringes in the value channel are presented in Fig. 3(b) and Fig. 3(c), respectively. The results essentially recover the designed patterns. As shown in Fig. 3(c), however, the extracted fringes have some brightness changes caused by intensity imbalance among the different colors produced in the projector and camera, which make the fringe intensity deviate from the exact cosinoidal distribution to some degree. Therefore the wavelet transform, described in Section 3.3, is used to automatically reduce this influence by acting as a bank of filters with different passbands [16].

Fig. 3. (a) A captured image of the reference plane. (b) The extracted pattern in the hue channel. (c) The extracted pattern in the value channel.
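
The conversion of Eq. (3) can be vectorized directly. The following sketch mirrors the MATLAB rgb2hsv convention used in the paper; it is an illustration, not the authors' code.

```python
import numpy as np

def rgb_to_hsv(img):
    """Vectorized form of Eq. (3); img holds float RGB values in [0, 1] with
    shape (..., 3)."""
    R, G, B = img[..., 0], img[..., 1], img[..., 2]
    mx = img.max(axis=-1)
    mn = img.min(axis=-1)
    delta = np.where(mx > mn, mx - mn, 1.0)   # guard gray pixels (max == min)
    Ht = np.where(mx == R, (G - B) / delta,
         np.where(mx == G, 2.0 + (B - R) / delta,
                           4.0 + (R - G) / delta))
    H = Ht / 6.0
    H = np.where(H < 0.0, H + 1.0, H)         # fold negative hues into [0, 1)
    S = np.where(mx > 0.0, (mx - mn) / np.where(mx > 0.0, mx, 1.0), 0.0)
    V = mx
    return H, S, V
```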

3.2 Edge searching and phase solution in color stripes

The absolute phases along the edge lines are solved by two main steps:

(1) Identifying stripe edges:

The pattern produced in the section above can be searched by automatic algorithms such as the Canny edge detector [1] to find the edges. False boundaries, however, may still exist in places with low light intensity due to noise and color aberration, as shown in Fig. 4(a). In this case, some post-processing techniques are applied to improve the results, as presented in Fig. 4(b). First, false edges with low intensity are recognized and deleted through statistics of the neighboring intensities. Because the real edges lie at the intensity maxima of the cosinoidal fringes, their neighboring pixels must retain relatively high brightness. Thus a statistical test of the neighboring pixel intensities is performed against a threshold, eliminating a candidate edge pixel when over 1/3 of its neighboring pixels are darker than the limit. Second, the short segments caused by noise are eliminated, on the grounds that real edges between color stripes are long whereas those produced by noise are short. The number of pixels in each segment is therefore counted to check its length, and false edges with low pixel counts are removed. For the interrupted edges, morphological algorithms are then used to bridge the disconnections.

Fig. 4. (a) The stripe edges found by the Canny edge detector. (b) The edge lines processed by the statistical and morphological algorithms.
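
A minimal sketch of this post-processing, assuming OpenCV; the Canny thresholds, brightness limit, and minimum segment length are assumptions, since the paper gives no numeric values.

```python
import cv2
import numpy as np

def clean_stripe_edges(value_img, min_len=50, bright_thresh=0.5):
    """Post-processing of Canny edges as in Sec. 3.2; value_img is the float
    value-channel image in [0, 1]. Thresholds are illustrative assumptions."""
    edges = cv2.Canny((value_img * 255).astype(np.uint8), 50, 150)
    # (1) Drop edge pixels whose neighborhood is mostly dark: real edges sit on
    # fringe intensity maxima, so >1/3 dark neighbors flags a false edge.
    bright = (value_img > bright_thresh).astype(np.float32)
    frac = cv2.blur(bright, (3, 3))            # fraction of bright pixels, 3x3 window
    edges[frac < 2 / 3] = 0
    # (2) Remove short segments via connected components.
    n, labels, stats, _ = cv2.connectedComponentsWithStats((edges > 0).astype(np.uint8))
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] < min_len:
            edges[labels == i] = 0
    # (3) Bridge small gaps with morphological closing.
    return cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
```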

(2) Determining the absolute phase:

To obtain the phase change on the edges of the color stripes due to the height change of the object surface, boundary matching is carried out between the captured images of the reference plane and the object surface. The stripe color encoding with a De Bruijn sequence, as mentioned in Section 2.2, provides a well defined pattern in which the color edges are recognized by distinguishing their adjacent stripe colors. When the pattern is projected onto the reference plane, there are 19 edges in the image, with codes $EC_{reference}^{j} = (cl_{reference}^{j}, cr_{reference}^{j})$, $1 \le j \le 19$, forming the sequence $EC_{reference}^{1}=(1,2)$, $EC_{reference}^{2}=(2,1)$, $EC_{reference}^{3}=(1,3)$, $EC_{reference}^{4}=(3,1)$, $EC_{reference}^{5}=(1,4)$, $EC_{reference}^{6}=(4,1)$, $EC_{reference}^{7}=(1,5)$, $EC_{reference}^{8}=(5,2)$, $EC_{reference}^{9}=(2,3)$, $EC_{reference}^{10}=(3,2)$, $EC_{reference}^{11}=(2,4)$, $EC_{reference}^{12}=(4,2)$, $EC_{reference}^{13}=(2,5)$, $EC_{reference}^{14}=(5,3)$, $EC_{reference}^{15}=(3,4)$, $EC_{reference}^{16}=(4,3)$, $EC_{reference}^{17}=(3,5)$, $EC_{reference}^{18}=(5,4)$, $EC_{reference}^{19}=(4,5)$. So when an edge in the image of the object surface has a color edge code $EC_{object} = (cl_{object}, cr_{object})$ that matches the code $EC_{reference}^{j}$ of the jth edge in the reference image, i.e., $cl_{object} = cl_{reference}^{j}$ and $cr_{object} = cr_{reference}^{j}$, the absolute phase of this edge can be calculated as $\phi = 2j\pi$. For example, an edge in the captured image of the object surface with a yellow stripe on its left side and a magenta stripe on its right side has the edge code $EC_{object}=(1,5)$, equal to the 7th edge code in the reference sequence. Therefore the absolute phases of the pixels on that edge are $\phi = 14\pi$.
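In code, the lookup is a simple search of the reference sequence. A sketch follows; color classification of the stripes flanking each edge is assumed done upstream.

```python
import numpy as np

# The 19 reference edge codes of Sec. 3.2, in order (1-indexed edge j).
REFERENCE_EDGES = [(1,2),(2,1),(1,3),(3,1),(1,4),(4,1),(1,5),(5,2),(2,3),(3,2),
                   (2,4),(4,2),(2,5),(5,3),(3,4),(4,3),(3,5),(5,4),(4,5)]

def absolute_edge_phase(left_color, right_color):
    """Return the absolute phase 2*j*pi for the edge whose (left, right)
    stripe-color code matches the j-th reference edge."""
    j = REFERENCE_EDGES.index((left_color, right_color)) + 1
    return 2 * np.pi * j

# Example from the text: yellow(1) | magenta(5) -> 7th edge -> phase 14*pi
assert np.isclose(absolute_edge_phase(1, 5), 14 * np.pi)
```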

3.3 Wavelet transform processing for intensity fringes

The wavelet analysis is performed for the cosinoidal fringes to obtain the phase values in the intensity pattern. In general, the intensity distribution of such fringes can be expressed as:

$$I(x) = I_0(x) + I_1(x)\cos\left(\phi(x)\right),$$
(4)

where $I_0(x)$ is the background illumination of the image, $I_1(x)$ the fringe contrast, and $\phi(x)$ the phase function

$$\phi(x) = 2\pi f(x)x + \phi_m(x),$$
(5)

where $f(x)$ is the local carrier frequency of the projected fringes and $\phi_m(x)$ the phase modulation introduced by the surface height.

A continuous wavelet transform of the intensity distribution $I(x)$ is defined as an integral of the signal with translations and dilations of the complex conjugate of a mother wavelet $\psi(x)$, given by

$$WT_I(a,b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{+\infty} I(x)\, \psi^{*}\!\left(\frac{x-b}{a}\right) dx,$$
(6)

where $a$ is the dilation (scale) and $b$ the translation parameter. For a Morlet mother wavelet with central frequency $\omega_0$, expanding $\phi(x)$ around the position $b$ yields the approximate transform of the fringe term,

$$WT(a,b) = \sqrt{2\pi}\left(1+a^4\phi''^2(b)\right)^{-\frac{1}{4}} \exp\left(\frac{i}{2}\arctan\left(a^2\phi''(b)\right)\right) \times \exp\left(-\frac{a^2}{2}\,\frac{\left(\phi'(b)-\frac{\omega_0}{a}\right)^2}{1-ia^2\phi''(b)}\right) I_1(b)\exp\left(i\phi(b)\right).$$
(7)

The searching range for the maximum amplitude of the WT coefficient can be estimated by the method in [17]. In our case, with a non-uniform frequency $f(x)$ distributed over the fringe pattern, the range is $[\omega_0/(10\pi f_{max}/3),\ \omega_0/(10\pi f_{min}/3)]$, where $f_{max}$ and $f_{min}$ are the maximum and minimum frequencies in $f(x)$, which can be found from the reference image. At the positions of the maximum amplitude determined in this range, the phases of the wavelet coefficients are read from the WT phase map, wrapped in [-π, π] or [-π/2, π/2]. As a result, an unwrapping procedure is needed to make the interrupted phases continuous, as presented in the following section.
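
A compact sketch of the ridge extraction (maximum-amplitude scale per position) with a Morlet wavelet follows. The O(N²) direct sum is for clarity only, and ω0 = 2π is an assumed value.

```python
import numpy as np

def morlet_cwt_phase(signal, scales, w0=2 * np.pi):
    """Morlet CWT ridge-phase extraction sketch (Sec. 3.3). For each position b,
    the scale maximizing |WT(a, b)| is selected and the wrapped phase of that
    coefficient is returned. w0 is the Morlet central frequency (assumed)."""
    x = np.arange(len(signal))
    coeffs = np.empty((len(scales), len(signal)), dtype=complex)
    for i, a in enumerate(scales):
        t = (x - x[:, None]) / a                         # (x - b) / a for every b
        psi = np.exp(1j * w0 * t) * np.exp(-t**2 / 2)    # Morlet mother wavelet
        coeffs[i] = (signal * psi.conj()).sum(axis=1) / np.sqrt(a)  # Eq. (6)
    ridge = np.abs(coeffs).argmax(axis=0)                # maximum-amplitude scale
    return np.angle(coeffs[ridge, np.arange(len(signal))])  # wrapped in [-pi, pi]
```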

3.4 Unwrapping based on absolute phase

For most phase maps, the unwrapping results are sensitive to the initial phase error at the starting point and to image noise spreading along the unwrapping path; these errors may accumulate and produce significant mistakes in the columns or rows of the unwrapped patterns. Our image matching and optical triangulation provide the absolute phases of the surface height on the color stripe edges, which are then used as correction data to reduce the errors in unwrapping the intensity phases.

It is assumed that there is no abrupt depth variation between any two adjacent edges with known phases resulting from triangulation. By using a general unwrapping procedure, the unwrapped phase at every point between two adjacent known phases in a row can be obtained as $\phi_i^{unwrap}$, where the index $i \in [1, len]$ runs from $i=1$ at the position of the left known phase to $i=len$ at the position of the right known phase. The differences between the unwrapped phase and the absolute phase at those two points are given by

$$\phi_1^{diff} = \phi_1 - \phi_1^{unwrap}, \qquad \phi_{len}^{diff} = \phi_{len} - \phi_{len}^{unwrap}.$$
(8)

From them, a linear interpolation is produced as the correction phase, given by

$$\phi_i^{linear} = \frac{\phi_{len}^{diff} - \phi_1^{diff}}{len - 1} \times (i-1) + \phi_1^{diff}.$$
(9)

Therefore, a corrected phase at the ith point between the two adjacent stripe edges can be obtained as

$$\phi_i = \phi_i^{unwrap} + \phi_i^{linear}.$$
(10)

Because the calibration data on the color stripe edges result from the independent triangulation measurement, this strategy of using the known phases at the two edge points to correct the phases of the points in between not only limits the error accumulation along the unwrapping path caused by the initial error or by noise, but also reduces the approximation errors of the wavelet transform arising from the neglected higher-order phase derivatives. Subtracting the phases of the reference image from those of the object image, as inversely expressed by Eq. (5), the whole-field phases $\phi_m(x)$ of the surface height can be obtained to realize static and dynamic 3-D shape measurement.
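
A direct transcription of Eqs. (8)-(10) for one fringe segment, assuming NumPy's np.unwrap as the general unwrapping step:

```python
import numpy as np

def calibrated_unwrap(wrapped, phi_left, phi_right):
    """Phase correction of Sec. 3.4 between two adjacent stripe edges.
    wrapped: wrapped WT phases from the left edge (i = 1) to the right edge
    (i = len); phi_left, phi_right: absolute edge phases from triangulation."""
    unwrapped = np.unwrap(wrapped)                 # general unwrapping procedure
    n = len(unwrapped)
    d1 = phi_left - unwrapped[0]                   # Eq. (8)
    d2 = phi_right - unwrapped[-1]
    correction = d1 + (d2 - d1) * np.arange(n) / (n - 1)  # Eq. (9)
    return unwrapped + correction                  # Eq. (10)
```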

4. Results

4.1 Simulation for the phase correction in unwrapping

To verify the unwrapping procedure, in which known phases calibrate the phase errors of the WT processing, we give an example with the continuous intensity

$$I(x) = 1 + \cos\left(32\pi x + 5\sin(2\pi x)\right).$$
(11)

This is a fringe pattern with non-uniform spatial frequency, as shown in Fig. 5(a), including a jump of the phase slope in the middle region to test the influence of rapid phase change on the WT processing. Such a signal is difficult to process by Fourier transform, because the carrier frequency is not constant and strong frequency localization exists in the pattern. Digitizing the intensity with 2048 samples, the WT maps the signal into a scale-space domain with the scale range [3/80, 3/16] divided into 2000 intervals. The wavelet transform coefficients of the signal are presented in Fig. 5(b) to show their amplitudes, and in Fig. 5(c) to show their phase distribution, including the interrupting points at phases of ±π. By searching for the maximum values in the amplitude map of the WT coefficients (the black curve in the brightest region of Fig. 5(b)), the fringe phases are solved in the corresponding WT phase map [the corresponding black curve in Fig. 5(c)] and then unwrapped to connect the phase interruptions. For comparison, a traditional unwrapping procedure (MATLAB v7.0) is used to directly connect those interrupted phases, as presented in Fig. 5(d), showing large differences between the designed values and the calculated phase curve, with a standard deviation of 0.32628. In the region around the two phase peaks with high second-order derivatives, Fig. 5(d) demonstrates the influence of the approximation errors on the WT phase; near the two ends of the plot and in the middle area with abrupt change of the first derivative of the phase, the results present significant errors resulting from the non-smoothness of the phase variation. By using our new method to calibrate the points at phases of 2kπ with the known phases specified in the intensity distribution, and to correct the phase data at the points between them with the above algorithm, the unwrapped results show very good agreement with the designed phases, as presented in Fig. 5(e), with a standard deviation of 0.07648, showing that the phase errors in the WT processing have been much reduced.

Fig. 5. A simulation of WT processing and unwrapping for a signal with $\phi_m(x) = 5\sin(2\pi x)$ (a). The WT magnitude map (b) and the phase map (c) are used to track the magnitude maxima and the fringe phases. The results from the traditional WT processing (d) and from the proposed processing (e) are compared with the designed values.
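
The simulation can be reproduced with the sketches above. In the snippet below the wrapped ridge phases are emulated by wrapping the designed phase plus noise, so that only the calibration step is exercised; taking the anchors at multiples of 2π (the simulated stripe-edge positions) is an assumption consistent with the text.

```python
import numpy as np

# Designed phase and intensity of Eq. (11), sampled at 2048 points
x = np.linspace(0.0, 1.0, 2048, endpoint=False)
phi_design = 32 * np.pi * x + 5 * np.sin(2 * np.pi * x)

# Wrapped phases as a CWT ridge would return them (see the Sec. 3.3 sketch);
# emulated here as the wrapped designed phase plus mild noise
wrapped = np.angle(np.exp(1j * (phi_design + 0.05 * np.random.randn(x.size))))

# Calibration anchors: sample indices where the phase crosses multiples of 2*pi
k = np.floor(phi_design / (2 * np.pi)).astype(int)
anchors = np.flatnonzero(np.diff(k)) + 1

# Piecewise correction between adjacent anchors, Eqs. (8)-(10)
result = np.unwrap(wrapped)
for a, b in zip(anchors[:-1], anchors[1:]):
    seg = result[a:b + 1]
    d1, d2 = phi_design[a] - seg[0], phi_design[b] - seg[-1]
    result[a:b + 1] = seg + d1 + (d2 - d1) * np.arange(seg.size) / (seg.size - 1)
```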

4.2 Two examples of measurement experiment

A piece of twisted paper is used as the measuring object to test the performance of the acquisition system. Figure 6(a) shows the projection of the composite pattern on the curved surface, from which the height phases carried by the fringes are solved by the wavelet processing. Using traditional unwrapping (MATLAB v7.0) to directly connect the interrupted points at the phase jumps of 2π, Fig. 6(b) presents a phase map of reasonable smoothness, owing to the smooth paper surface with regular boundaries, where the phase errors at the starting points of unwrapping are small and there is little noise along the unwrapping path in each row. The maximum and minimum phases obtained by the traditional process are 9.91 and 0.42, respectively. The results obtained by the proposed method, however, give a larger maximum phase of 10.36 and a smaller minimum phase of 0.08, as indicated in Fig. 6(c), in which the calibration has been performed based on the absolute phases of every stripe edge during phase unwrapping. The phase map is thus improved by eliminating the accumulation and approximation errors, showing the 3-D shape of the curved surface with high accuracy.

Fig. 6. (a) A pattern projected on a piece of curved paper. (b) 3-D surface shape from the traditional wavelet transform and unwrapping process. (c) 3-D shape from the proposed method, eliminating the approximation errors of the WT process.

Furthermore, the shape of a female model is measured with the surface illuminated by the color-fringe pattern, as shown in Fig. 7(a). For comparison, after the relative phases carried by the intensity fringes are solved by the wavelet processing, phase unwrapping is first carried out by the traditional algorithm in MATLAB. Figure 7(b) presents the resulting phase map with many interrupted segments, caused by the initial error at the starting point of the unwrapping process and by the accumulating errors of the noise distributed in the phase map. By contrast, using the new process to calibrate the whole phase map with the absolute phases measured along the color stripe edges, the phases are corrected at every initial point to unwrap each fringe of the pattern, and the errors due to noise do not accumulate across the pattern, as presented in Fig. 7(c), showing a height phase map with smooth contours that reflects the real shape of the 3-D object surface.

Fig. 7. (a) A pattern projected on a female model. (b) 3-D surface shape from the traditional wavelet transform and unwrapping process. (c) 3-D shape topography from the proposed method, calibrating the phases in unwrapping.

5. Conclusion

A composite structured light pattern combining color encoded stripes with parallel cosinoidal intensity fringes has been proposed for one-shot 3-D shape acquisition, together with a hybrid processing scheme in which absolute phases obtained by triangulation on the stripe edges calibrate the relative phases extracted from the fringes by wavelet transform. The simulations and the experiments on a curved paper sheet and a female model show that the calibrated unwrapping suppresses the initial, noise, accumulation and approximation errors, yielding accurate 3-D surface maps.

Acknowledgment

The support of National Basic Research Program of China (No. 2007CB935602) is greatly appreciated.

References and links

1. E. Trucco and A. Verri, Introductory Techniques for 3-D Computer Vision (Prentice Hall, 1998).

2. R. Furukawa and H. Kawasaki, "Interactive shape acquisition using marker attached laser projector," in Proceedings of the Fourth International Conference on 3-D Digital Imaging and Modeling (2003), pp. 491–498.

3. J. Salvi, J. Pages, and J. Batlle, "Pattern codification strategies in structured light systems," Pattern Recogn. 37, 827–849 (2004). [CrossRef]

4. D. Caspi, N. Kiryati, and J. Shamir, "Range imaging with adaptive color structured light," IEEE Trans. Pattern Anal. Mach. Intell. 20, 470–480 (1998). [CrossRef]

5. F. Tsalakanidou, F. Forster, S. Malassiotis, and M. G. Strintzis, "Real-time acquisition of depth and color images using structured light and its application to 3D face recognition," Real-Time Imag. 11, 358–369 (2005). [CrossRef]

6. Z. J. Geng, "Rainbow 3-dimensional camera: new concept of high-speed 3-dimensional vision systems," Opt. Eng. 35, 376–383 (1996). [CrossRef]

7. M. S. Jeong and S. W. Kim, "Color grating projection moiré with time-integral fringe capturing for high-speed 3-D imaging," Opt. Eng. 41, 1912–1917 (2002). [CrossRef]

8. O. A. Skydan, M. J. Lalor, and D. R. Burton, "Technique for phase measurement and surface reconstruction by use of colored structured light," Appl. Opt. 41, 6104–6117 (2002). [CrossRef] [PubMed]

9. Z. H. Zhang, C. E. Towers, and D. P. Towers, "Time efficient color fringe projection system for 3D shape and color using optimum 3-frequency selection," Opt. Express 14, 6444–6455 (2006). [CrossRef] [PubMed]

10. S. Zhang and S.-T. Yau, "High-resolution, real-time 3D absolute coordinate measurement based on a phase-shifting method," Opt. Express 14, 2644–2649 (2006). [CrossRef] [PubMed]

11. C. Karaalioglu and Y. Skarlatos, "Fourier transform method for measurement of thin film thickness by speckle interferometry," Opt. Eng. 42, 1694–1698 (2003). [CrossRef]

12. H. J. Li, H. J. Chen, J. Zhang, C. Y. Xiong, and J. Fang, "Statistical searching of deformation phases on wavelet transform maps of fringe patterns," Opt. Laser Technol. 39, 275–281 (2006). [CrossRef]

13. C. Guan, L. G. Hassebrook, and D. L. Lau, "Composite structured light pattern for three-dimensional video," Opt. Express 11, 406–417 (2003). [CrossRef] [PubMed]

14. A. K. C. Wong, P. Niu, and X. He, "Fast acquisition of dense depth data by a new structured light scheme," Comput. Vis. Image Underst. 98, 398–422 (2005). [CrossRef]

15. P. Fong and F. Buron, "Sensing deforming and moving objects with commercial off the shelf hardware," in Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2005), Vol. 3, pp. 20–26.

16. J. Fang, C. Y. Xiong, and Z. L. Yang, "Digital transform processing of carrier fringe patterns from speckle-shearing interferometry," J. Mod. Opt. 48, 507–520 (2001). [CrossRef]

17. H. J. Li and H. J. Chen, "Phase solution of modulated fringe carrier using wavelet transform," Acta Sci. Nat. Univ. Pek. 43, 317–320 (2007).

OCIS Codes
(110.6880) Imaging systems : Three-dimensional image acquisition
(120.2650) Instrumentation, measurement, and metrology : Fringe analysis
(120.5050) Instrumentation, measurement, and metrology : Phase measurement
(120.6650) Instrumentation, measurement, and metrology : Surface measurements, figure

ToC Category:
Instrumentation, Measurement, and Metrology

History
Original Manuscript: August 9, 2007
Revised Manuscript: September 10, 2007
Manuscript Accepted: September 11, 2007
Published: September 13, 2007

Citation
H. J. Chen, J. Zhang, D. J. Lv, and J. Fang, "3-D shape measurement by composite pattern projection and hybrid processing," Opt. Express 15, 12318-12330 (2007)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-15-19-12318

