OSA's Digital Library

Optics Express

  • Editor: C. Martijn de Sterke
  • Vol. 19, Iss. 25 — Dec. 5, 2011
  • pp: 25712–25722

Measurement of transient deformation by color encoding

C. Mares, B. Barrientos, and A. Blanco  »View Author Affiliations


Optics Express, Vol. 19, Issue 25, pp. 25712-25722 (2011)
http://dx.doi.org/10.1364/OE.19.025712



Abstract

We present a method based on color encoding for the measurement of transient 3D deformation in diffuse objects. The object is illuminated by structured light consisting of a fringe pattern with cyan fringes embedded in a white background. Color images are registered and the information on each color channel is then separated: surface features appear on the blue channel, while the fringes appear on the red channel. The in-plane components of displacement are calculated via digital correlation of the texture images; likewise, the resulting fringes serve for measuring the out-of-plane component. As cross-talk between the signals is avoided, the accuracy of the method is high. This is confirmed by a series of displacement measurements of an aluminum plate.

© 2011 OSA

1. Introduction

Recently it has been shown that combining digital image correlation (DIC) and fringe projection (FP) may be adequate for the simultaneous measurement of the three components of deformation in solid objects [1,2]. In general, a complete characterization of displacement fields must contain the two in-plane displacement components and the out-of-plane component. The two mutually perpendicular in-plane components of displacement are obtained by the correlation technique, whereas the out-of-plane component is addressed by fringe projection. In both techniques, two images of the object are captured for two different states (a reference image and a displaced image, respectively). As the object undergoes deformation, the spatial structures carrying the information (intensity variations of the object surface in the case of digital correlation, and fringes in fringe projection) move accordingly. Thus, displacement fields can be calculated by comparing the reference image and the displaced image. If the target deformation event is relatively slow, all three components of displacement may be obtained simultaneously.

To widen the range of measurable events, it is desirable that the information required by DIC and FP be contained in just one image. This has been shown in [1], where separation of the signals is carried out by the Fourier transform; the fringe information is filtered out in the Fourier domain, yielding a fringe-free image ready to be used by DIC. In this case, the carrier frequency of FP has to be larger than the maximum signal frequency of the object surface. As pointed out in [3], this may limit the size of the region of interest and the range of in-plane displacements. An alternative is to encode the signals for both DIC and FP in the RGB channels of a color image [3]. Color encoding has been used in 3D shape recovery [4,5] and fluid studies [6,7]. As shown in [3], the information necessary for measuring 3D displacement is contained in just one RGB image, and calibration procedures are not required, as they are in methods that use multiple cameras, such as 3D DIC [8,9]. In [3], red spots (speckles) that serve as the DIC signal are printed directly onto the object surface, and fringes with blue and white portions are projected simultaneously. By using a color camera, the signals for DIC and FP can then be separated. However, as the red spots appear in the fringe information, they reduce the accuracy of the technique; to alleviate this problem, a directional filter was applied. In the present work, we propose an improvement to the color-encoding technique that significantly reduces the presence of speckles in the fringe images, thus increasing the accuracy of the technique.

2. Theoretical background

2.1 DIC

An image of the surface features (white-light speckle) of an object illuminated by white light may serve as the carrier signal for DIC. A schematic of an optical arrangement for DIC is shown in Fig. 1.

Fig. 1. Schematic layout of DIC. Here P and L stand for projector and imaging lens, respectively. Other variables are described in the text.

By capturing two images of the object with a CCD camera, before and after deformation, the relative displacement between these images can be found by cross-correlating corresponding subimages [10,11]. Take $I_1(x,y)$ and $I_2(x,y)$ as the intensity distributions of the reference and displaced subimages, respectively; then we may assume that $I_2(x,y) = I_1(x-\Delta x,\, y-\Delta y)$. The relative displacements $(\Delta x, \Delta y)$ may be determined from the two-dimensional correlation function defined as

$$h(\Delta x, \Delta y) = \iint I_1(x,y)\, I_1(x-\Delta x,\, y-\Delta y)\, dx\, dy,$$
(1)

which in turn may be obtained through the Fourier transform,

$$h(\Delta x, \Delta y) = \mathcal{F}^{-1}\{F_1(f_x,f_y)\, F_2^{*}(f_x,f_y)\},$$
(2)

where $F_1(f_x,f_y)$ and $F_2(f_x,f_y)$ denote the Fourier transforms of $I_1$ and $I_2$, respectively, $\mathcal{F}^{-1}\{\cdot\}$ is the inverse Fourier transform operator, and $(f_x,f_y)$ are the frequency-domain variables. The coordinates of the maximum of the correlation map correspond directly to the desired in-plane displacements. Subpixel resolution in the displacements can be achieved by fitting Gaussian [10] or paraboloidal [11] functions to the values defining the maximum peak of correlation.
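The steps of Eqs. (1) and (2) can be sketched in a few lines of code. The following is a minimal illustration on synthetic data, not the authors' implementation: the correlation map is computed via FFT, and a three-point paraboloidal fit (as in Ref. [11]) refines the peak location; the field size, low-pass cutoff, and test shift are arbitrary choices.

```python
import numpy as np

def dic_shift(i1, i2):
    """Estimate (dy, dx) such that i2 equals i1 circularly shifted by (dy, dx)."""
    F1 = np.fft.fft2(i1 - i1.mean())
    F2 = np.fft.fft2(i2 - i2.mean())
    h = np.fft.ifft2(F2 * np.conj(F1)).real      # correlation map, cf. Eq. (2)
    py, px = np.unravel_index(np.argmax(h), h.shape)

    def subpixel(p, n, line):
        # Parabola through the peak and its two neighbours (circular indexing).
        cm, c0, cp = line[(p - 1) % n], line[p], line[(p + 1) % n]
        denom = cm - 2.0 * c0 + cp
        frac = 0.5 * (cm - cp) / denom if denom != 0 else 0.0
        d = p + frac
        return d - n if d > n / 2 else d         # map index to signed shift

    return subpixel(py, h.shape[0], h[:, px]), subpixel(px, h.shape[1], h[py, :])

# Usage with a smooth synthetic speckle field and a known shift of (3, 5) pix:
rng = np.random.default_rng(1)
n = 64
f = np.fft.fftfreq(n)
keep = (np.abs(f)[:, None] < 0.15) & (np.abs(f)[None, :] < 0.15)  # low-pass mask
i1 = np.fft.ifft2(np.fft.fft2(rng.random((n, n))) * keep).real    # smooth texture
i2 = np.roll(i1, (3, 5), axis=(0, 1))            # "displaced" subimage
dy, dx = dic_shift(i1, i2)
```

In practice one would apply this per subimage over a grid, as the correlation software mentioned in Section 3 does.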

2.2 Fringe projection

The change of phase can be calculated via the Fourier method [13] as follows. Let the reference image be expressed by [14]

$$I_1(x,y) = a(x,y) + b(x,y)\cos(2\pi f_0 x + \phi_{\mathrm{ref}}),$$
(3)

where $a(x,y)$ is the background illumination, $b(x,y)$ the modulation term, $f_0$ a carrier frequency that allows the use of the Fourier method for automatic phase calculation, and $\phi_{\mathrm{ref}}$ a phase term that accounts for projection and aberration effects. Likewise, the displaced image may be defined by

$$I_2(x,y) = a(x,y) + b(x,y)\cos(2\pi f_0 x + \phi_{\mathrm{ref}} + \Delta\phi).$$
(4)

The arguments of Eqs. (3) and (4) can be calculated by the Fourier method. For example, for the reference image, applying the Fourier transform operator gives

$$I_F(f_x,f_y) = A(f_x,f_y) + B(f_x - f_0, f_y) + B^{*}(f_x + f_0, f_y),$$
(5)

where $A(f_x,f_y) = \mathcal{F}\{a(x,y)\}$ and $B(f_x - f_0, f_y) = \mathcal{F}\{\tfrac{1}{2} b(x,y)\exp(i 2\pi f_0 x)\exp(i\phi_{\mathrm{ref}})\}$. The asterisk denotes complex conjugation.

We then apply the inverse Fourier transform to a band-pass filtered version of Eq. (5) that isolates one of its side lobes (centered at the carrier frequency $f_0$),

$$\mathcal{F}^{-1}\{B(f_x - f_0, f_y)\} = \tfrac{1}{2} b(x,y)\exp[i(2\pi f_0 x + \phi_{\mathrm{ref}})] = \mathrm{Re}(x,y) + i\,\mathrm{Im}(x,y),$$
(6)

where $\mathrm{Re}(x,y) = \tfrac{1}{2} b(x,y)\cos(2\pi f_0 x + \phi_{\mathrm{ref}})$ and $\mathrm{Im}(x,y) = \tfrac{1}{2} b(x,y)\sin(2\pi f_0 x + \phi_{\mathrm{ref}})$. Therefore, the reference argument can be obtained from

$$2\pi f_0 x + \phi_{\mathrm{ref}} = \tan^{-1}\!\left[\frac{\mathrm{Im}(x,y)}{\mathrm{Re}(x,y)}\right].$$
(7)

In a similar way, the argument of the displaced image, $2\pi f_0 x + \phi_{\mathrm{ref}} + \Delta\phi$, can be calculated; the desired phase term can then be found by directly subtracting the two arguments or, alternatively, by [15]

$$\Delta\phi(x,y) = \tan^{-1}\!\left(\frac{\mathrm{Im}_2\,\mathrm{Re}_1 - \mathrm{Im}_1\,\mathrm{Re}_2}{\mathrm{Re}_1\,\mathrm{Re}_2 + \mathrm{Im}_1\,\mathrm{Im}_2}\right),$$
(8)

where subscripts 1 and 2 correspond to the reference and displaced images, respectively.

Phase maps calculated by Eq. (8) may be wrapped when the absolute difference-of-phase values are larger than $2\pi$ rad.
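The chain of Eqs. (3)-(8) can be condensed into a short numerical sketch. This is an illustration on ideal synthetic fringes, not the paper's code; the 32-cycle carrier, the filter half-width, and the constant phase change of 0.7 rad are arbitrary assumptions. Note that `np.angle(s2 * conj(s1))` evaluates exactly the ratio of Eq. (8).

```python
import numpy as np

def side_lobe_signal(img, f0_cycles, halfwidth=0.06):
    """Band-pass the side lobe at the carrier (Eqs. (5)-(6)) and back-transform,
    returning the complex signal Re + i*Im. The carrier runs along x with
    f0_cycles periods across the image width; halfwidth is in cycles/pixel."""
    n = img.shape[1]
    fx = np.fft.fftfreq(n)                       # frequencies in cycles/pixel
    mask = np.abs(fx - f0_cycles / n) < halfwidth  # keeps +f0 lobe, rejects DC and -f0
    return np.fft.ifft2(np.fft.fft2(img) * mask[None, :])

n, f0 = 256, 32
x = np.arange(n)[None, :] * np.ones((n, 1))
dphi_true = 0.7                                  # known phase change (assumed)
i1 = 1.0 + 0.5 * np.cos(2 * np.pi * f0 * x / n)              # Eq. (3)
i2 = 1.0 + 0.5 * np.cos(2 * np.pi * f0 * x / n + dphi_true)  # Eq. (4)
s1 = side_lobe_signal(i1, f0)
s2 = side_lobe_signal(i2, f0)
dphi = np.angle(s2 * np.conj(s1))                # Eq. (8), wrapped to (-pi, pi]
```

The recovered `dphi` is constant and equal to the imposed phase change; for phase changes exceeding the wrap limit, an unwrapping step would follow.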

2.3 One-shot method by FP and DIC

As noted from Fig. 2(b), in the registered image the white part of the fringes acquires a greenish hue. This effect arises from the color-balancing mechanisms of the camera and increases as the grating pitch decreases. Cross sections of Figs. 2(a) and 2(b) are shown in Figs. 2(c) and 2(d).

In Fig. 3
Fig. 3 Equivalent period of 9.90 mm. Titles of figures are as the corresponding ones in Fig. 2.
we show another example, with an equivalent period of 9.90 mm. As observed, the white part of the fringe, despite being larger than before, also acquires a bluish hue because of camera artifacts. For better results in this case, the cyan color was set as [0,255,200]. From Fig. 3(f), it may appear that the blue value could be lowered further in order to reduce the residual fringes observed in the speckle image, but in that case the camera starts yielding distorted images. Results on the accuracy of the method as a function of the equivalent period are discussed in Section 3.
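The channel separation described above can be demonstrated on synthetic data. The following toy example (not the authors' images) assumes an idealised camera with no channel cross-talk: since cyan [0,255,255] and white [255,255,255] share their blue value, the blue channel records only the surface texture (the DIC signal), while the red channel records the texture-modulated fringes (the FP signal).

```python
import numpy as np

S, f0 = 256, 16                                  # image size and fringe cycles (arbitrary)
rng = np.random.default_rng(2)
texture = 0.5 + 0.5 * rng.random((S, S))         # white-light speckle of the surface
x = np.arange(S)
fringe = 0.5 + 0.5 * np.cos(2 * np.pi * f0 * x / S)   # 0 = cyan, 1 = white
illum = np.stack([np.tile(fringe, (S, 1)),       # R: dips to 0 on the cyan fringes
                  np.ones((S, S)),               # G: identical for cyan and white
                  np.ones((S, S))], axis=-1)     # B: identical for cyan and white
img = texture[..., None] * illum                 # idealised recorded RGB image
red, blue = img[..., 0], img[..., 2]             # FP and DIC signals, separated
```

In this idealisation the blue channel equals the texture exactly; the residual fringes and hue shifts discussed in the text are real-camera effects absent from this model.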

For colored objects, partial absorption of the illuminating light occurs, and the contrast of the signals for FP and DIC is expected to decrease. This may imply a reduction of the accuracy of the proposed method. An analysis of this issue would have to take into account the spectral characteristics of the projector, the coupling effects between neighboring channels, and the color characteristics of the object [16

16. D. Caspi, N. Kiryati, and J. Shamir, “Range imaging with adaptive color structured light,” IEEE Trans. Pattern Anal. Mach. Intell. 20(5), 470–480 (1998). [CrossRef]

]. An example of a colored object is shown in Fig. 4
Fig. 4 Equivalent period of 2.83 mm. Titles of figures (a)-(g) are as the corresponding ones in Fig. 2. (h) Object illuminated by white light.
. The mean color of the object when illuminated by white light is [190,93,110]; see Fig. 4(h). For objects that are not neutrally colored, the illuminating light should be selected according to the color content of the object. For the current example, we use illuminating light composed of green fringes [0,255,60] embedded in a magenta background [127,0,255]. As seen in Figs. 4(e) and 4(f), separation of the signals is still possible. However, for some particular colors, such as pure red hues, the method is not adequate.

2.4 Influence of residual speckle on the accuracy of FP

When particles (white-light speckles) are directly painted on the object, as in [3], they ultimately appear in the fringe image as multiplicative noise. This effect may be represented as a random variation of the object reflectivity $r(x,y)$. Thus, the intensity recorded by the camera for a given object state can be expressed as [5]

$$I_2(x,y) = r(x,y)\,[a(x,y) + b(x,y)\cos(2\pi f_0 x + \phi_{\mathrm{ref}} + \Delta\phi)].$$
(9)

In our case, the fringe images with residual white-light speckle are also described by Eq. (9), but the contrast of the speckle is relatively low. When applying the Fourier method to Eq. (9), the expression for the band-pass filtered side lobe, Eq. (6), should be modified to account for high-frequency components of the central lobe that may leak into the side lobe. In that case, recovery of the phase term is not guaranteed; if Eq. (6) is used without modification, the accuracy of the FP method is reduced. An additional source of error for the Fourier method arises from the digitization of the high-frequency signals that the speckle field represents. This problem has been previously pointed out in [17].

To show the influence of the white-light speckle on the FP accuracy, we carry out numerical simulations using Eq. (8) with the following values: $a(x,y)=1$, $\phi_{\mathrm{ref}}=0$, $\Delta\phi = 4\pi(x^2 + y^2)$ with $|x|,|y| \le 1/2$, and $f_0 = 64$. The object reflectivity is expressed as a summation of Gaussian functions, $r'(x,y) = \sum_{n=1}^{N}\exp\{-2[(x - x_n)^2 + (y - y_n)^2]/\sigma^2\}$, whose centers $(x_n, y_n)$ are randomly selected and whose radii $\sigma$ are constant. By varying the latter parameter and the number of Gaussian functions $N$, the level of disturbance caused by the residual speckle field can be adjusted. Furthermore, the speckle contrast can be adjusted by considering a normalized version of $r'(x,y)$, $r(x,y) = [\mathrm{norm}\{r'(x,y)\}]B + A$, where $\mathrm{norm}\{\cdot\}$ denotes a normalization operator and $A$ and $B$ are the background and the contrast of the speckle field (with $A + B = 1$), respectively. Thus, the resulting effect caused by the residual speckles can be varied through $N$, $\sigma$, $A$, $B$, or any combination of them.
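A small sketch of this reflectivity model follows. It is not the paper's Matlab code, and it uses much smaller values of S and N than the reported 1024 pix and 2x10^6; it exploits the fact that equal-width Gaussians at random centres are equivalent to random impulses convolved with a single Gaussian, computed here via FFT.

```python
import numpy as np

def speckle_reflectivity(S=256, N=5000, sigma=3.0, A=0.6, seed=0):
    """r(x,y) = norm{r'(x,y)} * B + A with B = 1 - A, where r'(x,y) is a sum of
    N Gaussians exp{-2[(x-xn)^2 + (y-yn)^2]/sigma^2} at random centres (pixels)."""
    rng = np.random.default_rng(seed)
    impulses = np.zeros((S, S))
    np.add.at(impulses, (rng.integers(0, S, N), rng.integers(0, S, N)), 1.0)
    d = np.minimum(np.arange(S), S - np.arange(S))       # circular distance
    kernel = np.exp(-2.0 * (d[:, None] ** 2 + d[None, :] ** 2) / sigma ** 2)
    rp = np.fft.ifft2(np.fft.fft2(impulses) * np.fft.fft2(kernel)).real
    B = 1.0 - A
    return A + B * (rp - rp.min()) / (rp.max() - rp.min())  # values in [A, 1]

r = speckle_reflectivity()
# Multiplicative fringe image of Eq. (9), with phi_ref = 0, dphi = 0, and the
# b = 0.12 modulation quoted for Fig. 5(b):
xn = np.arange(256) / 256.0
i_fringe = r * (1.0 + 0.12 * np.cos(2 * np.pi * 64 * xn))[None, :]
```

Feeding such images through the phase-extraction chain of Section 2.2, with and without the speckle factor, reproduces the kind of accuracy comparison reported below.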

The appearance of the simulated fringe images can also be altered by changing the modulation term $b(x,y)$; for relatively small fringe modulation, the influence of the speckle field is enhanced. In Fig. 5
Fig. 5 Synthetic images used to show the influence of residual speckle on the accuracy of the FP method: (a) Speckle-free fringe image, (b) and (c) fringe images with different level of speckle disturbance. Experimental images: (d) and (e), images with similar speckle influence to (b) and (c), respectively. Dimensions are pixels.
we show three simulated displaced images, with $N$ taken as $2\times10^6$. Numerical simulations are done in Matlab Version 7. The reference image is not shown; it is obtained from Eq. (9) with $\Delta\phi=0$. Figure 5(a) corresponds to a speckle-free image with $b(x,y)=1$, while Figs. 5(b) and 5(c) correspond to images with speckle content. For Fig. 5(b) we set $\sigma=2.0/S$, where $S=1024$ pix is the size of the image, $b(x,y)=0.12$, and $A=0.6$; for Fig. 5(c), $\sigma=3.0/S$, $b(x,y)=0.05$, and $A=0.65$. For these last two cases, the modulation $b(x,y)$ is measured directly from the experimental images shown in Figs. 5(d) and 5(e), respectively. Calculation of $b(x,y)$ is done using Eq. (6) as $b(x,y) = 2\,|\mathrm{Re}(x,y) + i\,\mathrm{Im}(x,y)|$.

The relative errors for the synthetic images, Figs. 5(a)-5(c), after applying the Fourier method, are 0.1, 1.6 and 3.6%, respectively. As expected, the accuracy of the FP method varies inversely with the contrast of the speckle field and directly with the modulation of the fringe pattern. Recently, a similar result has been found for fringe projection using laser interference [18].

3. Experimental results

The arrangement employed for DIC and FP is illustrated in Fig. 6
Fig. 6 Optical layout. (a) Photograph. (b) Schematic drawing. Dimensions are mm. (c) Typical image obtained when illuminating the object with black-and-white structured light. Dimensions are pixels. Symbols are as in Fig. 1.
. It consists of a high-definition three-LCD Panasonic projector (PT-AE2000U, 1500 lm) and an Olympus camera (Camedia C8080WZ, 2/3" Bayer-mosaic CCD sensor, 8 Mpix), which renders images of 3264x2448 pix. The imaging system is set to an f-number (f#) of 3.5 and an exposure time of 1/1000 s. The distances from the object to the projector and from the camera to the projector are selected so as to minimize projection effects [14]. The projection angle is set to 22°, and the projected period takes the values 0.78, 1.00, 2.00, 3.00 and 4.00 mm (corresponding equivalent periods of 1.93, 2.48, 4.95, 7.43 and 9.90 mm). The object is an aluminum plate of dimensions 300x300x6.35 mm whose surface is sprayed white with a powder developer (Ardrox 9D4A). The width of the region of observation is not constant: 164, 170 and 250 mm; the first value corresponds to the projected period of 0.78 mm, the second to 1.00 mm, and the last to the periods of 2.00, 3.00 and 4.00 mm. The three main components of the layout (CCD camera, object and projector) are mounted on tripods, as shown in Fig. 6(a). Care was taken in the alignment of the experimental setup, in particular to prevent the appearance of any significant in-plane displacement when the object is given an out-of-plane displacement, and vice versa.

The experiment consists of translating the aluminum plate by known steps along the coordinate axes. The steps are applied with a stepper-motor-driven translation stage from Thorlabs with 1.25 μm per step. For each projected period, the aluminum plate is given the following displacements in mm (out-of-plane and in-plane): 0.063, 0.125, 0.250, 0.500, 0.750, 1.000, 1.250, 1.500, 1.750 and 2.000, which in pixels are (for T = 1.0 mm) 0.8, 1.6, 3.3, 6.5, 9.8, 13.1, 16.3, 19.6, and 26.1. In Fig. 6(c) we show a typical image produced by the system when the illuminating light corresponds to black-and-white structured light.

In Fig. 7
Fig. 7 Relative error for measured displacements, (a) out-of-plane, and (b) in-plane, for various projected grating periods T (in mm). The size of the subimages is 64x64 pix. For display purposes, the graphs are clipped at 15% and 5%, respectively. The thicker curves correspond to results obtained by the standard methods of FP and DIC.
we show the accuracy obtained with the method for the five different grating periods, for both out-of-plane and in-plane displacements. For each given displacement, three measurements were made, and the average relative error was calculated. In addition, results using the standard methods for FP and DIC are included and are labeled BW. These standard results were obtained by projecting black-and-white fringes for FP and uniform white light for DIC, and serve as reference values; furthermore, they were implemented in a non-simultaneous way.

For the calculation of in-plane displacements, image-correlation software from IDT (proVISION-XS) is used, with a subimage size of 64x64 pix. It is worth pointing out that when calculating out-of-plane displacements, phase unwrapping is not required, since all the equivalent periods are greater than the displacements (an exception is the displacement of 2.0 mm combined with the equivalent period of 1.93 mm). Additionally, the duty cycle of the period was set to 75% in order to reduce the size of the region partially darkened by the cyan portion of the fringes, which may affect the speckle information.

On the other hand, from Fig. 7(b) it is seen that the in-plane error is relatively large for small displacements. Since the in-plane movement is uniform over the whole region of observation, the resulting rigid-body motion can be readily compensated, and the error is almost independent of the given displacement. However, for an arbitrary distribution of in-plane displacement, an average in-plane displacement cannot be used for compensation, and in general the error would increase with displacement [19]. Additionally, the in-plane error should not depend on the projected period if correct color encoding is carried out. This is confirmed by the similarity of all the curves in Fig. 7(b), which also include the reference result (labeled BW). This similarity also implies that the presence of residual fringes in the speckle images does not affect the in-plane measurements obtained by the proposed method.

As the results in Fig. 7 suggest, the average error of the present method is around 1% when a projected period of 1.0 mm is employed. Comparing this value with those obtained in [1] and [3], which are within 2 and 5%, we can say that the present method shows good performance. This holds even for uniformly colored objects, such as the one shown in Fig. 4, for which the resulting errors were similar to those presented in Fig. 7.

In the case of out-of-plane measurements, if the corresponding speckle images are processed by DIC, we obtain a radial-like vector field caused by the angular field of view of the object, as shown in Fig. 8(a)
Fig. 8 In-plane residual error. (a) In obtaining this vector field, the signal on the blue channel (speckle image) is used. The aluminum plate undergoes an out-of-plane displacement of 1.5 mm. The maximum residual in-plane displacement is 0.26 mm, and is located at the corners. (b) To obtain this graph, the signal on the red channel (image of fringes) is used. An in-plane displacement of 1.5 mm is given to the plate. The absolute residual average in-plane displacement is 19.5 μm. Dimensions of coordinate axes are pixels.
. In this figure, for an out-of-plane movement of 1.5 mm, a maximum in-plane displacement of 0.26 mm is found. This value agrees with that computed from the parameters of the setup (object-to-imaging-lens distance of 885 mm and observation size of 250 mm). For three-dimensional displacements, the added nonuniform in-plane displacement arising from the out-of-plane displacement will increase the in-plane error. This, however, can be prevented by the use of a telecentric imaging lens [1] or compensated numerically.

Similarly, for in-plane measurements, by processing the corresponding fringe images via correlation, we find that the average absolute residual displacement is close to zero; see Fig. 8(b). This implies that the speckle information does not appear in the image of fringes, which guarantees a considerable reduction of the noise level in out-of-plane displacement measurements. Hence, as commented above, no directional low-pass filter needs to be applied to the images of fringes.

4. Conclusions

We presented a color-encoded method that permits the measurement of 3D deformation of opaque objects by a combination of fringe projection and digital image correlation. To achieve this, we used an illuminating image consisting of a pattern of cyan stripes embedded in a white background. For white-sprayed objects, the signal information for DIC was encoded on the blue channel of the recorded image and that for FP on the red channel. To show the feasibility of the technique, a series of measurements was conducted and the error was computed. We found that the overall performance of DIC was better than that of FP under the reported conditions. Furthermore, because the content of the DIC signal (speckles) in the resulting images of fringes is quite low, a high accuracy of 3D displacement measurements could be achieved.

Having all the information required for 3D deformation calculation in one image allows us to carry out analyses of relatively fast events.

References and links

1. C. J. Tay, C. Quan, T. Wu, and Y. H. Huang, "Integrated method for 3-D rigid-body displacement measurement using fringe projection," Opt. Eng. 43(5), 1152–1159 (2004).
2. B. Barrientos, M. Cerca, J. Garcia-Marquez, and C. Hernandez-Bernal, "Three-dimensional displacement fields measured in a deforming granular-media surface by combined fringe projection and speckle photography," J. Opt. A, Pure Appl. Opt. 10(10), 104027 (2008).
3. P. Siegmann, V. Álvarez-Fernández, F. Díaz-Garrido, and E. A. Patterson, "A simultaneous in- and out-of-plane displacement measurement method," Opt. Lett. 36(1), 10–12 (2011).
4. Z. Zhang, C. E. Towers, and D. P. Towers, "Time efficient color fringe projection system for 3D shape and color using optimum 3-frequency selection," Opt. Express 14(14), 6444–6455 (2006).
5. L. Fu, Z. Li, L. Yang, Q. Yang, and A. He, "New phase measurement profilometry by grating projection," Opt. Eng. 45(7), 073601 (2006).
6. H. G. Park, D. Dabiri, and M. Gharib, "Digital particle image velocimetry/thermometry and application to the wake of a heated circular cylinder," Exp. Fluids 30(3), 327–338 (2001).
7. C. Brücker, "3-D PIV via spatial correlation in a color-coded light-sheet," Exp. Fluids 21(4), 312–314 (1996).
8. P. Synnergren and M. Sjodahl, "A stereoscopic digital speckle photography system for 3-D displacement field measurements," Opt. Lasers Eng. 31(6), 425–443 (1999).
9. A. K. Prasad, "Stereoscopic particle image velocimetry," Exp. Fluids 29(2), 103–116 (2000).
10. M. Raffel, C. Willert, and J. Kompenhans, Particle Image Velocimetry: A Practical Guide (Springer-Verlag, 1998).
11. D. J. Chen, F. P. Chiang, Y. S. Tan, and H. S. Don, "Digital speckle-displacement measurement using a complex spectrum method," Appl. Opt. 32(11), 1839–1849 (1993).
12. B. Barrientos, M. Cywiak, W. K. Lee, and P. Bryanston-Cross, "Measurement of dynamic deformation using a superimposed grating," Rev. Mex. Fis. 50(1), 12–18 (2004).
13. M. Takeda, H. Ina, and S. Kobayashi, "Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry," J. Opt. Soc. Am. 72(1), 156–160 (1982).
14. K. J. Gasvik, Optical Metrology, 3rd ed. (John Wiley and Sons, Sussex, 2003).
15. T. Kreis, "Digital holographic interference-phase measurement using the Fourier-transform method," J. Opt. Soc. Am. A 3(6), 847–855 (1986).
16. D. Caspi, N. Kiryati, and J. Shamir, "Range imaging with adaptive color structured light," IEEE Trans. Pattern Anal. Mach. Intell. 20(5), 470–480 (1998).
17. S. Zhang, "Recent progresses on real-time 3D shape measurement using digital fringe projection techniques," Opt. Lasers Eng. 48(2), 149–158 (2010).
18. S. Rosendahl, E. Hällstig, P. Gren, and M. Sjödahl, "Phase errors due to speckles in laser fringe projection," Appl. Opt. 49(11), 2047–2053 (2010).
19. J. Westerweel, "Fundamentals of digital particle image velocimetry," Meas. Sci. Technol. 8(12), 1379–1392 (1997).

OCIS Codes
(120.2650) Instrumentation, measurement, and metrology : Fringe analysis
(120.3940) Instrumentation, measurement, and metrology : Metrology
(120.4290) Instrumentation, measurement, and metrology : Nondestructive testing

ToC Category:
Instrumentation, Measurement, and Metrology

History
Original Manuscript: September 19, 2011
Revised Manuscript: November 7, 2011
Manuscript Accepted: November 21, 2011
Published: December 1, 2011

Citation
C. Mares, B. Barrientos, and A. Blanco, "Measurement of transient deformation by color encoding," Opt. Express 19, 25712-25722 (2011)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-19-25-25712



