Optics Express

Statistical behavior analysis and precision optimization for the laser stripe center detector based on Steger's algorithm

Li Qi, Yixin Zhang, Xuping Zhang, Shun Wang, and Fei Xie


Optics Express, Vol. 21, Issue 11, pp. 13442-13449 (2013)
http://dx.doi.org/10.1364/OE.21.013442


Abstract

Triangulation laser range scanning, which has been widely used in various applications, can reconstruct the 3D geometry of an object with high precision by processing the image of the laser stripe. The unbiased line extractor proposed by Steger is one of the most commonly used algorithms for laser stripe center extraction because of its precision and robustness. It is therefore of great significance to assess the statistical performance of the Steger method when it is applied to a laser stripe with a Gaussian intensity profile. In this paper, a statistical behavior analysis of the laser stripe center extractor based on the Steger method is carried out. The relationships between center extraction precision, image quality and stripe characteristics are examined analytically. The optimal scale of the Gaussian smoothing kernel can be determined for each laser stripe image to achieve the highest precision according to the derived formula. A flexible three-step noise estimation procedure is proposed to evaluate the center extraction precision of a typical triangulation laser scanning system simply by referring to the acquired images. The validity of our analysis is verified by experiments on both artificial and natural images.

© 2013 OSA

1. Introduction

Triangulation laser range scanning is one of the most efficient and precise techniques for various 3D geometric measurement tasks such as object 3D shape reconstruction [1, 2], on-line industrial 3D inspection [3], and cultural heritage digitization [4]. Usually the measurement system contains one camera and one laser projector. The projector casts a structured laser stripe onto the object while the camera captures images of the laser stripe patterns. The images are then processed to reconstruct the 3D shape of the object. Such reconstruction involves two steps: first, locating the laser stripe centers in the acquired images; second, triangulating these centers and recovering range data based on the system geometry. The extraction of the laser stripe center from the image is therefore one of the most important steps in the whole measurement procedure, since the stripe extraction error dominates the reconstruction precision once the system geometry has been determined. The unbiased line center detection algorithm proposed by C. Steger [5, 6] achieves sub-pixel detection precision and has outstanding noise robustness. For this reason the algorithm has been widely applied to triangulation laser range scanning [7–10]. For practical industrial applications, there is an emerging need to evaluate the statistical behavior of the extracted stripe center positions, also referred to as the repeatability of a laser stripe extractor. This need has hardly been addressed in the literature, apart from the edge detection precision evaluation methods proposed by F. Bouchara [11] and K. Astrom [12]. C. Steger has given an empirical evaluation method for his unbiased extractor applied to bar-shaped edge/line detection [13]. Nevertheless, for stripes with a Gaussian intensity profile, which is the most common case when the stripe is generated by laser projection, the behavior analysis of the center extractor remains unsolved.

In this paper, a statistical behavior analysis of the laser stripe center extractor based on the Steger method is carried out. The relationships between center extraction precision, image quality and stripe characteristics are examined analytically. The optimal width of the Gaussian smoothing kernel can be determined for each laser stripe image to achieve the highest precision according to the derived formula. A flexible three-step noise estimation procedure is proposed to evaluate the center extraction precision of a typical triangulation laser scanning system simply by referring to the acquired images. The remaining sections of this paper are arranged as follows. Section 2 gives a brief introduction to the mathematical model of the laser stripe intensity profile and its center position distribution. Section 3 presents a comprehensive statistical behavior evaluation of the Steger center extractor, together with the details of a flexible random noise and stripe width estimation method for a single laser stripe image. Experimental results on both artificial and natural images are shown in Section 4 to verify the validity of our analysis. Finally, the discussion and conclusion are given in Sections 5 and 6.

2. Laser stripe profile model and center position distribution

Without loss of generality, it can be assumed that the laser stripes are parallel to the y-axis of the image. The intensity profile f_w(x) of a laser stripe can be described by a Gaussian function with background [10]:

$$f_w(x) = A\exp\!\left(-\frac{x^2}{2\sigma_w^2}\right) + h(x), \qquad (1)$$

where A stands for the peak intensity of the laser stripe, σ_w is the standard deviation width (STDW) of the laser stripe, and h(x) stands for the image background, which can easily be eliminated by background subtraction. For the laser stripe center extractor based on the Steger method, the intensity profile f_w(x) is convolved with a Gaussian kernel g_σ(x) and differential operators to calculate the derivatives of f_w(x). The width of g_σ(x) is determined by the scale parameter σ. The center of the laser stripe is given by the first-order zero-crossing point (FOZCP) of the convolution result, which also reaches local extrema in the zero-order and second-order derivatives, as shown in Fig. 1.

Fig. 1 Laser stripe profile model: (a) zero-order; (b) first-order; and (c) second-order.

The reason for involving a Gaussian kernel instead of using the differential operators alone is to smooth out the influence of noise.
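As a concrete illustration of this detection principle, the following Python sketch (our own minimal 1D re-implementation, not the authors' code; it assumes SciPy's gaussian_filter1d) smooths a single stripe profile with Gaussian derivative kernels and returns the sub-pixel zero crossing of the first derivative near the strongest ridge response. Note that it operates on one row only and omits the smoothing along the stripe direction that the full 2D extractor applies.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def steger_center_1d(profile, sigma):
    """Sub-pixel center of a bright 1D stripe profile (Steger-style).

    The profile is convolved with Gaussian derivative kernels of scale
    `sigma`; the center is the zero crossing of the first derivative,
    refined around the pixel with the strongest (most negative)
    second derivative.
    """
    r1 = gaussian_filter1d(profile, sigma, order=1)  # first derivative of smoothed profile
    r2 = gaussian_filter1d(profile, sigma, order=2)  # second derivative of smoothed profile
    i = int(np.argmin(r2))                           # strongest ridge response
    t = -r1[i] / r2[i]                               # offset from r1(i + t) ~ r1(i) + t*r2(i) = 0
    return i + t

# Example: a noise-free Gaussian stripe centered at 50.3 px
x = np.arange(101, dtype=float)
profile = 100.0 * np.exp(-(x - 50.3) ** 2 / (2.0 * 5.0 ** 2))
print(steger_center_1d(profile, sigma=7.0))          # close to 50.3
```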

Random noise is considered the most important factor affecting the center detection precision, especially in low resolution images [13]. In actual measurements, the extraction of laser stripe centers is subject to two kinds of random noise [15]: the random noise of the image sensor, such as thermal noise and quantization noise, and the random noise produced by the laser itself, such as laser speckle. The laser stripe image f(x, y) can therefore be defined as:

$$f(x,y) = i(x,y) + n(x,y). \qquad (2)$$

The term i(x, y) stands for the noise-free laser stripe image. The random noise n(x, y) can be described as a one-dimensional stochastic process in a multi-dimensional random field and is commonly treated as Gaussian white noise with zero expectation. According to the Central Limit Theorem, when affected by random noise, each center extraction can be considered an independent identically distributed random variable. Its distribution satisfies the normal distribution:

$$N(\mu,\sigma_l) = \frac{1}{\sqrt{2\pi}\,\sigma_l}\exp\!\left[-\frac{(x-\mu)^2}{2\sigma_l^2}\right], \qquad (3)$$

where μ stands for the error-free center position of the laser stripe and σ_l² is the variance of the stripe center position. In Eq. (3), the confidence interval (CI) of the extracted center position can be considered the precision of the stripe center extraction under a corresponding confidence level (CL). For example, if the user requires a precision of ±Δ (CI = [-Δ, +Δ]) with CL = 99.99%, then according to the properties of the normal distribution we have Δ ≈ 4σ_l.
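For reference, the conversion between σ_l and a precision interval at a given confidence level can be done with the normal quantile function. The snippet below is a small illustrative helper of our own (using scipy.stats.norm, not part of the paper); at CL = 99.99% it reproduces the Δ ≈ 4σ_l rule of thumb (the exact factor is about 3.89).

```python
from scipy.stats import norm

def precision_halfwidth(sigma_l, confidence=0.9999):
    """Half-width Delta of the two-sided interval [-Delta, +Delta] that the
    center estimate falls into with the given confidence (normal model, Eq. (3))."""
    return norm.ppf(0.5 + confidence / 2.0) * sigma_l

def confidence_level(sigma_l, delta):
    """Confidence level achieved for a required precision of +/- delta."""
    return 2.0 * norm.cdf(delta / sigma_l) - 1.0

print(precision_halfwidth(1.0, 0.9999))   # ~3.89, i.e. Delta ~ 4 * sigma_l
```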

3. Statistical behavior analysis for laser stripe center extraction

According to the Steger method, the center positions of the laser stripe are the zero crossing points of the first-order derivative of the intensity profile f_w(x):

$$r_x(x,y) = r_{f,x}(x,y) + r_{n,x}(x,y) = 0. \qquad (4)$$

According to [6] and [13], given the noise variance σ_n² and the Gaussian kernel scale parameter σ, the variance of the random noise field and the variance of the position of a 2D asymmetric bar-shaped line are given by:

$$\sigma_{r_{n,x}}^2 = \frac{\sigma_n^2}{8\pi\sigma^4}, \qquad (5)$$

$$\sigma_l^2 = \frac{\sigma_{r_{n,x}}^2}{r_{f,xx}(0,0)^2} = \frac{\sigma_n^2}{8\pi\sigma^4\, r_{f,xx}(0,0)^2}. \qquad (6)$$

Considering the symmetric Gaussian-shaped stripe, given the gray value peak A of the laser stripe, we have:

$$r_w(x,\sigma,A,\sigma_w) = f_w(x) * g_{\sigma,xx}(x) = f_w(x) * \frac{\partial^2}{\partial x^2}\!\left[\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{x^2}{2\sigma^2}\right)\right] = \frac{A\sigma_w}{(\sigma_w^2+\sigma^2)^{1/2}}\left(\frac{x^2}{(\sigma_w^2+\sigma^2)^2} - \frac{1}{\sigma_w^2+\sigma^2}\right)\exp\!\left(-\frac{x^2}{2(\sigma_w^2+\sigma^2)}\right). \qquad (7)$$

In fact, Eq. (7) is a simplification of Eq. (20) in [10] for the case a = 0. Hence,

$$r_{f,xx}(0,0) = -\frac{A\sigma_w}{(\sigma_w^2+\sigma^2)^{3/2}}. \qquad (8)$$

If the term σ_n²/A² is taken as the reciprocal of the image signal-to-noise ratio (SNR), the variance of the extracted center position becomes:

$$\sigma_l^2 = \frac{\sigma_{r_{n,x}}^2}{r_{f,xx}(0,0)^2} = \frac{\sigma_n^2\,(\sigma^2+\sigma_w^2)^3}{8\pi A^2\sigma^4\sigma_w^2} = \frac{(\sigma^2+\sigma_w^2)^3}{8\pi\sigma^4\sigma_w^2}\cdot\frac{1}{\mathrm{SNR}}. \qquad (9)$$

From Eq. (9) we can see that, for a given acquired image and Gaussian smoothing kernel, the center extraction variance σ_l² is inversely proportional to the image SNR. Therefore the statistical behavior of the Steger extractor applied to a laser stripe with a Gaussian intensity profile can be evaluated once the SNR of the laser stripe image is identified. If the laser peak value A is known, the image SNR can easily be obtained by estimating the variance σ_n² of the random noise in the acquired images. As mentioned in Sec. 2, the random noise of a laser stripe image comes from both the image sensor and the laser. Therefore, the estimation is applied to the stripe region of the image (instead of the whole image) by finding the difference between the ideal Gaussian profile and the real stripe profile.
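Equation (9) is straightforward to evaluate numerically. The short sketch below (illustrative only; the function and variable names are ours) returns the predicted center variance for a given kernel scale, stripe STDW and linear SNR = A²/σ_n², together with the variance-minimizing kernel scale discussed in the next paragraph.

```python
import numpy as np

def center_variance(sigma, sigma_w, snr):
    """Predicted variance of the extracted stripe center, Eq. (9).

    sigma   -- scale of the Gaussian smoothing kernel (pixels)
    sigma_w -- stripe standard deviation width, STDW (pixels)
    snr     -- linear image SNR, A**2 / sigma_n**2
    """
    return (sigma**2 + sigma_w**2) ** 3 / (8.0 * np.pi * sigma**4 * sigma_w**2 * snr)

def optimal_sigma(sigma_w):
    """Kernel scale that minimizes Eq. (9): sigma = sqrt(2) * sigma_w."""
    return np.sqrt(2.0) * sigma_w

# Example: sigma_w = 5 px, A = 100, sigma_n**2 = 100  ->  SNR = 100
print(center_variance(optimal_sigma(5.0), 5.0, 100.0))   # ~0.0027 px^2
```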

Furthermore, by taking the derivative of Eq. (9) with respect to σ, we find that the center extraction variance reaches its minimum σ_l,min² when σ = √2·σ_w. The Gaussian kernel scale should therefore be set to approximately √2 times the laser stripe STDW. However, neither the SNR nor the STDW is known a priori; they can only be estimated from the acquired image. Therefore, we propose a flexible three-step estimation method:

First, an adaptive estimate of the laser stripe line-width w_c is made. Given a prior estimated center X_c where the peak intensity A occurs, two points X_a and X_b whose intensity is a certain fraction of A (e.g. 5%-20%) are chosen as boundary points by searching through the neighborhood of X_c. As shown in Fig. 2, X_a and X_b are chosen at 0.2A.

Fig. 2 Laser stripe line-width estimation and Gaussian curve fitting.

The line-width is derived as w_c = X_b − X_a (≈ 8σ_w), and about 99.9% of the laser stripe energy lies within this area. The laser stripe profile within the range [X_a − w_c, X_b + w_c] is defined as the stripe region l_a.

Second, Gaussian curve fitting [14] is applied to the cut-out laser stripe region l_a to obtain the approximated noiseless Gaussian profile l_b. From this fitted Gaussian profile, the true stripe width variance σ_w², which is used to determine the width σ of the center extractor's Gaussian smoothing kernel, is also derived.

The third step is to calculate the noise as n = l_b − l_a, from which the estimated noise variance σ_n² is obtained. A minimal code sketch of this three-step procedure is given below.
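The following Python sketch outlines the three-step procedure under our own assumptions (background already subtracted, a single stripe per profile, SciPy's curve_fit used for the Gaussian fit); it is not the authors' MATLAB implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, A, mu, sigma_w):
    return A * np.exp(-(x - mu) ** 2 / (2.0 * sigma_w ** 2))

def estimate_width_and_noise(profile, frac=0.2):
    """Estimate stripe STDW and noise variance from one background-subtracted profile.

    Step 1: find the peak X_c and the boundary points X_a, X_b where the
            intensity drops to `frac` of the peak value.
    Step 2: fit a Gaussian over the stripe region to get the approximated
            noiseless profile l_b and the stripe width sigma_w.
    Step 3: take the residual n = l_b - l_a as the noise and return its variance.
    """
    x = np.arange(len(profile), dtype=float)
    xc = int(np.argmax(profile))
    peak = profile[xc]

    # Step 1: search outwards from the peak for the frac*peak boundary points
    xa = xc
    while xa > 0 and profile[xa] > frac * peak:
        xa -= 1
    xb = xc
    while xb < len(profile) - 1 and profile[xb] > frac * peak:
        xb += 1
    la_x, la = x[xa:xb + 1], profile[xa:xb + 1]          # stripe region l_a

    # Step 2: Gaussian curve fit over the stripe region
    p0 = [peak, float(xc), max((xb - xa) / 4.0, 1.0)]
    (A_fit, mu_fit, sigma_w), _ = curve_fit(gaussian, la_x, la, p0=p0)
    lb = gaussian(la_x, A_fit, mu_fit, sigma_w)          # noiseless profile l_b

    # Step 3: residual noise and its variance
    noise_var = np.var(lb - la)
    return abs(sigma_w), noise_var, A_fit
```

The estimated σ_w then sets the smoothing scale via σ = √2·σ_w, and noise_var / A_fit² gives the 1/SNR term needed in Eq. (9).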

4. Experiments

To verify the validity of our analysis, we generated an artificial laser stripe image. A stripe with a Gaussian intensity profile lies exactly at the center of the image with σ_w = 5 and A = 100. White noise with variance σ_n² = 100 is added to the image during simulation, as shown in Fig. 3(a).

Fig. 3 Synthetic image experiments: (a) synthetic image with Gaussian white noise; (b) center position distribution; (c) extracted and predicted variance with respect to different image SNR (A = 100, σw = 5).

We applied center extraction 10000 times to the artificial image using the optimal Gaussian smoothing kernel scale σ = √2·σ_w. Figure 3(b) shows the statistics of the center positions extracted by the Steger method. As can be seen in the figure, the extracted center positions follow a normal distribution, which agrees with our earlier deduction.
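A Monte Carlo check along the lines of this synthetic experiment can be scripted as below (our own illustrative re-implementation using SciPy's 2D gaussian_filter, with the stripe parallel to the y-axis, i.e. one Gaussian profile per image row); the empirical variance of the per-row centers should be close to the Eq. (9) prediction.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
A, sigma_w, sigma_n = 100.0, 5.0, 10.0          # peak, STDW, noise std (sigma_n**2 = 100)
sigma = np.sqrt(2.0) * sigma_w                  # optimal smoothing scale
H, W = 20000, 101
x = np.arange(W, dtype=float)
clean_row = A * np.exp(-(x - 50.0) ** 2 / (2.0 * sigma_w ** 2))
img = np.tile(clean_row, (H, 1)) + rng.normal(0.0, sigma_n, size=(H, W))

# 2D Gaussian smoothing combined with first/second partial derivatives along x (axis 1)
rx = gaussian_filter(img, sigma, order=(0, 1))
rxx = gaussian_filter(img, sigma, order=(0, 2))

rows = np.arange(50, H - 50, 40)                # sub-sample rows to reduce correlation from y-smoothing
cols = np.argmin(rxx[rows], axis=1)             # strongest ridge response per row
centers = cols - rx[rows, cols] / rxx[rows, cols]

predicted = (sigma**2 + sigma_w**2) ** 3 / (8.0 * np.pi * sigma**4 * sigma_w**2) * (sigma_n**2 / A**2)
print(np.var(centers), predicted)               # both should be close to ~2.7e-3 px^2
```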

The same experiment was performed under different image SNRs to further investigate the consistency between the actual distribution and the distribution estimated by the proposed method. The comparison between predicted and extracted center position variance is shown in Fig. 3(c). As can be seen, the curves of predicted and extracted variance resemble each other. However, the absolute extraction error is larger in the low-SNR region, where a threshold effect may prevent the stripe peak signal from being distinguished from the background noise. In general, a processed image is acceptable if its SNR is greater than 30 dB [16], and “clean” images have an implicit SNR of around 60 dB [17].

We further investigated our method by performing center extraction experiments on real images. The experimental setup is shown in Fig. 4(a).

Fig. 4 Real experiments: (a) system setup; (b) estimated and real distribution of the extracted center positions.

We used a 100 mW line-structured laser to generate the laser stripe image on a planar target. A 1280 × 960 pixel camera (Imaging Source, DFK 51BG02) parallel to the laser samples the image and transfers the data to a computer for processing. Both the laser and the camera are about 0.5 m away from the target. 1000 individual sample images were recorded and analyzed. The distribution of the extracted center positions is shown in Fig. 4(b). As can be seen, the real distribution coincides with the estimated normal distribution. Furthermore, by changing the laser drive current, we captured laser stripe images with different noise levels and stripe peak values. Center extraction was performed on these images, and the proposed SNR estimation method was used to estimate the center variance. The comparison between estimated precision and actual precision at a confidence level of 99.99% is shown in Table 1.

Table 1. Comparison of estimated and actual precision in real image tests (CL = 99.9%)

For every drive current, 1000 samples were taken. It can be seen that the random noise caused by the laser (mainly speckle noise, because the planar target surface is quite rough) is the principal cause of the increasing noise level, since the experiments were performed by increasing the laser drive current while keeping all camera settings fixed. It can also be seen that the estimated center standard deviations coincide with the actual center standard deviations, which demonstrates the effectiveness of the proposed image SNR estimation method. More importantly, the actual precisions coincide with the estimated precisions; the differences between them are within one tenth of a pixel. Overall, Eq. (9) gives an excellent estimate of the line position variance in both synthetic and real images.

The above experiments were performed on a personal computer with an Intel Core i7 CPU running at 2.67 GHz and 3.25 GB of memory. The computational complexity of the noise estimation procedure is O(32NM) for an M × N image if a linear least squares method is used to approximate the Gaussian model. The example code is written in MATLAB. The process cannot achieve real-time performance, as the average time to estimate the noise of a single image is 89.45 s. Moreover, failures occur in two situations: when the angle between the projected laser plane and the planar target is much smaller or larger than 90 degrees, and when a wrong camera exposure over-saturates the laser stripe image. Both situations produce inaccurate Gaussian curve fitting results, since the first destroys the Gaussian shape while the second deforms the Gaussian profile. These two situations should be treated separately in practice.

5. Discussion

Using Eq. (9), the center extraction variance of an actual laser stripe image can be fully described, and thus the distribution of the extracted center positions can also be estimated according to the statistical error model (Eq. (3)). Once the distribution of the extraction result is established, its confidence interval can be found for a given confidence level. Conversely, given a precision requirement ±Δ (equal to the confidence interval), the confidence level can be derived by integrating the normal distribution N(μ, σ_l) over the interval [-Δ, +Δ].

Hence, the relationships between center extraction precision, confidence level and image SNR are established. Once any two of them are determined, the third can easily be obtained. Three practical applications of this triple relation are introduced:

The first application is to evaluate the repeatability of the extractor from the acquired images alone. Given a certain center extractor (σ is known), the extraction variance σ_l² and the center position distribution are determined once the SNR is estimated from the acquired images. The corresponding confidence level, which effectively represents the repeatability of the extractor, can then be found for a given minimum precision requirement.

The second application is to guide the design of the image acquisition unit. The minimum image SNR, SNR_min, can be determined from the user's minimum precision requirement. If the acquired images cannot meet this SNR_min, measures that improve image quality, such as reducing the lens F-number, refining the illumination, or replacing the image sensor, should be taken. In many cases a more stable laser also offers a great improvement. A sketch of this SNR_min computation is given after the third application below.

The third application is to estimate the center extraction precision of a given detection system without knowing the parameters of the center extractor. The image SNR can be estimated from the acquired images; the extraction precision is then found at different confidence levels.
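As an illustration of the second application above, the minimum SNR follows from combining Eq. (9) with the normal quantile of the required confidence level. The helper below is our own sketch (the function name and parameters are ours), not part of the paper.

```python
import numpy as np
from scipy.stats import norm

def snr_min(delta, confidence, sigma, sigma_w):
    """Minimum linear image SNR (A**2 / sigma_n**2) so that the extracted
    center lies within +/- delta pixels at the given confidence level,
    assuming Eq. (9) and the normal error model of Eq. (3)."""
    z = norm.ppf(0.5 + confidence / 2.0)          # quantile for the two-sided interval
    k = (sigma**2 + sigma_w**2) ** 3 / (8.0 * np.pi * sigma**4 * sigma_w**2)
    return z**2 * k / delta**2                    # from delta >= z * sqrt(k / SNR)

# Example: +/- 0.1 px at CL = 99.99%, sigma_w = 5 px, sigma = sqrt(2) * sigma_w
print(snr_min(0.1, 0.9999, np.sqrt(2.0) * 5.0, 5.0))   # ~407 (about 26 dB)
```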

6. Conclusion

In this paper, based on the robust line detection algorithm by C. Steger, we have presented a comprehensive statistical behavior analysis of the laser stripe center extractor in a triangulation laser range scanning system. The variance of the extracted center positions can be evaluated in the presence of noise with the formula we derived. The optimal scale of the Gaussian smoothing kernel can be determined for each laser stripe image to achieve the highest precision according to the derived formula. A flexible three-step noise estimation procedure has been proposed to estimate the center extraction precision of a typical triangulation laser scanning system simply by referring to the acquired images. Both the newly derived variance evaluation method and the proposed noise estimation method have been validated experimentally on artificial and natural images. With our analysis, the relationships between center extraction precision, confidence level and image quality have been established; once any two of them are given, the third can also be determined. This behavior analysis can be put into practice to estimate the precision and repeatability of the laser stripe center extractor, which is of great value in typical triangulation laser range scanning applications.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (61027017), the National Basic Research Program of China (2010CB327803) and the Postgraduate Research Innovation Fund of Jiangsu Province (CXLX12-0050). The authors would also like to thank Jingzhu Xu for her helpful assistance with this work.

References and links

1. S. Cui and X. Zhu, “A generalized reference-plane-based calibration method in optical triangular profilometry,” Opt. Express 17(23), 20735–20746 (2009). [CrossRef] [PubMed]
2. Z. Zhang and L. Yuan, “Building a 3D scanner system based on monocular vision,” Appl. Opt. 51(11), 1638–1644 (2012). [CrossRef] [PubMed]
3. Y. Zhang, S. Wang, X. Zhang, F. Xie, and J. Wang, “Freight train gauge-exceeding detection based on three-dimensional stereo vision measurement,” Mach. Vis. Appl. 24(3), 461–475 (2012).
4. M. Levoy, K. Pulli, B. Curless, S. Rusinkiewicz, D. Koller, L. Pereira, M. Ginzton, S. Anderson, J. Davis, J. Ginsberg, J. Shade, and D. Fulk, “The digital Michelangelo project: 3D scanning of large statues,” in Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques (ACM Press/Addison-Wesley, 2000), 131–144.
5. C. Steger, “An unbiased detector of curvilinear structures,” IEEE Trans. Pattern Anal. Mach. Intell. 20(2), 113–125 (1998). [CrossRef]
6. C. Steger, “Unbiased Extraction of Curvilinear Structures from 2D and 3D Images,” Dissertation, Fakultät für Informatik, Technische Universität München, 1998.
7. F. Zhou, G. Zhang, and J. Jiang, “Constructing feature points for calibrating a structured light vision sensor by viewing a plane from unknown orientations,” Opt. Lasers Eng. 43(10), 1056–1070 (2005). [CrossRef]
8. R. Yang, S. Cheng, W. Yang, and Y. Chen, “Robust and accurate surface measurement using structured light,” IEEE Trans. Instrum. Meas. 57(6), 1275–1280 (2008). [CrossRef]
9. R. D. Wedowski, G. A. Atkinson, M. L. Smith, and L. N. Smith, “A system for the dynamic industrial inspection of specular freeform surfaces,” Opt. Lasers Eng. 50(5), 632–644 (2012). [CrossRef]
10. C. Steger, “Unbiased extraction of lines with parabolic and Gaussian profiles,” Comput. Vis. Image Underst. 117(2), 97–112 (2013).
11. F. Bouchara and S. Ramdani, “Statistical behavior of edge detectors,” Signal Image Video Process. 1(3), 273–285 (2007). [CrossRef]
12. K. Astrom and A. Heyden, “Stochastic modeling and analysis of sub-pixel edge detection,” in Proceedings of the 13th International Conference on Pattern Recognition (1996), 86–90. [CrossRef]
13. C. Steger, “Analytical and empirical performance evaluation of sub-pixel line and edge detection,” in Empirical Evaluation Methods in Computer Vision, K. W. Bowyer and P. J. Phillips, eds. (IEEE Computer Society Press, 1998).
14. R. B. Fisher and D. K. Naidu, “A comparison of algorithms for sub-pixel peak detection,” in Advances in Image Processing, Multimedia and Machine Vision, J. Sanz, ed. (Springer-Verlag, Heidelberg, 1996).
15. J. Forest, J. Salvi, E. Cabruja, and C. Pous, “Laser stripe peak detector for 3D scanners. A FIR filter approach,” in Proceedings of the 17th International Conference on Pattern Recognition (2004), 646–649. [CrossRef]
16. T.-S. Chen, C.-C. Chang, and M.-S. Hwang, “A virtual image cryptosystem based upon vector quantization,” IEEE Trans. Image Process. 7(10), 1485–1488 (1998). [CrossRef] [PubMed]
17. J. Portilla, V. Strela, M. J. Wainwright, and E. P. Simoncelli, “Image denoising using scale mixtures of Gaussians in the wavelet domain,” IEEE Trans. Image Process. 12(11), 1338–1351 (2003). [CrossRef] [PubMed]

OCIS Codes
(100.0100) Image processing : Image processing
(140.0140) Lasers and laser optics : Lasers and laser optics
(150.6910) Machine vision : Three-dimensional sensing

ToC Category:
Image Processing

History
Original Manuscript: March 18, 2013
Revised Manuscript: May 9, 2013
Manuscript Accepted: May 17, 2013
Published: May 28, 2013

Citation
Li Qi, Yixin Zhang, Xuping Zhang, Shun Wang, and Fei Xie, "Statistical behavior analysis and precision optimization for the laser stripe center detector based on Steger's algorithm," Opt. Express 21, 13442-13449 (2013)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-21-11-13442

