In recent years, many methods have been put forward to improve image matching between images taken from different viewpoints. However, these methods still fail to achieve stable results, especially when large viewpoint variation occurs. In this paper, an image matching method based on affine transformation of local image areas is proposed. First, local stable regions are extracted from the reference image and the test image and normalized to circular areas according to their second-order moments. Then, scale-invariant features are detected and matched within the transformed regions. Finally, an epipolar constraint based on the fundamental matrix is applied to eliminate incorrect correspondences. The goal of our method is not to increase the invariance of the detector but to improve the final matching performance. The experimental results demonstrate that, compared with traditional detectors, the proposed method provides a significant improvement in robustness when matching different-viewpoint images of both 2D and 3D scenes. Moreover, its efficiency is greatly improved compared with affine scale-invariant feature transform (Affine-SIFT).
© 2012 Optical Society of America
Original Manuscript: March 20, 2012
Revised Manuscript: October 17, 2012
Manuscript Accepted: November 19, 2012
Published: December 21, 2012
Min Chen, Zhenfeng Shao, Dongyang Li, and Jun Liu, "Invariant matching method for different viewpoint angle images," Appl. Opt. 52, 96-104 (2013)
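The normalization step described in the abstract, mapping a local stable region to a circular area via its second-order moments, can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the function names are hypothetical, and the region is stood in for by a synthetic anisotropic point cloud. The idea is that the inverse square root of the region's second-moment (covariance) matrix is an affine map that turns the region's moment ellipse into a circle:

```python
import numpy as np

def second_moment_matrix(points):
    """Second-order central moment (covariance) matrix of region pixel coordinates."""
    centered = points - points.mean(axis=0)
    return centered.T @ centered / len(centered)

def circle_normalizing_transform(points):
    """Affine map A = M^(-1/2) that turns the region's moment ellipse into a circle."""
    M = second_moment_matrix(points)
    eigvals, eigvecs = np.linalg.eigh(M)          # M is symmetric positive definite
    return eigvecs @ np.diag(1.0 / np.sqrt(eigvals)) @ eigvecs.T

# Demo: an elongated, sheared region (anisotropic point cloud).
rng = np.random.default_rng(0)
pts = rng.normal(size=(5000, 2)) @ np.array([[3.0, 1.0], [0.0, 0.5]])

A = circle_normalizing_transform(pts)
normed = pts @ A.T
C = second_moment_matrix(normed)  # after normalization, the moments are isotropic
```

After this transform, `C` is (up to numerical precision) the identity matrix, i.e., the region has equal extent in every direction, which is what lets a scale-invariant detector such as SIFT operate on viewpoint-distorted regions as if they were seen frontally. The normalization is determined only up to an arbitrary rotation, which the subsequent rotation-invariant descriptor absorbs.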