Optics Express

  • Editor: C. Martijn de Sterke
  • Vol. 18, Iss. 2 — Jan. 18, 2010
  • pp: 513–522

Remote sensing image registration approach based on a retrofitted SIFT algorithm and Lissajous-curve trajectories

Zhi-li Song, Sheng Li, and Thomas F. George


Optics Express, Vol. 18, Issue 2, pp. 513-522 (2010)
http://dx.doi.org/10.1364/OE.18.000513



Abstract

By retrofitting the descriptor of the scale-invariant feature transform (SIFT) and developing a new similarity measure function based on trajectories generated from Lissajous curves, a new remote sensing image registration approach is constructed that is more robust and accurate than prior approaches. In complex cases where the correct rate of feature matching is below 20%, the retrofitted SIFT descriptor improves the correct rate to nearly 100%. Moreover, the similarity measure function makes it possible to quantitatively analyze the temporal change of the same geographic position.

© 2010 OSA

1. Introduction

Image registration is a crucial step in many image analysis tasks. It is the process of overlaying two or more images of the same scene taken at different times, from different viewpoints, and/or by different sensors; one image serves as the reference and the others as templates. The aim is to find a suitable transformation such that the transformed template image becomes similar to the reference one. The process generally consists of four steps: feature detection, feature matching, mapping function design, and image resampling and transformation. In remote sensing image registration, the performance of registration algorithms faces two main challenges.

Regarding the first challenge, due to the different physical characteristics of various sensors and/or photos taken at different wavelengths, the intensities of corresponding pixels are often intricately related. The difficulty is that multiple intensity values in one image may correspond to a single value in another, and features of one photograph might appear only partially in the other or even disappear completely. It is therefore necessary to develop a similarity measure that enhances the robustness and improves the accuracy of image registration. Most similarity measures currently in use are area-based methods, such as intensity-based [1–6], frequency-based [7–10] and other types [11,12]. Among the area-based methods, mutual information is regarded as an efficient similarity measure in multi-modal image registration [13]. The precondition of area-based methods, however, is that the windows of the primary image should be similar to those of the reference one. In practice, remote sensing images of the same geographic position always involve rotation, scale, affine, perspective and other transformations, causing the windows to mismatch and rendering these methods invalid.

In this paper, we propose an image registration approach based on a retrofitted SIFT algorithm and trajectories generated from Lissajous curves. On the basis of this method, we demonstrate that the accuracy of the similarity measure is greatly improved.

2. Method

2.1 Retrofitted SIFT

SIFT [14] is used to detect and extract local features in images. However, it is sensitive to affine transformations and other intensity changes caused by noise, varying illumination and different sensors. This deficiency can be remedied by exploiting geomorphology, which remains stable in remote sensing images. Among the salient structural features extracted from images in feature-based methods, such as points, lines, edges and regions [15–17], the point feature turns out to be unreliable when there are significant photometric and deformational changes caused by the transformation. Compared with point features, contour-based methods are superior, not only because they remain stable (or partly stable) under the significant changes caused by the multi-modality of the images, but also because they carry the transformation information [18–21]. We therefore optimize the SIFT algorithm in the following way.

We first obtain a collection of matched feature point pairs by SIFT and rank them by their similarities to achieve a coarse transformation. Since the correct rate of feature matching is low at this step, methods such as the least median of squares (LMS) [22] and random sample consensus (RANSAC) [23] algorithms become inefficient for estimating the parameters of the selected transformation model. We thus propose to utilize the triangle-area representation (TAR) [24], which is relatively invariant to affine transformation, to select and generate sensible matching point pairs.

The TAR value is computed from the signed area of a triangle formed by three points, say $p_b$, $p_m$ and $p_e$. The corresponding signed area is defined as follows:
$$\mathrm{TAR}(p_b,p_m,p_e)=\frac{1}{2}\begin{vmatrix} x_b & y_b & 1 \\ x_m & y_m & 1 \\ x_e & y_e & 1 \end{vmatrix}=\frac{1}{2}\left(x_b y_m + x_m y_e + x_e y_b - x_e y_m - x_b y_e - x_m y_b\right), \tag{1}$$
where $x_k$ and $y_k$ are the horizontal and vertical coordinates of point $p_k$, respectively. If the relation between the reference and template images is given by
$$\begin{pmatrix} \hat{x} \\ \hat{y} \end{pmatrix}=\begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}+\begin{pmatrix} e \\ f \end{pmatrix}, \tag{2}$$
and $\widehat{\mathrm{TAR}}(\hat{p}_b,\hat{p}_m,\hat{p}_e)$ denotes the transformed version of $\mathrm{TAR}(p_b,p_m,p_e)$ in the template image, then substituting (2) into (1) yields
$$\widehat{\mathrm{TAR}}(\hat{p}_b,\hat{p}_m,\hat{p}_e)=(ad-bc)\,\mathrm{TAR}(p_b,p_m,p_e). \tag{3}$$
It is clear that TAR is relatively invariant to the affine transformation.
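As a rough illustration of Eqs. (1) and (3), the TAR value and its affine invariance can be written in a few lines of Python with NumPy. This is a minimal sketch, not the authors' code; the affine parameters and points below are arbitrary illustrative values.

```python
import numpy as np

def tar(pb, pm, pe):
    """Signed triangle area of Eq. (1); points are (x, y) pairs.

    Relatively affine-invariant: an affine map with linear part
    [[a, b], [c, d]] scales every TAR value by (a*d - b*c), Eq. (3).
    """
    (xb, yb), (xm, ym), (xe, ye) = pb, pm, pe
    return 0.5 * (xb * ym + xm * ye + xe * yb
                  - xe * ym - xb * ye - xm * yb)

# Sanity check of Eq. (3) for an arbitrary affine map.
A = np.array([[1.2, 0.3], [-0.1, 0.9]])   # linear part (a b; c d)
t = np.array([5.0, -2.0])                  # translation (e, f)
pts = [np.array([0.0, 0.0]), np.array([4.0, 1.0]), np.array([1.0, 3.0])]
mapped = [A @ p + t for p in pts]
assert np.isclose(tar(*mapped), np.linalg.det(A) * tar(*pts))  # (ad - bc) * TAR
```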

For a pair of matched feature points $f_r$ and $f_s$, marked by colored crosses in Figs. 1(a) and 1(b), we denote the points on the curve $E_r$ as $E_r=\{p_{ri}: i=1,2,\ldots,n\}$ and the points on the curve $E_s$ as $E_s=\{p_{sj}: j=1,2,\ldots,m\}$. The TAR representation of the edges can be calculated by means of the following equations:
$$\mathrm{Im}R[i_1][i_2]=\mathrm{TAR}(p_{ri_1}, f_r, p_{ri_2}), \qquad \mathrm{Im}S[j_1][j_2]=\mathrm{TAR}(p_{sj_1}, f_s, p_{sj_2}), \tag{4}$$
where $0 \le i_1, i_2 \le n$ and $0 \le j_1, j_2 \le m$. According to Eq. (4), $\mathrm{Im}R$ and $\mathrm{Im}S$ describing the TAR representation are shown in Figs. 2(a) and 2(b).

Fig. 1. Feature points and edges in reference and template images. (a) Feature point and one edge around it in the reference image. (b) Feature point and one edge around it in the template image. (c) Corresponding triangle of one matched point from (a). (d) Corresponding triangle of one matched point from (b).

Fig. 2. TAR image and its applications. (a) 3D picture of TAR signatures of edges in the reference image. (b) 3D picture of TAR signatures of edges in the template image.

If the local transformation around $f_r$ and $f_s$ can be approximated as affine, then the transformation between $\mathrm{Im}R$ and $\mathrm{Im}S$ can be approximated as a scale and shift operation. We thus employ the SIFT algorithm to detect and match feature points from $\mathrm{Im}R$ and $\mathrm{Im}S$. According to Eq. (4), a point $(i,j)$ in $\mathrm{Im}R$ corresponds to a triangle formed by $f_r$, $p_{ri}$ and $p_{rj}$ in the reference image, and likewise for $\mathrm{Im}S$. A pair of feature points detected from $\mathrm{Im}R$ and $\mathrm{Im}S$ therefore corresponds to a pair of triangles, as shown in Figs. 1(c) and 1(d). Since an affine transformation is determined by three non-collinear points, the local transformation around $f_r$ and $f_s$ can be estimated with the aid of nearby edges.
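A direct transcription of Eq. (4), reusing the tar() helper sketched above, is given below. How the edge points are sampled and how the signature is rescaled to an 8-bit image before running SIFT are implementation choices the paper does not specify, so they are only indicated in comments.

```python
import numpy as np

def tar_signature(feature, edge_points):
    """Build the TAR image of Eq. (4) for one feature point.

    feature     : (x, y) of the matched feature point (f_r or f_s)
    edge_points : sequence of (x, y) points sampled along a nearby edge
    Entry [i1, i2] is TAR(p_{i1}, feature, p_{i2}).
    """
    n = len(edge_points)
    sig = np.zeros((n, n))
    for i1 in range(n):
        for i2 in range(n):
            sig[i1, i2] = tar(edge_points[i1], feature, edge_points[i2])
    return sig

# ImR and ImS can then be matched with SIFT like ordinary images, e.g.
# (assuming opencv-python >= 4.4, after rescaling sig to uint8 0..255):
#   import cv2
#   sift = cv2.SIFT_create()
#   kp, desc = sift.detectAndCompute(sig_as_uint8, None)
```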

2.2 Similarity measure

To enhance the robustness, accuracy and reliability of a registration algorithm, we propose a novel similarity measure based on mutual information and the Lissajous figure (MILF). The Lissajous figure (or Lissajous curve) is the graph of the system of parametric equations
$$\begin{cases} x = A_x \sin(\omega_x t + \phi_x) \\ y = A_y \sin(\omega_y t + \phi_y). \end{cases} \tag{5}$$
Given a set of values of $A_x, A_y, \omega_x, \omega_y, \phi_x, \phi_y$, a trajectory (denoted by $T_R$) can be generated according to Eq. (5). If $\lambda$ denotes the mutual information, the similarity (based on the Lissajous figure) between two points $p_R$ and $p_S$ is defined as
$$\mathrm{MILF}(p_R,p_S)=\lambda(G_1^R,G_2^R,G_1^S,G_2^S), \tag{6}$$
where $G_1^R, G_2^R, G_1^S, G_2^S$ are four gray-value sets of four point sets selected in the following way:

Table 1. Details of sets of test remote sensing images from USGS and NASA/JPL.

If $T_{R1}$ and $T_{R2}$ are two trajectories around $p_R$ generated according to Eq. (5), then $G_1^R=\{p_{r1i}: i=1,2,\ldots,n\}$ and $G_2^R=\{p_{r2i}: i=1,2,\ldots,n\}$ are two sets of gray values of two sets of points selected from $T_{R1}$ and $T_{R2}$, respectively. Equivalently, $G_1^S=\{p_{s1i}: i=1,2,\ldots,n\}$ and $G_2^S=\{p_{s2i}: i=1,2,\ldots,n\}$ are two sets of gray values of two sets of points selected from the trajectories $T_{S1}^f$ and $T_{S2}^f$ around $p_S$, where $T_{S1}^f$ and $T_{S2}^f$ are the corresponding trajectories of $T_{R1}$ and $T_{R2}$ under $f$ (the transformation between the reference and template images). Let $p(r_1,r_2)$ denote the joint distribution of $G_1^R, G_2^R$; $p(s_1,s_2)$ the joint distribution of $G_1^S, G_2^S$; and $p(r_1,r_2,s_1,s_2)$ the joint distribution of $G_1^R, G_2^R, G_1^S, G_2^S$. Based on these, $\mathrm{MILF}(p_R,p_S)$ can be calculated according to Eq. (7):
$$\mathrm{MILF}(p_R,p_S)=H(G_1^R,G_2^R)+H(G_1^S,G_2^S)-H(G_1^R,G_2^R,G_1^S,G_2^S), \tag{7}$$
where
$$H(G_1^R,G_2^R)=-\sum_{r_1\in G_1^R}\sum_{r_2\in G_2^R}p(r_1,r_2)\log p(r_1,r_2),$$
$$H(G_1^S,G_2^S)=-\sum_{s_1\in G_1^S}\sum_{s_2\in G_2^S}p(s_1,s_2)\log p(s_1,s_2),$$
and
$$H(G_1^R,G_2^R,G_1^S,G_2^S)=-\sum_{r_1\in G_1^R}\sum_{r_2\in G_2^R}\sum_{s_1\in G_1^S}\sum_{s_2\in G_2^S}p(r_1,r_2,s_1,s_2)\log p(r_1,r_2,s_1,s_2).$$

A Lissajous figure with different parameters generates different trajectories for calculating the similarity measure. When two trajectories are adopted, with the mutual information taken among four discrete random variables, MILF not only captures the statistical dependency but also characterizes the spatial interrelationships of the gray tones. In this way, much more information is taken into account, leading to improved robustness and accuracy.
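The construction of Eqs. (5)–(7) can be sketched as follows. This is only an illustrative reading of the method: the number of trajectory samples, the quantization into 16 gray-level bins, and nearest-neighbour pixel sampling are our assumptions, since the paper does not fix these details.

```python
import numpy as np

def lissajous(center, Ax, Ay, wx, wy, phx, phy, n=64):
    """Sample n points of the Lissajous trajectory of Eq. (5) around `center`."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x = center[0] + Ax * np.sin(wx * t + phx)
    y = center[1] + Ay * np.sin(wy * t + phy)
    return np.stack([x, y], axis=1)

def gray_values(img, pts):
    """Nearest-neighbour gray values of trajectory points (bounds-clipped)."""
    r = np.clip(np.round(pts[:, 1]).astype(int), 0, img.shape[0] - 1)
    c = np.clip(np.round(pts[:, 0]).astype(int), 0, img.shape[1] - 1)
    return img[r, c]

def entropy(*vars_, bins=16):
    """Joint Shannon entropy of quantized 8-bit gray-value sequences."""
    q = [np.minimum((v.astype(float) / 256.0 * bins).astype(int), bins - 1)
         for v in vars_]
    codes = q[0]
    for v in q[1:]:
        codes = codes * bins + v          # flatten the joint histogram index
    p = np.bincount(codes) / codes.size
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def milf(g1r, g2r, g1s, g2s):
    """MILF of Eq. (7): H(G1R, G2R) + H(G1S, G2S) - H(all four)."""
    return entropy(g1r, g2r) + entropy(g1s, g2s) - entropy(g1r, g2r, g1s, g2s)
```

In the registration loop, the template-side sets $G_1^S$ and $G_2^S$ would be obtained by mapping the two reference trajectories through the current transformation estimate $f$ and then sampling the template image.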

3. Experiments and discussion

Three sets of remote sensing images are selected for the experiments, as shown in Figs. 3 and 4. The details of the three sets of test images are tabulated in Table 1. Figures 3(c) and 3(d) show only partial images selected from Figs. 3(a) and 3(b). To evaluate the performance of the similarity measure proposed in this paper, a pair of points ($p_r$ and $p_s$), marked by crosses, is chosen from Figs. 3(c) and 3(d). Both points correspond to the same geographic position.

Fig. 3. Sets of images. (a) First reference image. (b) First template image. (c) Part of the reference image. (d) Part of the template image.

Fig. 4. Sets of images. (a) Second reference image. (b) Second template image. (c) Third reference image. (d) Third template image.

To compare the performance of the retrofitted SIFT algorithm with that of the original SIFT algorithm, we extract 3440 feature points from the reference image and 3515 feature points from the template image. After matching with each algorithm separately, the matched feature points are ranked by their similarities. The correct matching rate of the $N$ top pairs of matched feature points is used to evaluate performance; it is calculated as $C(N)=N_r/N$, where $N_r$ is the number of correctly matched feature point pairs among the $N$ top pairs. The experimental results are shown in Fig. 5. From Fig. 5(a), we can see that the correct feature matching rate with the original SIFT is below 20%. One reason is that the photos shown in Figs. 3(a) and 3(b) were taken at different wavelengths and times. Another reason is that the size and position of the image are changed and distorted, and even the color of the image is changed due to the different spectral features of the images. Following the method in Sec. 2.1, we retrofit the SIFT algorithm by incorporating local contour information. The retrofitted SIFT descriptor not only considers the information from the pixel values of the images, but also extracts topological information on geography, such as coasts, rivers, and so forth. As a result, the retrofitted SIFT algorithm greatly improves the correct feature matching rate from below 20% to nearly 100%, as shown in Fig. 5(b).

Fig. 5. The correct matching rate of the 500 top pairs from the queue of matched feature points with the SIFT and retrofitted SIFT algorithms, respectively.
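The metric $C(N)=N_r/N$ is straightforward to compute once matches are ranked; a minimal sketch follows, where the ground-truth correctness test (e.g. a reprojection-error threshold under a known transform) is an assumed ingredient not specified by the paper.

```python
def correct_matching_rate(matches, is_correct, N):
    """C(N) = Nr / N over the N top-ranked matches.

    matches    : matched pairs, already ranked by similarity (best first)
    is_correct : predicate deciding whether a pair is a true match,
                 e.g. reprojection error under a ground-truth transform
                 below a pixel threshold
    """
    top = matches[:N]
    return sum(1 for m in top if is_correct(m)) / float(N)
```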

Based on the similarity measure proposed in Sec. 2.2, the similarity value between every pixel in the template image and a given point in the reference image can easily be obtained; these values are generally used to represent the matching of geographic points. The highest similarity value between two points in the reference and template images indicates that the two points occupy the same geographic position.

For the three sets of images shown in Figs. 3 and 4, after combining all the methods developed above, remote sensing images taken by different sensors at different wavelengths can be precisely registered, as shown in Fig. 8. Here, we use an image mosaic to show the registration results intuitively. The mosaic of the reference image and the template image is created in the following way. First, the reference image is divided into equal-sized rectangular sections. Then, every other section is filled with the corresponding part of the warped template image according to the estimated transformation. The correctness of the registration results can be verified visually by checking the continuity of common edges and regions in the mosaic images.

Fig. 8. Mosaic result of the remote sensing images.
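The mosaic construction described above amounts to a checkerboard blend of the reference image and the already-warped template image; a minimal sketch follows, in which the block size of 64 pixels is an arbitrary choice of ours.

```python
import numpy as np

def checkerboard_mosaic(reference, warped_template, block=64):
    """Mosaic used to inspect registration: alternate equal-sized blocks
    of the reference image and of the warped template image.

    A correct registration shows continuous edges across block borders.
    """
    assert reference.shape == warped_template.shape
    mosaic = reference.copy()
    h, w = reference.shape[:2]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            if ((by // block) + (bx // block)) % 2 == 1:   # every other block
                mosaic[by:by + block, bx:bx + block] = \
                    warped_template[by:by + block, bx:bx + block]
    return mosaic
```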

To evaluate the accuracy and performance of the method proposed in this paper, we also compare it against three widely available image registration tools, taking the mutual information between the reference image and the aligned template image as the metric. In this experiment, the first and second sets of images are used. The three tools are TurboReg [27], the image registration tool (IRT) [28] and ImReg [29]. The 2D projective transformation model that we use is defined by Eq. (8) in homogeneous coordinates:
$$\begin{pmatrix} x' \\ y' \\ z' \end{pmatrix}=\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}. \tag{8}$$
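Applying the projective model of Eq. (8) to image coordinates requires dividing the homogeneous result by $z'$; a short sketch:

```python
import numpy as np

def project(H, pts):
    """Apply the 2D projective model of Eq. (8) to (x, y) points.

    H   : 3x3 matrix of the parameters a11 ... a33
    pts : (N, 2) array of coordinates
    The homogeneous result (x', y', z') is divided by z' to return
    inhomogeneous image coordinates.
    """
    ones = np.ones((pts.shape[0], 1))
    hom = np.hstack([pts, ones]) @ H.T          # rows are (x', y', z')
    return hom[:, :2] / hom[:, 2:3]
```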

4. Conclusion

In this paper, we have proposed an image registration approach based on a retrofitted SIFT algorithm and a new similarity measure built on trajectories generated from Lissajous figures. By optimizing the SIFT algorithm, the accuracy of feature matching in some difficult cases of remote sensing image registration is improved from below 20% to nearly 100%. With the help of the proposed similarity measure, the accuracy of the image registration algorithm is improved considerably, which makes it possible to carry out quantitative analyses and measurements of the change of the same geographic position at different times. The experiments show that the method can register remote sensing images with satisfactory performance.

Acknowledgments

This work was supported by the National Science Foundation of China under Grant 20804039 and the Zhejiang Provincial Natural Science Foundation of China under Grant Y4080300. The authors would like to thank Intermap Technologies Inc./USGS for the test data.

References and links

1. A. Wade and F. Fitzke, “A fast, robust pattern recognition system for low light level image registration and its application to retinal imaging,” Opt. Express 3(5), 190–197 (1998). [CrossRef] [PubMed]
2. H. Chen, M. K. Arora, and P. K. Varshney, “Mutual information-based image registration for remote sensing data,” Int. J. Remote Sens. 24(18), 3701–3706 (2003). [CrossRef]
3. Z. Li, Z. Bao, H. Li, and G. Liao, “Image autocoregistration and InSAR interferogram estimation using joint subspace projection,” IEEE Trans. Geosci. Rem. Sens. 44(2), 288–297 (2006). [CrossRef]
4. J. Orchard, “Efficient least squares multimodal registration with a globally exhaustive alignment search,” IEEE Trans. Image Process. 16(10), 2526–2534 (2007). [CrossRef] [PubMed]
5. Z. Cao, Y. Zheng, Y. Wang, and R. Yan, “An algorithm for object function optimization in mutual information-based image registration,” in Proceedings of the 2008 Congress on Image and Signal Processing, Vol. 4, 426–430 (2008).
6. G. Shao, F. Yao, and M. Malkani, “Aerial image registration based on joint feature-spatial spaces, curve and template matching,” in IEEE International Conference on Information and Automation (Hunan, China, 2008), pp. 863–868.
7. A. Wong and J. Orchard, “Efficient FFT-accelerated approach to invariant optical-LIDAR registration,” IEEE Trans. Geosci. Rem. Sens. 46, 3917–3925 (2008). [CrossRef]
8. I. Zavorin and J. Le Moigne, “Use of multiresolution wavelet feature pyramids for automatic registration of multisensor imagery,” IEEE Trans. Image Process. 14(6), 770–782 (2005). [CrossRef] [PubMed]
9. J. G. Liu and H. Yan, “Phase correlation pixel-to-pixel image co-registration based on optical flow and median shift propagation,” Int. J. Remote Sens. 29(20), 5943–5956 (2008). [CrossRef]
10. G. Hong and Y. Zhang, “Wavelet-based image registration technique for high-resolution remote sensing images,” Comput. Geosci. 34(12), 1708–1720 (2008). [CrossRef]
11. S. Guyot, M. Anastasiadou, E. Deléchelle, and A. De Martino, “Registration scheme suitable to Mueller matrix imaging for biomedical applications,” Opt. Express 15(12), 7393–7400 (2007). [CrossRef] [PubMed]
12. B. Zitova and J. Flusser, “Image registration methods: a survey,” Image Vis. Comput. 21(11), 977–1000 (2003). [CrossRef]
13. A. Rajwade, A. Banerjee, and A. Rangarajan, “Probability density estimation using isocontours and isosurfaces: applications to information-theoretic image registration,” IEEE Trans. Pattern Anal. Mach. Intell. 31(3), 475–491 (2009). [CrossRef] [PubMed]
14. D. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. Comput. Vis. 60(2), 91–110 (2004). [CrossRef]
15. K. Mikolajczyk and C. Schmid, “Scale and affine invariant interest point detectors,” Int. J. Comput. Vis. 60(1), 63–86 (2004). [CrossRef]
16. T. Tuytelaars and L. Van Gool, “Matching widely separated views based on affinely invariant neighborhoods,” Int. J. Comput. Vis. 59(1), 61–85 (2004). [CrossRef]
17. F. P. Nava and A. P. Nava, “A probabilistic generative model for unsupervised invariant change detection in remote sensing images,” in IEEE International Geoscience and Remote Sensing Symposium (Barcelona, 2007), pp. 2362–2365.
18. S. Jiao, C. Wu, R. W. Knighton, G. Gregori, and C. A. Puliafito, “Registration of high-density cross sectional images to the fundus image in spectral-domain ophthalmic optical coherence tomography,” Opt. Express 14(8), 3368–3376 (2006). [CrossRef] [PubMed]
19. F. Eugenio, F. Marques, and J. Marcello, “A contour-based approach to automatic and accurate registration of multitemporal and multisensor satellite imagery,” in IEEE International Geoscience and Remote Sensing Symposium, Vol. 6 (Toronto, 2002), pp. 3390–3392.
20. N. Netanyahu, J. Le Moigne, and J. Masek, “Georegistration of Landsat data via robust matching of multiresolution features,” IEEE Trans. Geosci. Rem. Sens. 42(7), 1586–1600 (2004). [CrossRef]
21. A. Wong and D. A. Clausi, “ARRSI: Automatic registration of remote-sensing images,” IEEE Trans. Geosci. Rem. Sens. 45(5), 1483–1493 (2007). [CrossRef]
22. C. Stewart, “Robust parameter estimation in computer vision,” SIAM Rev. 41(3), 513–537 (1999). [CrossRef]
23. M. Fischler and R. C. Bolles, “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,” Commun. ACM 24(6), 381–395 (1981). [CrossRef]
24. N. Alajlan, I. El Rube, M. S. Kamel, and G. Freeman, “Shape retrieval using triangle-area representation and dynamic space warping,” Pattern Recognit. 40(7), 1911–1920 (2007). [CrossRef]
25. N. Kyriakoulis, A. Gasteratos, and S. G. Mouroutsos, “Fuzzy vergence control for an active binocular vision system,” in 7th International Conference on Cybernetic Intelligent Systems (London, 2008), pp. 1–5.
26. A. Rajwade, A. Banerjee, and A. Rangarajan, “Probability density estimation using isocontours and isosurfaces: applications to information-theoretic image registration,” IEEE Trans. Pattern Anal. Mach. Intell. 31(3), 475–491 (2009). [CrossRef] [PubMed]
27. http://bigwww.epfl.ch/thevenaz/turboreg/
28. http://www.mathworks.com/
29. http://vision.ece.ucsb.edu/registration/demo/

OCIS Codes
(100.2000) Image processing : Digital image processing
(100.5010) Image processing : Pattern recognition
(100.3008) Image processing : Image recognition, algorithms and filters

ToC Category:
Image Processing

History
Original Manuscript: September 29, 2009
Revised Manuscript: November 2, 2009
Manuscript Accepted: November 16, 2009
Published: January 4, 2010

Citation
Zhi-li Song, Sheng Li, and Thomas F. George, "Remote sensing image registration approach based on a retrofitted SIFT algorithm and Lissajous-curve trajectories," Opt. Express 18, 513-522 (2010)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-18-2-513


