Optics Express

  • Editor: C. Martijn de Sterke
  • Vol. 18, Iss. 15 — Jul. 19, 2010
  • pp: 15349–15360
Rotation-invariant target recognition in LADAR range imagery using model matching approach

Qi Wang, Li Wang, and Jianfeng Sun  »View Author Affiliations


Optics Express, Vol. 18, Issue 15, pp. 15349-15360 (2010)
http://dx.doi.org/10.1364/OE.18.015349



Abstract

The shape index is introduced to address the target recognition problem in laser radar (Ladar) range imagery. A raw Ladar scene range image is transformed into a height-range image and a horizontal-range image. These are then compared with a model library in which every model target contributes six samples, selected on the principles of raising the recognition rate and shortening computation time. We experimentally demonstrate that the proposed approach resolves the out-of-plane rotation-invariant target recognition problem with a high recognition rate.

© 2010 OSA

1. Introduction

Target recognition from laser radar (Ladar) has become a hot research topic in recent years, because Ladar offers high spatial resolution and collects rich target information such as range images, intensity images, and Doppler images. The 3D data collected by Ladar provide geometric information about objects that is less sensitive to the problems of 2D imaging [1–4]. Thus, a recognition system using 3D range images has received significant attention in the past few years.

In the 3D pattern recognition field, out-of-plane rotation invariance is a long-standing puzzle that also arises in pattern recognition from 3D Ladar range images. The main problems in 3D target recognition are how to represent a free-form surface effectively and how to match surfaces using the selected representation. Traditional 3D feature-matching algorithms segment the target into simple geometric primitives such as planes and cylinders [5,6]; they mainly deal with polyhedral targets. However, geometric primitives are not the most suitable representation for free-form surfaces, and researchers have used a number of shape-based approaches to recognize free-form objects from 3D range images. Recent shape-based algorithms include Shantaram's contour-based algorithm, Johnson's spin-image algorithm, Yamany's surface signatures, and Correa's spherical spin image algorithm [7–11].

In this paper we use the shape index of an object's surface to encode the object's signature for recognizing an object of interest in a Ladar range image. The algorithm reduces the 3D target recognition problem to multiple 2D patch-matching problems, and so benefits from the mature achievements of 2D image processing. On this basis the paper develops an algorithm to solve the out-of-plane rotation-invariant target recognition problem. When an airborne Ladar detects a ground-based object, the obtained range image of the object can be divided into two parts: top-half information and side-face information. Using simple geometric relations, these are transformed into a height-range image and a horizontal-range image, respectively. The top-half and side-face content of the two range images varies with the depression angle and the azimuth angle. We then create a model library in which every target supplies six samples; the sample set is selected by tests based on the principles of raising the recognition rate and shortening computation time. Finally, both the height-range and horizontal-range images corresponding to a single range image are compared with the model library, and the larger recognition value is chosen as the final recognition result.

Our research team carried out imaging experiments using a coherent CO2 imaging Ladar and obtained intensity and range images that can be used to recognize targets. For this Ladar system, the angular resolution is less than 0.3 mrad and the spatial resolution is 32×64 pixels. The range image is subject to fluctuations arising from the combined effects of laser speckle and local-oscillator shot noise [12]. Due to the limitations of the experimental conditions, it is very difficult to obtain real range images of targets with the arbitrary attitude angles required in this paper. We therefore simulated Ladar range images of scene targets with arbitrary attitude angles using OpenGL and added noise based on a real Ladar range-image noise model.

The rest of the paper is organized as follows. Section 2 introduces the shape-index representation of free-form surfaces and the surface-patch matching method. Section 3 resolves the rotation-invariant target recognition problem by comparing a height-range image and a horizontal-range image, extracted from a raw range image, with a specific model library. Section 4 gives experimental results demonstrating the effectiveness of the proposed approach.

2. Overview of shape index matching approach

In this section, we extract feature points from a range image and obtain local surface patches (LSPs). We then calculate the surface type and the centroid of each LSP. Finally, by comparing the 2D histograms and surface types of the LSPs, correspondences between surface points are determined and verified.

2.1 Local surface patch representation

The shape index is a quantitative measure of the shape of an object surface at a point p [13],

S_i(p) = \frac{1}{2} - \frac{1}{\pi}\tan^{-1}\frac{\kappa_1(p)+\kappa_2(p)}{\kappa_1(p)-\kappa_2(p)},
(1)

where κ1 and κ2 are the maximum and minimum principal curvatures of the surface [14]. With this definition, all shapes map into the interval S_i ∈ [0, 1], and every distinct shape corresponds to a unique value of S_i. Figure 1 shows the range image of an aircraft and its shape index image.

Fig. 1 (a) Top view of an aircraft, (b) its range image, and (c) its shape index image.
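As a concrete illustration, Eq. (1) can be evaluated directly from the two principal curvatures. The sketch below is our own minimal implementation, not the authors' code; the handling of degenerate points where κ1 = κ2 is an assumption.

```python
import numpy as np

def shape_index(k1, k2):
    """Shape index of Eq. (1): S_i = 1/2 - (1/pi) * arctan((k1 + k2)/(k1 - k2)),
    with k1 >= k2 the maximum and minimum principal curvatures.
    Maps every distinct shape into [0, 1]; the division yields +/-inf when
    k1 == k2 != 0 (interval endpoints) and NaN at planar points (0/0)."""
    k1 = np.asarray(k1, dtype=float)
    k2 = np.asarray(k2, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        return 0.5 - np.arctan((k1 + k2) / (k1 - k2)) / np.pi
```

For example, a symmetric saddle (κ1 = 1, κ2 = −1) maps to 0.5, a convex ridge (κ1 = 1, κ2 = 0) to 0.25, and a concave rut (κ1 = 0, κ2 = −1) to 0.75.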

From Fig. 1 we can see that shape index values capture the characteristics of object shape, so the shape index can be used to extract feature points, which are defined in areas with large shape variation. If the shape index S_i of a point satisfies Eq. (2) within a k×k window, the point is marked as a feature point [15]:

S_i \ge (1+\alpha)\mu \text{ and } S_i = \max(\text{shape indexes in window}), \quad \text{or} \quad S_i \le (1-\beta)\mu \text{ and } S_i = \min(\text{shape indexes in window}),
\mu = \frac{1}{W}\sum_{j=1}^{W} S_i(j), \qquad 0 \le \alpha, \beta \le 1.
(2)

Here α and β control the selection of feature points, and W is the number of points in the window; both α and β range from zero to one. The larger α and β are, the fewer feature points are selected. Fewer feature points per target yield a lower recognition rate but higher recognition efficiency, so we recommend α = 0.35 and β = 0.2 to balance this trade-off. Figure 2

Fig. 2 Feature point locations in a range image.

shows the results of feature point extraction, marked by red dots on the aircraft. The shape index S_i of a feature point is the maximum or minimum of all shape indexes of points in the window W.
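The feature-point rule of Eq. (2) amounts to a sliding-window test. The following sketch is our own illustration, not the authors' code; the brute-force window scan and the skipping of border pixels are assumptions.

```python
import numpy as np

def feature_points(si, k=5, alpha=0.35, beta=0.2):
    """Mark a pixel as a feature point (Eq. (2)) when its shape index is the
    maximum within a k-by-k window and exceeds (1 + alpha) * mu, or is the
    window minimum and falls below (1 - beta) * mu, where mu is the window
    mean. Returns a boolean mask; border pixels are skipped."""
    si = np.asarray(si, dtype=float)
    h, w = si.shape
    r = k // 2
    mask = np.zeros_like(si, dtype=bool)
    for y in range(r, h - r):
        for x in range(r, w - r):
            win = si[y - r:y + r + 1, x - r:x + r + 1]
            mu = win.mean()
            v = si[y, x]
            if (v >= (1 + alpha) * mu and v == win.max()) or \
               (v <= (1 - beta) * mu and v == win.min()):
                mask[y, x] = True
    return mask
```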

We calculate the surface type Fp of an LSP based on the Gaussian and mean curvatures at the feature point [14]. There are eight fundamental, independent surface types that can be characterized using only the signs of the mean and Gaussian curvatures, as shown in Table 1 [16].

Table 1. Surface Type Labels Based on Surface Curvature Sign

The centroid of an LSP is also calculated for the computation of the rigid transformation. Figure 3 shows that the LSP representation includes a 2D histogram as well as the surface type Fp and the centroid of the patch. The 2D histogram and surface type are used for comparing LSPs, and the centroid is used for grouping corresponding LSPs.
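The sign-based classification of Table 1 can be written as a small lookup. The sketch below is our own illustration; the Besl–Jain type labels, the sign convention, and the zero-tolerance `eps` for noisy curvature estimates are assumptions rather than details given in the paper.

```python
def surface_type(H, K, eps=1e-6):
    """Classify a point into one of the eight fundamental surface types from
    the signs of the mean curvature H and Gaussian curvature K (Table 1).
    `eps` is an assumed tolerance below which a curvature counts as zero."""
    def sgn(v):
        return 0 if abs(v) < eps else (1 if v > 0 else -1)
    table = {
        (-1,  1): "peak",          (-1,  0): "ridge",
        (-1, -1): "saddle ridge",  ( 0,  0): "flat",
        ( 0, -1): "minimal",       ( 1,  1): "pit",
        ( 1,  0): "valley",        ( 1, -1): "saddle valley",
    }
    # (H = 0, K > 0) cannot occur on a real surface, hence the fallback.
    return table.get((sgn(H), sgn(K)), "impossible")
```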

2.2 Matching and verification

Given a scene range image, we extract feature points and obtain local surface patches. We then calculate the surface type and 2D histogram corresponding to every LSP. Since a histogram can be regarded as an approximation of a probability density function, the χ² divergence of Eq. (4) is used to measure dissimilarity,

\chi^2(M,N) = \sum_i \frac{(m_i - n_i)^2}{m_i + n_i},
(4)

where M and N are the two normalized histograms and m_i and n_i are the counts in the ith bin of M and N, respectively.
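Equation (4) is straightforward to compute over two normalized histograms; the sketch below is our own illustration, with empty bins skipped to avoid division by zero (an assumption, since the paper does not specify this case).

```python
import numpy as np

def chi_square(M, N):
    """Chi-squared divergence of Eq. (4) between two normalized histograms:
    chi2(M, N) = sum_i (m_i - n_i)^2 / (m_i + n_i). Bins empty in both
    histograms are skipped to avoid 0/0."""
    M = np.asarray(M, dtype=float)
    N = np.asarray(N, dtype=float)
    s = M + N
    nz = s > 0
    return np.sum((M[nz] - N[nz]) ** 2 / s[nz])
```

Identical histograms give 0; the value grows as the bin-wise disagreement grows.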

After voting over all the LSPs contained in a scene object, we choose the LSPs with minimum dissimilarity and the same surface type as the candidate corresponding patches. The correspondences between LSPs are filtered and then grouped by geometric consistency to compute plausible transformations that match the scene to the model data. To obtain a more definite match, the initial scene-to-model alignment is verified and refined using a modified version of the iterative closest point (ICP) algorithm [17]. If the scene object is really an instance of the model object, the ICP algorithm yields a good registration by eliminating inconsistent matches.
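A minimal point-to-point ICP iteration can illustrate this verification step. The sketch below is a generic textbook ICP (brute-force nearest neighbours plus a closed-form SVD rigid transform), not the modified variant of Ref. [17]; the fixed iteration count and lack of outlier rejection are simplifying assumptions.

```python
import numpy as np

def icp(scene, model, iters=20):
    """Minimal point-to-point ICP sketch: at each iteration, pair every
    scene point with its nearest model point, then solve for the rigid
    transform (R, t) in closed form via SVD (orthogonal Procrustes).
    Returns the aligned scene points and the final mean-squared error."""
    src = np.asarray(scene, dtype=float).copy()
    dst = np.asarray(model, dtype=float)
    for _ in range(iters):
        # Brute-force nearest-neighbour correspondences.
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = dst[d2.argmin(axis=1)]
        # Closed-form rigid transform between centred point sets.
        cs, cn = src.mean(0), nn.mean(0)
        U, _, Vt = np.linalg.svd((src - cs).T @ (nn - cn))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cn - R @ cs
        src = src @ R.T + t
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    return src, d2.min(axis=1).mean()
```

With clean data and a small initial misalignment, the closed-form solve recovers the pose in essentially one iteration, so the residual MSE goes to zero.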

Target recognition performance can be measured by the goodness-of-fit value V_GOF between the scene and the model, defined as [18]

V_{GOF}(s,m) = \frac{\varepsilon^2 N_{pt}}{MSE}.
(5)

Here s and m represent a scene and a model, respectively; ε is the fraction of overlap between s and m, determined by the verification process; N_pt is the number of plausible pose transformations; and MSE is the mean-squared error determined by the ICP. To express the recognition performance of the surface matching clearly, V_GOF is normalized over all the V_GOF values in the model library. If scene s is matched against model i in a library of R models, the normalized V_GOF is defined as [18]

\bar{V}_{GOF}(s,m_i) = \frac{V_{GOF}(s,m_i)}{\sum_{j=1}^{R} V_{GOF}(s,m_j)}.
(6)

V̄_GOF ranges from 0 to 1, and the sum of V̄_GOF over all models is 1. When the scene matches none of the models, the sum of the V̄_GOF values equals 0.
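Equations (5) and (6) reduce to a few lines of code. The sketch below is our own illustration; the input values in the usage note are hypothetical, not measurements from the paper.

```python
import numpy as np

def vgof(eps, n_pt, mse):
    """Goodness of fit, Eq. (5): V_GOF = eps^2 * N_pt / MSE."""
    return eps ** 2 * n_pt / mse

def normalize_vgof(vgof_values):
    """Eq. (6): each V_GOF divided by the sum over all R models, so the
    normalized values lie in [0, 1] and sum to 1 (or stay 0 when the
    scene matches no model at all)."""
    v = np.asarray(vgof_values, dtype=float)
    s = v.sum()
    return v / s if s > 0 else v
```

For example, with hypothetical overlaps and errors for two models, `normalize_vgof([vgof(0.9, 4, 0.01), vgof(0.3, 1, 0.05)])` concentrates almost all the normalized score on the first model.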

3. Proposed rotation-invariant target recognition approach

When a Ladar detects a scene target with an arbitrary attitude angle, it is impossible to recognize the target by matching the range image directly against a model library: the library cannot contain an adequate sample set for every attitude angle, and the full comparison process would be too time-consuming. As mentioned above, the range imagery can instead be transformed into two parts: a height-range image containing only top-half information and a horizontal-range image containing only side-face information. We use the height-range and horizontal-range images to recognize scene targets by comparing them against different model sets. For the height-range image, the scene target may have an arbitrary azimuth angle and a larger depression angle; the corresponding model set consists of one overhead-viewing range image per model, covering all top-half information. For the horizontal-range image, the target may also have an arbitrary azimuth angle but a smaller depression angle; the corresponding model set consists of several forward-looking range images per model, covering all side-face information. According to our estimation, both can achieve high recognition rates. Therefore, using the proposed rotation-invariant target recognition approach, a scene target with an arbitrary attitude angle can be recognized by matching it against a specific model library containing just a few samples per model. The number of samples per model in the library is the key to recognition efficiency, so we discuss the selection of the model samples in order to meet the practical requirements of a higher recognition rate and less recognition time.

In this section, we first extract the horizontal-range image and the height-range image from a raw range image to address target recognition from front-overhead viewing Ladar imagery. Second, we enlarge the model sample set appropriately, based on the principles of raising the recognition rate and shortening computation time, to address target recognition from forward-looking Ladar imagery. Finally, we fuse the two methods to recognize a target from Ladar range imagery containing a measured target with arbitrary depression and azimuth angles.

3.1 Object recognition from front-overhead viewing Ladar imagery

In order to recognize a target from front-overhead viewing Ladar imagery, the raw Ladar range imagery is transformed into a height-range image and a horizontal-range image of the measured target.

Given range imagery containing a measured target with arbitrary depression and azimuth angles, the target range image contains two kinds of content: top-half information and side-face information. When the depression and azimuth angles of the measured target vary, the top-half and side-face content changes accordingly. Corresponding to the two kinds of information, the raw range imagery is transformed into a height-range image and a horizontal-range image, as illustrated in Fig. 4.

Fig. 4 Transformation from range value to height-range and horizontal-range values [19].

In this figure, L denotes the range between the Ladar and a point on the target surface, and A is the angle measured from the horizontal plane to the line between the Ladar and that point. H and T represent the vertical-range and horizontal-range values from the Ladar to the point, respectively. The height-range and horizontal-range values are calculated according to

H = L\sin A, \qquad T = L\cos A.
(7)

After transforming a scene range image by Eq. (7), we obtain height-range and horizontal-range values for all target points in the field of view. We assume the ground height is zero and calculate the relative height value H_i for every point in the field of view. We set a height threshold H_s = \gamma\,(\max(H_i) - \min(H_i)), where γ controls the selection of the points contributing to the height-range image and ranges from zero to one. Points with relative height values less than H_s are filtered out, and the remaining points form the height-range image. A horizontal-range image is obtained in like manner. Noise-free range images for a car and a bus are shown in Figs. 5(b) and 5(f), with depression angles of 60° and 20° and azimuth angles of 20° and 30°, respectively. Figures 5(c) and 5(g) show the transformed height-range images, and Figs. 5(d) and 5(h) the transformed horizontal-range images.

Fig. 5 (a) and (e) 3D models of a car and a bus, (b) and (f) corresponding noise-free range images, (c) and (g) corresponding height-range images, (d) and (h) corresponding horizontal-range images.
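The transform of Eq. (7) together with the height threshold H_s can be sketched as follows. This is our own illustration; the function name and the example value γ = 0.2 are assumptions, not parameters given in the paper.

```python
import numpy as np

def split_range_image(L, A, gamma=0.2):
    """Transform raw range values into height-range H = L*sin(A) and
    horizontal-range T = L*cos(A) (Eq. (7)), then keep for the height-range
    image only points whose relative height (ground taken as zero) reaches
    the threshold Hs = gamma * (max(H) - min(H))."""
    L = np.asarray(L, dtype=float)
    A = np.asarray(A, dtype=float)   # depression angles in radians
    H = L * np.sin(A)
    T = L * np.cos(A)
    rel = H - H.min()                # relative height above the ground
    Hs = gamma * (H.max() - H.min())
    keep = rel >= Hs                 # mask of points forming the image
    return H, T, keep
```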

From Fig. 5 we can conclude that the larger the depression angle, the more top-half information of the target appears in the height-range image; similarly, the smaller the depression angle, the more side-face information appears in the horizontal-range image.

3.2 Target recognition from forward-looking Ladar imagery

When the depression angle of the measured target is small, and especially for forward-looking detection (i.e., a depression angle of zero), the recognition rate becomes too low for practical application. This section therefore addresses the problem by judiciously increasing the number of samples (the training-set size) in the model library, again based on the principles of raising the recognition rate and shortening computation time.

The number of samples in the model library is determined by test, so we make the following preparations. We build a model library using eight simulated military ground vehicles, ranging from tanks to trucks and armored cars. Figure 6 shows their 3D models.

Fig. 6 3-D model library.

We carry out experiments to investigate the effect of training-set size on the recognition rate, using side-face range images of the simulated targets in Fig. 6 to form a model library. Figure 7 shows a set of simulated noise-free model range images of 32×64 pixels, generated for forward-looking Ladar detection.

Fig. 7 Simulated noise-free model target range images.

Increasing the number of samples means adding forward-looking range images for each model on the same principle. For example, in the case of nine samples, nine range images per model target are obtained by rotating each model target to 0°, 40°, 80°, ..., 280°, and 320°, so the total number of samples in the model library is seventy-two. All samples are represented as sets of LSPs for comparison.

To determine recognition performance, multiple scenes are analyzed. The measured scene data, each containing one target instance, are obtained by rotating the target to 0°, 10°, 20°, ..., 340°, and 350° under forward-looking detection, giving thirty-six scene images. The thirty-six measured range images are represented as sets of LSPs, and each measured scene target is compared to the model library. A series of V̄_GOF values [computed by Eq. (6)] are obtained, one per comparison. The performance of comparing the scene data to model libraries with one, three, four, six, nine, twelve, eighteen, and thirty-six samples per model is shown in Fig. 8.

Fig. 8 Average V̄_GOF as a function of training-set size.

When the number of samples per model is one, five, and six, the corresponding average V̄_GOF values are 0.50, 0.86, and 0.90; clearly, more samples yield a higher recognition rate. The average V̄_GOF initially increases with training-set size and then nearly levels off, indicating no further improvement. The number of samples needed for efficient pattern recognition is thus considerably lower than might be expected. On one hand, the model library should contain as many samples as possible for a higher recognition rate; on the other hand, the training set should be small so that computation stays manageable and the training images are not too similar to each other. This method therefore provides a systematic procedure for selecting a training-set size.

3.3 Rotation-invariant target recognition

In this section, we fuse the methods described in 3.1 and 3.2 to resolve the problem of out-of-plane rotation-invariant target recognition.

In a word, the proposed rotation-invariant target recognition approach involves three main steps. First, build a model library in which every model target contains six samples. Since the depression and azimuth angles are random when the Ladar detects a scene target, the sample set of each model target should cover all surface information: one sample is the top-half range image and the other five are side-face range images, so all surface information for every model is included in the six samples. LSPs are then created for all samples in the model library. Second, extract both the height-range image and the horizontal-range image from the measured scene image and create the corresponding LSPs. Finally, compare the LSPs of the measured image with those of the model library, obtaining a series of V̄_GOF values that denote the final recognition performance. With this method we solve the problem of rotation-invariant target recognition from forward-looking, front-overhead viewing, and overhead viewing Ladar imagery.

4. Test results

We use the eight simulated targets shown in Fig. 6 to construct the model library sample set according to the method described in Subsection 3.3. One sample per model is the top-half range image; Fig. 10 shows the simulated noise-free top-half range images for every model, each of 32×64 pixels and generated for overhead Ladar detection. The other samples for each model are selected as described in Subsection 3.2, with five side-face samples picked out per model, so the library contains forty-eight samples in total. The simulated ideal target models are then represented as sets of LSPs, and the resulting model LSP library is compared against a measured scene in order to recognize and identify the scene target.

Fig. 10 Simulated noise-free range images.

4.1 Test A

In this section we compare height-range images and horizontal-range images extracted from measured scene data against the model library, and obtain two fitted curves of recognition rate as a function of the depression angle of the measured target. There are thus two confidence values V̄_GOF, calculated by Eq. (6), for each measured target, and we choose the larger as the recognition performance of that target. Finally, based on the proposed approach, we obtain the relationship between the depression angle of the target and the recognition rate.

To determine recognition performance, multiple scenes are analyzed. The measured scene data with noise are detected at depression angles of 0°, 10°, 20°, ..., 90° with random azimuth angles, giving ten scene images. The scene images are transformed into the corresponding height-range and horizontal-range images using the method of Subsection 3.1, and a series of LSPs are generated for both the measured height-range and horizontal-range images.

The LSPs of the measured height-range images are compared with those of each model target. After every scene target is compared with every model target, ten V̄_GOF values are obtained. Figure 11 marks the ten V̄_GOF values together with the fitted curve as a function of depression angle.

Fig. 11 V̄_GOF as a function of depression angle.

The V̄_GOF initially increases with the depression angle and then almost levels off: for range images with larger depression angles the variation in recognition rate is very small, since the corresponding height-range images all include a large amount of range information. From Fig. 11 we can see that when the depression angle is 50°, V̄_GOF reaches 0.80; when the depression angle exceeds 50°, the recognition rate is high and acceptable.

For measured scene data with smaller depression angles, the recognition rate obtained by height-range image comparison alone is not acceptable in practical applications. For such scene data we instead compare the horizontal-range image, which contains more range information, against the model library to increase the recognition rate. Its performance is demonstrated on the measured scene data above: as Fig. 12 shows, the V̄_GOF values all exceed 0.80 when the depression angle is less than 40°. That is, matching a horizontal-range image against the model library can successfully recognize a target from Ladar scene data with a smaller depression angle.

Fig. 12 V̄_GOF as a function of depression angle.

Two V̄_GOF values are thus obtained for each measured scene, one from the height-range comparison and one from the horizontal-range comparison, and we take the larger of the two as the recognition result. The recognition results for the ten measured scenes are shown in Fig. 13.

Fig. 13 V̄_GOF as a function of depression angle.

From Fig. 13 we can see that regardless of the depression angles of the scene targets, the V̄_GOF values all exceed 0.80, which is acceptable in practice. Moreover, at depression angles of 0° and 90° the V̄_GOF values are all larger than 0.95. From Figs. 11, 12, and 13 we conclude that the proposed approach greatly increases the recognition rate and effectively resolves the rotation-invariant target recognition problem.

4.2 Test B

To further validate the proposed approach, we analyze sixty noisy scenes, each containing a measured target with arbitrary depression and azimuth angles; all are 32×64 pixels. Eight of the sixty scenes are shown in Fig. 14. The height-range and horizontal-range images for these scene targets are extracted to generate series of LSPs.

Fig. 14 Noisy measured targets with arbitrary depression and azimuth angles.

The model library used in this test is the same as in test A. The model LSP library is compared with the LSPs of the measured scenes in order to recognize and identify the scene targets. We obtain sixty V̄_GOF values corresponding to the comparisons and fit them to a curved surface, shown in Fig. 15.

Fig. 15 V̄_GOF as a function of depression and azimuth angles.

From Fig. 15 we can see that regardless of the attitude angles of the measured targets relative to the detecting Ladar, all measured targets are recognized and identified with an acceptable recognition rate using the proposed approach. That is, the proposed approach resolves the problem of rotation-invariant target recognition from overhead-view, front-overhead-view, and forward-looking Ladar range images.

5. Conclusions and discussions

In this paper we propose a new approach for recognizing a target from Ladar range imagery whose output is invariant to out-of-plane and in-plane rotations. Its success rests on the extraction of a height-range image and a horizontal-range image from the raw range image and on the construction of the model library sample set. The performance of the proposed recognition system is successfully demonstrated on sixty measured scene targets with arbitrary attitude angles: regardless of the attitude angles of the measured targets, correct target recognition and identification are achieved with a high recognition rate.

The target recognition performance of our method has been successfully demonstrated on simulated noisy scene data; the performance still needs to be validated on real scene data when experimental conditions permit. The extraction of the height-range and horizontal-range images also needs to be improved in order to increase the recognition rate of the whole algorithm.


References and links

1. J. G. Verly, R. L. Delanoy, and D. E. Dudgeon, “Model-based system for automatic target recognition from forward-looking laser-radar imagery,” Opt. Eng. 31(12), 2540–2552 (1992).
2. K. Mori, T. Takahashi, and I. Ide, “Fog density recognition by in-vehicle camera and millimeter wave radar,” Int. J. Control 3(5), 1173–1182 (2007).
3. N. R. Pal, T. C. Cahoon, J. C. Bezdek, and K. Pal, “A new approach to target recognition for Ladar data,” IEEE Trans. Fuzzy Syst. 9(1), 44–52 (2001).
4. J. Sun, O. Li, W. Lu, and Q. Wang, “Image recognition of laser radar using linear SVM correlation filter,” Chin. Opt. Lett. 5(9), 549–551 (2007).
5. N. A. Watts, “Calculating the principal views of a polyhedron,” in Proceedings of the 9th International Conference on Pattern Recognition, 316–322 (1988).
6. D. Eggert and K. Bowyer, “Computing the perspective projection aspect graph of solids of revolution,” IEEE Trans. Pattern Anal. Mach. Intell. 15(2), 109–128 (1993).
7. V. Shantaram and M. Hanmandlu, “Contour-based matching technique for 3D object recognition,” in Proceedings of the International Conference on Information Technology, 274–279 (2002).
8. S. Yamany and A. Farag, “3D objects coding and recognition using surface signatures,” in Proceedings of the 15th International Conference on Pattern Recognition 4, 571–574 (2000).
9. A. Johnson, “A Representation for 3-D Surface Matching,” Ph.D. thesis (Robotics Institute, Carnegie Mellon University, Pittsburgh, 1997).
10. S.-H. Hong and B. Javidi, “Distortion-tolerant 3D recognition of occluded objects using computational integral imaging,” Opt. Express 14(25), 12085–12095 (2006).
11. S. Correa and L. Shapiro, “A new signature-based method for efficient 3-D object recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 1, 769–776 (2001).
12. Q. Wang, Q. Li, Z. Chen, J. Sun, and R. Rao, “Range image noise suppression in laser imaging system,” Opt. Laser Technol. 41(2), 140–147 (2009).
13. D. Chitra and K. J. Anil, “Shape spectra based view grouping for free-form objects,” in Proceedings of the 1995 International Conference on Image Processing 3, 340–343 (1995).
14. P. J. Flynn and A. K. Jain, “On reliable curvature estimation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 110–116 (1989).
15. H. Chen and B. Bhanu, “3D free-form object recognition in range images using local surface patches,” Pattern Recognit. Lett. 28(10), 1252–1262 (2007).
16. P. J. Besl and R. C. Jain, “Segmentation through variable-order surface fitting,” IEEE Trans. Pattern Anal. Mach. Intell. 10(2), 167–192 (1988).
17. Z. Y. Zhang, “Iterative point matching for registration of free-form curves and surfaces,” Int. J. Comput. Vis. 13(2), 119–152 (1994).
18. A. N. Vasile and R. M. Marino, “Pose-independent automatic target detection and recognition using 3D laser radar imagery,” Lincoln Lab. J. 15(1), 61–78 (2005).
19. W. Li, S. Jianfeng, and W. Qi, “Model-based algorithm for rotation-invariant target recognition using laser radar range imager,” J. Russ. Laser Res. 30(6), 615–625 (2009).

ToC Category:
Image Processing

History
Original Manuscript: May 5, 2010
Revised Manuscript: June 16, 2010
Manuscript Accepted: June 18, 2010
Published: July 2, 2010

Citation
Qi Wang, Li Wang, and Jianfeng Sun, "Rotation-invariant target recognition in LADAR range imagery using model matching approach," Opt. Express 18, 15349-15360 (2010)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-18-15-15349


References

  1. J. G. Verly, R. L. Delanoy, and D. E. Dudgeon, “Model-based system for automatic target recognition from forward-looking laser-radar imagery,” Opt. Eng. 31(12), 2540–2552 (1992). [CrossRef]
  2. K. Mori, T. Takahashi, and I. Ide, “Fog density recognition by in-vehicle camera and millimeter wave radar,” Int. J. Control 3(5), 1173–1182 (2007).
  3. N. R. Pal, T. C. Cahoon, J. C. Bezdek, and K. Pal, “A new approach to target recognition for Ladar data,” IEEE Trans. Fuzzy Syst. 9(1), 44–52 (2001). [CrossRef]
  4. J. Sun, O. Li, W. Lu, and Q. Wang, “Image recognition of laser radar using linear SVM correlation filter,” Chin. Opt. Lett. 5(9), 549–551 (2007).
  5. N. A. Watts, “Calculating the principal views of a polyhedron,” in Proceedings of the 9th International Conference on Pattern Recognition, 316–322 (1988).
  6. D. Eggert and K. Bowyer, “Computing the perspective projection aspect graph of solids of revolution,” IEEE Trans. Pattern Anal. Mach. Intell. 15(2), 109–128 (1993). [CrossRef]
  7. V. Shantaram and M. Hanmandlu, “Contour-based matching technique for 3D object recognition,” in Proceedings of the International Conference on Information Technology, 274–279 (2002).
  8. Y. S. and A. Farag, “3D objects coding and recognition using surface signatures,” in Proceedings of the 15th International Conference on Pattern Recognition 4, 571–574 (2000).
  9. A. Johnson, “A Representation for 3-D Surface Matching,” Ph.D. thesis (Robotics Institute, Carnegie Mellon University, Pittsburgh, 1997).
  10. S.-H. Hong and B. Javidi, “Distortion-tolerant 3D recognition of occluded objects using computational integral imaging,” Opt. Express 14(25), 12085–12095 (2006). [CrossRef] [PubMed]
  11. S. Correa and L. Shapiro, “A new signature-based method for efficient 3-D object recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 1, 769–776 (2001).
  12. Q. Wang, Q. Li, Z. Chen, J. Sun, and R. Rao, “Range image noise suppression in laser imaging system,” Opt. Laser Technol. 41(2), 140–147 (2009). [CrossRef]
  13. D. Chitra and K. J. Anil, “Shape spectra based view grouping for free-form objects,” in Proceedings of the 1995 International Conference on Image Processing 3, 340–343 (1995).
  14. P. J. Flynn and A. K. Jain, “On reliable curvature estimation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 110–116 (1989).
  15. H. Chen and B. Bhanu, “3D free-form object recognition in range images using local surface patches,” Pattern Recognit. Lett. 28(10), 1252–1262 (2007). [CrossRef]
  16. P. J. Besl and R. C. Jain, “Segmentation through variable-order surface fitting,” IEEE Trans. Pattern Anal. Mach. Intell. 10(2), 167–192 (1988). [CrossRef]
  17. Z. Y. Zhang, “Iterative point matching for registration of free-form curves and surfaces,” Int. J. Comput. Vis. 13(2), 119–152 (1994). [CrossRef]
  18. A. N. Vasile and R. M. Marino, “Pose-independent automatic target detection and recognition using 3D laser radar imagery,” Lincoln Lab. J. 15(1), 61–78 (2005).
  19. W. Li, S. Jianfeng, and W. Qi, “Model-based algorithm for rotation-invariant target recognition using laser radar range imager,” J. Russ. Laser Res. 30(6), 615–625 (2009). [CrossRef]
