Geospatial analysis based on GIS integrated with LADAR

Matt R. Fetterman, Robert Freking, Christy Fernandez-Cull, Christopher W. Hinkle, Anu Myne, Steven Relyea, and Jim Winslow


Optics Express, Vol. 21, Issue 20, pp. 23579-23591 (2013)
http://dx.doi.org/10.1364/OE.21.023579



Abstract

In this work, we describe multi-layered analyses of a high-resolution broad-area LADAR data set in support of expeditionary activities. High-level features are extracted from the LADAR data, such as the presence and location of buildings and cars, and then these features are used to populate a GIS (geographic information system) tool. We also apply line-of-sight (LOS) analysis to develop a path-planning module. Finally, visualization is addressed and enhanced with a gesture-based control system that allows the user to navigate through the enhanced data set in a virtual immersive experience. This work has operational applications including military, security, disaster relief, and task-based robotic path planning.

© 2013 OSA

1. Introduction

Recently, LADAR systems [1] have been developed that are capable of acquiring high-resolution elevation data (transverse linear resolution: 30 cm) over a broad area (thousands of km²). Figure 1 (left) displays the remarkable detail of a LADAR data set generated by the ALIRT (Airborne Ladar Imaging Research Testbed) system [1], rendered with the Eyeglass software [2]. The 3D profiles of individual buildings can be clearly seen. Figure 1 (right) shows the corresponding satellite imagery.

Fig. 1 (left) High-resolution LADAR data, displayed with the Eyeglass software package [2]. (right) Satellite imagery of the same region as on the left. Satellite imagery credit: DigitalGlobe. Approved for public release 13-387.

In this work, we explore computational analysis of the high-resolution ALIRT LADAR data set (hereafter referred to as the LADAR data) for feature extraction and building classification. Results of the analysis are inserted as layers into a GIS (geographic information system). This approach allows us to display, or fuse, LADAR data and LADAR-derived data with visible imagery. The LADAR data and the visible data set that we used were both aligned to absolute latitude and longitude coordinates, making such fusion possible. We should note that we use the term “alignment” in a loose sense: to the eye, the LADAR and visual data sets line up well. Precise geo-registration of such data sets is an active area of research [3], including algorithms to account for variations in scale and lens distortion. An advantage of using GIS is that other types of data, such as road data, can be integrated into the system.

In the first section, we describe processing of the LADAR data to extract features, including buildings, cars, and trees. Once a feature is identified, its parameters, such as height and area, can be computed. Analysis of the distribution of buildings and their heights can reveal population distribution and demographic information. Because we extract only the significant features from the LADAR data set, the resultant data set is greatly compressed; for example, instead of the vertical profile of every feature, the user might only need the building locations. Based on building locations, we can identify neighborhoods and densely populated areas. We developed Matlab tools to query the GIS system [4].
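The paper's query tools were written in Matlab [4] and are not reproduced here. As an illustration only, the following Python sketch shows the kind of filter such a feature database supports once each object has been reduced to a few attributes; all field names and the demonstration coordinates are hypothetical, and the 7 m threshold is the tall-building definition used later in Section 3.2.

```python
# Minimal sketch (not the authors' MATLAB tools) of querying a database of
# extracted features. All field names and demo values are hypothetical.
from dataclasses import dataclass

@dataclass
class Feature:
    kind: str        # e.g. "building", "car", "tree"
    lat: float       # geodetic latitude of the object centroid
    lon: float       # geodetic longitude of the object centroid
    height_m: float  # height above the estimated ground plane
    area_m2: float   # footprint area

def tall_buildings(features, min_height_m=7.0):
    """Return buildings taller than a threshold (colored red in the paper)."""
    return [f for f in features
            if f.kind == "building" and f.height_m > min_height_m]

demo = [Feature("building", 18.543, -72.339, 9.2, 310.0),
        Feature("tree", 18.544, -72.340, 6.1, 40.0)]
print(tall_buildings(demo))  # -> the 9.2 m building only
```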

Next, we investigate path-planning algorithms to find the best path through a region. The LADAR data allow us to compute the local slope, from which we can estimate the speed at which a human or vehicle would travel across the region. We can also identify impassable regions, such as steep mountains or buildings. This analysis also used the road data, which we had loaded as a layer in the GIS. The fastest path can then be found and displayed in the GIS.

For military users, the fastest path may not necessarily be the best path. A soldier may want to operate covertly and walk through areas where the terrain obscures him from view. We performed a line-of-sight (LOS) analysis to determine which regions were obscured and which were more visible, and then modified the path-planning method to find a route that was fast but also covert.

We also investigated how the user can interact with the GIS. Using a gesture-control system, the user can move through the data set, rotate it, and turn layers on and off. This gesture-control system presents a natural interface and allows the user to explore the data set without diverting attention from the data.

2. GeoFetch

The data shown in this paper were taken over Haiti, shortly after the 2010 earthquake. The data are displayed in GeoFetch, a GIS virtual globe developed by Lincoln Laboratory. GeoFetch is built on the NASA World Wind platform [10], an open-source GIS tool. We added features to GeoFetch that allow us to easily switch layers, to import visual imagery, and to accept external calls via a UDP server. We also added the capability for GeoFetch to use the LADAR data as its digital elevation data, so that the GIS tool has much higher-resolution elevation data than is typically found in GIS tools. Having these elevation data allows the user to precisely measure the height at every position and to inspect the 3D model from different angles.
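The paper states that GeoFetch accepts external calls via a UDP server but does not document the message format. The following is a hypothetical sketch of driving such a server from a script; the port number and the text command syntax are invented for illustration.

```python
# Hypothetical sketch of scripting a GIS viewer through a UDP command server,
# in the spirit of the GeoFetch interface described above. The port and the
# command format are invented; the paper does not specify the protocol.
import socket

GEOFETCH_ADDR = ("127.0.0.1", 9000)  # assumed host/port of the UDP server

def send_command(cmd: str) -> None:
    """Fire a one-shot UDP datagram carrying a plain-text command."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(cmd.encode("utf-8"), GEOFETCH_ADDR)

# Example session: fly to Port-au-Prince and toggle the building layer.
send_command("goto 18.5392 -72.3364 alt=2000")
send_command("layer buildings on")
```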

In Fig. 2(a), we show an example of GeoFetch displaying satellite imagery mapped onto the LADAR data. The imagery shows the damage to the National Palace, the residence of the President of Haiti. The damage to the front of the palace, as well as to the middle section, can be seen as discontinuities in the LADAR imagery. Figures 2(b) and 2(c) show photographs of the palace taken at the time, for comparison.

Fig. 2 The National Palace in Port-au-Prince, Haiti was destroyed during the earthquake. (a) LADAR imagery with satellite imagery wrapped over it. Satellite imagery credit: DigitalGlobe. (b) Aerial imagery. Image credit: Logan Abassi/UNDP, licensed under Creative Commons License. (c) Photograph from the ground. Image credit: Logan Abassi/UNDP, licensed under Creative Commons License. Approved for public release 13-387.

3. Building classification analyses

The LADAR data are initially represented by a point cloud, and the point cloud is then processed into a DEM (digital elevation model) with 30 cm transverse resolution [1] and less than 7.5 cm cross-range resolution [13]. The LADAR data have much higher resolution than the DEMs typically used in GIS programs. These LADAR data sets are notable not only for their high resolution, but also because they span regions of thousands of km². We developed a suite of algorithms to support the identification and classification of objects of interest in the LADAR data, consisting of three main functions: segmentation and ground-plane estimation, feature calculation, and object classification. We show results from applying these algorithms, including inserting the classified features into a GIS tool, and we quantify the algorithm performance. Finally, buildings are grouped into neighborhoods, which can themselves be characterized.

3.1 Segmentation and ground plane estimation

The first step in the processing was the segmentation of a region into distinct objects. Edges between objects of interest are identified by height changes across neighboring pixels (elevation derivatives) that exceed a fixed threshold. Separate regions isolated by edge boundaries are culled by a set of morphological rules and then preliminarily labeled as separate objects, with the object of largest area labeled as the ground. Mean filtering is applied to produce a smoothed estimate of the ground plane over the entire region, enabling the calculation of the height above ground for each pixel; a minimal sketch of this step is given below the figure. In Fig. 3, we show a region of Haiti approximately 1.5 km² in size. Figure 3 (left) is colored according to the height of each pixel. In Fig. 3 (right), we have segmented the region, and the colors indicate different objects in the scene.

Fig. 3 (left) The region is colored according to height. (right) The region has been segmented, and the colors indicate different objects. Analyses based on LADAR data. Approved for public release 13-387.
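The following is a minimal sketch, under assumptions, of the segmentation and ground-plane step just described, taking the DEM as a 2D NumPy array of elevations on the 30 cm grid. The edge threshold and smoothing window are illustrative, not the authors' values, and the culling here is reduced to simple connected-component labeling.

```python
# Minimal sketch of segmentation / ground-plane estimation from a DEM.
import numpy as np
from scipy import ndimage

def segment(dem: np.ndarray, edge_thresh: float = 1.0, smooth_px: int = 101):
    """Label objects, estimate the ground plane, return height above ground."""
    # Elevation derivatives: mark pixels where the local height change is large.
    gy, gx = np.gradient(dem)
    edges = np.hypot(gx, gy) > edge_thresh

    # Connected regions separated by edge boundaries become candidate objects.
    labels, n = ndimage.label(~edges)

    # The region of largest area is taken to be the ground.
    sizes = ndimage.sum(np.ones_like(dem), labels, index=range(1, n + 1))
    ground_mask = labels == (1 + int(np.argmax(sizes)))

    # Mean-filter the ground pixels for a smooth ground-plane estimate, then
    # measure each pixel's height above it.
    ground = np.where(ground_mask, dem, np.nan)
    filled = np.where(np.isnan(ground), np.nanmean(ground), ground)
    ground_plane = ndimage.uniform_filter(filled, size=smooth_px)
    return labels, ground_plane, dem - ground_plane
```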

3.2 Feature extraction and object classification

Through extensive experimentation across data sets with diverse terrain, we developed a collection of 16 features sufficient to characterize each object, with the primary objective of separating man-made objects such as buildings from natural objects such as vegetation, as well as eliminating the false natural-object detections that commonly occur in rugged mountainous regions. Feature categories included (i) summary statistics on object dimensions, such as mean and variance, (ii) two-dimensional shape descriptors, such as area, perimeter, and eccentricity, and (iii) surface smoothness estimators, such as the least-squares plane-fit error. An initial attempt to specify static, heuristic classification rules over these features by intuition proved unsatisfactory, so a supervised learning approach was adopted.
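As a concrete illustration of one surface-smoothness estimator named above, the sketch below computes the residual of a least-squares plane fit over an object's pixels; flat roof planes yield small residuals, while rough tree canopies yield large ones. The function signature is illustrative, not the authors' implementation.

```python
# Sketch of a least-squares plane-fit-error feature for one segmented object.
import numpy as np

def plane_fit_error(xs, ys, zs):
    """RMS residual of the best-fit plane z = a*x + b*y + c."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    coeffs, *_ = np.linalg.lstsq(A, zs, rcond=None)
    residuals = zs - A @ coeffs
    return float(np.sqrt(np.mean(residuals ** 2)))

# A planar patch yields ~0; a rough canopy would yield a large value.
x, y = np.meshgrid(np.arange(10.0), np.arange(10.0))
flat_roof = 0.1 * x + 0.05 * y + 4.0
print(plane_fit_error(x.ravel(), y.ravel(), flat_roof.ravel()))  # ~0
```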

To develop a training set of labeled truth data, the LADAR-derived features were displayed on top of GIS data in the GeoFetch virtual globe. Human experts marked data sets with putative truth categories, including clutter, trees, and man-made structures, primarily by consulting the corresponding visible data. The marked data were then used to train a supervised classifier. We selected the random forest classifier [11, 12] for supervised classification. The random forest algorithm is built from binary decision trees, each comprising a set of decisions. At each branch point, the evaluated decision governs whether the tree is traversed to the left or the right, and ultimately which categorical leaf node is reported. For example, a branch point could have the test and response: “Is the area larger than 5 m²? If yes, go to the left branch; if no, go to the right branch.” If the decision tree is well designed, then each path through the tree leads to a distinct class of object, with dependencies implicit in the structure and order of the tree. Provided that the algorithm is supplied with the classified groups and their associated features, such a decision tree can be constructed automatically through entropy-minimization techniques.

In the random forest approach, numerous parallel decision trees are constructed, each with a different subset of features. For example, one decision tree may be limited to considering just the area of a region and the height variance, while another may consider only surface smoothness and perimeter. The features associated with each decision tree are allocated randomly, and the collection of all the trees is referred to as the random forest. After all the decision trees are optimized using the supervised data, the trees can be used to classify data on which the system has not been trained: each new sample is evaluated by every decision tree, and a majority vote elects the final object class. The random forest has proven, elsewhere and in our study, to be one of the more effective classifier algorithms.
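The authors used a MATLAB random forest interface [12]; the following hedged sketch shows the same train/vote/predict workflow using scikit-learn instead. The feature columns and labels are hypothetical stand-ins for the 16-feature object descriptions.

```python
# Minimal sketch of the supervised-classification workflow described above,
# with scikit-learn substituting for the MATLAB interface of ref. [12].
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# One row per segmented object; columns e.g. [area, height_var, plane_fit_err].
X_train = rng.random((200, 3))
y_train = rng.choice(["building", "tree", "clutter"], size=200)

# Each tree sees a random feature subset at every split (max_features), and
# the forest classifies new objects by majority vote across the trees.
forest = RandomForestClassifier(n_estimators=100, max_features="sqrt",
                                random_state=0)
forest.fit(X_train, y_train)

X_new = rng.random((5, 3))
print(forest.predict(X_new))  # majority-vote class per object
```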

To further enhance accuracy, an interactive interface was included to permit a human analyst to correct any perceived misclassifications made by the algorithm; the classifier would then be automatically retrained, with updated predictions available within seconds. After a sizable database of human-labeled reference data sets had been produced, the algorithm was run over the entire LADAR data set, comprising thousands of km².

Tall buildings are often of particular interest; building height may indicate increased population density, or that a building has religious, business, or military significance. Since height is a feature of each building in our database, we can filter by selecting buildings that are higher than some threshold; in this paper, we define tall buildings as those taller than 7 m, and these are colored red. As an example, in Fig. 4, a complex of liquid storage tanks appears at the top of the image. Even at broad zoom levels, these storage tanks stand out due to the red coloring. The classifier also recognizes walls as a class separate from buildings. Vegetation, mountainous features, cars, and the catch-all category of “other” round out the main categories for which the classifier is responsible. Roads, water, and other previously satellite-surveyed categories are handled externally to the classifier.

In Fig. 5 (left), we show the Haiti National Penitentiary, which is located in the heart of Port-au-Prince (the capital of Haiti). The threshold-exceeding structures within the Penitentiary grounds, automatically highlighted in red, may indicate guard towers. In Fig. 5 (right), we show another snapshot, in a dense urban region of Port-au-Prince.

Fig. 5 (left) The lower right of this image shows the Haiti National Penitentiary. (right) Another area in Port-au-Prince. Feature analysis based on LADAR data. Satellite imagery credit: DigitalGlobe. Approved for public release 13-387.

3.3 Classifier performance analysis

In this section, we study the performance of the building classifier. Since we did not have absolute truth data as to whether objects were buildings or not, we compared the automated classifier with the results from a human observer, taking the human observer as “truth.” The classifier clearly does well in areas where objects are well defined, large, and separated, as in the upper part of Fig. 4; we therefore chose a more challenging urban area to consider. Figure 6 (left) shows the satellite imagery for a region in Haiti, and in Fig. 6 (right) we have overlaid the buildings identified by the automated classifier from the LADAR data onto that satellite imagery.

Fig. 6 (left) Satellite imagery of a block in Haiti. (right) The automated classifier has identified buildings and overlaid them upon the satellite imagery from Fig. 6 (left). The buildings are randomly colored so that adjacent buildings can be distinguished. Feature analysis based on LADAR data. Satellite imagery credit: DigitalGlobe. Approved for public release 13-387.

  • In a complex urban region, we compared the performance of a human versus the automated classifier, when both human and classifier were given only the LADAR data. The classifier correctly identified 100% of the buildings identified by the human. The classifier identified an additional 6 buildings that the human identified as trees or ground clutter, giving a comparative false alarm rate of 12%.
  • In the same complex urban region, we compared the performance of a human versus the automated classifier, where the human was presented with the satellite imagery, and the classifier still considered only the LADAR data. The classifier identified 88% of the buildings identified by the human, and additionally the classifier had a 20% false alarm rate.

The automated classification algorithm could be further improved, for example by more sensitive detection of trees and irregular roof profiles, to achieve a lower false-alarm rate. The human performance suggests that fusing the LADAR data with the satellite data would also improve the classifier.

For many LADAR applications, the analysis would not stop at classification, but would build upon it with higher-level filtering queries: for example, find a region with tall buildings, or find regions with buildings that may be prone to flooding due to the local geography. For such analyses, the performance reported above may be sufficient.

3.4 Building clusters and Matlab tools

Using the results of our building analysis, we can computationally associate clusters of buildings, or neighborhoods. We define a neighborhood as a group of buildings such that each building is within a distance D of the other surrounding buildings in the neighborhood, where D is a fraction of the mean spacing between buildings in the region. Statistical values for the average spacing between buildings within each region were calculated. The average spacing was then used to set a dilation factor applied to detected structures, producing perimeters around the structures. Regional islands are formed by merging the structure perimeters, thereby generating neighborhoods, as shown in Fig. 7; a minimal sketch of this dilate-and-merge step is given below the figure.

Fig. 7 The green regions represent clusters of buildings, or neighborhoods. Feature analysis based on LADAR data. Approved for public release 13-387.
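The following sketch illustrates the dilate-and-merge construction under stated assumptions (it is not the authors' exact procedure): each building footprint is dilated by a radius tied to the mean building spacing, and the merged blobs are labeled as neighborhoods.

```python
# Sketch of neighborhood formation by footprint dilation and merging.
import numpy as np
from scipy import ndimage

def neighborhoods(building_mask, mean_spacing_px, frac=0.5):
    """Group buildings whose dilated footprints touch into labeled islands."""
    radius = max(1, int(frac * mean_spacing_px))  # dilation radius from spacing
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    disk = x * x + y * y <= radius * radius       # round structuring element
    merged = ndimage.binary_dilation(building_mask, structure=disk)
    labels, n = ndimage.label(merged)             # each label = one neighborhood
    return labels, n

# Two nearby buildings merge into one neighborhood; a distant one stays alone.
mask = np.zeros((60, 60), bool)
mask[10:14, 10:14] = mask[10:14, 18:22] = mask[45:49, 45:49] = True
labels, n = neighborhoods(mask, mean_spacing_px=8)
print(n)  # -> 2
```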

The Matlab query tools [4] let the user interrogate the classified features directly from the GIS display, as illustrated in Fig. 8.

Fig. 8 When the user clicks on a building or region in the GIS tool, a window pops up (lower right) with detailed information about the object that the user clicked on. Feature analysis based on LADAR data. Satellite imagery credit: DigitalGlobe. Approved for public release 13-387.

4. Path planning and line-of-sight analyses

4.1 Discussion of A* path-finding algorithm

In this section, we describe path planning and LOS analysis, with a focus on military applications. To find the fastest path, we apply a modified version of the A* algorithm [15], which is extensively used in path planning and graph traversal.

To compute the path from the start-point to the end-point, the A* algorithm progressively builds an approximation to a global cost estimate (termed the f-value) from a local, incremental cost criterion. The cost associated with the start-point is assigned based on the Manhattan distance from the start-point to the end-point (the Manhattan distance between two points is measured along a grid-like path rather than along the straight diagonal joining them). Then the costs associated with the nearest neighbors are computed. The cost of a node (the f-value) is the cost of the best known path from the start-point to that node (the g-value) plus the estimated cost from that node to the end-point (the h-value). At each step, the algorithm greedily [16] selects the candidate node with the lowest f-value and proceeds to evaluate the costs of its neighbors. In this way, the algorithm continues until the lowest-cost path from start to finish is found. Once that path is found, the algorithm retraces each node to its parent, and thus recovers the sequence of nodes corresponding to the path. A self-contained sketch of this procedure on a weighted grid is given below.
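The following minimal sketch implements the f = g + h search just described on a grid of per-cell traversal costs, with a Manhattan-distance heuristic. The grid and cost values are invented for illustration; in the paper the costs come from the LADAR-derived traversability of Section 4.2, with buildings as impassable cells.

```python
# Sketch of A* on a grid of per-cell traversal costs (None = impassable).
import heapq

def astar(cost, start, goal):
    """Return the lowest-cost 4-connected path from start to goal."""
    rows, cols = len(cost), len(cost[0])
    # Scale the Manhattan heuristic by the cheapest cell so it never
    # overestimates (keeps the search optimal even with cheap road cells).
    cheapest = min(v for row in cost for v in row if v is not None)
    h = lambda p: cheapest * (abs(p[0] - goal[0]) + abs(p[1] - goal[1]))
    g = {start: 0.0}                  # best known cost from the start-point
    parent = {start: None}
    open_heap = [(h(start), start)]   # entries are (f-value, node)
    while open_heap:
        f, node = heapq.heappop(open_heap)
        if node == goal:              # retrace parents to recover the path
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        r, c = node
        for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nbr
            if 0 <= nr < rows and 0 <= nc < cols and cost[nr][nc] is not None:
                new_g = g[node] + cost[nr][nc]   # g-value through this node
                if new_g < g.get(nbr, float("inf")):
                    g[nbr] = new_g
                    parent[nbr] = node
                    heapq.heappush(open_heap, (new_g + h(nbr), nbr))
    return None  # goal unreachable (enclosed by no-go cells)

# Cell costs as travel times: 1.0 = open terrain, 0.5 = road, None = building.
grid = [[1.0, 1.0, None, 1.0],
        [0.5, 0.5, 0.5, 0.5],
        [1.0, 1.0, None, 1.0]]
print(astar(grid, (0, 0), (0, 3)))  # detours along the road row, past the building
```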

4.2 Path finding incorporating LADAR-derived traversability

We analyzed terrain traversability using the terrain slope, derived from the LADAR data, and the estimated travel speed. Figure 9 (left) shows the estimated travel speed of a human and of a vehicle as a function of terrain slope. Figure 9 (middle) shows the elevation data for a sample region containing a building, and Fig. 9 (right) shows the calculated traversability for a north-going vehicle through that region. The buildings (identified with the algorithms described previously) are added into the traversability calculation as “no-go” zones; otherwise the algorithm might find a path through a building. We also include the roads database in the calculation, by setting a higher speed for travel on roads. A sketch of how such a speed map can be built from the DEM follows the figure.

Fig. 9 Traversability analysis. (left) Assumed walking/driving speeds as a function of terrain slope. (right) Calculated velocities and go/no-go regions over a small region. Feature analysis based on LADAR data. Approved for public release 13-387.
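The sketch below illustrates one way to turn a DEM into a travel-speed map: slope from finite differences, a hedged speed-versus-slope curve (the paper's actual curves are those of Fig. 9, left, which are not reproduced here), roads sped up by the factor of 2 used in Section 4.2, and buildings marked no-go.

```python
# Illustrative slope-to-speed traversability map; the speed curve is a
# hypothetical stand-in for the curves of Fig. 9 (left).
import numpy as np

def speed_map(dem, road_mask, building_mask, cell_m=0.3, base_kmh=5.0):
    """Estimated walking speed (km/hr) per cell; np.nan marks no-go cells."""
    gy, gx = np.gradient(dem, cell_m)           # elevation derivatives (m/m)
    slope_deg = np.degrees(np.arctan(np.hypot(gx, gy)))
    # Assumed curve: full speed on flat ground, linearly slower with slope,
    # impassable beyond 45 degrees.
    speed = base_kmh * np.clip(1.0 - slope_deg / 45.0, 0.0, 1.0)
    speed[speed == 0.0] = np.nan                # too steep: no-go
    speed = np.where(road_mask, 2.0 * base_kmh, speed)  # roads: 2x (~10 km/hr)
    speed[building_mask] = np.nan               # buildings: no-go zones
    return speed
```

The travel time per cell (cell size divided by speed) can then serve directly as the traversal cost in the A* search of Section 4.1.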

In Fig. 10 (left), we show a fast path (incorporating traversability) for a person walking between points A and B, over a sample patch of LADAR data. Note that the path does use the roads to minimize travel time. The traversability map turns out to be rather mundane, because the region we chose for analysis was relatively flat. In this case, the buildings are no-go zones, and the non-road terrain has an associated velocity of ~5 km/hr. We weighted the velocity improvement associated with roads by a factor of 2 (for a maximum of 10 km/hr) to account for the fact that walking on a road is probably much faster than walking on dirt or brush. These path analyses can also be displayed with the GeoFetch GIS tool.

Fig. 10 (left) Black line indicates the fastest path from A to B. Image colored according to height. (right) Points visible from Point C are shaded red. This figure also shows the fastest path, incorporating traversability and covertness from an observer located at point C. Analysis based on LADAR data. Approved for public release 13-387.

4.3 Path finding incorporating traversability and LOS assessment based on viewshed (single known hostile observer) analysis

Traversability may not be the sole consideration in choosing a fast path. Expeditionary forces often seek a path that is not only fast, but also covert. For example, soldiers will avoid walking through an open flat field, because there is no protection from possible snipers. We use the LOS calculations to consider a variety of scenarios. For the scenario considered in this section, we consider how to find a path from point A to point B, while not being seen by a hostile observer located at point C.

The basic LOS calculation, given an observer point Y and an observed point X, gives a Boolean result that is true when mutual visibility exists. This LOS calculation includes an assumption about the heights of points X and Y relative to the ground plane; we set both heights to 2 m, to represent a typical observer. The viewshed calculation extends the LOS calculation over an area: a viewshed is the set of points visible to an observer at a given location. When the visibility of a point is considered from multiple viewing points, the result is not binary, but the sum of the contributions.

First, we consider the problem of finding a path between points A and B given that a hostile observer is located at point C. Figure 10 (right) shows the same spatial region as Fig. 10 (left), with the points still shaded according to elevation, but we have added a point C representing a hostile observer; points visible from point C are shaded red. Figure 10 (right) also shows the fastest path from A to B, incorporating terrain traversability, such that a hostile observer at point C has low visibility of the path.

4.4 Path finding incorporating traversability and LOS analysis based on aggregate viewshed (find path of least visibility assuming enemies uniformly distributed)

Consider a point A, and define R as the observer radius: the distance over which an observer could reasonably see. Let N be the number of points within radius R of point A, of which M points have line-of-sight (LOS) to A. We then define the visibility rating of point A as M/N, which ranges from 0 to 1: if all points within radius R of A can see A, the visibility rating is 1; if none can, it is 0. A sketch of the LOS test and of this rating is given below.
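The sketch below implements the Boolean LOS test and the M/N visibility rating on a DEM given as a 2D NumPy array. Uniform sampling along the sight line is a simple stand-in for whatever ray tracing the authors' tools actually use; the 2 m eye height is the assumption stated in Section 4.3.

```python
# Sketch of the Boolean LOS test and the M/N visibility rating defined above.
import numpy as np

OBSERVER_H = 2.0  # assumed eye height above ground for both endpoints (m)

def line_of_sight(dem, a, b, samples=100):
    """True if the straight sight line from a to b clears the terrain."""
    (r0, c0), (r1, c1) = a, b
    z0 = dem[r0, c0] + OBSERVER_H
    z1 = dem[r1, c1] + OBSERVER_H
    for t in np.linspace(0.0, 1.0, samples)[1:-1]:
        r = int(round(r0 + t * (r1 - r0)))
        c = int(round(c0 + t * (c1 - c0)))
        if dem[r, c] > z0 + t * (z1 - z0):   # terrain pokes above the sight line
            return False
    return True

def visibility_rating(dem, a, radius_px):
    """Fraction M/N of points within the observer radius that can see a."""
    rows, cols = dem.shape
    n = m = 0
    for r in range(max(0, a[0] - radius_px), min(rows, a[0] + radius_px + 1)):
        for c in range(max(0, a[1] - radius_px), min(cols, a[1] + radius_px + 1)):
            if (r, c) != a and (r - a[0])**2 + (c - a[1])**2 <= radius_px**2:
                n += 1
                m += line_of_sight(dem, a, (r, c))
    return m / n if n else 0.0
```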

The visibility rating shows how visible an area is to its surroundings, enabling us to identify potential hiding places or ideal sniper locations that may not be obvious beforehand. Sharp edges between high-visibility and low-visibility regions could represent ideal areas from which to launch a surprise attack, since they provide a way to move in and out of cover.

We refer to the process that determines the visibility rating over an area as the aggregate viewshed: dividing the number of times each point is observed by the number of times it is surveyed produces the normalized visibility rating between 0 and 1. By contrast, a single viewshed analysis represents what one specific point can and cannot see; it yields the visibility rating of that point alone, and cannot provide visibility ratings over a large area.

In our aggregate viewshed calculation, we chose a visibility radius limit of 150 m for the viewshed (this radius represents the distance that an observer can see). The radius should be adjusted to the expected sensor range in the environment in which the analysis is performed. To maintain performance, and because of diminishing returns, we downsampled to a resolution of 20 m per viewshed. Because viewshed analysis is computationally intensive even at reduced resolution, we developed parallel code that computes the aggregate viewshed over a large area on a cluster of 256 computers.

In Fig. 11 (left), we show the visibility rating for the same region as shown in Fig. 10. Red areas denote regions with a high visibility rating, while blue areas denote a low visibility rating. In the calculation of Fig. 11, we assumed that hostile observers are located only in the region shown, and not in neighboring regions. The aggregate viewshed highlights crests and rivulets that would be ideal hiding or sniping locations; these areas might not be obvious to a user looking across a broad terrain or image product. Urban areas are notable regions of low visibility, because buildings effectively block long-range views. Figure 11 (right) shows a path from A to B that incorporates both traversability and covertness (where covertness means low visibility). This requires a weighting function that balances the utility of the fastest route against the importance of covertness. Covert routes consistently trace markedly different paths than purely traversable routes.

Fig. 11 (left) Visibility map. This map assumes a hostile observer is equally likely to be anywhere in the region; we refer to this as the aggregated line-of-sight calculation. Red areas have the highest visibility rating, and blue areas the lowest. (right) Fastest path from A to B, incorporating the aggregated LOS calculation. The image is colored according to height, which makes features easier to recognize than in the visibility map. Analysis based on LADAR data. Approved for public release 13-387.

There are many other applications for LOS calculations using the LADAR data, beyond the examples shown here. We showed the case where the location of one hostile observer is known; using the aggregate viewshed calculation, we further considered multiple observers, uniformly distributed over a region. Conversely, the calculation can be turned around to identify the optimal place for a spotter or sniper, giving him the maximal view of the region. Another analysis discovers places on the edge of visibility, where a soldier could have a good view of the region yet still be able to quickly find shelter from hostile observation or attack. These analyses could be incorporated into a tool that generates paths vetted for safety; such a tool could be used for search-and-rescue as well as expeditionary missions.

5. Gesture control of GIS tool

Gesture control is a developing trend in computing and robotics, sparked most recently by the introduction of the low-cost Microsoft Kinect [17]. Instead of using a mouse, the user controls the panning, zooming, and other features of the GIS tool through a range of arm and body motions. The advantage of this organic and intuitive approach is that the user does not have to take focus away from the screen. Gesture control may be especially important for this LADAR project, where the GIS tool contains multiple 2D and 3D layers.

To implement gesture control, we integrated the Microsoft Kinect with the GeoFetch GIS tool. To avoid cross-platform incompatibilities, the integration occurs via keystroke mapping. Using the Kinect software (we used Brekel Kinect [18] as well as FAAST, the Flexible Action and Articulated Skeleton Toolkit [19]), we developed the system so that a gesture becomes a keystroke that controls the GeoFetch software; a minimal sketch of this mapping appears below the figure. Gestures for user control over the virtual globe include, for example, drawing the left arm inward toward the chest to zoom in. In Fig. 12, the left panel shows the GIS screen and the right panel shows the Kinect's perception of the user. We plan to investigate this gesture-recognition technology for use in other virtual immersive environments [20].

Fig. 12 GeoFetch with software for gesture control of the virtual globe. Feature analysis based on LADAR data. Satellite imagery credit: DigitalGlobe. Approved for public release 13-387.
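The following is a hypothetical sketch of the gesture-to-keystroke bridge described above. The paper's toolchain (Brekel Kinect and FAAST) emits keystrokes itself; here a stand-in process receives gesture names over UDP and synthesizes keys with the pynput library. The gesture names, port, and key bindings are all invented for illustration.

```python
# Hypothetical gesture-to-keystroke bridge; not the Brekel/FAAST pipeline.
import socket
from pynput.keyboard import Controller

GESTURE_TO_KEY = {
    "left_arm_to_chest": "+",   # zoom in (the example gesture in the text)
    "left_arm_extended": "-",   # zoom out
    "lean_left": "a",           # pan left
    "lean_right": "d",          # pan right
}

keyboard = Controller()
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 9001))   # assumed port where the tracker posts gestures

while True:
    data, _ = sock.recvfrom(1024)
    key = GESTURE_TO_KEY.get(data.decode("utf-8").strip())
    if key:                      # forward the recognized gesture as a keystroke
        keyboard.press(key)
        keyboard.release(key)
```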

6. Conclusions

In this paper, we explored analyses of, and methods to display and interact with, a high-resolution LADAR data set. Analyses included building classification, demographic analysis, path finding, and line-of-sight analysis. While building classification has been demonstrated from satellite imagery alone, such analysis can be misled by reflectivity variations; furthermore, accurate determination of structure height requires LADAR data or another source of 3D imagery.

We developed the GeoFetch GIS tool to display the LADAR data and the analyses derived from the LADAR data. GeoFetch is capable of using the LADAR data as a source of elevation data, allowing us to cleanly map visual imagery onto the LADAR data. Such mapping can be displayed over broad areas in the GIS, and could have important application to disaster relief efforts.

GeoFetch also displayed the feature-analysis layers shown in this paper. The user can find a town, select a building within that town by applying various filters, and then generate a safe path (with limited visibility and good traversability) to approach the building. We also explored gesture-based control of the GIS, which enables virtual immersive environments in which the user can walk, drive, or fly through environmentally customized highlight overlays. Our group has been investigating robotic exploration of outdoor terrain [21], and in future work, we will explore application of this LADAR tool to robotic path planning.


References and links

1. A. L. Neuenschwander, M. M. Crawford, L. A. Magruder, C. A. Weed, R. Cannata, D. Fried, R. Knowlton, and R. Heinrichs, “Terrain classification of LADAR data over Haitian urban environments using a lower envelope follower and adaptive gradient operator,” Proc. SPIE 7684, 768408 (2010).
2. The display of Fig. 1 was generated with the Eyeglass software, developed by Ross Anderson, MIT Lincoln Laboratory.
3. A. Vasile, F. R. Waugh, D. Greisokh, and R. M. Heinrichs, “Automatic alignment of color imagery onto 3D laser radar data,” 35th Applied Imagery and Pattern Recognition Workshop (2006).
4. Matlab is a product of MathWorks, http://www.mathworks.com
5. P. Cho, “3D organization of 2D urban imagery,” IEEE 2008 Geosci. Remote Sensing Symp., 2 (2008).
6. R. Madhavan and T. Hong, “Robust detection and recognition of buildings in urban environments from LADAR data,” 33rd Applied Imagery Pattern Recognition Workshop (2004).
7. Q. Wang, L. Wang, and J. Sun, “Rotation-invariant target recognition in LADAR range imagery using model matching approach,” Opt. Express 18(15), 15349–15360 (2010).
8. N. Rackliffe, H. A. Yanco, and J. Casper, “Using geographic information systems (GIS) for UAV landings and UGV navigation,” IEEE Technologies for Practical Robot Applications (TEPRA) (2011).
9. J. B. Campbell, “GloVis as a resource for teaching geographic content and concepts,” J. Geog. 106(6) (2007).
10. D. G. Bell, F. Kuehnel, C. Maxwell, R. Kim, K. Kasraie, T. Gaskins, P. Hogan, and J. Coughlan, “NASA World Wind: open source GIS for mission operations,” 2007 IEEE Aerospace Conf. (2007).
11. L. Breiman, “Random forests,” Mach. Learn. 45(1), 5–32 (2001).
12. MATLAB interface by Abhishek Jaiantilal (http://code.google.com/p/randomforest-matlab/), C code by Andy Liaw and Matthew Wiener, based on FORTRAN code by Leo Breiman and Adele Cutler.
13. R. M. Marino, W. R. Davis, G. C. Rich, J. L. McLaughlin, E. I. Lee, B. M. Stanley, J. W. Burnside, G. S. Rowe, R. E. Hatch, T. E. Square, L. J. Skelly, M. O’Brien, A. Vasile, and R. M. Heinrichs, “High-resolution 3D imaging laser radar flight test experiments,” Proc. SPIE 5791, Laser Radar Technology and Applications X (2005).
14. OpenStreetMap, http://www.openstreetmap.org
15. P. E. Hart, N. J. Nilsson, and B. Raphael, “A formal basis for the heuristic determination of minimum cost paths,” IEEE Trans. Syst. Sci. Cybern. SSC-4(2), 100–107 (1968).
16. T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to Algorithms (MIT Press, 2009).
17. Microsoft Kinect, http://www.xbox.com/en-US/KINECT
18. Brekel Kinect tools, http://www.brekel.com
19. FAAST: Flexible Action and Articulated Skeleton Toolkit, http://projects.ict.usc.edu/mxr/faast/
20. R. Kehl and L. Van Gool, “Real-time pointing gesture recognition for an immersive environment,” Proc. Sixth IEEE Int. Conf. on Automatic Face and Gesture Recognition (2004).
21. M. R. Fetterman, T. Hughes, N. Armstrong-Crews, C. Barbu, K. Cole, R. Freking, K. Hood, J. Lacirignola, M. McLarney, A. Myne, S. Relyea, T. Vian, S. Vogl, and Z. Weber, “Distributed multi-modal sensor system for searching a foliage-covered region,” IEEE Technologies for Practical Robot Applications (TEPRA) (2011).

OCIS Codes
(100.0100) Image processing : Image processing
(120.0280) Instrumentation, measurement, and metrology : Remote sensing and sensors

ToC Category:
Remote Sensing

History
Original Manuscript: August 5, 2013
Manuscript Accepted: September 5, 2013
Published: September 26, 2013

Virtual Issues
November 1, 2013 Spotlight on Optics

Citation
Matt R. Fetterman, Robert Freking, Christy Fernandez-Cull, Christopher W. Hinkle, Anu Myne, Steven Relyea, and Jim Winslow, "Geospatial analysis based on GIS integrated with LADAR," Opt. Express 21, 23579-23591 (2013)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-21-20-23579

