
Simultaneous Localization, Mapping, and Object Tracking in an Urban Environment using Multiple 2D Laser Scanners

Permanent Link: http://ufdc.ufl.edu/UFE0042057/00001

Material Information

Title: Simultaneous Localization, Mapping, and Object Tracking in an Urban Environment using Multiple 2D Laser Scanners
Physical Description: 1 online resource (187 p.)
Language: english
Creator: Johnson, Nicholas
Publisher: University of Florida
Place of Publication: Gainesville, Fla.
Publication Date: 2010

Subjects

Subjects / Keywords: finder, knowledge, ladar, localization, mapping, moving, object, range, store, tracking
Electrical and Computer Engineering -- Dissertations, Academic -- UF
Genre: Electrical and Computer Engineering thesis, Ph.D.
bibliography   ( marcgt )
theses   ( marcgt )
government publication (state, provincial, territorial, dependent)   ( marcgt )
born-digital   ( sobekcm )
Electronic Thesis or Dissertation

Notes

Abstract: Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy SIMULTANEOUS LOCALIZATION, MAPPING AND OBJECT TRACKING IN AN URBAN ENVIRONMENT USING MULTIPLE 2D LASER SCANNERS By Nicholas McKinley Johnson December 2010 Chair: A. Antonio Arroyo Cochair: Carl Crane, III Major: Electrical and Computer Engineering The role of robotics has grown rapidly within the last few years. No longer are robots found only on the assembly line; they have been tasked with cleaning homes, mowing lawns, and protecting lives on the battlefield. As their responsibilities become more complex, the need for safety grows in importance. Therefore, it is critical that a robot be able to detect and understand elements in the environment. Laser range finders (LADAR) have been popular sensors for use in object detection applications such as Simultaneous Localization and Mapping (SLAM) and the Detection and Tracking of Moving Objects (DATMO) due to their high range accuracy, low cost, and low processing demands. However, these applications have commonly been treated separately despite evidence that they are related. The presence of moving objects adversely affects SLAM systems, while static objects are commonly misidentified in DATMO applications. One approach to addressing these shortcomings has been to combine the two applications in a Simultaneous Localization, Mapping, and Moving Object Tracking (SLAM+DATMO) method. However, past efforts have relied on grid-based approaches, which require greater memory and processing power due to the use of image processing techniques. In addition, no previous work has attempted to use multiple LADAR to provide a wider field of view, which allows the robot to understand more of the world and avoid threats. The work presented here addresses some of the shortcomings described. A novel SLAM+DATMO approach is introduced that represents the detected objects using line segments and polygons, which are more concise and can be processed more quickly. Also, a formal approach for fusing data from two laser range finders is introduced to provide a low-cost and simple solution for improving sensing capability. Finally, a mechanism for sharing detected object data with other software components is outlined through the use of a centralized world model knowledge store.
General Note: In the series University of Florida Digital Collections.
General Note: Includes vita.
Bibliography: Includes bibliographical references.
Source of Description: Description based on online resource; title from PDF title page.
Source of Description: This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Statement of Responsibility: by Nicholas Johnson.
Thesis: Thesis (Ph.D.)--University of Florida, 2010.
Local: Adviser: Arroyo, Amauri A.
Local: Co-adviser: Crane, Carl D.

Record Information

Source Institution: UFRGP
Rights Management: Applicable rights reserved.
Classification: lcc - LD1780 2010
System ID: UFE0042057:00001

Full Text

SIMULTANEOUS LOCALIZATION, MAPPING AND OBJECT TRACKING IN AN URBAN ENVIRONMENT USING MULTIPLE 2D LASER SCANNERS

By

NICHOLAS MCKINLEY JOHNSON

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2010

© 2010 Nicholas McKinley Johnson

To my family

ACKNOWLEDGMENTS

First I would like to thank my family for their love and support during my time at the University of Florida. They have always believed in my potential and have never placed any undue expectations on me. I thank them for allowing me the freedom to pursue my dreams.

I would also like to thank my advisors, Dr. A. Antonio Arroyo and Dr. Carl Crane, for their guidance and support over the years. It was only through their help that I was able to participate in the 2007 Urban Challenge and work on a myriad of interesting projects during my time at the Center for Intelligent Machines and Robotics. In addition, I would like to thank the other members of my committee, Dr. Douglas Dankel, Dr. Herman Lam, and Dr. Eric Schwartz, for their valuable input and guidance over the past few years.

This research was made possible in part through the support of the Air Force Research Lab at Tyndall Air Force Base in Panama City, Florida, so I would like to thank all the staff that I have had the opportunity to work with out there. I would like to heartily thank all of my coworkers, and especially my friends, at CIMAR who I have worked with closely while at UF. Although there were many challenges and many long days and late nights, their support and friendship made each day an enjoyable experience and made it possible to make it through to the end. Finally, I would like to thank God for making all things possible.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
LIST OF ABBREVIATIONS
ABSTRACT

CHAPTER

1 INTRODUCTION
    Background
    Focus
    Problem Statement
    Motivation

2 REVIEW OF LITERATURE
    Simultaneous Localization and Mapping
        Grid-Based Approaches
        Direct Methods
        Feature-Based Approaches
        Hierarchical Object-Based Approach
    Detection and Tracking of Moving Objects
        Object Representation
        Object Association and Tracking
        Classification
    Simultaneous Localization, Mapping, and Moving Object Tracking
        Grid-Based Approaches
        Hierarchical Object Representation Approach

3 IMPLEMENTED APPROACH
    Simultaneous Localization, Mapping, and Moving Object Tracking
        Object Detection and Representation
            Clustering
            Feature extraction
        Object Classification
            Free space polygon generation
            Moving object detection and representation
        Object Tracking
            Enclosure generation
            Object matching and resolution
            Missing object detection
        Position Estimation
    World Model Knowledge Store
        Challenges
        Messaging
            Message object description
            Create Knowledge Store Objects message
            Report Knowledge Store Objects Creation message
            Modify Knowledge Store Objects message
            Report Knowledge Store Objects Modify message
            Query Knowledge Store Objects message
            Report Knowledge Store Objects message
        Updated SLAM+DATMO Object Representation
    Laser Range Finder Fusion

4 TESTING METHODOLOGY
    Test Platform
        Hardware
        Software
    Test Plan
        Single LADAR Testing
            Static object detection and tracking
            Moving object detection and tracking
            Position estimation
            World Model Knowledge Store access without position estimation
            World Model Knowledge Store access with position estimation
        Multiple LADAR Testing
    Metrics

5 RESULTS
    Single LADAR Testing
        Static Object Detection and Tracking
        Moving Object Detection and Tracking
        Position Estimation
        World Model Knowledge Store Without Position Estimation
        World Model Knowledge Store With Position Estimation
    Multiple LADAR Testing
        Static Object Detection and Tracking
        Moving Object Detection and Tracking
        Position Estimation
        World Model Knowledge Store Without Position Estimation
    Discussion
        Single LADAR Performance
        LADAR Fusion Scheme

6 CONCLUSIONS AND FUTURE WORK
    Future Work
    Conclusions

LIST OF REFERENCES
BIOGRAPHICAL SKETCH

LIST OF TABLES

3-1  Fields used to represent a static object
3-2  Fields used to represent a line segment
3-3  Fields used to represent a moving object
3-4  Additional fields needed when using the WMKS
3-5  Fields added to all WMKS objects
3-6  Threshold values and parameters used in the SLAM+DATMO system

LIST OF FIGURES

1-1  The Urban NaviGator: Team Gator Nation's 2007 DARPA Urban Challenge vehicle
1-2  Placement of the LADAR used for object detection
1-3  EMM example scenario
2-1  Flowchart outlining the general steps in a SLAM system
2-2  Flowchart outlining the general steps in a DATMO system
2-3  Dimension estimate error due to angular resolution of LADAR
2-4  Dimension estimate error correction
2-5  Flowchart outlining the general steps in a SLAM+DATMO system
3-1  Flowchart outlining the presented approach
3-2  Example of the clustering process
3-3  The Iterative End Point Fit (IEPF) algorithm
3-4  Example of the moving object detection method
3-5  An example of the free space polygon generation method
3-6  Free space region generated around the vehicle
3-7  Generation of the oriented bounding box used for moving object representation
3-8  Example of the enclosures generated around the line segments
3-9  Possible scenarios that can occur after object matching
3-10  Pseudocluster points are generated from the line segments of the stored objects
3-11  Object resolution example
4-1  Scan regions for the two SICK LD-LRS1000 laser range finders used for testing
5-1  Satellite imagery from the Gainesville Raceway with an overlay of LADAR point data
5-2  Extraction of objects from sensor points
5-3  Objects detected using data from the passenger side LADAR
5-4  Total execution times with a static vehicle and static environment without the presence of sensor noise, the use of position estimation, or access to the world model knowledge store
5-5  Average function execution times with a static vehicle and static environment without the presence of sensor noise, the use of position estimation, or access to the world model knowledge store
5-6  Sensor noise causes the extracted objects to vary over time
5-7  The resolution algorithm allows objects to be combined and updated over time
5-8  Objects detected using data from the driver side LADAR
5-9  Total execution times with a static vehicle and static environment with the presence of sensor noise but without the use of position estimation or access to the world model knowledge store
5-10  Average function execution times with a static vehicle and static environment with the presence of sensor noise but without the use of position estimation or access to the world model knowledge store
5-11  Objects detected using the passenger side LADAR when the vehicle moves through a static environment
5-12  Objects detected using the driver side LADAR when the vehicle moves through a static environment
5-13  Total execution times with a moving vehicle and static environment with the presence of sensor noise but without the use of position estimation or access to the world model knowledge store
5-14  Average function execution times with a moving vehicle and static environment with the presence of sensor noise but without the use of position estimation or access to the world model knowledge store
5-15  Occluded moving object is detected when it comes into view
5-16  Static object becomes a moving object
5-17  Moving object is partially occluded
5-18  Object tracking degradation due to sparse LADAR strikes
5-19  Total execution times with a static vehicle and dynamic environment with the presence of sensor noise but without the use of position estimation or access to the world model knowledge store
5-20  Average function execution times with a static vehicle and dynamic environment with the presence of sensor noise but without the use of position estimation or access to the world model knowledge store
5-21  Moving object successfully tracked as the platform moves through the environment
5-22  Object incorrectly identified as a moving object due to error introduced through platform motion
5-23  Total execution times with a dynamic vehicle and dynamic environment with the presence of sensor noise but without the use of position estimation or access to the world model knowledge store
5-24  Average function execution times with a dynamic vehicle and dynamic environment with the presence of sensor noise but without the use of position estimation or access to the world model knowledge store
5-25  Distance from the origin of the position estimate when running with fixed LADAR data on a static platform in a static environment
5-26  Distance from the origin of the position estimate when running with real LADAR data on a static platform in a static environment
5-27  Distance from the origin of the position estimate when running with fixed LADAR data on a dynamic platform in a static environment
5-28  Distance from the origin of the position estimates when running with real LADAR data on a dynamic platform in a static environment
5-29  Distance from the origin of the position estimates when running with real LADAR data on a static platform in a dynamic environment
5-30  Objects are added to the WMKS
5-31  Objects are updated over time
5-32  Stored objects are retrieved from the WMKS and updated using the new LADAR data
5-33  The WMKS is updated
5-34  Object is detected as missing and new object is detected
5-35  Reconstructed objects are successfully stored in the WMKS
5-36  Moving objects are not added to the WMKS
5-37  Objects stored in the WMKS are not aligned with the current LADAR scan
5-38  Objects are matched and the current position is updated
5-39  WMKS objects versus corrected stored objects versus extracted objects
5-40  The corrected position causes the WMKS objects to become aligned with the sensed objects
5-41  Distance from the origin of the position estimate when running with real LADAR data on a static platform in a static environment with a difference between the retrieved WMKS objects and the sensed objects
5-42  Total execution times with a static vehicle and static environment with the presence of sensor noise, the use of position estimation, and access to the world model knowledge store
5-43  Average function execution times with a static vehicle and static environment with the presence of sensor noise, the use of position estimation, and access to the world model knowledge store
5-44  Distance from the origin of the position estimate when running with real LADAR data on a dynamic platform in a static environment with a difference between the retrieved WMKS objects and the sensed objects
5-45  Total execution times with a dynamic vehicle and static environment with the presence of sensor noise, the use of position estimation, and access to the world model knowledge store
5-46  Average function execution times with a dynamic vehicle and static environment with the presence of sensor noise, the use of position estimation, and access to the world model knowledge store
5-47  Satellite imagery from the Gainesville Raceway with an overlay of LADAR point data from the driver and passenger side LADARs
5-48  Points from the driver and passenger side LADAR aren't aligned
5-49  Different objects are extracted between the driver and passenger side LADAR
5-50  Objects are successfully updated by data from the driver and passenger side LADAR
5-51  Objects are updated faster when both LADAR are used
5-52  Total execution times with a static vehicle and static environment with the presence of sensor noise but without the use of position estimation or access to the world model knowledge store using multiple LADAR
5-53  Average function execution times with a static vehicle and static environment with the presence of sensor noise but without the use of position estimation or access to the world model knowledge store using multiple LADAR
5-54  Large difference between the driver and passenger side points when the platform is in motion
5-55  Moving object detection using multiple LADAR
5-56  Placement of moving object changes due to differences between points from each LADAR
5-57  The stored object is averaged to lie between the misaligned scan points
5-58  Distance from the origin of the position estimate when running with real LADAR data from both the driver and passenger side on a static platform in a static environment
5-59  Object confidence changes at different rates due to sensor overlap
5-60  Objects are added to the WMKS at different times due to the differing rates of change of the object confidence
5-61  Some detected objects are not added to the WMKS due to discrepancies between the LADAR

LIST OF ABBREVIATIONS

AFRL  Air Force Research Laboratory
AS4  Aerospace Standard Unmanned Systems Steering Committee
CIMAR  Center for Intelligent Machines and Robotics
DARPA  Defense Advanced Research Projects Agency
DATMO  Detection and Tracking of Moving Objects
DGC  DARPA Grand Challenge
DUC  DARPA Urban Challenge
EKF  Extended Kalman Filter
EM  Expectation Maximization
EMM  Environment Mapping and Monitoring
GPOS  Global Position and Orientation Sensor
GPS  Global Positioning System
HLP  High Level Planner
ICP  Iterative Closest Point
IDC  Iterative Dual Correspondence
IEPF  Iterative End Point Fit
IMRP  Iterative Matching Range Point
IMU  Inertial Measurement Unit
JAUS  Joint Architecture for Unmanned Systems
LADAR  Laser Detection and Ranging
LRF  Laser Range Finder
LSS  LADAR Smart Sensor
LWM  Local World Model
MDF  Mission Data File
MO  Moving Object Detection Sensor
MPA  Most Probable Angle
NFM  North Finding Module
RANSAC  Random Sample Consensus
RN  Roadway Navigation Motion Planner
RNDF  Road Network Definition File
ROI  Region of Interest
SAE  Society of Automotive Engineers
SCRIM  Sampling and Correlation based Range Image Matching
SLAM  Simultaneous Localization and Mapping
SLAM+DATMO  Simultaneous Localization, Mapping and Moving Object Tracking
SSC  Subsystem Commander
TSS  Terrain Smart Sensor
TTL  Time to Live
UMS  Unmanned System
UTM  Universal Transverse Mercator
WMKS  World Model Knowledge Store

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

SIMULTANEOUS LOCALIZATION, MAPPING AND OBJECT TRACKING IN AN URBAN ENVIRONMENT USING MULTIPLE 2D LASER SCANNERS

By

Nicholas McKinley Johnson

December 2010

Chair: A. Antonio Arroyo
Cochair: Carl Crane, III
Major: Electrical and Computer Engineering

The role of robotics has grown rapidly within the last few years. No longer are robots found only on the assembly line; they have been tasked with cleaning homes, mowing lawns, and protecting lives on the battlefield. As their responsibilities become more complex, the need for safety grows in importance. Therefore, it is critical that a robot be able to detect and understand elements in the environment.

Laser range finders (LADAR) have been popular sensors for use in object detection applications such as Simultaneous Localization and Mapping (SLAM) and the Detection and Tracking of Moving Objects (DATMO) due to their high range accuracy, low cost, and low processing demands. However, these applications have commonly been treated separately despite evidence that they are related. The presence of moving objects adversely affects SLAM systems, while static objects are commonly misidentified in DATMO applications. One approach to addressing these shortcomings has been to combine the two applications in a Simultaneous Localization, Mapping, and Moving Object Tracking (SLAM+DATMO) method. However, past efforts have relied on grid-based approaches, which require greater memory and processing power due to the use of image processing techniques. In addition, no previous work has attempted to use multiple LADAR to provide a wider field of view, which allows the robot to understand more of the world and avoid threats.

The work presented here addresses some of the shortcomings described. A novel SLAM+DATMO approach is introduced that represents the detected objects using line segments and polygons, which are more concise and can be processed more quickly. Also, a formal approach for fusing data from two laser range finders is introduced to provide a low-cost and simple solution for improving sensing capability. Finally, a mechanism for sharing detected object data with other software components is outlined through the use of a centralized world model knowledge store.

CHAPTER 1
INTRODUCTION

Robotics is a rapidly developing field. Initially used primarily on the assembly line since the introduction of Unimate at General Motors in 1961 [1], robots have grown to take on many different roles. Today's robots can be seen not only on the assembly line but also cleaning people's homes, mowing lawns, and protecting soldiers on the battlefield. One of the areas currently receiving a lot of attention is the area of unmanned vehicles. These vehicles are being used to accomplish missions in dangerous situations where human life would be at risk. Many of the unmanned vehicles currently deployed on the battlefield are operated remotely or with human supervision. However, as unmanned vehicles continue to evolve, there is an increasing desire to have these vehicles work independent of human interaction. The research presented in this document will hopefully assist in reaching that goal.

The rest of this chapter provides an introduction to the area of unmanned vehicles and some of the general problems that need to be solved. It ends by introducing the specific problem to be addressed and the motivation for the research. Chapter 2 provides a summary of the related literature reviewed in devising the presented work. Chapter 3 introduces the implemented approach, and Chapter 4 discusses the testing methodology. Experimentation results are given in Chapter 5, along with a discussion of the limitations, problems, and lessons learned. Finally, Chapter 6 proposes some areas of future work and presents the research conclusions.

Background

Previously, robotics dealt heavily with doing repetitive tasks that were difficult or monotonous for humans.

Robots were not expected to make decisions but instead simply executed a series of instructions. As the field evolved, more responsibility was placed on robots, and the demand for them to complete tasks without human interaction grew. The concept of autonomous operation and the field of unmanned systems developed from these demands. The term fully autonomous is defined as "a mode of operation wherein the unmanned system (UMS) is expected to accomplish its mission without human intervention" [2].

When considering a fully autonomous vehicle, the level of cognition required is staggering. In general, a robot must be able to reason about a designated mission and devise and execute a plan of action. However, actually building a vehicle to do this is extremely difficult. One of the most basic missions for any autonomous vehicle is moving from one point to another. However, to plan a path through the world, the robot must first have some knowledge about the world. This knowledge is either provided to the robot a priori or is determined through the use of sensors.

A robot's understanding of the world usually begins with localization. A robot must know its starting position in the world before it can plan a path to an end point. When the robot moves, it must know its new position in relation to the end point to continue along the plan. One simple and common approach is to use wheel encoders or other odometry data to estimate the robot's movement from its starting position. Although simple, this method does not give the robot a global sense of the world.
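
To make the dead-reckoning idea concrete, the minimal sketch below (Python; the function and parameter names are illustrative assumptions, not taken from the dissertation) updates a pose estimate from incremental wheel-encoder counts using a differential-drive model:

import math

def dead_reckon(pose, left_ticks, right_ticks, ticks_per_meter, wheel_base):
    # Update an (x, y, heading) estimate from incremental encoder counts
    # using a simple differential-drive kinematic model.
    x, y, theta = pose
    d_left = left_ticks / ticks_per_meter        # left wheel travel (m)
    d_right = right_ticks / ticks_per_meter      # right wheel travel (m)
    d_center = (d_left + d_right) / 2.0          # forward motion of the center
    d_theta = (d_right - d_left) / wheel_base    # heading change (rad)
    # Integrate along the average heading over the interval.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return (x, y, theta + d_theta)

Each update compounds the error of the previous one, so encoder noise and wheel slip accumulate without bound; this drift is why odometry alone gives no global sense of the world.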

Another approach, used especially in indoor environments, is Simultaneous Localization and Mapping (SLAM) [3]. In SLAM, a map of the world is generated as the robot moves and is used to estimate its position relative to the map. However, if the map is incomplete or the environment is very symmetrical, the robot cannot be certain of its position. One popular approach for outdoor applications is the use of the Global Positioning System (GPS). GPS allows a robot to easily and fairly accurately estimate its global position in the world. However, it suffers from problems with noise and errors caused by large structures, such as mountains, buildings, etc.

Localization alone is usually not enough. The world can be a dangerous place, and the robot must be able to detect and avoid obstacles and other dangerous situations. Cameras and laser range finders (LADAR) are just two of the many sensors currently used to perceive the world. Cameras provide a lot of information but give inaccurate range readings, and the data output has typically been difficult and slow to process. LADAR, on the other hand, provide very accurate range information and can usually be processed fairly quickly, but do not provide as much information as cameras. Regardless of the sensor used, it is their responsibility to detect obstacles and help guide the robot safely through the world.

When discussing sensors and object detection, questions arise about how the objects should be represented. To avoid obstacles, the robot's path planning element must understand that an object exists and where it is located in the world. A representation of the world, called a world model, which the robot understands is required. There are two general approaches to world modeling: raster based and vector based. In the raster-based approach, the world is represented as a series of discrete cells. Each cell stores a value that defines some information about the world, such as occupancy or traversability. On the other hand, the vector-based approach attempts to model the world as a series of geometric shapes such as lines, circles, polygons, etc.
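
The contrast between the two approaches can be sketched directly. In the hypothetical Python structures below (the class and field names are illustrative assumptions), the raster model stores anonymous per-cell values, while the vector model attaches attributes to whole objects, a distinction that matters later in this document:

from dataclasses import dataclass, field

@dataclass
class RasterWorldModel:
    # The world as discrete cells, each holding a single occupancy value.
    cell_size: float = 0.5                     # meters per cell
    cells: dict = field(default_factory=dict)  # (row, col) -> occupancy in [0, 1]

    def mark(self, x, y, occupancy):
        self.cells[(int(y // self.cell_size), int(x // self.cell_size))] = occupancy

@dataclass
class VectorObject:
    # A whole real-world object, with room for contextual attributes.
    outline: list        # polygon vertices or line-segment endpoints
    classification: str  # e.g. "building", "parked car"
    confidence: float

@dataclass
class VectorWorldModel:
    objects: list = field(default_factory=list)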

Now that the robot has an understanding of the world, knows where it is located in that world, and knows where the good and bad areas to move in are, it can finally plan a path to achieve its goal. There are many options when considering planning algorithms. The chosen algorithm will depend on a number of factors, including the world model, the mission goal, the level of complexity, and the level of intelligence. One simple approach may be to drive towards the goal until an object is encountered, then turn, move some random distance, and try again. Another approach treats path planning as a search problem: every possible path from the current position to the end goal is considered, with the best path chosen. Search algorithms, such as the A* and D* algorithms, are used to minimize the number of paths evaluated and decrease processing time. Some heuristic measure, such as travel distance or travel time, is used to choose one path over another.
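
A* is representative of this search-based style. The sketch below is a minimal textbook A* in Python, not the planner used in this work; neighbors, cost, and heuristic are supplied by the caller, and states are assumed hashable and comparable:

import heapq

def a_star(start, goal, neighbors, cost, heuristic):
    # Generic A*: expands the state with the lowest g + h first, where
    # heuristic(n) must not overestimate the remaining cost to the goal.
    frontier = [(heuristic(start), 0.0, start)]
    came_from = {start: None}
    g = {start: 0.0}
    while frontier:
        f_n, g_n, n = heapq.heappop(frontier)
        if g_n > g.get(n, float("inf")):
            continue  # stale queue entry; a cheaper path to n was found later
        if n == goal:
            path = []
            while n is not None:  # walk parent links back to the start
                path.append(n)
                n = came_from[n]
            return path[::-1]
        for m in neighbors(n):
            g_m = g_n + cost(n, m)
            if g_m < g.get(m, float("inf")):  # found a cheaper path to m
                g[m] = g_m
                came_from[m] = n
                heapq.heappush(frontier, (g_m + heuristic(m), g_m, m))
    return None  # no path exists

For a grid world model, neighbors would yield the adjacent traversable cells, cost their spacing, and heuristic the straight-line distance to the goal, which keeps the estimate admissible.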

Finally, the robot must execute its plan and move itself through the world. Movement is accomplished through control commands which adjust the vehicle's state, such as its heading or speed. These commands are executed by low-level electromechanical systems, such as hydraulic pumps, electric motors, or other actuators, which are tied to the vehicle. Actuators attached to the steering column on a ground vehicle, or to the rudder of an aircraft, can be used to change heading, while speed can be adjusted by controlling the input current to electric motors or the throttle position on a combustion engine.

Depending on the level of complexity and uncertainty in the world, it may be necessary for a robot to constantly repeat this process. If the environment is static and the sensors can provide a complete picture of the world, the robot only needs to plan once to complete its goal. However, in real life the world is dynamic and constantly changing, and sensors have limited perception capabilities. As such, the robot must constantly adjust its plan to deal with previously unforeseen situations.

The brief summary given here barely covers the issues involved with developing autonomous systems. Robots that can learn from and improve upon previous actions have not been discussed and introduce a whole new level of complexity. The computational power required to perform the tasks described is considerable, especially for real-time operation. However, recent developments in computer hardware and the continued advancements in the field have brought the idea of a fully autonomous vehicle closer to a reality.

Focus

As robots begin to work in more real-life environments, their understanding of the world needs to increase. Simply marking areas as bad and good is no longer sufficient, as areas that may currently be good can quickly become bad. Consider the case of a car on a road: if the car is static, then the area in front of it is safe; however, if the car is moving, that same area becomes very dangerous. Robots need to understand the contextual difference between these two situations or they will put themselves and others into harm's way. Contextual information can also help a robot plan a better route through a city or anticipate possible areas of congestion. If a robot has a map which provides data such as restaurants or schools, it may consider the current time and avoid those areas.

Two major requirements for developing contextual awareness are better world modeling and perception. A coherent method for representing and storing objects in the environment, which allows for the addition of contextual information, is required.

Raster-based world models do not allow for attributes, such as object classifications, to be attached to the objects they represent, and as such a vector-based world model is required. To store the generated model and retrieve it at a later time, a World Model Knowledge Store (WMKS) is usually employed. It provides a centralized storage location and allows for multiple elements to access the same stored data. Shared data access is an important capability, as robot functions are becoming more distributed as robots evolve, and information exchange becomes critical.

Robots also need to detect, track, characterize, and differentiate between static and dynamic objects in the environment, such as cars and buildings or people and trees. One of the many questions that comes up is how to isolate static and dynamic objects in the environment. Previously, a lot of research considered them separately and dealt with either one or the other. It is only recently that both static and dynamic objects have been considered together. Once static and dynamic objects have been isolated from each other, it makes sense to map the static objects to aid in localization and route planning. The use of GPS for localization is highly error-prone, and although much work has been done in combining GPS with other sensors to improve performance, it can rarely provide the level of accuracy that is needed to function safely in a city environment. The use of the generated map and the application of SLAM techniques can greatly improve localization.

When discussing perception, sensor range and viewable area are critical. Ideally, robots need to have a complete view of their surroundings. However, very few sensors can provide a 360 degree coverage area, and those that do are currently very expensive or computationally intensive.

The most popular approach to achieving a wider sensor range has traditionally been the use of multiple overlapping sensors. However, outside of the use of multiple cameras, very little research has been done on how to best combine the data from these multiple sources, especially when building a vector-based world model.

Problem Statement

A number of interesting questions and issues arise following the discussion above. When dealing with map generation and object tracking, the interaction between static and dynamic objects in real-world environments compounds the inherent sensor problems, such as noise and occlusion. Also, dynamic objects may be temporarily static when initially mapped and may not be present when the robot returns to a previously mapped area. Multiple sensors provide greater coverage, but questions about data fusion and computational load come up: more sensor data increases complexity and could increase processing time. Latency issues inherent in the use of a WMKS introduce delay between when an object is sent to be stored and when it is actually stored. However, sensor processing cannot always wait until storage is complete due to the real-time requirements of an unmanned vehicle. The proposed research addresses some of the issues surrounding localization, map generation, and object tracking in dynamic environments using multiple laser range finders and utilizing a World Model Knowledge Store.

Motivation

The Center for Intelligent Machines and Robotics (CIMAR) was founded in the 1970s and since then has made significant contributions in the areas of mechanisms, autonomous vehicles, and intelligent machines.

They have participated in all three of the Defense Advanced Research Projects Agency (DARPA) sponsored robotic challenges and have made the trip to the National Qualification Event (NQE) every time. While the 2004 and 2005 DARPA Grand Challenges (DGCs) focused on navigation in static, off-road environments, the 2007 DARPA Urban Challenge (DUC) focused on an urban setting and included the presence of both manned and unmanned vehicles. The robots were required to exhibit a long list of driving behaviors that humans normally perform. Tasks ranging from the basics of maintaining speed limits, executing U-turns, and observing intersection precedence, to advanced maneuvers such as traversing obstacle fields, parking, and merging were required to be performed [4]. CIMAR, building off of the experience from the previous challenges, developed the Urban NaviGator (Figure 1-1), a 2007 Toyota Highlander Hybrid.

To be successful, object detection and tracking were critical. The approach chosen was to split the object detection and tracking elements into two separate software entities (components). The Moving Object Detection Sensor (MO) was responsible for detecting and tracking objects over short periods. The four planar LADAR shown in Figure 1-2 were used for the detection and tracking system. Objects were independently detected from each LADAR scan and matched using a simple centroid and point distance method, with the points from matching objects being combined to form the final new object. The objects from the current scan were matched with previous objects using the same simple centroid and point distance method. The system did not differentiate between static and moving objects but treated all objects of a given size the same, with static objects having zero velocity.
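
The dissertation gives no pseudocode for this matching step, so the Python sketch below is only one plausible reading of a simple centroid and point distance method; the greedy nearest-centroid strategy and the max_dist threshold are illustrative assumptions:

import math

def centroid(points):
    # Mean of a list of (x, y) LADAR points belonging to one object.
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def match_objects(current, previous, max_dist=1.5):
    # Greedily associate each current object (a list of 2D points) with the
    # previous object whose centroid is nearest, if within max_dist meters.
    matches = {}
    used = set()
    for i, cur in enumerate(current):
        cx, cy = centroid(cur)
        best, best_d = None, max_dist
        for j, prev in enumerate(previous):
            if j in used:
                continue
            px, py = centroid(prev)
            d = math.hypot(cx - px, cy - py)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            matches[i] = best
            used.add(best)
    return matches  # current index -> previous index; unmatched objects are new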

If an object was not detected in the current scan, due to sensor noise, occlusion, or some other reason, that object would not be reported to the rest of the system. The Local World Model (LWM) was responsible for tracking objects over a longer period of time. It attempted to compensate for occlusion by remembering every object for some defined period. The LWM also assisted in the high-level decision-making process by evaluating the future threat level of each object when compared to the known road network. For example, an object detected on the road in front of the robot moving at a slower speed will eventually become a danger; as such, the LWM would recommend a vehicle speed that prevented a collision with the object ahead. The LWM relied heavily on the object ID numbers generated by MO to correlate objects over time. If MO switched the IDs of two objects or lost track of an object for a long period, the higher-level decision-making in the LWM would make bad decisions.

CIMAR has also collaborated with the Air Force Research Lab (AFRL) at Tyndall Air Force Base for many years. During that time they have successfully developed vehicles for applications such as clearing mine fields, detecting unexploded ordnance, and patrolling base perimeters. At the end of 2008, AFRL tasked CIMAR with developing an Environmental Mapping and Monitoring (EMM) system. This system required a vehicle that could perform two independent but related tasks while autonomously navigating or patrolling an area. First, it must be able to generate a human-understandable and modifiable map that can be stored and later retrieved; second, if a map is provided, the system must be able to detect differences between the map and the current environment. Figure 1-3 shows an example scenario for the EMM system.

In the 2007 DUC, a robust and reliable object detection system was required. However, the implemented system was not very robust, and the approach of treating all objects in a generic manner did not fare well when dealing with a mixture of both static and dynamic objects. The use of multiple LADAR to enhance the robot's range of perception was not handled well, and the short-term storage limitations of the system affected decision making in the presence of occlusion. The research presented here seeks to build a more robust system that can deal with both static and dynamic objects, while exploiting the added benefits of multiple LADAR, by addressing the problems and lessons learned in the DUC. It also seeks to meet the requirements for the EMM system outlined by AFRL. In the next chapter, previous research addressing the problems of static and moving object detection and map generation is presented.

Figure 1-1. The Urban NaviGator: Team Gator Nation's 2007 DARPA Urban Challenge vehicle.

Figure 1-2. Placement of the LADAR used for object detection.

Figure 1-3. EMM example scenario. The robot is represented by the dark blue box. A) The robot generates and stores a map with two objects, S1 and S2. B) The robot returns to a previously mapped area and re-detects S1, detects the new object S3, and discovers that object S2 is missing.

CHAPTER 2
REVIEW OF LITERATURE

A review of published research was conducted to get an idea of how the problems present in localization, mapping, and object tracking have been approached. Although a lot of work has been done using cameras, the focus was placed on papers dealing with the use of 2D laser range finders (LADAR). The review began with an investigation into previous Simultaneous Localization and Mapping (SLAM) work, which has been studied extensively. However, many of the approaches only consider static, indoor environments and do not deal with moving objects. Therefore, papers discussing the Detection and Tracking of Moving Objects (DATMO) were reviewed next. A wide range of work has been done in this area, from tracking pedestrians in indoor environments to tracking cars and bicycles in urban environments. However, most of the work done treats moving objects in isolation and does not consider the stationary objects in the world. Objects of interest are filtered based on size and geometry constraints or based on their location in the environment; stationary objects that cannot be filtered adversely affect the developed algorithms. Finally, work done in Simultaneous Localization, Mapping, and Moving Object Tracking (SLAM+DATMO) was reviewed. The area of SLAM+DATMO is still relatively new, and there are many problems still to be tackled. One fundamental issue is how to detect moving objects and separate them from static objects in the world. The review process revealed many techniques for dealing with LADAR data that were applied in all three domains. These techniques are discussed below.

Simultaneous Localization and Mapping

SLAM is the process by which a robot builds a map of the environment and uses that map to deduce its location. In most SLAM applications an initial position estimate is known, using odometry or some other mechanism, and is corrected using the currently sensed environment and the stored map. The loop closing problem is one of the biggest and most important problems in SLAM and deals with correctly recognizing previously mapped areas and re-associating them to the currently sensed environment [3]. The general steps required for SLAM are outlined in Figure 2-1. It can be seen that there are two main steps: environment representation and map association. There are three general approaches to environment representation: grid based, feature based, and direct methods. Each of the three approaches, along with a novel hierarchical approach, is discussed below. The chosen representation dictates the method used in the map association stage. One problem with the current SLAM approaches is that the generated map does not treat each real object in the world as a single entity: a single building is represented as a series of grid cells, features, or points which are not related to each other.

Grid-Based Approaches

Grid-based approaches represent the world as a series of cells, sometimes called an occupancy grid [5] or traversability grid [6]. Each cell in the grid represents a square area of the world; for example, in [6] each cell represents a 0.5 m square area and is assigned a value based on whether a LADAR strike falls within that cell or not. There are many techniques that are used to determine the value of the cell. In a simple, binary occupancy grid, the cell is either occupied or free, while in a more complex grid, the value assigned represents the probability of occupancy.
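
As a concrete sketch of the probabilistic case, the minimal grid below accumulates log-odds evidence per cell, so repeated LADAR hits and pass-throughs fold in with a simple addition; the cell size and increment values are illustrative, not taken from [5] or [6]:

import math

class OccupancyGrid:
    # Minimal probabilistic occupancy grid: each cell covers cell_size x
    # cell_size meters and stores the accumulated log-odds of occupancy.
    L_HIT, L_MISS = 0.85, -0.4  # log-odds increments for hit / pass-through

    def __init__(self, cell_size=0.5):
        self.cell_size = cell_size
        self.log_odds = {}      # (row, col) -> accumulated log-odds

    def cell(self, x, y):
        return (int(y // self.cell_size), int(x // self.cell_size))

    def update(self, x, y, hit):
        # Fold one piece of sensor evidence into the cell containing (x, y).
        key = self.cell(x, y)
        self.log_odds[key] = self.log_odds.get(key, 0.0) + (self.L_HIT if hit else self.L_MISS)

    def probability(self, x, y):
        # Convert log-odds back to an occupancy probability in [0, 1].
        l = self.log_odds.get(self.cell(x, y), 0.0)
        return 1.0 / (1.0 + math.exp(-l))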

Also, any object can be represented, and sensor fusion is straightforward as multiple sensors can be used to adjust the values in a cell. However, localization using the grid-based approach is difficult [7][8], and no information is known about the objects represented; whether an object is a tree, building, or parked car cannot be determined. Image processing techniques must be applied to extract any further understanding of the environment [8] and to correlate grid maps. Furthermore, grid maps cannot be used to solve the loop closing problem [7]. For a fairly accurate map, a large amount of data must be stored, which increases processing time and inhibits their use for representing large areas of the world.

Direct Methods

Direct methods represent the world using raw point data without simplifying or extracting features from the data. Data association is usually performed using the Iterative Closest Point (ICP) algorithm or some variant. In its simplest form, the algorithm calculates the distances between all points and associates the two points with the shortest distance [9]. After all the points are associated, the robot's position estimate is updated and the process is repeated until some convergence criterion is met. To associate the points between time frames, all the points are transformed to a common reference frame. In [9] an occlusion hypothesis is applied that detects and removes occluded points in order to minimize the association error. One problem with the general ICP algorithm is that associated points can have different rotational displacements. Ideally, every point undergoes the same rotational and translational displacement; therefore, the ICP algorithm introduces an inherent association error. A variant of the ICP called the Iterative Matching Range Point (IMRP) algorithm [9] associates points that have the closest range values within an angular change window, which biases it towards the rotational displacement.

It was shown that the translation residuals converge faster in the ICP algorithm, while the rotation residuals converge faster in the IMRP algorithm. Therefore, the ICP and IMRP algorithms are combined in the Iterative Dual Correspondence (IDC) algorithm [9] to exploit their individual advantages and produce a more reliable position estimate. The work done in [10] also attempts to apply a uniform rotational displacement to all points by calculating the Most Probable Angle (MPA). A probability for each rotational displacement is calculated and the angle with the highest probability is applied to all the points. As previously mentioned, it is often assumed that every point undergoes the same rotational and translational displacement and that points between scans always perfectly match. However, sensors are not perfect and introduce some level of uncertainty in their readings. The work in [11] considers the sensor uncertainty and introduces three error values: measurement noise error, measurement bias error, and correspondence error. The measurement noise and bias errors are inherent to the sensor, while the correspondence error is a combination of the sensor and position errors. The final matching error is the sum of these three error sources. A correspondence error is calculated for each point association, and each association contributes a different weighted value to the matching error. The points that minimize the matching error are associated and a position estimate is generated. The process is repeated until some convergence criterion is met. The direct method approach is simple and, although not discussed in any of the papers, fusion between multiple LADAR would be straightforward, as all the points are independent of each other and can be stored together.
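To make the basic ICP association step concrete, the sketch below shows one brute-force nearest-neighbor pass in C++. This is a minimal illustration of the general algorithm described above, not code from any of the reviewed papers; the Point structure and function names are assumptions, and a real implementation would add outlier rejection and the least-squares pose update.

    // Minimal sketch of one ICP association pass: for every point in the new
    // scan, find the closest point in the reference scan (brute force).
    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Point { double x, y; };

    // Returns, for each point in newScan, the index of its nearest neighbor
    // in refScan. A full ICP would follow this with a least-squares pose
    // update and repeat until convergence.
    std::vector<std::size_t> associate(const std::vector<Point>& newScan,
                                       const std::vector<Point>& refScan)
    {
        std::vector<std::size_t> match(newScan.size(), 0);
        for (std::size_t i = 0; i < newScan.size(); ++i) {
            double best = INFINITY;
            for (std::size_t j = 0; j < refScan.size(); ++j) {
                const double dx = newScan[i].x - refScan[j].x;
                const double dy = newScan[i].y - refScan[j].y;
                const double d2 = dx * dx + dy * dy;
                if (d2 < best) { best = d2; match[i] = j; }
            }
        }
        return match;
    }

The quadratic search in this sketch is also what drives the memory and processing concerns raised next; practical systems accelerate it with k-d trees or grid indexing.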

However, this approach requires significant memory and processing power to store and associate points in a large or cluttered environment. Also, as with grid-based approaches, direct methods do not provide a mechanism for object understanding. Points are treated independently and are not grouped in any way to represent an object (such as a building, fence, or tree).

Feature-Based Approaches

Feature- or landmark-based approaches compress raw data into a set of pre-defined features, which are chosen based on the robot's operating environment. Line segments are a popular feature for indoor and structured outdoor environments, as such environments generally have many straight edges that can be easily represented. There are many line extraction techniques, and [12] compares three of the most popular: Successive Edge Following, Line Tracking, and Iterative End Point Fit (IEPF). Experimental results showed that the IEPF generated better representations of the world (using a visual determination by a person) across all three tested environments, and for a wider range of threshold values, than the other two. An IEPF variant called Split and Merge and the Line Tracking algorithm are compared in [13] against a number of other algorithms, such as the Hough transform, the Expectation Maximization (EM) algorithm, and the Random Sample Consensus (RANSAC) algorithm. The paper concluded that the Split and Merge and Line Tracking algorithms are preferred for SLAM, with the former being the best choice for real-time applications. Circles are another feature sometimes seen in SLAM applications. They are used in [14] to represent tree trunks and tree-like objects such as pillars, which are present in semi-structured, outdoor environments, and two algorithms are introduced for the extraction of both edges (line segments) and circles.

The first method uses an Extended Kalman Filter (EKF) to cluster the points together by estimating the position of the next point and completing a cluster if the error between the estimate and the actual position is outside a threshold. A line is first fit to the points and the error of the line calculated. If the error is greater than some threshold, a circle is fit using a Gauss-Newton method. The second method extracts features without clustering and begins by assuming that a circle feature will be extracted. The circle parameters (radius and center point) are initialized using the first three measurements; if the radius is below a threshold the circle model assumption is kept, otherwise a line model is used. In either case, an EKF is used to track the points until a discontinuity is detected. Numerous methods for position estimation were found when dealing with feature-based SLAM. In [15] circles are extracted using a Gauss-Newton method [14] and matched using a nearest neighbor criterion, where the circles that have the closest center positions are associated. A particle filter is used to estimate the robot's position between time frames and is compared against the validated estimates generated by the feature associations. The validated estimate closest to the predicted estimate is used to update the vehicle state. An EKF is used in [16] but only incorporates the features, and not the odometry parameters, in the position estimate, while [17] uses an information filter. A data association and position estimation technique based on the Possibilistic C-Means (PCM) algorithm is introduced in [18]. Line and ellipsoidal features are extracted, and the distances between features over multiple time frames are calculated and minimized to determine feature association. An alternate optimization approach similar to the PCM algorithm is used to update the vehicle position estimate. An ICP-based approach, which is extended using an EKF (ICP-EKF), is implemented in [19].

A polyline map is generated from the scan data and scan points are associated to the line segments using an ICP approach. Once the ICP has converged, the EKF is used to further improve the position estimate. Finally, [20] uses a rule-based approach to match features and a nonlinear least squares method to estimate the vehicle position. In the approach developed, the chosen features are point clusters, which are found using an IEPF-based methodology and matched based on their distance and length relative to each other. Once the clusters have been matched, the nonlinear least squares algorithm is applied to generate the final position estimate. Two of the papers reviewed dealt specifically with map building using features without extending the work to also include localization. These papers introduced a few novel ideas and are worth mentioning. An approach that considers the point uncertainty when extracting line segments is discussed in [21]. Points are grouped using a Hough transform and a line is fit to the points using a maximum likelihood approach, where each point is weighted based on its uncertainty due to sensor noise. The generated line has an uncertainty attached, which is used during merging. Lines are merged by converting them into a common reference frame and applying a chi-squared test. If two lines are within a 3-sigma deviance threshold from the combined line uncertainty, the lines are merged using a maximum likelihood formulation. A method that matches geometric primitives of line segments and circles for map building is discussed in [7]. A line matching method involving a chi-squared distribution test is utilized, and matching lines are fused using a static Kalman filter. Circles are matched using a simple distance criterion and are fused by averaging the two circles. Another novel concept introduced is the idea of a wipe triangle, which is used to filter out noisy line segments.

When a new line is constructed, every line in the map is checked for intersection with the wipe triangle. If a line intersects with or is inside of the wipe triangle region, it is removed or fragmented. Feature-based approaches to mapping provide a method for compressing the data and, therefore, do not need as much storage space as grid-based or direct methods. However, if bad features are chosen, a large error is introduced into the map and the position estimate is affected. One method that has been proposed is the use of artificial landmarks, which can help alleviate problems with feature extraction, as the environment is modified to add features that are well known [17][19]. However, the infrastructure changes that are required to use artificial landmarks make this approach infeasible when considering real-life situations. Although feature-based approaches provide a better understanding of the world than both grid-based and direct methods, most techniques still cause single objects to be decomposed into multiple features without any connection between them. Most feature-based SLAM approaches reviewed used only a single LADAR and did not fuse data from other sensors. Only [16] discussed a method for fusing data between a LADAR and sonar, exploiting the advantages of each sensor and using a Kalman filter. However, it did not discuss any approaches for combining multiples of the same sensor type in order to provide a wider sensor coverage area.

Hierarchical Object Based Approach

A novel approach taken by [5] is to combine all three representations in a hierarchical object based approach in order to overcome the shortcomings of each individual approach. LADAR scan points are clustered using a simple distance criterion, and clusters from different time frames are associated with each other using the ICP algorithm.

Associated clusters are grouped together to form objects, and a grid-based methodology is used to calculate the uncertainty of the cluster associations. The generated grids are stored as features, which can be matched using feature-based approaches to solve the problem of loop closing. The combination of direct methods and grid-based approaches provides localization within a local frame (close to the vehicle), while the use of the feature-based approach provides localization within a global frame (where the vehicle is located in the map). The discussed hierarchical approach still suffers from some of the shortcomings of the individual approaches. The use of grid maps as a feature requires a large amount of memory, especially when considering large maps. Although the objects developed are groupings of points that probably belong to the same object, they are not treated in a manner that allows for understanding what the object represents. Finally, data from the grid map features cannot be easily extracted once they have been stored, and the raw cluster data has been lost. Ideally, object data should be retrievable when the robot returns to a previously visited area.

Detection and Tracking of Moving Objects

The reviewed SLAM approaches break down in the presence of moving objects, since moving objects introduce error in the matching process. Therefore, a number of papers were reviewed that specifically examined the DATMO problem. A generalized approach is outlined in Figure 2-2 and is broken up into four main steps: object representation, object association, classification, and prediction. Each of the four stages is discussed below, with object association and prediction (tracking) discussed together as they are tightly coupled, despite being split into two steps.

One major problem encountered with all of the methods reviewed was that they did not consider the presence of static objects in the environment. Therefore, all detected objects were tracked and unneeded complexity was added to the system. To simplify the problem, some approaches filtered out objects that did not fit the expected criteria. In [22] objects that were not on the road were removed from consideration, while in [23] objects that were too large were ignored. Another limitation of the current approaches is that simple shape and motion models are applied, so the objects' actual dimensions, dynamic properties (maximum velocity, acceleration, etc.), and intent cannot be deduced.

Object Representation

The first step in any DATMO system is to detect objects in the LADAR scan and represent them in some way. One approach has been to expand upon the occupancy grid concept in order to represent a time-varying environment [24][25][26]. A timestamp map is generated in [24], where only cells in the grid corresponding to a laser strike are updated with a timestamp that indicates the last time the cell was occupied. In [25] each cell is populated with a probability of occupation that is calculated using the new laser data and the previous value. Clusters, line segments, and rectangular models were all popular representations in the literature, and all require a clustering stage. Points are grouped together to represent a single object using one of two general methods: point distance based or Kalman filter based [27], with point distance-based methods being the most popular [28][29][30][31][32][33][34]. The general procedure for the point distance-based clustering algorithm is to compare the distance between successive laser strikes and group points that are within a distance threshold. A number of distance functions are used (such as Euclidean, Mahalanobis, Minkowski, etc.), with the Euclidean distance being the most popular.

A novel normalized distance function is introduced in [22], while [26] uses a K-nearest neighbors approach. The distance threshold used for grouping is usually of two types: fixed or adaptive [35]. The fixed distance threshold approach is simple but does not consider that the distance between scan points is greater for objects that are farther away from the LADAR. Adaptive approaches calculate a maximum distance change between points based on the distance of a point from the LADAR, which is used as the grouping threshold [35][27]. Kalman filter based approaches estimate the next point in a cluster and compare the actual scan point with the estimated point. If the real and estimated points are within the validation gate of the filter, the real point is added to the cluster; otherwise a new cluster is started [36][37][35][27]. A novel multiple hypothesis approach to clustering is introduced in [33]. After an initial distance-based clustering, each cluster is considered against a series of possible hypotheses to combine clusters that probably belong to the same object. Once the clusters have been determined, some approaches use the cluster centroid to represent the objects without extracting any features [26][38][30][22][36][37]. Additional information is also attached to the object, such as velocity or the standard deviation of the cluster [36][37]. A convex hull is used in [29] to simplify the cluster and compress the data required for storage. It is also used to separate clusters that were erroneously grouped in the initial clustering. The distance from each point to the convex hull is checked and the cluster is split if the distance is greater than a threshold. Other approaches represent the objects as a series of line segments [23][34] or rectangular bounding boxes [28][39][31][32][33]. Some papers used the IEPF algorithm to extract line segments [23][32][34], while other papers used knowledge about the expected objects (cars, trucks, etc.) to create rules for defining the rectangular bounding boxes [33][28].

In [39] the rectangular bounding box is aligned to the x axis to account for non-regularly shaped objects where the direction of travel is indeterminate. Other papers attempted to align the bounding box to some assumed object travel direction based on the object geometry. A scheme for merging line segments, to account for the legs of people or walls that appear broken due to occlusion, was introduced in [34]. Small line segments were merged if they were separated by a distance of less than 50 cm, while larger line segments were merged if the line segments were collinear and no other line segments existed behind the lines to be merged. One novel innovation presented in [23] improves the object dimension estimates by considering the angular resolution of the LADAR. The closer an object is to the LADAR, the greater the number of beams that strike the object and the smaller the distance between strike points. As such, the detected dimensions will be closer to the real object dimensions. However, the further away the object is, the smaller the number of strikes and the greater the distance between strike points. Therefore, the detected width will be less than the real width, which will affect classification (Figure 2-3). To alleviate this problem, the object width is estimated using the maximum width that could contain an object (Figure 2-4).

Object Association and Tracking

As mentioned, the object association and prediction stages are tightly coupled. The most popular tracking methodology employs a Kalman filter to estimate the new position of the tracked objects [39][30][36][23][32][33][34]. Objects detected in the current scan are then compared to the estimated new position using either a distance measure [26][39][38][30] or a validation region [36]. As the velocity of a new object is unknown, a large area must be searched to compensate for the error in the first prediction.

To reduce the number of possible matches after the initial prediction, [32] separates the validation region into preferred areas. In [33] a bounding box overlap and parameter comparison method is used, where measured and estimated objects are associated if the measured object is smaller than or equal in size to the estimated object it overlaps, while a network optimization approach is used in [29]. A Bayesian scheme based on a Markov model is used in [40] for tracking. The Markov model developed considers all the possible shapes that can be obtained for a car, bicycle, or person when using a LADAR and attempts to match the current scan points to the predefined models. A novel concept for motion detection is introduced in [38]. The algorithm initially constructs a free space polygon by connecting all the points from the current scan. At each successive time step, the current scan points are compared against the previous free space polygon, and points that fall within the polygon are marked as violation points. The detected violation points are used to extract useful information about the moving object that caused the violations. A heuristic approach is employed to group the violation points into objects that can be tracked over time. When considering grid-based representations, motion detection and tracking were done by comparing grids between time steps. In [24] the previous and current timestamp maps are compared, and cells occupied in both maps contain stationary objects. Cells occupied in the current timestamp map but not in the previous map contain moving objects. A nearest neighbor metric is used to cluster the cells into objects and associate detected objects between time steps.
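As an illustration of the timestamp-map comparison in [24], the sketch below applies the stationary/moving rule cell by cell. The flattened grid layout and the CellState names are assumptions made for brevity, not details from the paper.

    // Illustration of motion detection by comparing two occupancy snapshots
    // derived from timestamp maps [24]: a cell occupied now and previously is
    // static; a cell occupied only now belongs to a moving object.
    #include <cstddef>
    #include <vector>

    enum class CellState { Free, Static, Moving };

    std::vector<CellState> classifyCells(const std::vector<bool>& previous,
                                         const std::vector<bool>& current)
    {
        std::vector<CellState> out(current.size(), CellState::Free);
        for (std::size_t i = 0; i < current.size(); ++i) {
            if (current[i])
                out[i] = previous[i] ? CellState::Static : CellState::Moving;
        }
        return out;
    }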

Classification

One interesting addition to some of the DATMO approaches for improving object tracking was a classification stage. Each detected object was considered against a number of possible classes (cars, pedestrians, etc.) and the most probable class was assigned. The dynamics of the assigned class (maximum speed, mobility, etc.) were used to aid in the tracking stage. The dimensions of the clusters, line segments, or rectangular bounding boxes were the most commonly used features in the classification process. However, one of the problems when using a LADAR is that the detected shape of an object varies depending on the position of the object relative to the LADAR [34]. Also, the effects of occlusion affect what is sensed. To solve this problem, researchers attempted to improve or verify the classification over time by considering previous classifications. In [31] a priori knowledge about typical road users was used to classify the sensed objects. Typical length and width values were compared against the modeled bounding boxes to choose an object class. A verification phase continuously verified the object class using the expected dimensions and the dynamic limitations of the class. The class assignment was changed if another class type became more probable. A similar approach is taken in [34], except that additional features were considered during classification. Each feature detected contributed a weighted vote towards a class, and the highest scoring class was assigned to the object. One major problem when using the object dimensions during classification is the effect of occlusion. If an object is partially occluded, the object dimensions will be affected and could lead to a false classification. This problem is addressed in [23] by considering object occlusion during the voting process. However, it does not consider object dynamics during the verification phase like the other papers.

A formal probability equation defining the general approach described above is given in [40]. Cameras can also be used for classification [37][36]. In [36] the camera is the only source of classification, while in [37] LADAR and vision based approaches are performed and fused to provide a more robust classification. In both papers, the LADAR is used to detect objects and generate a Region of Interest (ROI) within the camera image. The generation of the ROI reduces the search area within the image space and therefore reduces computation time, which is a major concern when working with cameras. An AdaBoost classifier is applied to the image to generate a classification, which is used in the association stage. In [37] a Gaussian Mixture Model classifier is also applied to the LADAR data. It is important to note that the classifiers are independent of each other and can produce different classifications. The outputs from the two classifiers are then combined using a sum decision rule to make the final classification decision. Experimental results showed a significant increase in hit rate and a decrease in false positives when combining the two classifiers.

Simultaneous Localization, Mapping and Moving Object Tracking

The idea of considering SLAM in the presence of moving objects is fairly new. A good introduction is given in [41], which considers both the concept of SLAM with generalized objects and SLAM+DATMO and provides justification for the SLAM+DATMO approach. When considering SLAM+DATMO, the biggest question is how to discern between static and dynamic objects in the environment. Moving objects treated as static affect the position estimate, while stationary objects marked as dynamic incur unnecessary computational overhead. Temporarily static and slow moving objects introduce another level of complexity, as these objects may move while out of sight of the robot and cause errors in the global SLAM and loop closing approaches.

A general approach to the SLAM+DATMO problem is shown in Figure 2-5. All approaches reviewed used a mixed representation approach, but most used a grid-based approach as the primary representation strategy. The discussion below is broken into two sections: heavily grid-based approaches and a hierarchical object representation approach. Most SLAM+DATMO approaches try to combine classical SLAM and DATMO approaches and suffer from some of the same problems. Grid maps require large amounts of memory and processing power to store and manipulate, and the static objects represented carry no real-world understanding that can aid in the robot's planning stages. The real-life dimensions of moving objects cannot be determined even when the entire object has been sensed, and their dynamic properties and future intent cannot be deduced.

Grid-Based Approaches

In [42] two grid maps were generated: one for the static objects and one for the dynamic objects in the environment. In the static object map, each cell contained a probability of occupation, which was calculated using a Bayesian update scheme, while each cell in the dynamic grid stored the number of times a moving object was observed at that cell. Each LADAR strike is categorized as static, dynamic, or undecided by comparing the strike against the static and dynamic object maps. If a strike lands in a static cell that is occupied (high occupation probability), the strike is categorized as static; if a strike lands in a static cell that is free space (low occupation probability) or in a dynamic cell that is above a threshold, it is categorized as dynamic; and if a strike lands in a cell where the occupation is unknown, the strike is categorized as undecided.

Strikes categorized as undecided are treated as static until they are proven to be dynamic. A fast scan matching method is introduced to perform data association and estimate the robot's position; it compares the current scan against the built static map and performs a maximum likelihood calculation. Scan points categorized as dynamic are not used to update the map, which makes the algorithm resilient to the presence of moving objects, since the probability of cells occupied by moving objects will be low. However, it will be affected by temporarily static objects. The scan points determined to be dynamic are grouped together using a simple distance threshold, and the cluster centroids are used for object tracking. A Kalman filter is used to estimate the new object position, and a global nearest neighbor metric associates moving objects between time steps. However, the method does not construct a global map, as the main concern is localization to facilitate safe navigation rather than mapping; this allows for the use of their fast scan matching algorithm to improve computation time. As no global map is constructed, the loop closing problem is not addressed. The work done in [43] is very similar to that discussed above and also uses two grid maps. Each cell value in the static object map is the number of times it is occupied by a static object, while each cell value in the dynamic object map is the number of times it is occupied by a moving object. First, the current scan is clustered and compared against the known moving object list. The matching clusters are removed, and the remaining clusters are compared against the static object map to generate a position estimate using the IDC algorithm (an ICP variant) [9]. The updated position estimate is used to find new moving objects from the remaining clusters, and the static object map, moving object map, and moving object list are updated. New moving objects are detected by comparing the new clusters against the static object map and applying two rules.

If a previously free area becomes occupied, the cluster in the free space is a moving object (approaching object). If an area that was previously occupied becomes free and an area that was previously occluded is now occupied (leaving object), it is not possible to say that the cluster in the previously occluded area is a moving object (it may be a stationary object that was occluded); however, the area that was previously occupied was occupied by a moving object. Moving object association is done using a method similar to the one used for static object association, which the authors call a matching-based tracking method, since no model is presumed for the moving objects. However, problems exist with this approach which the authors are currently trying to resolve. Unlike the previous approaches, the work done in [44] only uses a grid map to represent the static objects in the environment. After the clustering process, known moving objects are removed and the remaining clusters are used to estimate the vehicle position. A correlation approach is used for position estimation by projecting the LADAR strikes into grids at different possible positions and finding the pose that produces the best correlation with the stored static object map. A grid pyramid, in which a series of grids with different resolutions is stored, is employed to improve matching speed. The updated position estimate allows for the separation of the remaining clusters into known static objects and new objects. New objects are first classified as seed objects and determined to be moving or static by examining their state histories. Objects that remain seed objects for the duration of their observable history (they move out of sensor range) are added to the map as static objects, while moving objects are classified into one of three categories using a Markov model approach [40].

A novel contribution of this paper to the SLAM problem is the use of GPS to correct the generated map in a trajectory-oriented closure algorithm. However, the inclusion of the seed objects affects the position estimate and leads to loop closure failure.

Hierarchical Object Representation Approach

A hierarchical object representation [5] is used in [41] to represent the world. First, scan points are clustered using a simple distance criterion and the clusters are associated with the known moving objects using a multi-hypothesis approach. The unmatched clusters are used to detect new moving objects and generate the pose estimate. To account for unreliable pose estimation and correspondence, which occurs with the ICP algorithm in the presence of sparse data, a sampling and correlation-based range image matching (SCRIM) algorithm is presented. One hundred initial position estimates are generated and correlated to the stored static object maps to find the best correspondence. New moving objects are detected with the application of two separate detectors. The consistency-based detector works by comparing the scan points with the surrounding map. A scan point in a free space cell is marked as a moving point, and if the ratio of moving points to total points in a cluster is above a threshold, the cluster is marked as moving. However, slow moving or temporarily static objects are difficult to detect, so the moving object map based detector is employed. In this detector a cluster is marked as moving if it is in an area that was previously occupied by a moving object. After the position estimate has been updated and the new moving objects detected, the static object and dynamic object grid maps are updated. Each moving object has a grid associated with it, which is updated over time to build an object contour. The SCRIM algorithm is used to associate the new scan points with the object's grid cells, while a Kalman filter is used for tracking.

The static and moving object grids represent an area that is local to the robot. As the robot moves, new local grids are created and the previous static grids are stored. The stored grid maps are treated as three-degree-of-freedom features, which can be compared using image processing techniques. These stored features allow global SLAM to be performed and the loop closure problem to be solved. However, the stored local grid maps cannot be updated with new position estimates, and so overlay consistency cannot be guaranteed. In this chapter previous research in the areas of SLAM, DATMO, and SLAM+DATMO was introduced and discussed. Although significant research has been done on SLAM and DATMO, there are still noteworthy problems to be addressed in the area of SLAM+DATMO. The next chapter will introduce the approach taken by the research presented here to deal with some of these problems.

Figure 2-1. Flowchart outlining the general steps in a SLAM system (new scan → environment representation → new map; new map and stored map → map association → combined map and pose estimate).

Figure 2-2. Flowchart outlining the general steps in a DATMO system (new scan → object representation → object association against the stored object list → classification of new moving objects → prediction of tracked moving objects).

Figure 2-3. Dimension estimate error due to angular resolution of LADAR. The real back segment of the vehicle is shown in green while the detected segment is shown in blue.

Figure 2-4. Dimension estimate error correction. (1) The real width of the vehicle in green, (2) the detected width of the vehicle in blue, (3) the estimated width in purple.

Figure 2-5. Flowchart outlining the general steps in a SLAM+DATMO system (new scan → clustering/feature extraction → moving object association against the moving object list and static object association against the static object list → moving object classification, tracking, and pose estimation).

CHAPTER 3
IMPLEMENTED APPROACH

Localization, map generation, and moving object tracking are complex problems and cannot be solved easily. Initially, each problem was approached independently without consideration of the others. The introduction of Simultaneous Localization and Mapping (SLAM) showed that the problems of localization and map generation could be treated together and that performing both tasks simultaneously generated a better result than when they were tackled separately. Recent work done in combining SLAM with moving object tracking (DATMO) in Simultaneous Localization, Mapping and Moving Object Tracking (SLAM+DATMO) systems has further shown that by combining all three tasks, better results are obtained. This dissertation introduces a novel approach to the SLAM+DATMO problem which generates an object map instead of the more common point maps, feature maps, or grid maps. It then addresses some of the issues that arise when using a World Model Knowledge Store (WMKS) to store the generated map and introduces a methodology for extending the system for use with multiple laser range finders (LADAR). This chapter serves to introduce the three main elements of the research and provide background for understanding. The first section presents the SLAM+DATMO process and discusses the necessary steps. Next, an introduction to the WMKS is provided along with a set of requirements to facilitate information exchange. Finally, the methodology for combining multiple LADAR is discussed. All threshold and parameter values used in the implemented approach are given in Table 3-6.

Simultaneous Localization, Mapping, and Moving Object Tracking

The topic of SLAM+DATMO is still relatively new and there are many challenges to be addressed. Most of the work up to this point has involved the use of grid-based approaches for moving object detection and representation. However, grid-based approaches are generally slower and require more storage space than feature-based approaches. Also, previous SLAM and SLAM+DATMO work has treated the mapping aspect simply as a means to perform localization, attempting only to ensure map consistency to deal with the loop closing problem. The topics of map representation and difference detection have not been addressed, although they introduce interesting possibilities with respect to higher level environmental understanding. The work presented here seeks to address these topics by focusing on three aspects. First, the developed system attempts to represent a singular physical object with a singular representation. For example, a building should be identified as a single polygon or continuous line and not as a collection of points or a series of unconnected lines. Second, objects are detected and represented completely in the feature space; that is, grid-based approaches and image processing techniques are not used. In general, grid-based techniques are slower in processing speed, and it was believed that this would affect real-time operation, a critical requirement when working with a fully autonomous vehicle. Also, grid-based approaches require a large amount of storage space, which would cause a bottleneck when using the WMKS. Finally, given a saved map, the system attempts to detect differences in the sensed environment and indicate new and missing objects. A flowchart outlining the developed system is shown in Figure 3-1. In this section the different steps of the SLAM+DATMO system are discussed.

Object Detection and Representation

The first challenge in any SLAM+DATMO system is to identify the objects that exist in the environment based on the current LADAR scan. There are generally two steps related to object detection: first, points obtained from the LADAR are clustered together to identify objects in the environment, and second, features are extracted from the clusters in an attempt to model the object and simplify the object representation. Any reasonable feature can be used, but circles, lines, and bounding boxes tend to be the most popular. The work presented here models static objects using line segments and moving objects using bounding boxes, as they provide reasonable approximations of the objects found in a semi-structured environment, such as a city. As every newly detected object is assumed to be static, the method for detecting and representing static objects is described first, with a discussion of moving objects given later.

Clustering

As mentioned above, the first step in the object detection process was determining which points should be grouped together as part of the same object. Although there are many different clustering techniques, the work presented here used an adaptive distance threshold method [31]. This method considers the distance between two consecutive points to determine cluster membership. If the distance between two points is within a threshold, the points are considered to belong to the same cluster; otherwise the points belong to two different clusters. Given two points, \(A\) and \(B\), taken from the LADAR scan (Figure 3-2), with ranges \(r_{OA}\) and \(r_{OB}\) and angular separation \(\Delta\alpha\), the distance between them can be calculated by

    r_{AB} = \sqrt{r_{OA}^2 + r_{OB}^2 - 2\, r_{OA}\, r_{OB} \cos(\Delta\alpha)}    (3-1)

The distance threshold is then calculated by

    r_{AB} \le C_0 + C_1 \min\{r_{OA}, r_{OB}\}    (3-2)

with

    C_1 = \sqrt{2\,(1 - \cos\Delta\alpha)}    (3-3)

The constant \(C_0\) allows the algorithm to accommodate sensor noise and the overlap of pulses at close range, while \(C_1\) allows the distance between points to grow as they move further away from the LADAR. One advantage of this technique is that only consecutive points need to be considered; once a point has been added to a cluster, it is never re-examined during the clustering stage. Another advantage is that the approach is simple and does not require complex calculations. These two factors make the algorithm relatively fast when compared to other methods.
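A minimal sketch of this clustering stage, implementing Equations 3-1 through 3-3 directly, is shown below. The PolarPoint structure and parameter names are illustrative assumptions; the actual threshold values used are those listed in Table 3-6.

    // Sketch of adaptive distance-threshold clustering (Equations 3-1 to 3-3).
    // Consecutive scan points are grouped while r_AB <= C0 + C1*min(r_OA, r_OB).
    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct PolarPoint { double range, angle; };   // angle in radians

    std::vector<std::vector<PolarPoint>>
    cluster(const std::vector<PolarPoint>& scan,
            double c0,          // noise/pulse-overlap constant (sensor dependent)
            double deltaAlpha)  // LADAR angular resolution in radians
    {
        const double c1 = std::sqrt(2.0 * (1.0 - std::cos(deltaAlpha)));   // (3-3)
        std::vector<std::vector<PolarPoint>> clusters;
        for (const PolarPoint& p : scan) {
            if (!clusters.empty()) {
                const PolarPoint& q = clusters.back().back();
                // Law of cosines between consecutive strikes (3-1).
                const double rAB = std::sqrt(p.range * p.range + q.range * q.range
                                             - 2.0 * p.range * q.range
                                                   * std::cos(p.angle - q.angle));
                if (rAB <= c0 + c1 * std::min(p.range, q.range)) {          // (3-2)
                    clusters.back().push_back(p);
                    continue;
                }
            }
            clusters.push_back({p});   // start a new cluster
        }
        return clusters;
    }

Because each point is compared only against the last point of the current cluster, the sketch runs in a single pass over the scan, which is the speed advantage noted above.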

Feature extraction

As mentioned, static objects were represented using a series of line segments. Each segment consisted of a start point, an end point, and a variance, where the variance was a fixed value found through experimentation. Tables 3-1 and 3-2 provide a complete list of the fields used. Each line segment point was maintained in the LADAR coordinate frame until processing was complete. The points were stored in either the native polar coordinate system or the Cartesian coordinate system when in the LADAR frame, and using latitude and longitude coordinates when in the global frame. The Iterative End Point Fit (IEPF) algorithm [23], which is illustrated in Figure 3-3 and described below, was used to extract the line segments from the detected clusters.

1. Initial: Consider a set \(S\) consisting of \(n\) points:

   S = \{p_0, p_1, \dots, p_n\}    (3-4)

2. Form a line \(L\) between the first and last points in \(S\):

   L = \{p_0, p_n\}    (3-5)

3. Calculate the distance from the line \(L\) to each point in \(S\) to form the set

   D = \{d_0, d_1, \dots, d_n\}    (3-6)

4. Find the point \(p_{max}\) with the maximum distance \(d_{max}\) in set \(D\).

5. If \(d_{max} > d_{threshold}\), split the set \(S\) into two sets such that

   S_1 = \{p_0, \dots, p_{max-1}\}, \quad S_2 = \{p_{max}, \dots, p_n\}    (3-7)

6. Repeat steps 2 to 5 with the new sets \(S_1\) and \(S_2\) until the sets can no longer be separated.

The distance between the line \(L\) and each point, required in step 3 of the algorithm, was calculated by considering the triangle formed between the first point \(p_0\), the last point \(p_n\), and the target point \(p_t\), where the distance \(d_t\) is the triangle height and the distance \(r_{0n}\) between the first and last points is the triangle base. As \(r_{0n}\) can be calculated using Equation 3-1, the distance can be calculated using

   d_t = \frac{2A}{r_{0n}}    (3-8)

where

   2A = \left| r_0 r_t \sin(\alpha_0 - \alpha_t) + r_t r_n \sin(\alpha_t - \alpha_n) + r_n r_0 \sin(\alpha_n - \alpha_0) \right|    (3-9)

and \((r_i, \alpha_i)\) are the polar coordinates of point \(p_i\).
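The recursion at the heart of the IEPF algorithm is sketched below in Cartesian form for readability; evaluating the distances with Equations 3-8 and 3-9 in polar form is equivalent. Note that, as in many split-and-merge variants, the split point is kept in both subsets here, a slight simplification relative to Equation 3-7, and the threshold value is a placeholder for the one given in Table 3-6.

    // Sketch of the Iterative End Point Fit (IEPF) split step: recursively
    // split a cluster at the point farthest from the chord p_first-p_last
    // until every point lies within dThreshold of its segment.
    #include <cmath>
    #include <cstddef>
    #include <utility>
    #include <vector>

    struct Point { double x, y; };
    using Segment = std::pair<Point, Point>;

    static double distToChord(const Point& a, const Point& b, const Point& p)
    {
        // Height of triangle (a, b, p): twice its area divided by the base length.
        const double twoA = std::fabs((b.x - a.x) * (a.y - p.y)
                                      - (a.x - p.x) * (b.y - a.y));
        return twoA / std::hypot(b.x - a.x, b.y - a.y);
    }

    void iepf(const std::vector<Point>& pts, std::size_t first, std::size_t last,
              double dThreshold, std::vector<Segment>& out)
    {
        std::size_t split = first;
        double dMax = 0.0;
        for (std::size_t i = first + 1; i < last; ++i) {
            const double d = distToChord(pts[first], pts[last], pts[i]);
            if (d > dMax) { dMax = d; split = i; }
        }
        if (dMax > dThreshold) {
            iepf(pts, first, split, dThreshold, out);   // S1 = {p_first..p_split}
            iepf(pts, split, last, dThreshold, out);    // S2 = {p_split..p_last}
        } else {
            out.push_back({pts[first], pts[last]});     // keep the chord as a segment
        }
    }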

Object Classification

The presence of moving objects in the environment causes a number of challenges during the object tracking and localization stages. If moving objects are considered during the matching and resolution stage, it is possible to incorrectly update an object even though it should not be updated. Also, the use of dynamic objects during the localization and mapping stage leads to error. Therefore, it is very important to classify the detected objects as either static or dynamic as soon as possible. A free space violation method [38] (Figure 3-4) was used to perform this function. In this method the objects detected in the current scan are compared against a free space polygon generated from the previous LADAR scan.

Free space polygon generation

The generation of the free space polygon was performed using a straightforward method that exploited the fact that the points were maintained in polar coordinates. For each point in an object, an offset distance was subtracted from the actual point distance to account for sensor noise. For the space between consecutive objects, an arc was approximated at the maximum scan distance of the LADAR. Figure 3-5 shows a simplified example of the generation method, while Figure 3-6 shows actual system output.
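A simplified sketch of the free space polygon construction is given below, assuming the scan has already been grouped into non-empty objects ordered by bearing. The two-vertex gap closure stands in for the arc approximation described above, and the offset and maximum-range parameters are placeholders for the values in Table 3-6.

    // Sketch of free space polygon generation: object points are pulled toward
    // the sensor by an offset to absorb noise, and gaps between objects are
    // closed with vertices at the LADAR's maximum range (arc approximation).
    #include <cstddef>
    #include <vector>

    struct PolarPoint { double range, angle; };

    std::vector<PolarPoint>
    freeSpacePolygon(const std::vector<std::vector<PolarPoint>>& objects,
                     double offset,     // noise margin subtracted from each range
                     double maxRange)   // LADAR maximum scan distance
    {
        std::vector<PolarPoint> polygon;
        for (std::size_t i = 0; i < objects.size(); ++i) {
            // Shrink each object point toward the sensor by the offset.
            for (const PolarPoint& p : objects[i])
                polygon.push_back({p.range - offset, p.angle});
            // Approximate the arc between this object and the next at max range;
            // a real implementation would insert intermediate arc vertices.
            if (i + 1 < objects.size()) {
                polygon.push_back({maxRange, objects[i].back().angle});
                polygon.push_back({maxRange, objects[i + 1].front().angle});
            }
        }
        return polygon;
    }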

Moving object detection and representation

Ideally, any new object that overlaps the free space region should immediately be considered a moving object. However, sensor noise and position error can cause static objects to appear to be moving. To deal with these inherent errors, a probabilistic approach was taken in which objects were assigned a confidence that they were static or moving. New objects that were found to overlap the free space polygon were given a positive moving probability, while those that did not were assigned a negative probability. During the matching and resolution stage, the overall moving probability of a stored object was updated by adding the moving confidence of the new objects to the confidence of the stored objects. When the confidence of a stored object passed a positive threshold, it was classified as a moving object and was treated differently. All objects were treated as static until the moving confidence threshold was met. It was possible for a static object to become a moving object, but a moving object could not return to being a static object. When an object was determined to be moving, its representation was changed from a series of line segments to a bounding box, stored as a list of five points with the first and last point the same (Table 3-3). The bounding box around an object was determined using a method similar to the one used in [32], where the longest segment from the extracted object was found and treated as the object length. The line segments were then rotated such that the longest segment was aligned to the x axis. A bounding box was generated by finding the minimum and maximum x and y values, and the length and width of the bounding box were calculated. If the length or width of the new bounding box was below a minimum size, the bounding box was set to the minimum. If the size was less than the previous object size, the previous bounding box was used. The minimum length and width values were chosen from the work presented in [31]. This method ensured that a moving object had a minimum size consistent with a car or truck, as all moving objects were assumed to be one or the other. It also allowed the bounding box to grow if the detected object was larger than the initial estimate. Furthermore, since the box size could not shrink, the effect of occlusion was mitigated. After the bounding box was created, all the points were returned to their original rotation to produce an oriented bounding box. Figure 3-7 gives an example of the bounding box generation process for a moving object.
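The box-fitting procedure just described can be summarized as follows. This is a schematic reconstruction: the minimum dimensions are passed as parameters rather than hard-coding the specific car and truck values taken from [31], and the orientation angle is assumed to come from the longest extracted segment.

    // Sketch of oriented bounding box generation for a moving object: rotate
    // the points so the longest segment lies on the x axis, take the
    // axis-aligned extents, enforce minimum dimensions, then rotate back.
    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Point { double x, y; };

    std::vector<Point> orientedBox(const std::vector<Point>& pts,
                                   double longestSegAngle,  // from longest segment
                                   double minLen, double minWid)
    {
        const double c = std::cos(-longestSegAngle), s = std::sin(-longestSegAngle);
        double minX = INFINITY, minY = INFINITY, maxX = -INFINITY, maxY = -INFINITY;
        for (const Point& p : pts) {   // rotate into the box frame, track extents
            const double x = c * p.x - s * p.y;
            const double y = s * p.x + c * p.y;
            minX = std::min(minX, x); maxX = std::max(maxX, x);
            minY = std::min(minY, y); maxY = std::max(maxY, y);
        }
        // Enforce the minimum length/width of an assumed car or truck.
        if (maxX - minX < minLen) maxX = minX + minLen;
        if (maxY - minY < minWid) maxY = minY + minWid;
        // Corners in the box frame; the first point is repeated to close the ring.
        const Point box[5] = {{minX, minY}, {maxX, minY}, {maxX, maxY},
                              {minX, maxY}, {minX, minY}};
        std::vector<Point> out;
        const double c2 = std::cos(longestSegAngle), s2 = std::sin(longestSegAngle);
        for (const Point& p : box)     // rotate back to the sensor frame
            out.push_back({c2 * p.x - s2 * p.y, s2 * p.x + c2 * p.y});
        return out;
    }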

Object Tracking

In the work presented, tracking has two meanings: first, the ability to determine whether objects that were previously detected still exist in the environment, and second, the ability to monitor a moving object's motion through the environment. To perform tracking, the newly detected objects must be matched to the previously detected objects. An object overlap method was used to perform the matching: if two objects overlapped, they were considered to match and therefore to be the same object.

Enclosure generation

Ideally, static objects that still exist in the environment would always overlap exactly. However, the influence of sensor noise greatly affects the matching process. The points obtained between scans can differ; therefore, a previously detected single object may be detected as two objects in the current scan, or vice versa. Also, the position of the objects can differ greatly. To minimize the influence of sensor noise, enclosures were generated around each object. Two enclosure generation methods were considered. The first was a simple process, similar to the free space polygon generation method, which exploited the fact that the points were in polar coordinates. Given a line segment represented by two points such that L = \{(r_1, \alpha_1), (r_2, \alpha_2)\}, the enclosure is represented by four points such that

   E = \{(r_1 - \sigma, \alpha_1), (r_1 + \sigma, \alpha_1), (r_2 + \sigma, \alpha_2), (r_2 - \sigma, \alpha_2)\}

where \(\sigma\) is the variance of the line from the line extraction stage. However, this method was found to be invalid, as it was possible for a line segment to have an enclosure with zero width if the segment lay along a LADAR scan line. The second method generated the enclosures using the buffer function in the GEOS library, an open source C++ library for modeling and manipulating 2-dimensional linear geometries. This second method was found to be more reliable and robust, although slower. It was determined that system robustness was more important than speed, and as such the GEOS buffer method was chosen. Figure 3-8 shows examples from both enclosure generation methods.
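For illustration, an enclosure can be generated with the GEOS buffer operation roughly as follows. This sketch assumes a recent (3.10 or later) GEOS C++ API; older versions expose the same operation through CoordinateArraySequence and raw pointers, so exact signatures vary across releases.

    // Sketch: enclosure generation around an extracted line segment using the
    // GEOS buffer operation. API details are version dependent (see note above).
    #include <geos/geom/Coordinate.h>
    #include <geos/geom/CoordinateSequence.h>
    #include <geos/geom/Geometry.h>
    #include <geos/geom/GeometryFactory.h>
    #include <geos/geom/LineString.h>
    #include <memory>

    std::unique_ptr<geos::geom::Geometry>
    makeEnclosure(double x1, double y1, double x2, double y2, double sigma)
    {
        const geos::geom::GeometryFactory* factory =
            geos::geom::GeometryFactory::getDefaultInstance();

        // Build the two-point line segment produced by the IEPF stage.
        auto coords = std::make_unique<geos::geom::CoordinateSequence>();
        coords->add(geos::geom::Coordinate(x1, y1));
        coords->add(geos::geom::Coordinate(x2, y2));
        auto segment = factory->createLineString(std::move(coords));

        // Buffering by the line variance produces a rounded region that,
        // unlike the polar-offset method, can never collapse to zero width.
        return segment->buffer(sigma);
    }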

Object matching and resolution

To match the stored and new objects, every new object was compared to every stored object. Although this method is relatively slow, it produced the most consistent results when compared to other explored methods. The scenarios considered possible after matching are listed below and illustrated in Figure 3-9.

SCENARIO 1: A new object matches one stored object and the stored object only matches one new object.

SCENARIO 2: A new object matches one stored object but the matching stored object matches multiple new objects.

SCENARIO 3: A new object matches multiple stored objects but each of the matching stored objects only matches the one new object.

SCENARIO 4: A new object matches multiple stored objects but one of the matching stored objects matches multiple new objects.

SCENARIO 5: A new object matches multiple stored objects and there exists another new object that matches the same stored objects.

SCENARIO 6: A new object matches one stored object but that stored object overlaps another stored object.

Normally, scenarios five and six would not be possible due to the properties of the LADAR. However, the enclosure generation method employed made it possible for new object enclosures to overlap, and therefore these two possibilities could occur. It was found that updating objects based on these scenarios was difficult and undesirable. Therefore, new object enclosures were checked to ensure that they did not overlap, and any objects that did were treated as the same object and merged. At first appearance scenarios three and four look very different from scenarios one and two respectively; however, they can be made the same by merging any stored objects that are matched by the same new object during the matching process. In other words, if two stored objects were matched by the same new object, the two stored objects were considered to be the same object. Therefore, scenario three becomes just like scenario one, scenario four becomes like scenario two, and the update problem reduces to handling only two possibilities.
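The all-pairs matching and the merge rule that collapses scenarios three and four can be sketched as below. The axis-aligned box stands in for the actual GEOS enclosure intersection test, and the merge step is only indicated, since the update logic is described in the following paragraphs.

    // Sketch of all-pairs object matching: record which stored objects each
    // new object overlaps, so stored objects matched by the same new object
    // can be merged, leaving only scenarios one and two to handle.
    #include <cstddef>
    #include <vector>

    // Axis-aligned box used here as a stand-in for the GEOS enclosure geometry.
    struct Box { double minX, minY, maxX, maxY; };

    static bool overlaps(const Box& a, const Box& b)
    {
        return a.minX <= b.maxX && b.minX <= a.maxX &&
               a.minY <= b.maxY && b.minY <= a.maxY;
    }

    std::vector<std::vector<std::size_t>>
    matchObjects(const std::vector<Box>& newObjs, const std::vector<Box>& stored)
    {
        std::vector<std::vector<std::size_t>> matches(newObjs.size());
        for (std::size_t i = 0; i < newObjs.size(); ++i)
            for (std::size_t j = 0; j < stored.size(); ++j)
                if (overlaps(newObjs[i], stored[j]))
                    matches[i].push_back(j);
        // Merge step (not shown): stored objects appearing in the same match
        // list are combined into one, reducing scenarios three and four to
        // scenarios one and two respectively.
        return matches;
    }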

First, the update process for scenario one is considered. When dealing with moving objects the update process is simple: the newly extracted object is converted to a bounding box using the methodology described above. When updating static objects the process is more complex. As mentioned, one goal of the presented research was to generate an object map; therefore, the representation of static objects needed to be updated as more data was obtained. In other words, the object representation needed to be improved when more data was available, and previously sensed portions that were occluded or out of the viewable area needed to be preserved. Therefore, the stored static objects could not simply be replaced by the new objects. There are a number of possibilities when considering how the stored and new objects will overlap. The new object could be longer than the stored object, in which case the object length should be increased, or the new object could be shorter than the stored object due to occlusion, in which case the overlapping section provides a more accurate representation and should be included. Another problem was determining which line segments should be updated based on the new scan points and which should remain the same. To avoid these problems, a new cluster was generated based on the stored object and the new object scan points, and then the object was regenerated. One challenge when regenerating the object is maintaining the correct point ordering. Points must be maintained in scan order; therefore, the points cannot simply be grouped together and sorted. This would lead to an ordering that is inconsistent with the LADAR properties and cause the line extraction algorithm to produce bad results.

Therefore, the update process consisted of four steps. First, the line segments of the stored object were converted into pseudo-clusters. The clusters were generated by moving along each line segment at angles consistent with the angular resolution of the LADAR and finding the intersection point between the ray from the LADAR and the line segment (Figure 3-10). Next, the pseudo-cluster points from the stored object were associated to the scan points of the matching new object using a closest distance criterion. Third, a new cluster was generated by examining the associated points. Any unassociated points from the stored pseudo-cluster were added to the new cluster as is, while associated points were added depending on their position along the object. If the points were in the middle of the object, that is, they were not the last pseudo-point of the stored object, a point corresponding to the average distance between the pseudo-points and the new points was added to the cluster. This was done to reduce changes caused by sensor noise. If the last pseudo-point had numerous new points associated to it, then all the new points were added to the cluster as is. Finally, an object was regenerated from the newly constructed cluster. Figure 3-11 illustrates an example of this process. When considering a process for updating objects in scenario two, it can be seen that the process devised for scenario one can be applied iteratively, with the updated object being used in the next iteration. That is, the stored object can be updated using the first new object to generate an improved object representation, and then the improved object is updated using the second new object. Using this iterative approach, the update process for scenario two is simplified and the entire update process is reduced to handling the case where a single stored object matches a single new object.
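The first step, converting a stored line segment back into a pseudo-cluster, reduces to intersecting evenly spaced LADAR bearings with the segment, as sketched below. The ray-segment intersection is standard geometry; the function name and the handling of edge cases are illustrative assumptions.

    // Sketch of pseudo-cluster generation (Figure 3-10): sample LADAR bearings
    // across a stored segment a-b and keep each ray/segment intersection point.
    #include <cmath>
    #include <vector>

    struct Point { double x, y; };

    std::vector<Point> pseudoCluster(const Point& a, const Point& b,
                                     double angRes)   // LADAR angular resolution
    {
        std::vector<Point> cluster;
        const double a0 = std::atan2(a.y, a.x);
        const double a1 = std::atan2(b.y, b.x);
        const int steps = static_cast<int>(std::fabs(a1 - a0) / angRes);
        for (int i = 0; i <= steps; ++i) {
            const double ang = a0 + i * angRes * (a1 > a0 ? 1.0 : -1.0);
            // Ray from the sensor origin at bearing 'ang': p = t*(cos ang, sin ang).
            // Solve for its intersection with the infinite line through a and b.
            const double dx = b.x - a.x, dy = b.y - a.y;
            const double denom = dy * std::cos(ang) - dx * std::sin(ang);
            if (std::fabs(denom) < 1e-9) continue;   // ray parallel to segment
            const double t = (dy * a.x - dx * a.y) / denom;
            if (t > 0.0)
                cluster.push_back({t * std::cos(ang), t * std::sin(ang)});
        }
        return cluster;
    }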

Missing object detection

After the matching process, it was possible for some objects to remain unmatched due to sensor noise, platform motion, object motion, object removal, or occlusion. Three categories were considered for an unmatched object: non-visible, occluded, or missing. A non-visible object was one that was outside the viewable range of the sensor and therefore did not intersect with the viewable region polygon. These objects were removed from consideration by the system to eliminate unnecessary comparisons during the object matching process. An occluded object was within the viewable region but did not lie inside the free space region; therefore, it could not be determined whether the object still existed or not. Finally, missing objects were objects that lay within the free space region and were missing from the environment in the current scan. However, the influence of sensor noise caused some objects to be detected sporadically, and since the objects were compared against the free space region from the current scan, it was possible for an object to be missing only in the current scan. If an object were removed every time it was not detected, objects would constantly be appearing and disappearing. Therefore, a probabilistic approach was taken. An existence confidence was attached to every object to determine if it was missing or not. At each scan, the existence confidence was updated based on whether the object was detected, occluded, or missing. If an object's existence confidence dropped below a threshold value, the object was said to be missing. It is important to note that if the object was occluded, its existence confidence could not become negative (or more negative).
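The existence-confidence bookkeeping can be expressed in a few lines, as sketched below. The update step and the missing threshold are placeholders for the values in Table 3-6, and the occlusion case follows the constraint just described by leaving the confidence unchanged.

    // Sketch of the existence-confidence update for missing object detection.
    enum class Observation { Detected, Occluded, Missing };

    struct TrackedObject {
        double existConf = 0.0;
        bool missing = false;
    };

    void updateExistence(TrackedObject& obj, Observation obs,
                         double step, double missingThresh /* negative value */)
    {
        switch (obs) {
        case Observation::Detected:
            obj.existConf += step;        // redetection reinforces existence
            break;
        case Observation::Occluded:
            // Occlusion may never drive the confidence negative (or more
            // negative); in this sketch it simply leaves it unchanged.
            break;
        case Observation::Missing:
            obj.existConf -= step;        // absence inside free space penalizes
            break;
        }
        obj.missing = obj.existConf < missingThresh;
    }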

Position Estimation

When the robot moves from a position \(P_a\) to a position \(P_b\), the difference between \(P_a\) and \(P_b\) is approximately known from the vehicle's positioning system (GPOS). However, the estimate is usually imperfect due to error present in the positioning sensors. The task of the position estimation stage of the SLAM+DATMO system is to improve the position estimate by aligning the scans taken at \(P_a\) and \(P_b\), referred to as \(S_a\) and \(S_b\) respectively. Assuming the position of the vehicle when scan \(S_b\) is taken is \(P_b\), the position estimation system finds the rotation \(\omega\) and translation \(T\) for \(S_b\) such that, after the transformation is applied, \(S_b\) is aligned with \(S_a\). However, it is generally impossible to perfectly align the scans due to the presence of sensor noise and occlusion. Sensor noise introduces small deviations, which can be characterized using the sensor's inherent error distribution function. On the other hand, occlusion introduces large differences between the scans which cannot be modeled by the error distribution function, but these differences can be treated as outlier data and ignored. The estimation method presented here attempts to match the points from the current scan with the modeled static objects from previous scans and was based on the method presented in [9]. It is important to note that only the static objects that were matched during the object matching step were used to generate the position estimate. In general, for each point \(p_i = (x_i, y_i)\) in \(S_b\), a simple rule is applied to determine the corresponding point \(p'_i = (x'_i, y'_i)\) in \(S_a\). A least squares solution is then computed using the equation

   E(\omega, T) = \sum_{i=1}^{n} \left| R_\omega \, p_i + T - p'_i \right|^2    (3-10)

to calculate the relative rotation \(\omega\) and translation \(T\), where

PAGE 67

67 is the number of corresponding point pairs = (, ) (3 11) = cos sin cos (3 1 2) The closed form solutions for and are given by = 2 tan 1 2+ 2 2 (3 13) =1 cos ( ) + sin ( ) + (3 14) =1 sin ( ) + cos ( ) + (3 15) where: = 1 + = 1 (3 16) =1 + + = 1 (3 17) =1 + = 1 (3 18) and Sy= y = 1 (3 19) = = 1 (3 20) = = 1 (3 21) = = 1. (3 22) The new rotation and translation is then applied to the current posi tion estimate to reduce the position error between the two scans and the process is repeated until the solution converges. The method described above has been used extensively for position estimation in point map approaches where every point is maintained. However, the presented
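The closed form above amounts to a standard 2D rigid registration step. A minimal NumPy-based sketch is given below; the function name and array conventions are illustrative, and the point correspondences are assumed to have already been established by the simple rule described above.

```python
import numpy as np

def align_scans(ref, cur):
    """One least-squares step: the rotation omega and translation t that
    best map cur (n x 2 array) onto ref (n x 2 array of corresponding
    points), per Equations 3-13 through 3-22."""
    ref_mean = ref.mean(axis=0)
    cur_mean = cur.mean(axis=0)
    r = ref - ref_mean          # centered reference points (primed)
    c = cur - cur_mean          # centered current points (unprimed)
    # Closed-form rotation (Equation 3-13).
    num = np.sum(c[:, 0] * r[:, 1] - c[:, 1] * r[:, 0])   # S_xy' - S_yx'
    den = np.sum(c[:, 0] * r[:, 0] + c[:, 1] * r[:, 1])   # S_xx' + S_yy'
    omega = np.arctan2(num, den)
    # Translation follows from the means (Equations 3-14 and 3-15).
    rot = np.array([[np.cos(omega), -np.sin(omega)],
                    [np.sin(omega),  np.cos(omega)]])
    t = ref_mean - rot @ cur_mean
    return omega, t
```

In the presented system this step would be iterated, with correspondences recomputed each pass, for the fixed 20 iterations described above.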


World Model Knowledge Store

Although the use of knowledge stores for centralized storage is not new to the field of robotics, none of the work reviewed attempted to use an external central repository. In fact, most of the systems did not address the issue of data sharing at all and were architected to be completely self-contained. The research presented here attempts to address this shortcoming by providing a mechanism for disseminating information about detected objects to other components of the autonomous vehicle. The inclusion of such a mechanism facilitates modularity and component reuse among autonomous systems, two important aspects within the field of robotics.

The exact implementation details of the WMKS are outside the scope of the research presented here but were based on the work done in [46]. The WMKS backend was implemented using a PostgreSQL database extended for geospatial support using PostGIS. Data exchange was facilitated through the use of messaging based on a draft version of the AS-4 World Modeling Knowledge Store standard, with modifications as needed.

Challenges

There are two approaches that can be taken when communicating with the external WMKS: a synchronous approach and an asynchronous approach. In the synchronous approach, when a command is sent to the WMKS the client waits until it is completed before continuing operation. In the asynchronous approach, the client sends the command and continues processing without waiting; the client may or may not need confirmation that the command actually completed. In general, the synchronous approach is easier to understand and implement. However, when dealing with real-time systems, transmission latency becomes a major concern. If latency is high, the client may wait a long time before the command is completed and real-time operation is affected. The effect of transmission latency is decreased in an asynchronous approach, but such an approach introduces problems with data synchronization, as there will be some period of time when the WMKS is out of date.

In the presented research, real-time operation was deemed critical. Delays in processing could result in one of two possibilities: LADAR data is skipped or lost during the wait period, or the system processes old LADAR data that no longer represents the current environment. Neither scenario was desirable, and either could lead to unsafe behavior by the autonomous vehicle. Therefore, an asynchronous approach to WMKS communication was taken. To combat the issues of data synchronization, a local storage cache was used to store objects of immediate interest. All objects within range of the LADAR were stored in the local cache, and the cache was synchronized periodically with the WMKS. In addition, a confirmation scheme was used to improve robustness. In this scheme, every command elicits a response from the WMKS, which allows the client to recover from synchronization errors.

Transmission loss was another major concern when dealing with WMKS communication. If a command is sent to the WMKS and the confirmation is not received, it is impossible to determine if the failure was a result of transmission loss or of some problem in the WMKS. Improper handling of the failure could lead to discrepancies between the local cache and the WMKS. Say a create command is sent to the WMKS but is not received due to transmission loss. If the client resends the create command, the system recovers; however, if the client assumes the command was successful, the cache and the WMKS are out of sync. Conversely, if the command completed successfully but the confirmation was never received, the WMKS and the cache remain in sync if the client assumes transmission loss; however, if the client resends the create command, the WMKS contains two copies of the object and becomes out of sync with the local cache.

The issues surrounding transmission loss were considered to be outside the scope of the research presented here. Therefore, to deal with the described challenges, guaranteed data delivery was assumed. If a command sent to the WMKS did not execute, it was assumed that the failure was caused by the WMKS and not by transmission loss.
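The sketch below illustrates the asynchronous, confirmation-based pattern described above: commands are sent without blocking, each pending command is remembered, and the matching response either confirms it or triggers a resend. All class, method, and message names here are illustrative; the actual system used JAUS messaging and the message set described in the next section.

```python
import itertools

class AsyncWMKSClient:
    """Sketch of asynchronous WMKS access with a local cache and confirmations."""
    def __init__(self, transport):
        self.transport = transport      # assumed to expose send(message)
        self.local_cache = {}           # object_id -> object of immediate interest
        self.pending = {}               # request_id -> command awaiting a Report
        self.request_ids = itertools.count(1)

    def send_command(self, command):
        """Send a command and keep processing; do not wait for the response."""
        request_id = next(self.request_ids)
        command["request_id"] = request_id
        self.pending[request_id] = command
        self.transport.send(command)
        return request_id

    def on_report(self, report):
        """Handle a Report message from the WMKS (confirmation scheme)."""
        command = self.pending.pop(report["request_id"], None)
        if command is None:
            return                      # stale or unknown confirmation
        if not report.get("success", True):
            self.send_command(command)  # recover by reissuing the command
```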

Messaging

To add, modify, or delete objects in the WMKS, a mechanism for transmitting data between the computing resources was required. It was determined that a messaging scheme would be used to send commands to and receive responses from the WMKS. There were three main commands the WMKS needed to handle: adding an object, modifying an object, and querying an object. It was determined that objects would not actually be deleted from the WMKS but would instead be updated to a special status indicating deletion. This functionality provided an easy method for checking the correctness of the SLAM+DATMO system: if objects were deleted, it would be impossible to determine whether the system erroneously deleted a detected object or whether the object was never detected. The six messages needed to handle the required functionality are described below. However, before the messages can be introduced, an overview of the object representation within the messages is needed.

Message object description

The objects stored within a WMKS can vary widely depending on the requirements of the implemented system, which introduces an interesting challenge when considering the messaging between a client and the WMKS. One approach would be to implement custom messages for every possible object type the WMKS supports. However, this approach leads to major changes whenever support for a new object or modifications to existing objects are required. A second approach would be to have a limited number of general messages that can be used for any object type. The object would be described within the message using a format that can be generated, parsed, and understood by both the client and the WMKS. The second approach is used within the implemented WMKS messaging scheme. An object is represented by a series of properties or attributes, where each attribute is described using three fields (a sketch of this representation follows the list).

ATTRIBUTE ID. An enumeration that uniquely identifies what the attribute represents, e.g., height, weight, centroid, color, outline.

ATTRIBUTE DATA TYPE. An enumeration that uniquely identifies the type of the data used to represent the attribute, e.g., double, integer, point, polygon, list.

ATTRIBUTE DATA. The actual value of the attribute, which can also be a list of attributes; e.g., a polygon is a list of points.
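A minimal sketch of this generic attribute representation is shown below; the enumeration members and class layout are illustrative, not the actual AS-4 encoding.

```python
from dataclasses import dataclass
from enum import IntEnum
from typing import List, Tuple, Union

class AttributeId(IntEnum):        # illustrative members only
    CENTROID = 1
    OUTLINE = 2
    EXISTENCE_CONFIDENCE = 3

class AttributeDataType(IntEnum):  # illustrative members only
    INTEGER = 1
    DOUBLE = 2
    POINT = 3
    POLYGON = 4                    # encoded as a list of points

Point = Tuple[float, float]

@dataclass
class Attribute:
    attribute_id: AttributeId
    data_type: AttributeDataType
    data: Union[int, float, Point, List[Point]]

# A detected object's outline carried as one attribute of a message object;
# note the closed five-point polygon, as in Table 3-3.
outline = Attribute(AttributeId.OUTLINE, AttributeDataType.POLYGON,
                    [(0.0, 0.0), (2.0, 0.0), (2.0, 1.0), (0.0, 1.0), (0.0, 0.0)])
```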

Enumerations are used for the Attribute ID and the Attribute Data Type as a means of reducing the message size. String descriptions could be used instead of enumerations but would lead to larger messages and are more difficult to parse.

Create Knowledge Store Objects message

The Create Knowledge Store Objects message is used to add new objects to the WMKS and consists of three mandatory fields.

MESSAGE PROPERTIES. The message properties field is used to request a certain behavior from the WMKS when it receives the Create message. There are five possible behaviors based on the value of the field.
- NO RESPONSE. The WMKS will do nothing after the objects have been created.
- CONFIRM CREATE. The WMKS will send a Report Knowledge Store Objects Creation message after the objects are created.
- CONFIRM OBJECT COUNT. The WMKS will indicate the number of objects that were successfully created in the Report message.
- CONFIRM WMKS OBJECT IDS. The WMKS will provide the list of the WMKS Object IDs for all the created objects.
- CONFIRM WMKS OBJECTS. The WMKS will provide the list of objects that were created, including their WMKS Object IDs.

REQUEST ID. A client-set enumeration that can be used to correlate a Create message with its corresponding Report message.

WMKS OBJECT LIST. The list of objects that need to be created.

Report Knowledge Store Objects Creation message

The Report Knowledge Store Objects Creation message is used in response to the Create Knowledge Store Objects message to indicate if the requested object additions were successful. The message consists of two mandatory fields and three optional fields.

PRESENCE VECTOR. This field indicates which of the optional fields are provided in the message.

REQUEST ID. The Request ID of the original Create message.

WMKS OBJECT COUNT. An optional field that indicates the number of objects that were successfully created.

WMKS OBJECT ID LIST. An optional field that provides the WMKS Object IDs of all the created objects in the order they were provided in the Create message. Objects that could not be created are indicated with a WMKS Object ID of zero.

WMKS OBJECT LIST. An optional field that lists all the WMKS Objects that were successfully created. Objects that were not created are not listed.

Modify Knowledge Store Objects message

The Modify Knowledge Store Objects message is used to update existing objects in the WMKS and consists of four mandatory fields.

MESSAGE PROPERTIES. The message properties field is used to request a certain behavior from the WMKS when it receives the Modify message. There are two possible behaviors based on the value of the field.
- NO RESPONSE. The WMKS will do nothing after the objects have been modified.
- CONFIRM MODIFY. The WMKS will send a Report Knowledge Store Objects Modify message after the objects have been modified.

REQUEST ID. A client-set enumeration that can be used to correlate a Modify message with its corresponding Report message.

QUERY FILTER. A description of the objects that need to be modified within the WMKS. The filter is constructed using the Attribute ID, Attribute Data Type, and Attribute Value fields but also allows for complex queries involving AND, OR, and NOT operations.

ATTRIBUTE LIST. A list of attributes that need to be updated for all objects that match the query filter.

Report Knowledge Store Objects Modify message

The Report Knowledge Store Objects Modify message is used in response to the Modify Knowledge Store Objects message to indicate if the requested object updates were successful. The message consists of three mandatory fields.

MODIFY SUCCESS. A boolean value that indicates if the query filter in the Modify message was valid.

REQUEST ID. The Request ID of the original Modify message.

WMKS OBJECT COUNT. The number of objects that were modified.

Query Knowledge Store Objects message

The Query Knowledge Store Objects message is used to search for existing objects in the WMKS and consists of four mandatory fields and one optional field.

PRESENCE VECTOR. This field indicates which of the optional fields are provided in the message.

MESSAGE PROPERTIES. The message properties field is used to request a certain behavior from the WMKS when it receives the Query message. There are two possible behaviors based on the value of the field.
- NO ADDITIONAL PROCESSING. The WMKS will return only the objects that match the specified query.
- RETURN DEPENDENT OBJECTS. The WMKS will return the dependents for the objects that match the specified query. A dependent is a WMKS object that is attached to another WMKS object. For example, if the WMKS stores Wheel objects and Car objects separately, then a Wheel object could be a dependent of a Car object, since a car has four wheels.

REQUEST ID. A client-set enumeration that can be used to correlate a Query message with its corresponding Report message.

QUERY FILTER. A description of the objects that need to be reported from the WMKS. The filter is constructed using the Attribute ID, Attribute Data Type, and Attribute Value fields but also allows for complex queries involving AND, OR, and NOT operations.

RETURN FILTER. An optional field that lists the object attributes that should be returned in the Report message. The use of the Return Filter allows a client to deal only with the attributes important to it and helps reduce message size.

Report Knowledge Store Objects message

The Report Knowledge Store Objects message is used in response to the Query Knowledge Store Objects message to list the objects that matched the query. The message consists of two mandatory fields.

REQUEST ID. The Request ID of the original Query message.

WMKS OBJECT LIST. The list of WMKS Objects that match the objects within the Query message.
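To make the message flow concrete, the sketch below builds a Create command and a query filter in the generic attribute style described above. The dictionary-based encoding is illustrative only; the real system serialized these messages per the draft AS-4 standard over JAUS.

```python
# Illustrative Create message requesting confirmation with WMKS Object IDs.
create_msg = {
    "message": "CreateKnowledgeStoreObjects",
    "message_properties": "CONFIRM_WMKS_OBJECT_IDS",
    "request_id": 17,
    "wmks_object_list": [
        # One object described by its attributes (id, data type, data).
        {"attributes": [("OUTLINE", "POLYGON",
                         [(0.0, 0.0), (2.0, 0.0), (2.0, 1.0),
                          (0.0, 1.0), (0.0, 0.0)])]},
    ],
}

# Illustrative query filter: objects whose existence confidence is at
# least the save threshold (600, Table 3-6) AND that are not marked
# DUPLICATE. AND/OR/NOT nodes combine attribute comparisons.
query_filter = {
    "AND": [
        {"attribute": "EXISTENCE_CONFIDENCE", "op": ">=", "value": 600},
        {"NOT": {"attribute": "IDENTIFICATION_STATUS", "op": "==",
                 "value": "DUPLICATE"}},
    ],
}
```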

Updated SLAM+DATMO Object Representation

When using the WMKS, additional fields must be added to the previously discussed SLAM+DATMO object representations. These fields fall into one of two categories: fields that the SLAM+DATMO system uses to monitor the status of the object in the WMKS, and fields that are required by the WMKS.

Monitoring the object status in the WMKS is important for two reasons. First, it prevents duplicate objects. If the SLAM+DATMO system does not know that an object has been sent to storage, it may attempt to store the object again. The WMKS is a simple storage mechanism and does not check for duplication or perform any form of object resolution. Therefore, it is the SLAM+DATMO system's responsibility to ensure duplication does not occur, especially since duplication adversely affects system performance. The addition of a storage status field serves this purpose. When an object is sent to the WMKS, its status is changed to SENT_TO_STORAGE, which tells the system that it should not send the object again. When the object is successfully stored in the WMKS, its status is changed to STORED, and the system is then able to send modification requests.

The second reason for monitoring the object's status is to minimize traffic between the WMKS and the SLAM+DATMO system. The SLAM+DATMO system can potentially run at 10 Hz when using a single LADAR, and objects may be updated at that rate. If the objects in the WMKS were updated every cycle, there would be a large amount of traffic between the WMKS and the SLAM+DATMO system, which would prevent requests from other components from being processed. To minimize data traffic, three fields were introduced: update count, update time, and confirm time. The update count field was used to count the number of times the object had been updated by the SLAM+DATMO system, while the update time field tracked the last time a request to update the object was sent to the WMKS. The confirm time field was used to track the time the last WMKS update was successfully performed.

There were two criteria for updating an object in the WMKS. Updates were requested immediately when certain special fields were changed, such as a change of the identification status to DUPLICATE. Updates to other fields did not immediately generate a request but were sent to the WMKS if the last update request had completed successfully, the object had been modified numerous times, and a sufficient period of time had passed since the last confirmed update (see the sketch at the end of this section). The additional fields and threshold values used are given in Tables 3-4 and 3-6, respectively. These additional monitoring fields were not stored with the object in the WMKS and were used only internally by the SLAM+DATMO system.

In addition, the WMKS adds two fields to every stored object, which are listed in Table 3-5. The WMKS ID field is a unique ID for all objects within the WMKS, while the WMKS Object Type field is used to determine what type of object is being described. The WMKS Object Type field is important for message parsing and for converting WMKS objects to the SLAM+DATMO representation, while the WMKS ID field is never used, as the Object ID field is more useful.
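The throttling rule for non-urgent updates might look like the following sketch, using the Object Update Count (10) and Object Update Time (1.0 s) values from Table 3-6; the field and function names are illustrative.

```python
from dataclasses import dataclass

OBJECT_UPDATE_COUNT = 10   # modifications required before a WMKS update
OBJECT_UPDATE_TIME = 1.0   # seconds between confirmed updates of one object

@dataclass
class WMKSMonitor:
    """The Table 3-4 monitoring fields kept with each stored object."""
    update_count: int = 0      # times modified since the last WMKS update
    update_time: float = 0.0   # when the last update request was sent
    confirm_time: float = 0.0  # when the last update was confirmed

def should_send_update(obj: WMKSMonitor, now: float) -> bool:
    """Decide whether a non-urgent update request should go to the WMKS.
    Special-field changes (e.g., status set to DUPLICATE) are pushed
    immediately and bypass this check."""
    last_request_confirmed = obj.update_time <= obj.confirm_time
    modified_enough = obj.update_count >= OBJECT_UPDATE_COUNT
    waited_long_enough = (now - obj.confirm_time) >= OBJECT_UPDATE_TIME
    return last_request_confirmed and modified_enough and waited_long_enough
```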

Laser Range Finder Fusion

A popular approach to LADAR fusion in the past has been the use of grid-based methods, where a grid is generated and updated by both LADAR independently. Although reasonable, grid-based approaches were deemed inappropriate within the context of the research presented here, which seeks to avoid complex image processing methods. Recently, some work using feature-based fusion approaches has been explored, but it has been limited to static environments using a stationary platform. The work presented here extends the shared-object fusion method outlined in [45] to include the use of a moving platform in the presence of a dynamic environment.

Data from each LADAR is processed separately but synchronously. The local object cache was shared between the two LADAR but was only updated by one LADAR at a time, and data from one LADAR was only processed if no processing was being performed on the other. Objects generated in the current scan were only compared to the free-space polygon generated by the last scan from the same LADAR. Therefore, it was important that both LADAR be processed before either one is processed again. The synchronous nature of the fusion method alleviated the challenges of data coherence and conflict resolution which occur with asynchronous approaches. To further simplify the problem, it was assumed that both LADAR scan along the same plane, at a zero-degree inclination to the vehicle's x-y plane. Because each LADAR is processed independently, it was not assumed that the scan data was taken at the vehicle's current position. Instead, a history of the vehicle's position was maintained, and the LADAR scan time was used to determine the global position of the LADAR when the scan was taken.
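A pose history of this kind can be as simple as the following sketch, which returns the most recent recorded pose at or before a given scan timestamp (interpolating between neighboring poses would be a straightforward refinement); the class and method names are illustrative.

```python
import bisect

class PoseHistory:
    """Timestamped vehicle poses, queried by LADAR scan time."""
    def __init__(self):
        self.times = []   # ascending GPOS timestamps
        self.poses = []   # matching (x, y, yaw) tuples

    def add(self, t: float, pose: tuple) -> None:
        """Record a GPOS pose; timestamps are assumed to arrive in order."""
        self.times.append(t)
        self.poses.append(pose)

    def pose_at(self, scan_time: float) -> tuple:
        """Return the newest pose taken at or before scan_time."""
        i = bisect.bisect_right(self.times, scan_time) - 1
        return self.poses[max(i, 0)]
```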

The research approach was presented in this chapter. The discussion began with a description of the SLAM+DATMO system, including the object representation, detection algorithms, and position estimation technique. Next, the methodology for using the WMKS was presented, along with a description of the required messaging. Finally, the scheme for using multiple LADAR was outlined. In the next chapter the testing methodology is introduced.

Table 3-1. Fields used to represent a static object

Field Name | Field Data Type | Description
Object ID | Unsigned Integer | An identifier that uniquely enumerates the objects detected by the system.
Geometry | List of Line Segments | A list of line segments that represents the shape of the object.
Centroid | Point | A point that represents the centroid of the object geometry.
Existence Confidence | Integer | A confidence value that indicates if the object still exists or not.
Moving Confidence | Integer | A confidence value that indicates if the object is moving or not.
Identification Status | Byte | The object's identification status. The status can be one of four values: UNKNOWN, KNOWN, MISSING, and DUPLICATE.

Table 3-2. Fields used to represent a line segment

Field Name | Field Data Type | Description
Start Point | Point | The first point of the line segment.
End Point | Point | The last point of the line segment.
Variance | Double | The variance of the points used to generate the line.

Table 3-3. Fields used to represent a moving object

Field Name | Field Data Type | Description
Object ID | Unsigned Integer | An identifier that uniquely enumerates the moving objects detected by the system.
Geometry | List of Line Segments | A list of line segments that represents the shape of the object.
Length | Double | The length of the moving object.
Width | Double | The width of the moving object.
Bounding Box | List of Points | A list of points that represents the bounding box of the moving object. The list must contain five points, where the first and last points are the same.
Centroid | Point | A point that represents the centroid of the moving object geometry.
Existence Confidence | Integer | A confidence value that indicates if the object still exists or not.
Moving Confidence | Integer | A confidence value that indicates if the object is moving or not.
Identification Status | Byte | The moving object's identification status. The status can be one of four values: UNKNOWN, KNOWN, MISSING, and DUPLICATE.

Table 3-4. Additional fields needed when using the WMKS

Field Name | Field Data Type | Description
Storage Status | Byte | The object's storage status in the WMKS. The status can be one of three values: Not Stored, Sent to Storage, and Stored.
Update Count | Integer | The number of times the object has been modified.
Update Time | Double | The last time the object was updated in the WMKS.
Confirm Time | Double | The time the last WMKS update was confirmed.

Table 3-5. Fields added to all WMKS Objects

Field Name | Field Data Type | Description
WMKS ID | Unsigned Integer | An identifier that uniquely enumerates the objects within the WMKS.
WMKS Object Type | Byte | An enumeration that indicates the type of the object that is being represented.

Table 3-6. Threshold values and parameters used in the SLAM+DATMO system

Parameter Name | Value | Description
Minimum Scan Distance | 0.0 m | The minimum distance a point can be from the LADAR for it to be considered a valid point.
Maximum Scan Distance | 200.0 m | The maximum distance a point can be from the LADAR for it to be considered a valid point.
Minimum Cluster Distance | 0.25 m | The minimum distance between two scan points for them to be considered part of the same cluster.
Line Break Distance | 0.25 m | The maximum distance a point can be from an extracted line segment before the line segment will be subdivided.
Line Segment Variance | 0.40 m | The variance assigned to every line segment of an object. This value is used when generating the object enclosures.
Laser Angle Error | 0.16 degrees | The error associated with the angular accuracy of the LADAR.
Region Distance Offset | 0.60 m | The buffer distance between the free space polygon and the extracted objects.
Minimum Moving Object Length | 4.0 m | The minimum length of a moving object.
Minimum Moving Object Width | 2.0 m | The minimum width of a moving object.

Table 3-6. Continued.

Parameter Name | Value | Description
Maximum Object Confidence | 1000 | The maximum value for the object's existence confidence.
Minimum Object Confidence | -1000 | The minimum value for the object's existence confidence.
Save Object Confidence | 600 | The confidence value at which an object will be stored in the WMKS.
Delete Object Confidence | -600 | The confidence value at which an object will be removed from the SLAM+DATMO cache if it is not stored in the WMKS.
Initial Object Confidence | 300 | The initial existence confidence of a detected object.
Step Detected Confidence | 50 | The step value of the existence confidence when an object is detected.
Step Occluded Confidence | -10 | The step value of the existence confidence when an object is found to be occluded.
Step Missing Confidence | -50 | The step value of the existence confidence when an object is not detected and is not occluded.
Minimum Occluded Confidence | 0 | The minimum value the existence confidence can take when an object is occluded.
Free Space Overlap Threshold | 50% | The percentage of overlap between the free space polygon and the object enclosure that must occur for an object to be considered in free space.
Moving Object Enclosure Overlap Threshold | 10% | The percentage of overlap between the free space polygon and the object enclosure that must occur for an object to be considered moving.
Moving Object Segment Overlap Threshold | 0% | The percentage of overlap between the free space polygon and the object line segments that must occur for an object to be considered moving.
Minimum Moving Confidence | -100 | The minimum value for the object's moving confidence.
Maximum Moving Confidence | 100 | The maximum value for the object's moving confidence.
Initial Moving Confidence | 0 | The initial moving confidence of a detected object.
Step Moving Confidence | 10 | The step value of the moving confidence when an object is detected to be moving.
Step Static Confidence | -10 | The step value of the moving confidence when an object is detected to be static.

Table 3-6. Continued.

Parameter Name | Value | Description
Save Moving Confidence | 30 | The moving confidence value at which an object can be added to the WMKS.
Object Update Count | 10 | The number of times an object must have been updated before it will be updated in the WMKS.
Object Update Time | 1.0 second | The time between updates of a single object in the WMKS.

Figure 3-1. Flowchart outlining the presented approach.

Figure 3-2. Example of the clustering process. If the distance r_AB between points A and B is within a threshold distance, the points are considered part of the same cluster.

Figure 3-3. The Iterative End Point Fit (IEPF) algorithm. The algorithm searches for the point Pj with the greatest distance from the line through P0 and Pn. If the distance is greater than the threshold T, the line P0Pn is broken into two lines P0Pj and PjPn, and the process is repeated for the two new lines.
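Since Figure 3-3 summarizes the whole procedure, a short recursive sketch of IEPF is given below; the function name and point representation are illustrative.

```python
import math

def iepf(points, threshold):
    """Iterative End Point Fit: split an ordered run of scan points into
    line segments wherever a point lies farther than `threshold` from the
    chord joining the run's end points."""
    segments = []

    def split(lo, hi):
        (x0, y0), (x1, y1) = points[lo], points[hi]
        chord = math.hypot(x1 - x0, y1 - y0)
        j, dmax = lo, 0.0
        for i in range(lo + 1, hi):
            x, y = points[i]
            # Perpendicular distance from point i to the chord lo->hi.
            d = (abs((x1 - x0) * (y0 - y) - (x0 - x) * (y1 - y0)) / chord
                 if chord > 0.0 else math.hypot(x - x0, y - y0))
            if d > dmax:
                j, dmax = i, d
        if dmax > threshold:
            split(lo, j)        # recurse on the two sub-runs
            split(j, hi)
        else:
            segments.append((points[lo], points[hi]))

    split(0, len(points) - 1)
    return segments
```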

Figure 3-4. Example of the moving object detection method. The grey area represents the free space region detected using the LADAR, while the white area represents the occluded region. When the object moves it overlaps the free space region and is identified as a moving object.

Figure 3-5. An example of the free space polygon generation method. The blue circle represents the LADAR, while the red lines represent the detected objects. The green lines outline the generated free space polygon.

Figure 3-6. Free space region generated around the vehicle. A) The extracted objects are used to generate the free space region. B) The free space region overlaps objects detected in previous scans. The objects overlap the free space polygon due to sensor noise or removal from the environment.
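The free-space tests behind this classification reduce to polygon overlap fractions. A sketch using Shapely (Python bindings to the GEOS library the system already relies on) is shown below; the function names and threshold usage are illustrative of the checks against the Table 3-6 values.

```python
from shapely.geometry import Polygon

def overlap_fraction(enclosure: Polygon, free_space: Polygon) -> float:
    """Fraction of an object's enclosure lying inside the free-space polygon."""
    if enclosure.is_empty or enclosure.area == 0.0:
        return 0.0
    return enclosure.intersection(free_space).area / enclosure.area

# Illustrative use of the Table 3-6 thresholds: an enclosure more than
# 50% inside free space is treated as being in free space (a candidate
# missing object); more than 10% overlap marks the object as moving.
def in_free_space(enclosure: Polygon, free_space: Polygon) -> bool:
    return overlap_fraction(enclosure, free_space) > 0.50

def violates_free_space(enclosure: Polygon, free_space: Polygon) -> bool:
    return overlap_fraction(enclosure, free_space) > 0.10
```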

Figure 3-7. Generation of the oriented bounding box used for moving object representation (panels A-D).

Figure 3-8. Example of the enclosures generated around the line segments. A) Enclosures generated in the LADAR's polar coordinate system. B) Enclosures generated using the GEOS library's buffer function.

Figure 3-9. Possible scenarios that can occur after object matching.

Figure 3-10. Pseudo-cluster points (black) are generated from the line segments of the stored objects.
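Each pseudo-cluster point is the intersection of a LADAR ray with a stored line segment. A sketch of that intersection test, assuming the LADAR sits at the origin of the scan frame, is given below; the names are illustrative.

```python
import math

def ray_segment_range(angle, p0, p1):
    """Range along a LADAR ray (origin at (0, 0), heading `angle`) to its
    intersection with the stored segment p0-p1, or None if they miss."""
    dx, dy = math.cos(angle), math.sin(angle)
    ex, ey = p1[0] - p0[0], p1[1] - p0[1]
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-12:
        return None                      # ray is parallel to the segment
    # Solve r*(dx, dy) = p0 + s*(ex, ey) for ray range r and segment
    # parameter s.
    r = (p0[0] * ey - p0[1] * ex) / denom
    s = (p0[0] * dy - p0[1] * dx) / denom
    if r >= 0.0 and 0.0 <= s <= 1.0:
        return r                         # pseudo-point lies at r*(dx, dy)
    return None
```

Sweeping `angle` over the LADAR's angular resolution and keeping the returned ranges reproduces the pseudo-cluster of Figure 3-10.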

Figure 3-11. Object resolution example. A) Pseudo-cluster points (black) are generated along the stored objects (red). B) The current scan points (green) are associated with the pseudo-cluster points based on the closest distance. C) Points are added to a new cluster (orange) based on their position along the object. D) A new object is generated from the new cluster, which produces a better representation of the object.

CHAPTER 4
TESTING METHODOLOGY

Informal testing was performed throughout the development process to ensure the correctness and validity of the implemented algorithms. Testing approaches such as unit tests, integration tests, and regression tests were commonly run. However, a formalized test plan is necessary to evaluate the final performance of the overall system and provide a framework for analysis. In this chapter, the testing methodology used in the completed system is outlined. The chapter begins with a description of the platform used for generating the test data and is followed by an outline of the test plan. Finally, the metrics used for evaluation are presented.

Test Platform

The Urban NaviGator (Figure ) is an autonomous sport utility vehicle that was developed by the Center for Intelligent Machines and Robotics (CIMAR) for the 2007 Defense Advanced Research Projects Agency (DARPA) Urban Challenge and served as the platform for collecting the data for developing and testing the Simultaneous Localization, Mapping, and Moving Object Tracking (SLAM+DATMO) system presented. It was built on a 2006 Toyota Highlander Hybrid chassis, which was modified significantly to provide the functionality required for autonomous navigation and obstacle avoidance. A hybrid system was chosen to exploit the internal electrical system and reduce the workload required to implement a power management system. In this section the hardware and software details of the test platform are provided.

Hardware

Actuation. The actuation required for vehicle control was implemented using two methods. Steering and shifting control was implemented using Animatics SmartMotors, which were connected to the steering column and the gear shifter, respectively. Throttle and brake control was enabled through the existing vehicle drive-by-wire system via a custom controller that passed fake throttle and brake data to the vehicle. Motor position commands and desired throttle and brake efforts were sent to their respective controllers via an attached tablet computer.

Sensor package. The Urban NaviGator contains a comprehensive sensor package to provide data for localization, terrain estimation, and obstacle detection. Localization was accomplished by combining data from three Global Positioning System (GPS) units, a General Electric (GE) Aviation North Finding Module (NFM), and wheel encoders using a Kalman filter to provide estimates of the vehicle position and orientation in the global frame. Six SICK LMS 291 and two SICK LD-LRS1000 laser range finders (LADAR) are mounted on the vehicle and are used primarily for terrain estimation and obstacle detection. There are also four Matrix Vision BlueFox cameras mounted to the vehicle, which are used for path-finding and lane line detection. Although there is a myriad of sensors present on the platform, only the localization system and the two SICK LD-LRS1000 range finders were used for the presented research. The fields of view of the two SICK LADAR are shown in Figure 4-1.

Computing resources. In addition to the tablet computer required for actuation control, a distributed computing package was implemented. The rear row of seats was replaced with a custom computer rack that could support up to twelve ATX motherboards. Each motherboard contained an AMD X2 4600 processor and an eighty-gigabyte hard drive. The computers ran a mixture of Ubuntu 8.04 Linux and Windows XP operating systems and were connected using a gigabit Ethernet network. An in-car development environment was provided using a dual-head keyboard-video-mouse (KVM) switch, which connected the installed computers to either of two rear-seat workstations. Remote computer access was also provided using a Cisco Aironet 350 Series wireless bridge pair.

Software

There are four software elements required for the presented research: the LADAR data server, the Global Position and Orientation Sensor (GPOS) component, the World Model Knowledge Store (WMKS), and the SLAM+DATMO system. The LADAR data server communicates with the SICK LD-LRS1000 range finders over a Controller Area Network (CAN) data interface and rebroadcasts the data to all clients via IP multicast. The use of the data server provides two useful functions: first, it allows multiple software components to use the LADAR data, and second, it allows software to be easily moved from one computer to another without modification to the existing code. The GPOS component provides the position of the vehicle in latitude, longitude, and altitude measurements using the localization sensors described above. Details on the WMKS and the SLAM+DATMO system have been given previously. Data exchange between GPOS, the WMKS, and the SLAM+DATMO system was achieved through the use of the Joint Architecture for Unmanned Systems (JAUS) messaging standard.

Test Plan

Testing was performed using recorded data taken with the Urban NaviGator and reproduced in a custom simulation environment. The LADAR and position data were recorded simultaneously, which allowed the data to be replayed as if happening in real time or slowed down to facilitate testing. There was no change in the data format between the simulation environment and the actual platform, which means the developed algorithms should be able to run on the robotic platform without modification. All testing was conducted under Ubuntu 8.04 LTS on a laptop using an Intel 1.67 GHz T2300 Duo Core with 4 GB of RAM. Testing was divided into multiple stages, with a discussion of each testing stage given below.

Single LADAR Testing

The first testing stage used a single LADAR to test the different elements of the system. In addition to providing a series of baselines that were used to compare the correctness of the LADAR fusion approach, this testing stage allowed for detecting and fixing simple problems during development and showed proof of concept of the presented approaches.

Static object detection and tracking

First, the clustering, line extraction, and static object matching algorithms were tested to evaluate correctness and repeatability. It was important that the clustering, line extraction, and matching algorithms produce repeatable results, or the system output could vary widely between tests. Therefore, they were first tested in an environment in which both the vehicle and all objects were static. However, sensor noise in the LADAR data influences the repeatability of the algorithms; therefore, the first series of tests fixed the LADAR data fed to the system. After these algorithms were evaluated, sensor noise was introduced while still maintaining a static platform and environment. This allowed testing of the object update algorithms, as sensor noise would cause an object to be detected differently during each scan. Finally, the influence of platform motion on static object detection and tracking was evaluated by moving the vehicle through a static environment.

Moving object detection and tracking

Next, the moving object detection and tracking algorithms were evaluated. The first step involved testing the free space violation method to determine its effectiveness at detecting moving objects. The ability of the system to track the objects over time was also evaluated. To remove the influence of platform motion on the test results, testing was performed with the platform at a fixed position and with static and moving objects in the environment. Once the effectiveness of the moving object detection and tracking system was evaluated, testing was done with the vehicle moving through a dynamic environment to observe the effect of platform motion on the algorithms.

Position estimation

All previous testing was performed without using the position estimation algorithm, to provide some evaluation of the difference between using and not using position estimation in the system. Therefore, during this testing stage many of the same tests that were previously performed were repeated. First, the accuracy of the position estimation system was evaluated with the platform in a fixed position and with no moving objects in the environment. Testing without sensor noise was performed to provide a baseline of the position estimation algorithm, followed by testing in the presence of sensor noise to evaluate its influence on accuracy. Next, the LADAR data was fixed while the GPOS-provided position was updated as if the vehicle were moving through the environment. This test evaluated the position estimation algorithm's accuracy when a known offset was applied. The estimation algorithms were also tested by moving the vehicle through a static environment and checking that they could correctly track the GPOS estimate and make corrections as necessary. Finally, the algorithms were tested by keeping the vehicle in a fixed location and introducing a moving object in the environment to evaluate the effect on the position estimation system.

World Model Knowledge Store access without position estimation

The next set of testing involved evaluating the ability of the system to add and update objects in the WMKS and testing its ability to identify WMKS objects that no longer exist in the environment. This testing was decomposed into three stages. First, the vehicle and environment were both held static while objects were detected and added to the WMKS. The objects stored in the WMKS were then examined for correctness. Next, some objects were modified and new objects were introduced to evaluate the system's ability not only to retrieve stored objects at startup but also to modify existing WMKS objects and add new objects to an existing database. Next, the performance of adding and modifying objects in the WMKS was evaluated while the platform moved through a static environment. This test was performed to ensure that objects could be successfully updated in the WMKS as their representations were modified through platform motion, and to assess the effect of the platform motion on retrieving objects from the WMKS. Finally, testing was performed to evaluate the accuracy of adding static objects in the presence of moving objects. It was decided that only static objects would be stored in the WMKS, to reduce research complexity. The system assumed that all objects from the WMKS were static, and, therefore, it was important that moving objects not be added.

World Model Knowledge Store access with position estimation

When objects are added to the WMKS their points are represented in the global frame. However, due to error in GPS, there is no guarantee that if a vehicle is placed in the same position numerous times, the global position will always be exactly the same. One aspect of the system presented is that it should be able to correct for differences between the position of stored static objects and the position of the objects sensed in the environment. This stage of testing evaluated that capability. Testing was performed in two stages. First, both the vehicle and the environment were held static, with the stored WMKS objects slightly offset from the sensed objects due to GPOS differences. Next, the vehicle was moved through the static environment. The ability of the system to correctly calculate the position error and maintain the corrected position was evaluated.

Multiple LADAR Testing

After testing with a single LADAR was completed, a number of tests were performed to evaluate the fusion scheme presented. The effectiveness of the fusion scheme was compared to the results when using a single LADAR. Not all tests that were run with a single LADAR were re-run using multiple LADAR, as the exact same algorithms were used in both cases. The purpose of this testing stage was to evaluate the fusion scheme's effectiveness without having to modify the algorithms.

Metrics

A number of metrics were used to evaluate the performance of the system at each stage of testing and to identify areas for improvement. In general, the performance of detection systems has been evaluated by comparing the system output against the results expected by a human, and the same approach was taken in this research. After each run, the number of objects generated by the system was counted and compared to the number of objects that would be expected by a human. This was done for both the static and moving object detection systems and provides a general qualitative feel for the accuracy of the system. The time taken for the algorithm to execute was also evaluated and analyzed, and the average time for each function was logged. Finally, an analysis of the position estimation system performance was done by tracking the changes of the corrected and uncorrected position estimates from the vehicle origin.

In this chapter the methodology for testing the implemented approach was presented. First, the hardware and software capabilities of the test platform were outlined, followed by a description of the developed test plan. Finally, the metrics for evaluating the performance of the research algorithms were discussed. In the next chapter the generated results are presented.

Figure 4-1. Scan regions for the two SICK LD-LRS1000 laser range finders used for testing. The vehicle is represented as the black rectangle in the center of the image.

CHAPTER 5
RESULTS

In this chapter the results generated from the test plan laid out above are discussed. Test data was taken from the Gainesville Raceway, located in northwest Gainesville, FL. In addition to having a large open area with a number of roads that simulate the road network of an urban environment, the site is closed off from the general public, which allows the presence of moving objects to be controlled. Although there are a limited number of buildings, the site was more than adequate for the required testing. The results from each stage of testing outlined in the test plan from Chapter 4 are presented and discussed below.

Single LADAR Testing

As mentioned, the first testing stage involved using a single laser range finder (LADAR) to provide a series of baselines. Single LADAR testing was performed using both the driver- and passenger-side LADARs individually to check if there were any major differences in algorithm performance between LADAR. Figure 5-1 shows satellite images of portions of the test site used during testing, with overlays of actual point data.

Static Object Detection and Tracking

Figure 5-2 shows the output from the object detection system when given a set of raw data points. The system is able to successfully detect objects from the scan points, and when run without the presence of sensor noise, the system will consistently detect the same objects. Figure 5-3 shows an example of the objects detected by the system using the passenger-side LADAR with a static environment and a static vehicle, without the presence of sensor noise. Table 5-1, which is based on the results seen in Figure 5-3, shows that five static objects were expected while nine objects were detected.

Although it appears that the system detected many more objects than expected, objects 9, 10, and 11 were generated from a single chain-link fence. It is expected that some LADAR beams went through the holes in the fence and caused the incorrect object separation. Also, the detection of objects 3 and 4 was caused by incorrect sensor placement, which caused the sensor to hit the ground. When taking these factors into consideration, the performance of the detection algorithm is actually quite good. Object matching without sensor noise was found to consistently match the correct stored and new objects. In general, the system performs well without the presence of sensor noise.

Figure 5-4 shows the execution times for running the algorithm without the presence of sensor noise in a static environment with a static vehicle and without the use of the position estimation algorithm or the knowledge store. The average time taken for execution is about 0.13 seconds, which is close to real time given that the LADAR collect data at 10 Hz. However, this represents the best-case run time for the system and indicates that real-time execution of the complete system is not possible with the current algorithm and the computing resources used for testing. Figure 5-5 shows the average time taken for different functions within the system to execute. It can be seen that point association takes the longest amount of time, which is a reasonable result: the algorithm is very inefficient and requires that every pseudo-cluster point generated from the stored object be compared to every scan point from its matching new object. All other functions that take a long time to execute involve the use of the GEOS library, either to generate polygons or to test for polygon overlap.

When sensor noise is introduced to the detection system, the number and shape of the extracted objects start to vary over time. Figure 5-6 shows how the detection process is affected over each time step and illustrates the ability of the matching and updating algorithms to correctly identify objects that are the same and update them accordingly. Figure 5-7 shows how objects can be merged over time based on the changes in the sensor data to produce more accurate results. In Figure 5-7A, three objects were detected when a single object was expected. However, after some time, in Figure 5-7E, the three objects were merged into the single expected object. As shown in Table 5-2, which is based on the results shown in Figure 5-8, the performance of the system improves over time under the influence of sensor noise.

Figure 5-9 shows that the introduction of sensor noise has an effect on the overall execution time of the system, with the average run time increasing to about 0.19 seconds. Figure 5-10 shows that the average execution time for associating the stored and new object points almost doubles. This increase can be explained by the fact that an old object is updated multiple times when there are multiple matching new objects. When sensor noise was not present, every stored object matched only one new object, but now the possibility of multiple matches exists. Therefore, the number of times the point association algorithm is executed is increased. It can also be observed that the time for updating the existence confidence of the static objects now appears on the graph. This is also a reasonable result, as some static objects will now overlap with the free space region polygon. To determine whether an object is in free space, the area of the overlapping polygon is calculated using the GEOS library, which is relatively slow.

The next step of testing involved moving the vehicle through a static environment and observing the correctness of the algorithms. It was found that the object tracking and updating algorithms worked well, but there were a number of issues. First, vehicle pitch and roll greatly affected what objects could be detected by the LADAR. When turning, the roll of the vehicle would cause the plane of the LADAR scan to go above some objects. In these cases, the assumption that the scanning plane of the LADAR was parallel to the ground no longer held, and this caused a failure of the tracking algorithm. Also, there were inconsistencies introduced into the object representation by the object resolution algorithm. Sometimes a stored point would be kept when it should have been removed from the representation, causing the object shape to become distorted. Figure 5-11 and Figure 5-12 show a few images of the stored objects as the vehicle moves through the environment. Although the updating algorithm was sometimes inconsistent, it also produced some promising results: it can be seen that the algorithm does successfully outline some of the objects in the environment. One thing observed in Figure 5-12 is that the motion of the platform affects the moving object classification process, as two static objects were incorrectly detected as moving. This may be caused by errors in the Global Position and Orientation Sensor (GPOS) position estimate, since the current testing does not include the position estimation algorithm.

Figure 5-13 shows that there is a large effect on the total execution time when the vehicle starts to move through the environment, with the average execution time of the system increasing to about 0.35 seconds. In Figure 5-14 it is seen that the main functions causing this increased execution time are separating the static and moving objects and updating the existence confidence of the static objects. Both of these functions rely on the GEOS library. Also, platform motion probably causes many more objects to be considered during these two functions and, therefore, would increase processing time.

Moving Object Detection and Tracking

Next, testing was done to evaluate the system's performance at detecting moving objects in the environment. First, the moving object detection and tracking system was tested with the vehicle in a fixed position to avoid any issues that might occur due to platform motion. Figure 5-15 shows a moving object being successfully detected when it comes into view from being occluded. The object is first detected as a static object but is converted to a moving object when the moving confidence passes the threshold. In Figure 5-15D it can be seen that the bounding box is larger than the detected points, since the original bounding box would be smaller than the minimum size. However, in Figure 5-15E the length of the bounding box is just about the correct size for the detected points.

Figure 5-16 shows an object that starts from a static position and then begins to move. The object has a high static probability, which takes a long time to reverse. However, due to the occlusion, the object is split in two, and the new half of the object is quickly determined to be moving. As the object moves away from its original position, the existence confidence of the remaining misclassified static object goes to the minimum, indicating that it is missing.

When a moving object is successfully detected, the number of tracking errors due to partial occlusion is decreased. In Figure 5-17 the object moves behind a static object and becomes partially occluded. However, the system is able to successfully and correctly track the object until it is no longer occluded. If the object shape changes drastically and does not correspond to the expected L-shape of a car or truck, though, the moving object tracking algorithm breaks down. Figure 5-18 shows the failure of the tracking algorithm due to a bad sensor position, which causes the LADAR to move from striking the side of the vehicle to striking its wheels. When this occurs, it can be seen that the bounding box does not look as expected. Also, sporadic static objects start to appear, as the new clusters can no longer be associated with the moving object.

The total execution time for the system during this testing stage is shown in Figure 5-19 and is found to be similar to the execution times with a static platform and a static environment, with an average execution time of about 0.20 seconds. However, the function causing the greatest slowdown is the one that separates the objects into moving and static objects, instead of the point association function (Figure 5-20).

The final step in testing the detection and tracking systems involved testing with a dynamic platform in a dynamic environment. This test was used to evaluate the completeness of the system. Upon examination of Figure 5-21, it can be seen that the algorithms were successful in tracking the moving object as the platform moved through the environment. The system was also able to simultaneously track the moving object and update the static object representations. The problems of inconsistent static object updating and incorrect classification of static objects as moving objects (Figure 5-22) that were seen when the platform moved through a static environment were also seen during this stage. The average execution time for the algorithm during this stage of testing was found to be about 0.22 seconds (Figure 5-23), with the point association and object separation functions taking the longest time (Figure 5-24).

Position Estimation

The evaluation of the position estimation system was done in five stages. First, the system was evaluated with fixed LADAR data, a static platform, and a static environment. This test provided a baseline for comparison of the position estimation performance in all other testing stages. Figure 5-25 shows how the distance from the origin changes over time for the corrected and uncorrected position estimates. First, it should be noted that the uncorrected position estimate does change over time, although the vehicle is not moving, due to drift in the GPOS position. Also, it is seen that the corrected position does not exactly follow the uncorrected position. This is due to the introduction of error by the object approximation and association process. Although the LADAR points are the same during every cycle, the pseudo-points used during association are generated from simplified object representations. Therefore, it is impossible for the old points and the new points ever to be perfectly aligned, which introduces an inherent error in the position estimate. However, the error introduced is small: maximum errors of 0.14 m and 0.10 m are seen in the Universal Transverse Mercator (UTM) x and y directions, respectively, and a rotational error of less than 0.30 degrees.

When real LADAR data was used, sensor noise had an effect on the position estimate, as can be seen in Figure 5-26. In addition to the error introduced by the object approximation described above, the LADAR data itself changes over time and introduces error. However, the error between the corrected position and the origin is still relatively small: errors of 0.20 m or less are seen in the UTM x and y directions, and a rotational error of less than 1 degree is experienced.

One interesting test performed was to change the GPOS position estimate in a known and repeatable manner but fix the LADAR data. Ideally, the corrected position estimate should remain unchanged while the uncorrected position estimate follows the vehicle motion. Figure 5-27 shows that the position estimation algorithm performed fairly well. The corrected position estimates of both the UTM x and y positions are within 1 m of the origin, as opposed to an uncorrected UTM y position of over 15 m. Also, the corrected yaw position is within 2 degrees of the origin, versus an uncorrected yaw position of over 25 degrees.

Next, the ability of the corrected position estimate to correctly follow the vehicle as it moves through the environment was evaluated. The platform was moved through a static environment, and the corrected and uncorrected position change was monitored. Figure 5-28 shows how the corrected position estimate correlated with the uncorrected position estimate. It can be seen that the corrected and uncorrected position estimates followed each other very closely for most of the graph. However, the corrected and uncorrected position estimates in the UTM y position diverge slightly near the end of the graph. Unfortunately, it is not possible to determine if this divergence is caused by an error in GPOS as a result of GPS or by the map-based correction.

Finally, the position estimation system was tested in the presence of moving objects. To evaluate its effectiveness, the vehicle was fixed in place and a moving object was allowed to move through the environment. Figure 5-29 shows how the position estimate was affected by the presence of a dynamic object. In general, the corrected position seems to behave in a similar manner to the previous tests with a static vehicle. Although the corrections in the UTM y position and rotation appear to be noisier, it is unclear whether this was caused by the presence of the moving object. However, the error of the system is still reasonable: the changes in the UTM x and y positions are less than 0.30 m and 0.15 m, respectively, and the change in the yaw is less than 0.5 degrees.

World Model Knowledge Store Without Position Estimation

The next series of tests involved the use of the World Model Knowledge Store (WMKS) and ensured that objects could be successfully stored, updated, and retrieved.

Figure 5-30 shows that static objects were successfully added to the WMKS when the object existence confidence passed the threshold value. Eventually, the previously stored objects 29 and 31 were merged into object 29. Figure 5-31 shows that the updated object 29 is successfully updated in the WMKS, and object 31 is no longer visible, as its identification field has been changed to DUPLICATE.

Next, object retrieval from the WMKS was tested. Objects that were generated using the driver-side LADAR and had previously been added to the WMKS were retrieved to seed the SLAM+DATMO system. The passenger-side LADAR was then used to update the local objects, and the WMKS was checked to ensure that the previously stored objects were updated and the new objects were added (Figure 5-32 and Figure 5-33).

Next, the ability of the system to detect that previously stored objects no longer existed in the environment was evaluated. Object 29 was added to the WMKS and then moved to a new location. As seen in Figure 5-34, the system was able to successfully decrease the existence confidence of object 29 to a value of -1000, increase the confidence of the newly detected object 43 up to a value of 1000, and update their confidence values in the WMKS. Tests were also performed to ensure that objects were updated in the WMKS as their representation was modified by motion through the environment, and that moving objects were not added to the WMKS, with the results shown in Figure 5-35 and Figure 5-36.

World Model Knowledge Store With Position Estimation

When an object is added to the WMKS it is stored in the global frame. However, when the objects are retrieved at a later time, there is no guarantee that the global position will be the same, due to the influence of error in GPOS. Figure 5-37 shows a situation where the stored WMKS objects are not aligned with the current LADAR scan.


World Model Knowledge Store With Position Estimation

When an object is added to the WMKS it is stored in the global frame. However, when the objects are retrieved at a later time, there is no guarantee that the global position will be the same due to the influence of error in GPOS. Figure 5-37 shows a situation where the stored WMKS objects are not aligned with the current LADAR scan. Therefore, it was important that the vehicle position be corrected so that the stored objects and the current LADAR scan become aligned. The first test involved checking whether the position estimate would correct the error and maintain the correction when the vehicle is not moving and no moving objects are present. Figures 5-38 to 5-40 show the position correction system successfully adjusting the vehicle position to reduce the discrepancy between the WMKS object positions and the sensed object positions. In Figure 5-41, it can be seen that the correction system immediately applies a correction, which is consistent with the difference between the WMKS objects and the sensed objects, and is able to maintain the correction fairly well. When examining the execution times of the entire algorithm it was found that the average execution time was 1.08 seconds (Figure 5-42), with the position estimation algorithm taking an average of about 1 second to run (Figure 5-43). The position correction between the stored WMKS objects and the current LADAR scan was also tested with the platform moving through the environment to evaluate whether the position correction would accurately follow the uncorrected position estimate. Figure 5-44 shows that an initial correction is applied and maintained for most of the graph, especially when considering the graph shown in Figure 5-44A. It is interesting to note that the corrected and uncorrected positions start to converge close to the end of the graph. Platform motion through the environment did not greatly affect the execution time of the entire algorithm. The average execution time was found to be about 1 second (Figure 5-45), with the position estimation algorithm again taking an average of about 1 second to run (Figure 5-46).
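Once scan points have been associated with points along the stored objects, a correction of this form can be computed in closed form. The sketch below uses the standard SVD-based least-squares rigid alignment over the associated point pairs; it is a stand-in consistent with the behavior described here, not the dissertation's actual code.

```python
import numpy as np

def rigid_correction_2d(scan_pts, model_pts):
    """Least-squares rotation R and translation t mapping scan_pts onto
    model_pts (two N x 2 arrays of associated point pairs), via SVD."""
    p_mean = scan_pts.mean(axis=0)
    q_mean = model_pts.mean(axis=0)
    # 2 x 2 cross-covariance of the centered point sets.
    H = (scan_pts - p_mean).T @ (model_pts - q_mean)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t   # applied as x' = R x + t to correct the vehicle pose
```

The recovered rotation angle and translation correspond to the yaw and UTM offsets whose behavior is plotted in Figures 5-38 through 5-41.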


Multiple LADAR Testing

The LADAR fusion scheme was tested next. Figure 5-47 shows a satellite image with an overlay of data points from both LADAR, while Figure 5-48 shows a close-up image of the points. It can be seen that the driver and passenger side LADAR points are not aligned, which caused a problem when running the system. Due to this large discrepancy in point alignment, some of the implemented algorithms produced undesirable but predictable results. Therefore, not all the tests run using a single LADAR were rerun.
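The misalignment seen here is ultimately an extrinsic calibration issue: each scanner's returns must pass through that scanner's mount transform before the two point sets share a frame, so any error in the assumed mount pose appears directly in the overlay. A minimal sketch, with hypothetical mount parameters and returns:

```python
import numpy as np

def ladar_to_vehicle(ranges, angles, mount_xy, mount_yaw):
    """Project one scanner's polar returns into the common vehicle frame
    using that scanner's mounting pose (planar position and yaw offset)."""
    pts = np.column_stack((ranges * np.cos(angles),
                           ranges * np.sin(angles)))   # scanner-frame points
    c, s = np.cos(mount_yaw), np.sin(mount_yaw)
    R = np.array([[c, -s], [s, c]])
    return pts @ R.T + np.asarray(mount_xy)

# Hypothetical mount poses and returns, for illustration only: any error in
# these extrinsics shows up directly as the overlay misalignment in Figure 5-48.
d_pts = ladar_to_vehicle(np.array([5.0, 5.1]), np.radians([0.0, 1.0]),
                         (0.5, 1.0), np.radians(45.0))
p_pts = ladar_to_vehicle(np.array([5.0, 5.1]), np.radians([0.0, 1.0]),
                         (0.5, -1.0), np.radians(-45.0))
```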


Static Object Detection and Tracking

The performance of the static object detection and tracking system was first tested with a static platform in a static environment. Figures 5-49 to 5-51 show some output from the system. It was found that the object detection system worked well without any changes and the objects could be correctly updated. The misalignment of the points between the LADAR had minimal effect in this test and was treated as sensor noise by the system. It was found that the object representations were updated faster than when using a single LADAR. This result was expected since there was now twice as much data, and one LADAR would compensate for spaces incorrectly introduced by the other LADAR. The average time taken for the system to process a single LADAR scan was about 0.19 seconds, and Figure 5-52 shows the general trend for the total execution time. It can be seen that the algorithm performs in a similar manner to the same test using a single LADAR. One difference between the two tests was the increase in average execution time for updating the static object confidence (Figure 5-53). This can be explained by the fact that the scan points are not aligned and, therefore, more objects overlap the free space polygon than when a single LADAR is used.

When the system was tested with platform motion through the environment it was found that the misalignment between the scan points became very large (Figure 5-54). This large error between the scan points caused many of the algorithms to break down, as some of the simplifying assumptions about the format of the data no longer held. However, for the small interval when the alignment error was not large enough to cause incorrect object matches, the system seemed to perform fairly reasonably. Due to this misalignment problem, any further testing with platform motion through the environment would be inconclusive. Therefore, no further tests with a dynamic platform were performed.

Moving Object Detection and Tracking

The next step involved evaluating the performance of the fusion scheme when detecting and tracking moving objects in the environment. In general, the system performed better at detecting moving objects when using multiple LADAR than when using a single LADAR. Figure 5-55 shows an object that is detected when it is stationary and then begins to move. The system is able to quickly detect the motion and reclassify the object. This result is not surprising, as there is now more data available to make a better classification decision. However, the tracking algorithm performs worse than when using a single LADAR due to the differences between scan point alignments. Figure 5-56 shows the detection of the object using the driver side and passenger side points, and it can be seen that the bounding box does not enclose all the points from both LADAR. Therefore, when the LADAR used is switched, the position of the bounding box changes and does not smoothly track the object through the environment.
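Because matching is done by box overlap, a bounding box that shifts with the active LADAR can separate from its track between scans. The sketch below shows a simplified, axis-aligned version of such an overlap matcher (the implemented system used oriented boxes); the function names and the greedy strategy are illustrative assumptions.

```python
def box_overlap_area(a, b):
    """Intersection area of two axis-aligned boxes (xmin, ymin, xmax, ymax).
    A simplified stand-in for the oriented-box overlap used in matching."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return w * h if (w > 0 and h > 0) else 0.0

def match_moving_object(new_box, tracked_boxes):
    """Greedy overlap matching over a list of (object_id, box) pairs.
    Returns None when the boxes no longer intersect, which is exactly
    what happens when the box jumps between the two LADAR."""
    best = max(tracked_boxes,
               key=lambda tb: box_overlap_area(new_box, tb[1]),
               default=None)
    if best is None or box_overlap_area(new_box, best[1]) == 0.0:
        return None
    return best[0]
```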


Position Estimation

It was expected that the position correction provided by the system would oscillate due to the scan point misalignment. Figure 5-58 shows that this is indeed what happened. The object detection algorithm averages the object position between the misaligned scan points such that the detected object lies between the two point sets (Figure 5-57). Therefore, when the position correction is calculated it oscillates between the positive and negative directions. Due to this oscillation effect, no further testing of the position estimation algorithms was deemed useful.
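A toy calculation makes the mechanism concrete: if the stored object sits midway between two scans offset by +/-d/2, each computed correction alternates in sign with magnitude d/2. The values below are illustrative only.

```python
# Toy illustration of the oscillation: the averaged object sits midway
# between two scans offset by +/- d/2, so the computed correction
# alternates in sign with magnitude d/2. Values are illustrative only.
d = 0.4            # assumed scan misalignment in meters
object_x = 0.0     # stored object position, averaged between the scans
for scan_x in [+d / 2, -d / 2] * 3:
    correction = object_x - scan_x
    print(f"scan at {scan_x:+.2f} m -> correction {correction:+.2f} m")
```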


World Model Knowledge Store Without Position Estimation

Finally, the effect of using multiple LADAR on when objects were added to the WMKS was explored. As mentioned, it was determined that no further testing with platform motion would be useful and, since moving objects are not added to the WMKS, testing with the WMKS was only performed with a static platform and a static environment. When the same test was performed using a single LADAR, it was found that most detected objects were added to the WMKS at the same time and then updated as necessary. However, when using multiple LADAR, some objects were added to the WMKS before others. This was due to the difference in the rate of change of the existence confidence of the objects that were detected by both LADAR. Figure 5-59 shows how the existence confidence of the objects can vary greatly due to sensor overlap. Figure 5-60 shows the objects in the WMKS at two different times. The objects that were initially added lie within the overlapping region of both LADAR and, therefore, their existence confidence increased quickly. One interesting by-product of the fusion approach was the correction of error from one LADAR by the other. The placement of the passenger side LADAR caused it to strike the ground and incorrectly detect the ground as a static object. When the passenger side LADAR was tested alone, these ground objects were added to the WMKS, as shown in Figure 5-61A. However, since they are not detected by the driver side LADAR, their existence confidence never passed the threshold value required for them to be added to the WMKS when using both LADAR (Figure 5-61B).

Discussion

A discussion of the results obtained is now presented. This section seeks to provide the author's assessment of the proposed SLAM+DATMO system and the LADAR fusion scheme and to identify weaknesses, shortcomings, and possible future improvements.

Single LADAR Performance

In general, the system performed fairly well when using a single LADAR. However, performance was greatly improved when the platform was fixed in the environment. The object detection and tracking algorithms were able to correctly identify objects and deal with the presence of sensor noise. The representations of static objects were accurately updated when possible and were not adversely affected when the LADAR data caused the algorithm to incorrectly detect a single object as multiple objects. Moving objects could be detected and consistently tracked as they moved through the environment, even when they were partially occluded. Moving object tracking was seen to be fairly smooth despite the fact that a simplistic approach was taken for matching and tracking. The moving objects were only matched based on an overlap method, as opposed to the use of Kalman filters or linear predictions as seen in other work. The addition of a more sophisticated matching and tracking algorithm should greatly improve performance.
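One conventional upgrade of this kind is a constant-velocity Kalman filter on each moving object's centroid, whose predicted position would replace the raw overlap test during matching. The sketch below is a textbook formulation under assumed noise parameters, not code from this work.

```python
import numpy as np

class CentroidKalman:
    """Constant-velocity Kalman filter over an object centroid with
    state [x, y, vx, vy]; a conventional formulation, assumed here."""
    def __init__(self, x0, y0, dt=0.1):
        self.x = np.array([x0, y0, 0.0, 0.0])
        self.P = np.eye(4) * 10.0
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * 0.01   # process noise (assumed)
        self.R = np.eye(2) * 0.05   # LADAR centroid noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]           # predicted centroid for gating/matching

    def update(self, z):
        y = np.asarray(z) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```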


Static objects were correctly added and updated in the WMKS and could be retrieved at a later time. Also, objects in the WMKS could be detected as missing through the use of the existence confidence, which would decrease to a minimum value when an object was removed from the environment. However, when platform motion was introduced, the detection and updating algorithms did not perform as well. Static objects would sometimes be misclassified as moving, and inconsistent object representations would occur. In fact, the update algorithms would sometimes completely fail. One possible reason for object misclassification is an incorrect correlation between the vehicle position and the scan. The free space polygon used to detect moving objects is stored in the global frame, and therefore the global position of the vehicle needs to be known. However, the LADAR and the positioning system are not synchronized, and the exact position of the vehicle when a LADAR scan is taken is unknown. A best-guess estimate is used based on the time of the scan and the position of the vehicle at that time. However, this estimate could be incorrect due to lag in the positioning system, lag in the LADAR, or some other error. If this occurs, objects would appear to move when they were in fact stationary.
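One way to reduce this timing error, assuming GPOS reports are buffered with timestamps, is to interpolate the pose between the two reports that bracket the scan time rather than taking the nearest report. A sketch under that assumption:

```python
import math

def interpolate_pose(pose_buffer, t_scan):
    """Linearly interpolate (x, y, yaw) between the two buffered GPOS
    reports that bracket the LADAR scan timestamp. pose_buffer is a
    time-sorted list of (t, x, y, yaw) tuples."""
    for (t0, x0, y0, yaw0), (t1, x1, y1, yaw1) in zip(pose_buffer,
                                                      pose_buffer[1:]):
        if t0 <= t_scan <= t1 and t1 > t0:
            a = (t_scan - t0) / (t1 - t0)
            # Wrap the yaw difference so interpolation takes the short way.
            dyaw = math.atan2(math.sin(yaw1 - yaw0), math.cos(yaw1 - yaw0))
            return (x0 + a * (x1 - x0), y0 + a * (y1 - y0), yaw0 + a * dyaw)
    # Scan time outside the buffer: fall back to the nearest report.
    _, x, y, yaw = min(pose_buffer, key=lambda p: abs(p[0] - t_scan))
    return (x, y, yaw)
```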


The position estimation algorithm was also shown to perform reasonably well. The error introduced by the method was small and did not have an adverse effect on the results. The corrected position estimate was able to consistently follow the uncorrected position when platform motion was introduced, and it kept the vehicle close to the origin when the LADAR data was artificially fixed and the vehicle's GPOS position was allowed to change. It is important to note that the position correction system did not incorporate any form of smoothing, filtering, or averaging to improve performance and was based solely on the position of the objects in the environment. The correction between each time step was simply calculated and applied. It is believed that performance would greatly improve if the calculated correction were incorporated into the Kalman filter used in the GPOS component, which also incorporates the Global Positioning System (GPS), North Finding Module (NFM), and wheel encoder inputs.

One alternative to the presented method would use the points along the objects extracted in the current scan instead of the raw data. Pseudo points could be generated from the extracted objects, similar to the points generated along the stored objects, and used during the point association stage. This method would allow objects to become perfectly aligned when sensor noise is not present.
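Concretely, the pseudo-point idea amounts to resampling each extracted object's outline at a fixed spacing, so that both sides of the association stage operate on synthetic points lying on clean geometry. A minimal sketch, with the spacing an assumed parameter:

```python
import numpy as np

def pseudo_points(vertices, spacing=0.25):
    """Resample an object outline (a list of (x, y) vertices) into evenly
    spaced pseudo points for the point-association stage. The 0.25 m
    spacing is an assumed, tunable parameter."""
    pts = []
    for (x0, y0), (x1, y1) in zip(vertices, vertices[1:]):
        seg_len = np.hypot(x1 - x0, y1 - y0)
        n = max(int(seg_len / spacing), 1)
        for k in range(n):
            a = k / n
            pts.append((x0 + a * (x1 - x0), y0 + a * (y1 - y0)))
    pts.append(tuple(vertices[-1]))   # keep the final vertex
    return np.array(pts)
```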


The biggest weakness of the presented system was the amount of time taken for processing. Real-time operation was not possible on the test hardware, especially when position estimation was used. One way to improve the average execution time would be to run the position estimation at a slower rate, such as once per second. However, one important caveat is that the error in the initial estimate must be small to facilitate accurate object matching. If the error is large, objects will not be correctly matched and the matching algorithm would have to be repeated, adding additional processing time. Also, optimization of the functions used from the GEOS library and the parallelization of some of the algorithms would greatly improve processing speed.

LADAR Fusion Scheme

The presented LADAR fusion scheme worked fairly well, with much better performance when the vehicle was static than when the vehicle moved. However, this is not surprising, as it was also true of the system performance when using a single LADAR. The use of multiple LADAR improved the performance of the detection algorithms due to the increased amount of data and made object tracking more robust against occlusion. However, the scheme was highly dependent on the correct alignment of the two LADAR and could cause system failure if the misalignment was large enough. Although the expectation of sensor alignment is not unreasonable, the accuracy of alignment needed for the fusion method should be quantified, since exact alignment between different sensors is very difficult, if not impossible.


Table 5-1. Expected objects versus detected objects based on Figure 5-3.

                  Expected   Detected
 Static Objects       5          9
 Moving Objects       0          0

Table 5-2. Expected objects versus detected objects based on Figure 5-7.

                  Expected   Detected at A   Detected at B
 Static Objects       5           10               8
 Moving Objects       0            0               0

Figure 5-1. Satellite imagery from the Gainesville Raceway with an overlay of LADAR point data. A) The data obtained from the driver side LADAR. B) The data obtained from the passenger side LADAR.


Figure 5-2. Extraction of objects from sensor points. A) The raw scan points obtained from the passenger side LADAR. B) The objects extracted from the raw data, with the object enclosures shown after running the detection algorithm.


Figure 5-3. Objects detected using data from the passenger side LADAR.


Figure 5-4. Total execution times with a static vehicle and static environment without the presence of sensor noise, the use of position estimation, or access to the world model knowledge store.


Figure 5-5. Average function execution times with a static vehicle and static environment without the presence of sensor noise, the use of position estimation, or access to the world model knowledge store.


Figure 5-6. Sensor noise causes the extracted objects to vary over time. A) Five objects were detected at t=0 and stored (red). B) Four objects (blue) are detected at t=1, which overlap all the previous objects. C) The update of the stored objects using the new objects caused the number of objects to be reduced to three. D) Six objects are detected at time t=2. E) The number of stored objects increases to four.


Figure 5-7. The resolution algorithm allows objects to be combined and updated over time. A) Three objects were detected and stored. Although object 35 has no points associated with it, it was detected in a previous scan and remains stored. B) Object 29 is extended as the previously unassociated LADAR points shift due to differences between scans. C) The objects do not change despite differences in sensor data, as the changes between the points are small. D) Objects 29 and 35 are merged into a single object. E) Objects 29 and 31 are merged.


Figure 5-8. Objects detected using data from the driver side LADAR. A) Initial objects detected. B) The objects remaining after some time has elapsed.


Figure 5-9. Total execution times with a static vehicle and static environment with the presence of sensor noise but without the use of position estimation or access to the world model knowledge store.


Figure 5-10. Average function execution times with a static vehicle and static environment with the presence of sensor noise but without the use of position estimation or access to the world model knowledge store.


Figure 5-11. Objects detected using the passenger side LADAR when the vehicle moves through a static environment. A) Six objects are initially detected. B) The initial object matching and updating process appears to work well when moving in a straight line. C) As the vehicle rounds a corner it is observed that the update algorithm sometimes causes inconsistent representations (object 2 next to object 135). D) The update algorithm is able to successfully merge objects 139 and 135 and produce a reasonable outline for object 2.


Figure 5-11. Continued.


Figure 5-12. Objects detected using the driver side LADAR when the vehicle moves through a static environment. A) Seven objects are initially detected. B) The update algorithm has a problem dealing with the corner of the building (object 20). C) The building corner representation is fixed. D) A moving object is incorrectly detected. E) The building outline (object 20) looks reasonable but the other object representation does not, and another moving object is detected.


Figure 5-12. Continued.


Figure 5-12. Continued.


Figure 5-13. Total execution times with a moving vehicle and static environment with the presence of sensor noise but without the use of position estimation or access to the world model knowledge store.


Figure 5-14. Average function execution times with a moving vehicle and static environment with the presence of sensor noise but without the use of position estimation or access to the world model knowledge store.


Figure 5-15. An occluded moving object is detected when it comes into view. A) The object is completely occluded. B) A few LADAR strikes hit the object, but it still cannot be detected by the detection system. C) The object is detected and initially assumed to be static. D) The object's moving confidence passes the threshold and the object is converted to a moving object. E) The object is completely in view and the bounding box is updated.


Figure 5-16. A static object becomes a moving object. A) The object is initially detected when it is stationary and is classified as a static object. B) The object starts to move, but the moving confidence does not cause it to be classified as moving, so the object is updated as a static object. C) Occlusion causes the object to be split in two. D) The new half of the object is classified as a moving object, leaving the old half. E) The old object's existence confidence has been decreased to the minimum.


Figure 5-17. A moving object is partially occluded. A) The moving object starts to pass behind another object. B) The moving object is partially occluded but is correctly detected and tracked due to the use of the oriented bounding box. C) The object is no longer occluded and was never lost.


Figure 5-18. Object tracking degradation due to sparse LADAR strikes. A) An unexpected object shape is detected due to the position of the sensor. B) The number of points striking the object starts to decrease; however, the object is still successfully tracked. C) The position of the bounding box changes drastically between scans. D) Static objects start to appear as the points are no longer continuous or close enough together.


Figure 5-19. Total execution times with a static vehicle and dynamic environment with the presence of sensor noise but without the use of position estimation or access to the world model knowledge store.


Figure 5-20. Average function execution times with a static vehicle and dynamic environment with the presence of sensor noise but without the use of position estimation or access to the world model knowledge store.


Figure 5-21. A moving object is successfully tracked as the platform moves through the environment. A) The object is detected before the platform begins to move. B) The platform starts moving towards the moving object. C) The platform begins to turn and can still track the object. D) The moving object continues to be tracked as the static object representations are updated.


Figure 5-21. Continued.


Figure 5-22. An object (object 25) incorrectly identified as a moving object due to error introduced through platform motion.


Figure 5-23. Total execution times with a dynamic vehicle and dynamic environment with the presence of sensor noise but without the use of position estimation or access to the world model knowledge store.


Figure 5-24. Average function execution times with a dynamic vehicle and dynamic environment with the presence of sensor noise but without the use of position estimation or access to the world model knowledge store.


Figure 5-25. Distance from the origin of the corrected and uncorrected position estimates when running with fixed LADAR data on a static platform in a static environment. A) Change in the UTM X position. B) Change in the UTM Y position. C) Change in Yaw.


Figure 5-26. Distance from the origin of the corrected and uncorrected position estimates when running with real LADAR data on a static platform in a static environment. A) Change in the UTM X position. B) Change in the UTM Y position. C) Change in Yaw.


Figure 5-27. Distance from the origin of the corrected and uncorrected position estimates when running with fixed LADAR data on a dynamic platform in a static environment. A) Change in the UTM X position. B) Change in the UTM Y position. C) Change in Yaw.


Figure 5-28. Distance from the origin of the corrected and uncorrected position estimates when running with real LADAR data on a dynamic platform in a static environment. A) Change in the UTM X position. B) Change in the UTM Y position. C) Change in Yaw.


Figure 5-29. Distance from the origin of the corrected and uncorrected position estimates when running with real LADAR data on a static platform in a dynamic environment. A) Change in the UTM X position. B) Change in the UTM Y position. C) Change in Yaw.


Figure 5-30. Objects are added to the WMKS. A) The existence confidence of objects 29 and 31 has passed the threshold and they should be added to the WMKS. B) Objects 29 and 31 have been successfully added to the WMKS.

Figure 5-31. Objects are updated over time. A) Object 29 was updated by merging two objects. B) Object 29 is updated in the WMKS.


Figure 5-32. Stored objects are retrieved from the WMKS and updated using the new LADAR data. A) The objects in the WMKS are successfully retrieved. B) The LADAR points only correspond to some of the retrieved objects. C) The retrieved objects are updated using the current LADAR scan.


Figure 5-33. The WMKS is updated. A) The previously stored objects. B) Previously stored objects are updated (objects 32 and 22) and newly detected objects are added.


Figure 5-34. An object is detected as missing and a new object is detected. A) Objects stored in the WMKS. B) A previously stored object is not present and a new object is detected. C) The confidence of the missing object (30) goes to -1000 and the new object (43) has a confidence of 1000. D) The WMKS confidence values are updated.


Figure 5-35. Reconstructed objects are successfully stored in the WMKS. A) The objects in the SLAM+DATMO system. B) The objects in the WMKS.


Figure 5-36. Moving objects are not added to the WMKS. A) Moving object 37 is detected by the SLAM+DATMO system. B) Object 37 is not added to the WMKS.


Figure 5-37. Objects stored in the WMKS are not aligned with the current LADAR scan.


Figure 5-38. Objects are matched and the current position is updated. A) The WMKS objects and the current LADAR scan are not aligned. B) The objects extracted using the current scan match some of the stored WMKS objects. C) The object points are associated in order to perform the position correction. D) The vehicle position is updated, causing the objects to become closer to being aligned.


Figure 5-39. WMKS objects (purple) versus corrected stored objects (green with red points) versus extracted objects (blue with green points).

Figure 5-40. The corrected position causes the WMKS objects (purple) to become aligned with the sensed objects (red).


Figure 5-41. Distance from the origin of the corrected and uncorrected position estimates when running with real LADAR data on a static platform in a static environment with a difference between the retrieved WMKS objects and the sensed objects. A) Change in the UTM X position. B) Change in the UTM Y position. C) Change in Yaw.


Figure 5-42. Total execution times with a static vehicle and static environment with the presence of sensor noise, the use of position estimation, and access to the world model knowledge store.


Figure 5-43. Average function execution times with a static vehicle and static environment with the presence of sensor noise, the use of position estimation, and access to the world model knowledge store.


Figure 5-44. Distance from the origin of the corrected and uncorrected position estimates when running with real LADAR data on a dynamic platform in a static environment with a difference between the retrieved WMKS objects and the sensed objects. A) Change in the UTM X position. B) Change in the UTM Y position. C) Change in Yaw.


Figure 5-45. Total execution times with a dynamic vehicle and static environment with the presence of sensor noise, the use of position estimation, and access to the world model knowledge store.


Figure 5-46. Average function execution times with a dynamic vehicle and static environment with the presence of sensor noise, the use of position estimation, and access to the world model knowledge store.


Figure 5-47. Satellite imagery from the Gainesville Raceway with an overlay of LADAR point data from the driver and passenger side LADARs.

Figure 5-48. Points from the driver and passenger side LADAR are not aligned. A) The difference in the alignment of the raw data points. B) The difference in the alignment of an object extracted from the passenger side LADAR and the scan points from the driver side LADAR.


Figure 5-49. Different objects are extracted between the driver and passenger side LADAR.

Figure 5-50. Objects are successfully updated by data from the driver and passenger side LADAR.


Figure 5-51. Objects are updated faster when both LADAR are used.


Figure 5-52. Total execution times with a static vehicle and static environment with the presence of sensor noise but without the use of position estimation or access to the world model knowledge store, using multiple LADAR.


Figure 5-53. Average function execution times with a static vehicle and static environment with the presence of sensor noise but without the use of position estimation or access to the world model knowledge store, using multiple LADAR.


Figure 5-54. Large difference between the driver and passenger side points when the platform is in motion. A) The difference between the points when travelling in a straight line. B) The difference between the points when turning.


Figure 5-55. Moving object detection using multiple LADAR. A) The object is first detected when it is static. B) The object begins to move but is still misclassified as a static object and is incorrectly updated. C) The object is correctly reclassified as a moving object and the object representation is corrected.


Figure 5-56. The placement of a moving object changes due to differences between points from each LADAR. A) The object is detected using the driver side points. B) When the object is detected using the passenger side points it appears to move backwards (to the right) despite the fact that the object has moved forward (to the left).

Figure 5-57. The stored object is averaged to lie between the misaligned scan points.


Figure 5-58. Distance from the origin of the corrected and uncorrected position estimates when running with real LADAR data from both the driver and passenger side on a static platform in a static environment. A) Change in the UTM X position. B) Change in the UTM Y position. C) Change in Yaw.


Figure 5-59. Object confidence changes at different rates due to sensor overlap.

Figure 5-60. Objects are added to the WMKS at different times due to the different rates of change of the object confidence. A) A small number of objects are added to the WMKS first. B) More objects are added to the WMKS after some time.


Figure 5-61. Some detected objects are not added to the WMKS due to discrepancies between the LADAR. A) When using the passenger side LADAR, the object detected due to the ground is added to the WMKS. B) When using both LADAR, the object is not added to the WMKS.


CHAPTER 6
CONCLUSIONS AND FUTURE WORK

A novel method for performing simultaneous localization, mapping, and moving object tracking has been presented in this dissertation. Formalized methodologies for using an external World Model Knowledge Store (WMKS) and fusing data from multiple laser range finders (LADAR) were also introduced. In this chapter the discussion of the presented research is finalized by first presenting potential areas of future work. Next, conclusions are drawn from the discussions on the approach, the testing methodology, and the results presented in Chapters 3, 4, and 5 respectively. This section also outlines the contributions of the work to the field of robotics.

Future Work

The research presented here demonstrated the feasibility of a purely feature-based approach to Simultaneous Localization, Mapping, and Moving Object Tracking (SLAM+DATMO), but the system suffered from a number of shortcomings. First, the system used assumptions based on the LADAR characteristics to simplify the line extraction process. One possible improvement would be the use of statistical models to improve the line extraction and object update processes. The iterative end point fit could be augmented with a total least squares fit to more accurately represent the objects in the presence of sensor noise. The object update process would also benefit from the use of statistical methods to improve the object representation when the platform moves through the environment.
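Orthogonal (total) least squares fits a line by minimizing perpendicular rather than vertical distances, and a principal component analysis of the point set gives the solution in closed form. The sketch below shows that standard construction; it is not an implementation from this work.

```python
import numpy as np

def total_least_squares_line(points):
    """Fit a line to an N x 2 point array by minimizing perpendicular
    (orthogonal) distances: the line passes through the centroid along
    the dominant eigenvector of the point covariance."""
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)          # 2 x 2 covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    direction = eigvecs[:, np.argmax(eigvals)]   # unit line direction
    return centroid, direction
```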


Another major limitation of the update process is that an object can never be split into two different objects. Consider the case where an alley between two buildings is blocked by a fence due to construction. The two buildings would appear as a single object. However, when the fence is removed, it would be desirable to split the object, which cannot be done with the current algorithm.

In discussing the use of the WMKS, many simplifying assumptions were made. One important assumption was that the objects detected by the SLAM+DATMO system would not be modified by any other components on the robot. However, in an autonomous system it is beneficial to use many sensors to improve perception and environmental understanding. Ideally, not only would each sensor be able to detect and add objects to the WMKS, but each would also be able to modify objects added by other components. However, issues with conflict resolution and data synchronization would need to be addressed. If two software components attempt to modify an object but the changes conflict with each other, the question of which component to trust arises. Also, if the asynchronous communication approach described in Chapter 3 is used, there will always be some period of time when one of the local caches is out of sync with the WMKS. The question of how to handle this problem also needs to be answered.

The LADAR fusion scheme introduced was shown to perform reasonably well, but it suffered from error in the LADAR positioning. A formalized methodology for calibrating the multiple LADAR and ensuring they are aligned would be a major improvement. Another interesting topic for future work is in the area of multiple robotic agents. The fusion approach discussed dealt with LADAR on the same platform, but it could feasibly be extended to fusing LADAR data from multiple platforms. However, issues dealing with position agreement between agents and transmission time would have to be solved.


The scheme was also limited to allow only sequential operations on the LADAR objects due to problems with conflict resolution and object updating. One area that could be explored is devising a parallel operation methodology for the fusion scheme. The sequential approach outlined limits the number of LADAR that can be fused based on the processing speed of the algorithm and the LADAR scan rate. However, this limitation may be overcome through the use of parallel processing.

Finally, the presented work attempted to generate singular representations for detected objects. Although the system was shown to be reasonably successful, there is much room for improvement. Object representations were poor for non-regular objects such as trees and rocks. Also, there was no method for detecting or handling when an object had been completely described. One advantage of the singular representation approach is the ability to add contextual data to the objects. An object could be tagged as a tree, a restaurant, a school, etc., and a high-level planner could use the contextual data to modify its plan. Imagine a planner avoiding downtown restaurants at lunch time to avoid the lunch rush, or perhaps redirecting around schools during drop-off and pick-up times. The ability to use contextual data in planning would be a major step forward for the field of robotics.

Conclusions

The roles of robotic systems continue to grow daily, and robots are expected to deal with more complex, real-life situations. As task complexity increases, the need for safety also increases. It is important that autonomous systems be able to perform their jobs without placing themselves or others at risk. The first step towards achieving this goal lies in perception, as a robot cannot avoid a situation it cannot sense. This dissertation has presented the author's work in developing a novel approach for perceiving and understanding the environment around the vehicle in the presence of moving objects and using multiple LADAR.


It also introduced a method for sharing the information with other components within the autonomous system. The dissertation began by introducing the challenges associated with perception in autonomous vehicles, especially in the presence of moving objects, and outlined a problem statement in Chapter 1. Next, previous work in the areas of SLAM, DATMO, and SLAM+DATMO was presented and discussed in Chapter 2. Chapter 3 outlined the author's approach to the problem and detailed the implemented method. A description of the testing environment, validation procedure, and metrics was provided in Chapter 4, while the obtained results were discussed in Chapter 5.

In general, SLAM and DATMO have been treated separately, with little consideration given to the interaction between static and dynamic elements. Also, most SLAM approaches generate either a point or feature map, which has no real-world interpretation. They also do not attempt to detect differences between the map and the sensed environment, and simply match the detected points or features for localization purposes. The presented work introduced a method for generating an object map, which has a real-world interpretation and can be enhanced with contextual attributes, such as object classifications and images, using a spatial reconstruction approach. The approach extended and refined an object's representation as the viewing angle changed and more information about the object became known. The map was constructed in the presence of moving objects while simultaneously estimating the vehicle's current position and orientation.

Secondly, the work formally outlined an approach for using a shared world model knowledge store while attempting to maintain real-time sensor processing.


Previous approaches were self-contained and did not share the generated map or detected objects with other elements within the system. Any improvements using additional sensors, such as the addition of cameras, could not be easily integrated and required a major overhaul of the system. As robots develop greater functionality and are tasked with greater responsibilities, modularity becomes very important. It leads to the development of robust vehicles that can function in multiple scenarios and even recover if some of their elements fail. The methodology outlined for using a shared WMKS is an important first step towards that goal.

Finally, an approach for fusing multiple LADAR sources on a single platform was outlined. The use of multiple low-cost LADAR to provide a 360 degree view around the vehicle is sometimes preferred to the use of a single, high-cost LADAR, especially when there are limitations on sensor placement. However, very few fusion schemes have been formally proposed for dealing with multiple LADAR, especially when keeping the data in vector space. Grid approaches to data fusion either force a loss of data resolution or resort to image processing techniques, which can be slow and complex. Many of the popular vector-based LADAR processing algorithms exploit the sensor properties and break down when those properties are not present. The presented fusion scheme treated each LADAR independently and upheld the sensor property requirements for the vector-based algorithms.










BIOGRAPHICAL SKETCH

Nicholas McKinley Johnson was born and raised in the twin-island nation of Trinidad and Tobago, where he attended St. Mary's College, one of the top high schools in the country, and graduated near the top of his class in 1999. In 2001, he was awarded a Founders Scholarship to attend Howard University in Washington, DC. During his time at Howard he was inducted into the Golden Key Honor Society and Tau Beta Pi, the engineering honor society, and also served as the President of the Billiards Club, the Vice President of the Robotics Club, and the Cataloger of the student chapter of Tau Beta Pi. Nicholas was also awarded the International Engineering Consortium William L. Everitt Student Award of Excellence and received first place for his senior project at the 15th Annual Electrical and Computer Engineering Day. He received a Bachelor of Science in electrical engineering in 2005 and graduated second in his class with summa cum laude honors. He was accepted to the Ph.D. program at the University of Florida in Gainesville, FL, where he received his Master of Science in electrical engineering in August 2010 and his Ph.D. in December 2010. During his time at Florida he worked at the Center for Intelligent Machines and Robotics and was one of the lead members of the University of Florida's 2007 Defense Advanced Research Projects Agency Urban Challenge team. His research focused on perception in autonomous ground vehicles, but he is also interested in world modeling and artificial intelligence.