Path Planning and Sensor Knowledge Store for Unmanned Ground Vehicles in Urban Area Evaluated by Multiple LADARs


Material Information

Title:
Path Planning and Sensor Knowledge Store for Unmanned Ground Vehicles in Urban Area Evaluated by Multiple LADARs
Physical Description:
1 online resource (157 p.)
Language:
english
Creator:
Yoon, Ji Hyun
Publisher:
University of Florida
Place of Publication:
Gainesville, Fla.
Publication Date:

Thesis/Dissertation Information

Degree:
Doctorate (Ph.D.)
Degree Grantor:
University of Florida
Degree Disciplines:
Mechanical Engineering, Mechanical and Aerospace Engineering
Committee Chair:
Crane, Carl D
Committee Co-Chair:
Dixon, Warren E
Committee Members:
Schueller, John K
Dankel, Douglas D
Schwartz, Eric M

Subjects

Subjects / Keywords:
Mechanical and Aerospace Engineering -- Dissertations, Academic -- UF
Genre:
Mechanical Engineering thesis, Ph.D.
bibliography   ( marcgt )
theses   ( marcgt )
government publication (state, provincial, territorial, dependent)   ( marcgt )
born-digital   ( sobekcm )
Electronic Thesis or Dissertation

Notes

Abstract:
The research in this dissertation concerns unmanned ground vehicles. It proposes several approaches to sense the vehicle's environment, including terrain evaluation and detection of static and moving obstacles, to store information from multiple sensors with efficient memory usage, and to generate a collision-free path for the vehicle to follow. This research supports minimizing the number of casualties in war by replacing soldiers with these robots, and it increases safety for drivers when these technologies are applied to commercial vehicles. It also supports people who need special help by performing dangerous jobs, providing services, and so on. The research will become more helpful as the technologies are improved.
General Note:
In the series University of Florida Digital Collections.
General Note:
Includes vita.
Bibliography:
Includes bibliographical references.
Source of Description:
Description based on online resource; title from PDF title page.
Source of Description:
This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Statement of Responsibility:
by Ji Hyun Yoon.
Thesis:
Thesis (Ph.D.)--University of Florida, 2011.
Local:
Adviser: Crane, Carl D.
Local:
Co-adviser: Dixon, Warren E.

Record Information

Source Institution:
UFRGP
Rights Management:
Applicable rights reserved.
Classification:
lcc - LD1780 2011
System ID:
UFE0043145:00001




Full Text

PATH PLANNING AND SENSOR KNOWLEDGE STORE FOR UNMANNED GROUND VEHICLES IN URBAN AREA EVALUATED BY MULTIPLE LADARS

By

JIHYUN YOON

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2011

2011 Jihyun Yoon

To my parents

ACKNOWLEDGMENTS

I would like to begin by thanking my parents for their support and patience during the long process toward a higher degree. They have supported my education since elementary school and kept me motivated by their interest in the projects I worked on. I would also like to show my appreciation to my advisor, Dr. Carl D. Crane, and his wife, Mrs. Sherry Crane, for their support and guidance from the first day I met them in South Korea and throughout my graduate education. Likewise, I would like to thank my committee, Dr. Warren Dixon, Dr. John Schueller, Dr. Douglas Dankel, and Dr. Eric Schwartz, for their valuable comments and advice. This research and all of CIMAR's unmanned vehicle work were made possible by the support of the Air Force Research Lab at Tyndall Air Force Base in Panama City, Florida, so I would like to extend a special thanks to the staff I was able to work with there. Finally, I would like to thank my first teacher in CIMAR, Sanjay Solanki, and my coworkers, Eric Thorn, Nicholas Johnson, Jaesang Lee, Steve Velat, Drew Lucas, Gregory Garcia, Vishesh Vikas, Ryan Chilton, Jonathon Jeske, David Armstrong, and Shannon Ridgeway. We shared many experiences in the 2007 DARPA Urban Challenge.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF FIGURES
LIST OF TABLES
LIST OF ACRONYMS

1 INTRODUCTION
  Background
  Focus
  Problem Statement
  Motivation

2 LITERATURE REVIEW
  Localization
    Simultaneous Localization and Mapping (SLAM)
    Scan Matching Localization
    Monte Carlo Localization
  LADAR Data Evaluation
    Segmentation and Classification
    Terrain Evaluation
    Obstacle Detection
  Sensor Knowledge Store
    Triangulated Irregular Network
    R-trees
    Quadtree
  Planning and Control
    Replanning Algorithm
    Probabilistic Roadmap
    Potential Fields
    Other Methods
    Occupancy Grid
    Control Strategies

3 THEORETICAL APPROACH
  LADAR Terrain Evaluation
    Traversability Grid Map
    Mapping in Urban Areas
  Sensor Knowledge Store
  Vector-based Path Planner
    Bezier Curve Fitting
    Rapidly-exploring Random Tree (RRT)

4 IMPLEMENTATION DETAILS
  Urban NaviGator
    Architecture
    Hardware
  Road Boundary Detection
    Peak Detector
    Slope Variance Method
    Grid-based Method
  Sensor Knowledge Store
    Storage of Sensor Information
    Bottom-up Quadtree
    Extraction of Quadtree
    Save and Reload
  Path Planner
    Occupied Grid
    Vector-based Path Planning for Roadway Navigation
    Vector-based Path Planning for Parking Behavior
      RRT approach
      Typical parking behavior
      Parallel parking behavior
    Path Planning for N-point Turn Behavior

5 EXPERIMENTAL RESULTS
  Test Plan
    Road Boundary Detection
    Sensor Knowledge Store
    Path Planner
  Test Metrics
  Results
    Road Boundary Detection
    Sensor Knowledge Store
    Path Planner

6 CONCLUSIONS AND FUTURE WORK
  Future Work
  Conclusions

LIST OF REFERENCES

BIOGRAPHICAL SKETCH

LIST OF FIGURES

1-1 The Urban NaviGator: Team Gator Nation's Urban Challenge entry
3-1 Traversability Grid Map
3-2 Grid Map movements
3-3 Circular Buffer
3-4 TSS diagram with Edge Finder
3-5 Hierarchical data structure Quadtree
3-6 RRT graph after 1000 iterations
3-7 RRT node expansions
4-1 Urban NaviGator system architecture
4-2 LADAR Sensors Layout
4-3 VFs Layout and Vehicle Coordinate Frame
4-4 Peak Detection Procedures
4-5 Slope Variance method
4-6 Grid-based method
4-7 Quadtree Sensor Knowledge Store Component
4-8 Bottom-up built Quadtree
4-9 Algorithm for Bottom-up Quadtree Building
4-10 Quadtree Extraction
4-11 Quadtree Extraction example
4-12 Traditional occupied grid method of CIMAR
4-13 Example of problem of traditional occupied grid method
4-14 New occupied grid approach
4-15 Vector-based roadway navigation path planner
4-16 Control points selection for short range paths
4-17 Predefined speed profile of selected course
4-18 Parking process flow chart
4-19 Configuration space for RRT algorithm
4-20 Tree expansion examples
4-21 Minimum radius circle reach condition
4-22 Example of tree connection
4-23 Depth-first search (DFS)
4-24 Candidate paths
4-25 Final path
4-26 Smoothing algorithms
4-27 Requirements for Smooth Path
4-28 Parallel parking scenario
4-29 Parallel parking first step
4-30 Configuration space for n-point turn
4-31 Four cases for n-point turn
5-1 The University of Florida campus test area
5-2 Gainesville Raceway pit area with points defining test areas
5-3 Selected course on the site for roadway navigation behavior
5-4 Gainesville Raceway pit area for parking behavior
5-5 Gainesville Raceway pit area for parallel parking behavior
5-6 Edge result map and traversability value
5-7 Grid map result comparison
5-8 Full view of final results of Sensor Knowledge Store
5-9 Local view of final results of Sensor Knowledge Store in area 2
5-10 Extracted array from area of Figure 5-9
5-11 Lane change behavior for single lane
5-12 Obstacle avoidance in a lane
5-13 Lane change behavior for multiple lanes
5-14 Typical parking behavior
5-15 Parametric results of parking behavior using RRT
5-16 Parallel parking behavior
5-17 An n-point turn for case 1
5-18 An n-point turn for case 2
5-19 An n-point turn for case 3
5-20 An n-point turn for case 4

LIST OF TABLES

4-1 Sensor Knowledge Store Data Structure
4-2 Messages between components
5-1 Road Boundaries Detector Update Rate
5-2 Test results for road boundary detection of area 1
5-3 Test results for road boundary detection of area 2
5-4 Test results for road boundary detection of area 3
5-5 Test results for road boundary detection of area 4
5-6 Test results for road boundary detection of area 5
5-7 Test results for road boundary detection of area 6
5-8 Test results for Sensor Knowledge Store

LIST OF ABBREVIATIONS

AFRL     Air Force Research Lab
CIMAR    Center for Intelligent Machines and Robotics
DARPA    Defense Advanced Research Projects Agency
DUC      DARPA Urban Challenge
GPOS     Global position and orientation sensor
GPS      Global positioning system
LADAR    Laser detection and ranging
MDF      Mission data file
NFM      North finding module
PMP      Partial motion planning/planner
POMDP    Partially observable Markov decision process
PRM      Probabilistic roadmap
REDCAR   Remote detection challenge and response
RHC      Receding horizon control/controller
RNDF     Road network definition file
RRT      Rapidly-exploring random tree
SARB     Smart Arbiter
SLAM     Simultaneous localization and mapping
TG       Traversability grid
THH      Toyota Highlander Hybrid
UTM      Universal transverse Mercator

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

PATH PLANNING AND SENSOR KNOWLEDGE STORE FOR UNMANNED GROUND VEHICLES IN URBAN AREA EVALUATED BY MULTIPLE LADARS

By Jihyun Yoon

August 2011

Chair: Carl Crane
Major: Mechanical Engineering

Autonomous driving in urban areas poses more problems to be solved than off-road driving. The research in this dissertation proposes new methods for detecting road boundaries, saving information about the environment, and generating the path that an unmanned ground vehicle must follow in urban areas. The road boundary detection algorithm is implemented with several methods, and confidence values are assigned to the results derived from each method. The positions of the detected boundaries are transformed into the traditional grid map along with their confidence values, and the traversability values of the corresponding cells are adjusted by those confidence values. The Sensor Knowledge Store, which saves information about the vehicle's surroundings, uses the quadtree data structure. The storage is very efficient in memory usage and makes it easy to search and update nodes. The information saved in the store helps the vehicle generate an optimal path. The new path planning algorithm is based on a vector method that differs from previously presented methods. The path planners for special behaviors, such as parking and the n-point turn, use the Rapidly-exploring Random Tree (RRT) algorithm to find paths that perform those behaviors. The research presented covers the theory of these new methods on an unmanned ground vehicle platform at the University of Florida's Center for Intelligent Machines and Robotics (CIMAR). Results have shown that the approaches in this dissertation are very useful for autonomous driving in urban areas.

CHAPTER 1
INTRODUCTION

The field of robotics is developing very rapidly, especially as more and more robotic vehicles are adapted to applications in the real world. These vehicles are being used to operate almost everywhere manually driven vehicles have been used in the past. In agriculture, the military, industry, and home security they are being designed to take over the dirty, dangerous, and difficult aspects of manual labor. One emerging field is unmanned urban driving using car-like robots. Since there are many expected and unexpected obstacles, such as pedestrians, other cars, or rough terrain, the implementation is complex. In addition, several special behaviors, such as parking, complying with traffic signals, and lane changing, should be performed using intercommunication with other vehicles. Although the implementation of unmanned vehicles is difficult and challenging, this is a research field that is receiving much attention.

Background

Unmanned vehicles of many different shapes and sizes are designed to perform a variety of tasks, which they execute with varying levels of independence. Some of these vehicles are controlled by human operators via wired or wireless input, but others can be operated without any human assistance; the latter are classified as autonomous vehicles. Unmanned underwater vehicles are used for military purposes to monitor a protected area for new unidentified objects and by researchers to study lakes, the ocean, and the ocean floor. The military role of unmanned aerial vehicles is growing at unprecedented rates. Unmanned ground vehicles (UGVs) are operated on rough and paved terrain and in spaces that are unreachable for humans.

It is UGVs, operating without any human interaction except initiation, that are under consideration for this research. To complete a mission with such a vehicle, there are thousands of necessary and complicated components that have to work together successfully. The initiation process, which is the only action the operator performs, is to command the vehicle to reach a goal point from the initial position. The initial commands can also include an abstract mission, such as a special behavior or surveillance, for a defined area.

A level of evaluation and recognition is required of a fully autonomous vehicle. Given only a goal point during the initialization process, the robot has to localize itself, recognize its surroundings, map a path through the surrounding terrain to reach the goal, and follow the generated path with minimum error. One problem of robot localization is that the robot has to determine its location relative to its environment. The robot will most likely need at least some idea of where it is in order to operate successfully. To determine its location, a robot needs two kinds of information. First, it has predefined information gathered by the robot itself during previous runs or supplied by an external source. Second, the robot obtains information about the environment through a process of evaluating the raw sensor data. In general, the predefined information specifies certain features that are time-invariant, such as static obstacles or traffic lines. To gain this knowledge, the robot has to have single or multiple sensors, such as cameras and laser range sensors, and be able to translate data from the instruments into a readable format. One option is a map describing the environment. The map can be geometric or topologic.

Geometric maps describe the environment in metric terms. Topologic maps describe the environment in terms of characteristic features at specific locations. Another option is that the robot learns the map of the environment while it is navigating through it, which is known as Simultaneous Localization and Mapping (SLAM) [1] and Detecting and Tracking Moving Objects (DATMO) [2]. SLAM generates an environmental map based on sensor data while simultaneously updating the robot's current position. Another approach is to use the Global Positioning System (GPS) from satellites, coordinated with an inertial navigation system, to determine the robot's position and motion.

All these options are created from the raw data from the sensors. The raw data must be combined to generate a representation that is useful to the robot. The most common way to construct a representation is to generate a grid map. A grid map rasterizes the environmental data from the sensors into corresponding cells and assigns a degree to each cell. Still another way is to use a vector-based representation to describe the environment. This method is closer to the raw data and describes the environment in terms of distance and angle from the robot.

To determine the control input to accomplish a goal, a proper path is necessary. This task is named path planning or motion planning. Path planning consists of the computation of the angle between the direction the vehicle is currently heading and the goal. To change the vehicle's direction, one needs kinematic and dynamic models of the vehicle and heuristic algorithms that can estimate the optimal path and generate control inputs to maneuver. There are many algorithms to find the optimal path under development.
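To make the grid-map idea above concrete, the following is a minimal sketch (not the implementation used in this dissertation) of rasterizing sensor points, already transformed into global coordinates, into a grid centered on the vehicle. The cell size, grid dimensions, and the simple hit-count-to-traversability rule are illustrative assumptions only.

```python
import numpy as np

# Illustrative parameters (assumptions, not CIMAR's actual values)
CELL_SIZE_M = 0.5          # edge length of one grid cell, in meters
GRID_DIM = 121             # 121 x 121 cells centered on the vehicle

def rasterize(points_xy, vehicle_xy):
    """Drop globally referenced (x, y) sensor points into a grid centered
    on the vehicle and return a crude traversability map."""
    half = GRID_DIM // 2
    hits = np.zeros((GRID_DIM, GRID_DIM), dtype=int)
    for x, y in points_xy:
        col = int(round((x - vehicle_xy[0]) / CELL_SIZE_M)) + half
        row = int(round((y - vehicle_xy[1]) / CELL_SIZE_M)) + half
        if 0 <= row < GRID_DIM and 0 <= col < GRID_DIM:
            hits[row, col] += 1
    # Toy rule: more returns in a cell -> treat it as less traversable.
    return np.where(hits > 3, 2, np.where(hits > 0, 7, 12))

# Example: two points near a vehicle located at (100.0, 200.0) in global coordinates
grid = rasterize([(101.2, 200.3), (101.3, 200.4)], (100.0, 200.0))
```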

The meaning of "optimal," which can change depending on the developer, may be the shortest travel distance, the shortest travel time, or some other metric that differentiates one potential path from another.

Once an optimal path has been determined, the inputs must be computed and given to the actuators of the robot. Typically, the control inputs consist of steering commands to change the lateral state of the vehicle and accelerator and decelerator commands to change the longitudinal state of the vehicle. These inputs are implemented through mechanical devices or electrical signals. In robotic vehicles, steering commands are implemented by operating a motor connected to the steering column of an Ackerman-steered vehicle. Velocity changes can be implemented by an electrical signal sent from a computer that controls acceleration and deceleration, or by pushing or pulling on a brake or accelerator pedal using a motor. This sequence of processes has to be continually supplied with current information about the new state of the vehicle and the state of its environment until the vehicle completes its task. This task could require considerable computational resources to process all the raw data and run the algorithms necessary to generate the output commands. This background section briefly described all of these processes, which will be described and discussed in detail later.

Focus

This proposed research has two major parts: localization and path planning. Localization for a UGV is a difficult but crucial problem for several reasons. Localization data around the vehicle are used to create the path plan with certainty about the surrounding environment to reach a goal point. There are several kinds of sensors, but this research focuses only on localization using laser range sensors. For urban area driving, the process includes detecting static and moving obstacles, terrain evaluation, traffic line detection, and road edge detection.

For localization based on a grid map, the raw data are converted to points in the global coordinate system, the points are dropped into the corresponding cells, and then each cell is evaluated to determine its degree of traversability. When the vehicle is running at high speed, however, a cell might not receive sufficient points to be evaluated. Using a vector-based evaluation of the environment alongside a typical grid map could aid in solving this problem. There are many approaches to localization using a vector base. In this document, the approach of a topologic map that is projected onto the typical grid map will be used.

Another problem in localization is the range limit that results from the fixed size of the grid map or the sensor's hardware limit. Exceeding the limit may cause problems for the path planner, especially during high-speed driving. If any obstacles or sharp turns suddenly show up, the vehicle has to slow down quickly. This action has a negative effect on the vehicle and could result in an accident or mechanical failure. This could also happen when a human is driving in an unfamiliar area. What if a human has already driven the course? Crucial information about the area where the vehicle has passed could be effectively saved in sensor data storage to help plan a much better path. This information in storage can also assist the path planner in establishing speed profiles for the vehicle when it is driving the course, by viewing the course beyond the grid map's size.

Many path planning algorithms use grid-map-based searching, but sometimes these methods are not appropriate for urban area driving. In addition, for special behaviors such as parking or the n-point turn, a path planner based on the vector-based approach could be a better solution.
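The sensor data storage mentioned above is developed later in this dissertation as a quadtree-based Sensor Knowledge Store. As a rough illustration only (the actual bottom-up construction and the stored fields are described in Chapter 4), the sketch below shows the basic quadtree idea of collapsing a region whose cells all share one traversability value into a single node; the node layout and merge rule here are assumptions.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class QuadNode:
    origin: Tuple[int, int]              # lower-left cell of the square region
    size: int                            # edge length of the region, in cells
    value: Optional[int] = None          # uniform traversability, if a leaf
    children: Optional[list] = None      # four QuadNodes, if subdivided

def build(grid, origin=(0, 0), size=None):
    """Recursively build a quadtree over a square 2^k x 2^k traversability grid,
    collapsing regions whose cells all share one value into a single leaf."""
    if size is None:
        size = len(grid)
    x0, y0 = origin
    first = grid[y0][x0]
    if all(grid[y][x] == first
           for y in range(y0, y0 + size)
           for x in range(x0, x0 + size)):
        return QuadNode(origin, size, value=first)       # uniform region -> one leaf
    half = size // 2
    kids = [build(grid, (x0, y0), half),
            build(grid, (x0 + half, y0), half),
            build(grid, (x0, y0 + half), half),
            build(grid, (x0 + half, y0 + half), half)]
    return QuadNode(origin, size, children=kids)

# A mostly uniform 4x4 patch compresses to a handful of nodes instead of 16 cells.
patch = [[7, 7, 7, 7],
         [7, 7, 7, 7],
         [7, 7, 2, 7],
         [7, 7, 7, 7]]
root = build(patch)
```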

Another problem for path planning and control is that consideration must be given to the motion constraints of the hardware. This is an especially important issue for car-like robots because they are subject to a nonholonomic constraint. This means that while a vehicle may have three degrees of freedom, it can only translate in the direction it is immediately facing. In other words, the number of control inputs into the system is fewer than the number of degrees of freedom in the system's configuration space. For this reason, every time the planner generates an optimal path, it has to consider this constraint to make the vehicle track the path with minimum error.

Problem Statement

A formal description of the general UGV localization problem can be stated as follows. Given the raw data sent from the laser range sensors, the vehicle's position in the global coordinate system, and a set of vehicle position data, the introduced algorithm should be able to generate a final representation, combined with a topologic map, in a format that can be sent to other components and is understandable to the vehicle. Given the position information of the vehicle, the sensor knowledge store is generated to save necessary information about the surrounding environment. To manage memory and time complexity, the quadtree algorithm [3] is introduced. Lastly, once the localized representation of the environment surrounding the vehicle and the stored information are provided to the path planner, the latter generates the optimal path to reach the goal, and the algorithm must be able to compute the inputs to the vehicle actuators.
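To make the nonholonomic constraint discussed above concrete, the following is a minimal kinematic bicycle-model sketch (a common idealization, not the vehicle model used in this work): the state has three degrees of freedom (x, y, heading) but only two inputs (speed and steering angle), and the vehicle can only move along its current heading. The wheelbase value is an arbitrary assumption.

```python
import math

WHEELBASE_M = 2.8   # assumed wheelbase, not the Urban NaviGator's actual value

def bicycle_step(x, y, theta, v, steer, dt):
    """One Euler step of the kinematic bicycle model.
    (x, y, theta): pose; v: speed; steer: front-wheel steering angle."""
    x += v * math.cos(theta) * dt        # translation is restricted to the heading
    y += v * math.sin(theta) * dt
    theta += (v / WHEELBASE_M) * math.tan(steer) * dt
    return x, y, theta

# The vehicle cannot translate sideways: with steer = 0 it only moves along theta.
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = bicycle_step(*pose, v=5.0, steer=0.1, dt=0.1)
```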

Motivation

This research attempts to advance the capabilities of autonomous ground robots that have been developed at the Center for Intelligent Machines and Robotics (CIMAR) at the University of Florida and at the Air Force Research Lab (AFRL) at Tyndall Air Force Base. CIMAR and AFRL have been collaborating on projects for many years and have successfully devised automated vehicles for such applications as clearing mine fields, detecting unexploded bombs, and patrolling the perimeters of military installations. Motion planning, localization, and sensor knowledge store development are well represented in the current projects. Over the years, many different groups of people at CIMAR have successfully automated over ten UGVs.

At Tyndall Air Force Base, research has been conducted with the aim of using ground robots for military base perimeter defense as part of the Remote Detection Challenge and Response (REDCAR) project. The project concept involves three different robotic platforms: mobile long-range detection and assessment, agile lethal and non-lethal challenge and response, and small-scale search in areas not accessible to larger platforms. The large-scale platform integrates such autonomous navigation technologies as the motion planning and environmental sensing and modeling components developed by CIMAR with a proprietary Operator Control Unit (OCU) to achieve platform control.

The most recent autonomous vehicle research at CIMAR has focused on a large-scale autonomous vehicle for navigation in an urban environment. The vehicle, named the Urban NaviGator (Figure 1-1), was designed to compete in the Urban Challenge coordinated by the Defense Advanced Research Projects Agency (DARPA) and held in Victorville, California, in November of 2007.

The challenge was the most recent contest built on the success of the previous robotic challenges in 2004 and 2005. While the previous competitions were for off-road navigation, the 2007 competition focused on autonomous driving capabilities in urban environments in the presence of other moving vehicles, including other robots. Robots had to utilize complex behaviors to demonstrate driving ability similar to that of a human driving a vehicle, and also balance the desire for speed with the necessity of obeying all traffic rules and driving safely. Basic navigation and traffic requirements included staying in the lane, obeying speed limits, maintaining a safe following distance, passing slow-moving or stopped cars, completing a U-turn, and obeying intersection rules. Requirements for advanced navigation and traffic included navigating an obstacle field, parking, merging with traffic, and defensive driving [4].

The vehicle, built off a 2006 Toyota Highlander Hybrid sport utility vehicle, was modified to provide the capabilities required by the challenge. The chassis supported drive-by-wire throttle and braking capabilities, although a proprietary interface board was constructed to command the throttle and brakes. Smart motors were added to automate steering and gear shifting operations and to rotate the side laser range sensors. The sensor network, designed to meet the perception requirements of the challenge, was composed of six fixed laser range sensors, two articulated laser range sensors, four cameras, three GPS receivers, wheel encoders, and an inertial navigation system. A custom-built computer rack was populated with twelve dual-core computers.

The localization with laser range sensors (LADARs) used by CIMAR for the Urban Challenge was based on the grid map approach. This approach was similar to the method used in the 2004 and 2005 competitions [5].

However, since more LADARs were added for this competition, the evaluation algorithm was required to be faster and more flexible depending on the vehicle's behavior. Basically, the LADARs provide the raw data to the computers in charge of the localization process, the data are converted to find the corresponding cell for each point cloud, and each cell is assigned a traversability value. Once the grid map representation is generated, it goes to the other components to be used.

The path planner used in this competition consisted of a receding horizon controller running a modified A* search algorithm to find the optimal path through a traversability grid sent from the localization component. The A* algorithm created a search tree through the grid map, evaluated the cost of the nodes in the tree, and used these costs heuristically to search through the tree for a least-cost path to a predetermined goal. As a best-first search algorithm, A* keeps track of the costs of all nodes on the tree and selects the lowest-cost node for expansion [6].
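As background for the A* planner described above, here is a minimal A* sketch over a small grid (a generic textbook version, not CIMAR's receding-horizon implementation). The 4-connected neighborhood, unit step cost, and Manhattan heuristic are simplifying assumptions; a traversability grid would instead use per-cell costs.

```python
import heapq

def astar(grid, start, goal):
    """Least-cost path on a 2D grid; cells with value 1 are blocked."""
    def h(cell):                       # admissible Manhattan heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
    open_set = [(h(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)   # lowest-cost node first
        if cur in came_from:
            continue
        came_from[cur] = parent
        if cur == goal:                # reconstruct the path by walking parents
            path = [cur]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), cur))
    return None                        # no path found

path = astar([[0, 0, 0],
              [1, 1, 0],
              [0, 0, 0]], (0, 0), (2, 0))
```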

The Urban Challenge, along with collaborative research efforts with AFRL, illustrates that there are still many research topics to be developed. Researchers at CIMAR continue to supplement existing navigation and localization capabilities. Some of those topics are proposed in this document. One topic is enhanced localization, represented by a grid map combined with a vector representation from the LADARs to compensate for the weakness of using only a grid map. Another topic is the sensor knowledge store. The main functionality of this storage is to support path planning by providing information at distances farther than the sensors can see. Additionally, in the case where the vehicle is getting ready to sweep an area that it has already swept and the information on the course is in storage, the vehicle can have a predefined speed profile and optimal path before starting by considering road curvature, maximum allowed speed, and the existence of static obstacles. Finally, with the above information, a new path planner, a hybrid of previous work and a vector-based path planner, is proposed for the urban area driving project. The facilities and resources at CIMAR allow an in-depth study of this claim and improve the chances of this type of technology coming to fruition.

Figure 1-1. The Urban NaviGator: Team Gator Nation's Urban Challenge entry. Photo taken by CIMAR just before the competition in November 2007.

CHAPTER 2
LITERATURE REVIEW

To compare and contrast the newly devised localization, sensor data storage, and path planning algorithms, a review of published research was conducted. The review begins by investigating how others have approached environment representation through segmentation and classification of 3D point clouds. The issue of localization is one of the crucial features for UGVs, so several different approaches were studied. There are also various sensor algorithms to date that have been used for detecting obstacles and estimating terrain.

Localization

Simultaneous Localization and Mapping (SLAM)

Many researchers have introduced SLAM for both indoor and outdoor localization problems. Originally developed by Hugh Durrant-Whyte and John J. Leonard [7], SLAM is a technique used by robots and autonomous vehicles to build a map within an unknown environment (without a priori knowledge) or to update a map within a known environment (with a priori knowledge from a given map) while at the same time keeping track of their current location [1]. Using a laser range finder along with odometry, the robot can collect point clouds related to the vehicle position. The robot positions with point clouds are formed, and odometry provides a dead-reckoned transformation between two positions. SLAM attached to an Extended Kalman Filter uses this information about the position [8].

There is another approach that introduces a graph formulation of the SLAM problem. Every node of the graph represents a robot pose and the set of laser beams taken at that pose.

Edges of the graph represent relative transformations between nodes, computed from matching the laser beam sets [9]. Some researchers have shown that SLAM is not able to run well in crowded urban environments, since it is very difficult to extract static and continuous landmarks. For this reason, moving objects have to be detected and filtered out using SLAM with Detection and Tracking of Moving Objects (DATMO) [10]. EKF-SLAM and GraphSLAM are at opposite ends of the spectrum of SLAM algorithms. There are many alternative algorithms under development, including FastSLAM, UKF-SLAM, and CDKF-SLAM [11].

Scan Matching Localization

Some localization algorithms using laser range scanners compare a set of points from the sensors with the set previously observed. To implement this, it should be assumed in advance that the error caused by the sensor is smaller than the error from the GPS and INS system. Matching the geometry extracted from the laser scan between the current and previous sets refines the result of the GPS and INS. The 2D point clouds are split into n segments, fitted by the least squares method, and the algorithm extracts features corresponding to each segment. This process runs on one scan line, and then, if the features are successfully extracted, it finds the features that can be matched with ones in the previous scan. With this comparison, it computes the rotation and translation between the two positions [12]. The features could also be considered as landmarks, and this method could be used for post-processing to build a map [13]. In a usual setting, power poles, trees, and street lights are good examples of landmarks, since these features can be extracted easily by searching for vertical columns that have height jumps to both their left and right neighboring columns.

Given the landmark locations previously saved, the landmarks currently extracted are associated with the given information to find the absolute position of the vehicle. The landmark matching algorithm then plays a crucial role in tracking the current position, given the previous position and the associated landmarks. Many researchers use the Kalman filter or a particle filter to obtain the estimate using information from the sensors and the landmarks.

Another interesting approach is to extract lines with segmentation from the laser scans [14]. In static environments with sufficient geometric features, such as walls at different angles and other obstacles, point-wise scan matching can be used to determine the ego-motion of the moving horizontal laser scanner. To test the robustness of the matching process, a RANdom SAmple Consensus (RANSAC) algorithm, in which two sets of candidate point matches are randomly selected, is applied and the position is computed.

Sometimes a method that selects keypoints of the point clouds by multiple techniques, such as segmentation, curvature clusters, and mean-shift, is better for place recognition [15]. That research also proposes several ways to model the structure of the region around a keypoint location. An efficient way to solve the place recognition problem is to use keypoints or feature similarities in the 2D map [16]. The k-nearest neighbors (kNN) method is introduced to apply to keypoint-voting-based solutions for the place recognition problem.
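The scan-matching ideas above ultimately reduce to estimating the rotation and translation that best aligns matched point pairs from two scans. Below is a minimal sketch of that step using the standard SVD-based (Kabsch) solution for already-matched 2D correspondences; it is a generic illustration, not the specific matching pipeline of the cited works, and the example transform is made up.

```python
import numpy as np

def rigid_transform_2d(prev_pts, curr_pts):
    """Least-squares rotation R and translation t such that
    R @ prev + t ~= curr, for matched Nx2 point arrays."""
    prev_pts = np.asarray(prev_pts, dtype=float)
    curr_pts = np.asarray(curr_pts, dtype=float)
    mu_p, mu_c = prev_pts.mean(axis=0), curr_pts.mean(axis=0)
    H = (prev_pts - mu_p).T @ (curr_pts - mu_c)     # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                        # guard against a reflection
        Vt[1, :] *= -1
        R = Vt.T @ U.T
    t = mu_c - R @ mu_p
    return R, t

# Toy check: the second scan is the first rotated by ~5.7 degrees and shifted.
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
prev = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
curr = prev @ R_true.T + np.array([0.5, -0.2])
R, t = rigid_transform_2d(prev, curr)
```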

Monte Carlo Localization

Monte Carlo Localization (MCL) is a Monte Carlo method [17] for determining the position of a robot given a map of its environment. It is basically an implementation of the particle filter method applied to robot localization. The key idea of the sample-based approach is to represent the posterior belief by a set of N weighted, random samples [18]. When the robot moves, MCL generates N new samples approximating the robot's position after the motion command. Each new sample is drawn randomly from the previously computed samples according to their probability. The raw sensor data are incorporated into the sample sets by Bayes' rule, as in Markov localization. Since MCL uses finite sample sets, some samples are not generated close enough to the correct robot position.

This method can also be implemented with 3D point clouds collected indoors [19], although there are some drawbacks. A 3D world model is harder to build than a 2D model, and the likelihoods of many beams need to be computed, so the algorithm needs to reduce the points intelligently and then compare them to a 2D map. Another interesting research topic using this method is applying a particle filter to estimate the full state of the robot while utilizing multi-level surface (MLS) maps that allow the robot to represent vertical structures and multiple levels in the environment [20]. In this work, to estimate the pose of the robot, probabilistic localization following recursive Bayesian filtering is introduced. To incorporate 2D motion into the 3D map, a possible outcome of the action is obtained by applying a probabilistic model, and then the motion vector is adapted to the shape of the 3D surface traversed by the robot.
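As a concrete illustration of the sample-based MCL idea above, here is a minimal one-dimensional particle filter step (predict with a noisy motion model, weight by a measurement likelihood, then resample). The Gaussian noise models, the single known landmark, and all parameter values are arbitrary assumptions, not those of the cited works.

```python
import random
import math

N = 200                                    # number of samples (particles)
particles = [random.uniform(0.0, 10.0) for _ in range(N)]

def mcl_step(particles, control, measurement, landmark=0.0):
    # 1. Prediction: apply the motion command with added noise.
    moved = [p + control + random.gauss(0.0, 0.1) for p in particles]
    # 2. Correction: weight each sample by the measurement likelihood
    #    (here, a range measurement to a single known landmark).
    def likelihood(p):
        expected = abs(p - landmark)
        return math.exp(-0.5 * ((measurement - expected) / 0.3) ** 2)
    weights = [likelihood(p) for p in moved]
    # 3. Resampling: draw N new samples in proportion to the weights.
    return random.choices(moved, weights=weights, k=N)

particles = mcl_step(particles, control=1.0, measurement=4.0)
```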

To adapt the motion vector, it is discretized into segments corresponding to the size of the cells in the MLS map. After concatenating all transformed motion vector segments, a new 3D motion vector is obtained.

LADAR Data Evaluation

Segmentation and Classification

The segmentation and classification technique is a very useful process for building a map of the environment. This process works with 3D point clouds to cluster points having different characters into appropriate groups. The work presented in [21] classifies each point into one of three classes: surface, linear, and scatter. The approach has two issues: shape model computation and classification of shape features. For the shape model computation, a voxel list is introduced, each point is compared with its neighboring points, and the character of the point is determined. Once the character is decided, the point is classified into its corresponding class.

Much research has introduced the distribution of the normal vector of a point and its neighbors to segment groups with different characters [22]. Here the normal vector means the average normal vector over neighboring points, so to compute the normal vector, the neighbors have to be found first. There could be millions of points, so the data have to be organized or a quick searching algorithm is necessary. From the normal vectors of the points, the eigenvalues are found and the points are classified into corresponding groups by the properties of those values.

One interesting approach uses an octree-structure-based split-and-merge segmentation method for organizing point clouds into clusters of 3D planes [23]. This process starts by splitting all data into eight equal subspaces. The subspaces are then split into another eight subspaces recursively until the points are distributed close to a 3D best-fit plane or number fewer than three.

A merge process is then performed, checking whether the normal vectors of neighboring planes are similar to the plane. If the vectors are obviously different, the plane is segmented from the neighboring plane, so it will not be in the same segment.

Segmentation is also performed for the detection of planes with different properties using a RANSAC-based plane model [24]. The iterative refinement of the detected RANSAC plane uses inlier points that form a densely connected set. The largest connected set is found by recursively searching the inlier point cloud from different seed points. After the segmentation, the particular 3D planes are partitioned into sets of rectangular polygons based on their support from the 3D point cloud.

Another interesting method uses the local convexity criterion with normal vectors [25]. This method stores all the 2D points in an undirected graph having nodes and edges. A normal vector of each point is then computed using neighboring points. A crucial criterion is that if the normal vectors have approximately the same direction, two points can be considered locally convex and assumed to be in the same segment. This criterion allows the segmentation to be efficiently solved by a region-growing algorithm as follows: first it selects a random seed point and grows the segment until no more nodes are added, then it deletes the segment from the graph.
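To illustrate the region-growing idea just described, the sketch below grows a segment from a random seed over a neighbor graph, adding a neighbor only when its normal vector points in roughly the same direction as the current node's. The angle threshold and the dictionary-based graph representation are illustrative assumptions, not the exact criterion of [25].

```python
import math
import random

def region_grow(normals, neighbors, angle_thresh_deg=10.0):
    """Partition point indices into segments of similarly oriented normals.
    normals: {point_id: (nx, ny, nz)} unit vectors; neighbors: {point_id: [ids]}."""
    def similar(a, b):
        dot = sum(x * y for x, y in zip(normals[a], normals[b]))
        dot = max(-1.0, min(1.0, dot))
        return math.degrees(math.acos(dot)) < angle_thresh_deg
    unvisited, segments = set(normals), []
    while unvisited:
        seed = random.choice(list(unvisited))
        segment, frontier = {seed}, [seed]
        while frontier:                          # grow until no more nodes are added
            node = frontier.pop()
            for nb in neighbors[node]:
                if nb in unvisited and nb not in segment and similar(node, nb):
                    segment.add(nb)
                    frontier.append(nb)
        unvisited -= segment                     # delete the segment from the graph
        segments.append(segment)
    return segments

# Two patches with different orientations end up in different segments.
normals = {0: (0, 0, 1), 1: (0, 0, 1), 2: (1, 0, 0)}
neighbors = {0: [1, 2], 1: [0], 2: [0]}
print(region_grow(normals, neighbors))
```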

Covariance is also a useful method for segmenting 3D points [26]. Covariance analysis is often used as a starting point in the classification of 3D point clouds based on geometric properties. This is performed by determining the covariance matrix for a local neighborhood surrounding the point of interest, referred to as the index point, and then examining its eigenvalues and eigenvectors. As is known, if the point is on a smooth surface, the variance should be close to zero. The difference in the two largest eigenvalues can represent the character of the point, since they represent the variance in the principal directions on the plane. A small difference represents an interior point, and a large difference represents a boundary point.

K-means clustering is well adapted to laser data processing, since different feature attributes, which may be geometric or textural, can be used depending on the desired classes. This algorithm provides a hierarchical splitting clustering to extract ground points [27]. For the clustering, the mean and standard deviation of the point heights are used. The cluster whose average height is minimum is considered the ground cluster. The K-means is implemented in a hierarchical way that splits the ground cluster into two classes when the cluster standard deviation is higher than a predefined threshold; the ground cluster is split into two classes until the algorithm converges. After the process, the point clouds are segmented into ground, off-ground, and non-determined points.

To classify urban point clouds, the shapes of the segmented features can help with labeling [28]. Isolated points are removed and all the rest are projected onto a 2D scalar image representing the height of the point cloud to find potential object locations. Then each object is segmented. Once the segments representing potential objects are found, features are extracted describing the shape of the objects. The shapes of the features are decided based on the number of points, estimated volume, average height, standard deviation in height, and the standard deviations in the two principal horizontal directions. Based on this information, the feature vector for each candidate object is classified with respect to a training set of manually labeled object locations.
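The covariance analysis described above can be summarized in a few lines: build the covariance matrix of a point's local neighborhood and inspect its eigenvalues. The sketch below computes those eigenvalues and applies a crude planarity test; the flatness threshold is an arbitrary assumption rather than a value from [26].

```python
import numpy as np

def neighborhood_eigenvalues(points):
    """Eigenvalues (ascending) of the 3x3 covariance matrix of an Nx3
    neighborhood surrounding an index point."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered / max(len(pts) - 1, 1)
    return np.linalg.eigvalsh(cov)                 # ascending order

def looks_like_smooth_surface(points, flatness_thresh=1e-3):
    """Smallest eigenvalue near zero means the neighborhood is nearly planar."""
    return neighborhood_eigenvalues(points)[0] < flatness_thresh

# A perfectly flat patch of points passes the planarity test.
flat_patch = [(x * 0.1, y * 0.1, 0.0) for x in range(5) for y in range(5)]
print(looks_like_smooth_surface(flat_patch))       # True
```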

Terrain Evaluation

For urban area driving, there has been much research on how to extract drivable road features in real time. This process includes finding the road boundary or edge, estimating the center of the traffic lane, evaluating terrain roughness, and so on. In the 2004 and 2005 competitions, the robots were required to drive off-road, but in the 2007 challenge they were required to drive in and stay in the traffic lane. A minimum requirement for urban driving is distinguishing between the road edge and the road profile. The research shown in [29] extracts the road edge, which is a curb, from the LIDAR range data. The road and road edge points are first detected in elevation data, but this should not be an absolute height. The points are then processed by selecting candidate regions for road segment classification. The candidates are filtered by a rule-based scheme, such as a minimal road width requirement. The line representation of the points projected onto the ground plane is identified and compared to a simple road model in an aerial view to determine whether the candidate region is a road segment with road edges.

If the sensors can provide a sufficient number of points, grid-map-based analysis can be useful even in the urban area. The sensor used in the 2007 competition by many teams was the Velodyne HDL-64E S2, a sensor that can provide enough data points to evaluate the environment at high speed. Three types of principal component analysis (PCA), namely point cloud PCA, grid-based PCA, and hierarchical PCA, were used to extract the road features [30]. As mentioned above, the point clouds generate three eigenvalues that characterize each point. For the grid-based PCA, the LADAR points are dropped into a 2D grid centered around the origin of the sensor. A hierarchical PCA is developed to recursively subdivide the point clouds, extended from a 2D grid with z height. Based on the information from the PCAs, drivable and non-drivable areas are distinguished and terrain roughness is analyzed.

Urban and semi-urban roads have irregular geometry with sharp curvatures, corners contain signs and crossings, and traffic is characterized by lower speeds, many stops, and can be bidirectional. An important feature of these road environments is the existence of curbs on either side of the road. The detection and tracking of the curbs ahead of the vehicle using LADAR can be implemented by an Extended Kalman Filter (EKF) [31]. To extract the edge, the algorithm works on three consecutive LADAR points. The dynamic model used for the EKF is set using elementary trigonometry on the range of the points. After filter prediction, if the error between the predicted value and the true value is greater than an appropriate threshold, the point is segmented.

In some cases a road does not have curbs for the boundary but must be discriminated from grass. To detect grass, a statistical model of range in grass was introduced in [32]. The measured data are pre-processed with a nonlinear filter that outputs the maximum value of range within a window of contiguous samples.

The classification method can also be used to extract road features from the point clouds. For this case, the work in [33] classifies the points into only two classes: smooth terrain and 3D textured terrain, such as vegetation areas. This algorithm evaluates four features to characterize each point: single pixel statistics, standard deviation, spectral characteristics, and derivative statistics. Based on the evaluated features, each point is classified as vegetation or non-vegetation.
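Several of the road-boundary approaches above amount to looking for abrupt height changes between consecutive scan points. As a simple, generic illustration (not the EKF formulation of [31] or the grid methods used later in this dissertation), the sketch below flags curb candidates wherever the height difference between neighboring points in one scan line exceeds a threshold; the threshold value is an arbitrary assumption.

```python
CURB_HEIGHT_M = 0.10   # assumed minimum height jump that suggests a curb

def curb_candidates(scan_points):
    """scan_points: list of (lateral_offset_m, height_m) ordered along one
    scan line. Returns indices where a curb-like height jump occurs."""
    candidates = []
    for i in range(1, len(scan_points)):
        dz = scan_points[i][1] - scan_points[i - 1][1]
        if abs(dz) >= CURB_HEIGHT_M:
            candidates.append(i)
    return candidates

# Flat road surface with a ~12 cm step near the right side of the scan.
scan = [(-3.0, 0.00), (-2.0, 0.01), (-1.0, 0.00), (0.0, 0.01),
        (1.0, 0.00), (2.0, 0.12), (3.0, 0.13)]
print(curb_candidates(scan))   # [5]
```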

After the process, the analysis of the density of points in a conical support region placed at a data point provides an estimate of the type of point and filters out the underlying support region.

Most autonomous vehicles have vision sensors to detect traffic lines or lanes, but when the sensors are exposed directly to sunlight or operated after sunset they may not work properly, so if the laser range sensor can detect them and determine the street type, it can make the vehicle's program more robust [34]. The first step is the segmentation of the laser range data based on the distance between two consecutive scan points. This work defines a region of interest (ROI) that may have an impact on the vehicle's future motion. The width and relative position of the lane are determined by observing the scan points on both sides of the lane within a lower range of view.

Still another interesting area of research for terrain evaluation is estimating terrain roughness using self-supervised machine learning [35]. For surface classification, this uses a square scoring matrix S of size N^2 for N laser points. Comparing feature r with feature c yields a score in the matrix. The w largest elements are extracted and accumulated in ascending order to generate a total score based on the area where the rear wheels will pass in the future. The total score is the output of the classifier, which means the parts with a total score greater than a predetermined threshold are classified as rough terrain and the parts below the threshold are assumed to be smooth. The most interesting part is self-learning the parameters that are used to assign the scores. When the vehicle traverses an area that already has a corresponding score, the roughness coefficient is calculated. The objective function for learning is Tp - Fp, where Tp and Fp are the true and false positive classification rates, respectively. A true positive occurs when a given patch of road has a ruggedness coefficient that exceeds a user-selected threshold, and it is important to minimize false positives. This approach is very useful for improving the terrain evaluation capability of the vehicle.
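The self-supervised step above can be pictured as choosing the classifier threshold that maximizes Tp - Fp against labels derived from the vehicle's own measured ruggedness. The sketch below does exactly that for a list of (score, measured_ruggedness) pairs; the ruggedness cutoff used to create the self-labels, the candidate thresholds, and the data are illustrative assumptions, not details of [35].

```python
def tune_threshold(samples, rough_cutoff=0.5):
    """samples: list of (classifier_score, measured_ruggedness).
    Self-label a patch as 'rough' when its measured ruggedness exceeds
    rough_cutoff, then pick the score threshold maximizing Tp - Fp."""
    labels = [rug > rough_cutoff for _, rug in samples]
    pos, neg = sum(labels), len(labels) - sum(labels)
    best_threshold, best_objective = None, float("-inf")
    for threshold in sorted({score for score, _ in samples}):
        predictions = [score > threshold for score, _ in samples]
        tp = sum(p and l for p, l in zip(predictions, labels))
        fp = sum(p and not l for p, l in zip(predictions, labels))
        objective = (tp / pos if pos else 0.0) - (fp / neg if neg else 0.0)
        if objective > best_objective:
            best_threshold, best_objective = threshold, objective
    return best_threshold

# Patches whose measured ruggedness was high tend to have high classifier scores.
data = [(0.2, 0.1), (0.3, 0.2), (0.7, 0.8), (0.9, 0.9), (0.4, 0.6)]
print(tune_threshold(data))
```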


34 user selected threshold and it is important to minimize false positives. T his approach is very useful to improve the terrain evaluation capability of the vehicle. A nother terrain character that has to be carefully determined during driving is negative obs tacles such as holes, ditch es or cliff s. T his problem can be easily solved relative to the other problems [36] L ike other solutions, t his starts by data accumulation and terrain classification. T hen all the clouds belong to a voxel grid, which is a 3D map, and the map is projected onto a 2D grid map. U sually negative obstacles occur only in regions where there is a transition from visible to occluded voxels and their pos itions are represented in the 2D grid map. Obstacle Dete ction T he algorithm for obstacle detection is a crucial part to help the autonomous vehicles drive safely without colliding with any static or moving obstacles. M any researchers implementing methods to detect static obstacles have had success ful results. However, detecting and tracking moving obstacles which was one of the challenges in the 2007 competition, and feature extraction still remain considerable challenges. A single planar scanner cannot be relied upon to distinguish between obstacles and the terrain changing to uphill. F or this reason, many autonomous vehicles are equipped with multiple scanners to see the front of the vehicles or by a 3D laser scanner that can see far O ne approach is to mount multiple sensors at different height on the v ehicle [37] I f two sensors of approximately the same range report different height s, it could be an obstacle, otherwise it is assumed to b e an uphill or nonflat road. T he research uses a 2D lookup table with 1 meter resolution to store segmentations created by the obstacle sensor and the table can be used to construct the algorithm of


35 O (1) time complexity to find segmentation nearest a poin t. O nce any obstacle is found, it is dropped into the appropriate row and column of the table. T he main goal of this clustering is to track objects over time. A t each time step, the new objects are associated with a similar group from the previous time step. T he bounding boxes for two associated groups are compared and compute the velocity of the object. T his research group has used the Velodyne sensor so it can see at least two corners of object s to track the objects well. M ost of the object tracking algorithm s start by grouping the point clouds. T he segmentation process can provide the polygonal model of the environment and can be implemented by some algorithm derived from image processing techniques used for edge detection [38] T he Ramer algorithm [39] is used to segment whole onescan p oints. F rom this step, all data points that can not be included in any segment are removed. A ll generated segments are connected to those closest and analyzed individually based on length, number of vertices, and orientation with respect to the sensor. T hen the segments are split again based on the distance from their neighbor s and if two segments share the same point, they belong to the same object. Finally, each segment is grouped into predefined objects of classes such as car, motorcycle, truck, or unde fined. T here is an obvious limit to use only a laser scanner to detect and track the objects. F or this reason many researches are trying to fuse multiple sensors, such as laser scanner, vision sensor, radar and so on. U sually, laser scanners are used wit h vision sensors for detection and tracking [40] A 3D vision sensor and a 2D laser scanner can estimate an ensemble of measurement positions from the real world. T wo


36 measurement points are assigned to the same obstacle if their Euclidean distance is below a predetermined threshold. T racking is performed using a Kalman Filter that is based on a simple model in terms of the obst acles position, speed, acceleration, height, width and length. B oth sensors have advantages and disadvantages, so by properly fusing these two, a good result can be expected. M ost research use s the range data returned from the laser scanners to detect and track objects. However, many sensors can also provide the intensity which may be called reflectance of the returned signal of every beam. T he reflectance provides more information about the obstacle s or vertical surfaces. U sing thi s property, a model of the laser reflectance signal is proposed [41] T he model uses actual surface reflectance, angle of the beam with the surface and the range. T he returned intensities are different for different objects so obstacles can be detected by these differences. Another useful area of research for autonomous vehicles is estimation and prediction of collision risk [42] A Bayesian Occupancy Filter (BOF) operates with a 2D gr id representing the environment. E ach cell contains a probability of the cell occupancy. Given an observation, the BOF updates the estimate of the each cell. Fast C lustering Tr acking (FCT) takes two inputs, the grid estimated by the BOF and t he prediction of the tracker that provides a region of i nterest for each object being tracked. I f there is more than one region of interest in the tracked objects which are overlapped, they extract the cluster which is associated with multiple targets T he n the local SLAM problem is solved by dividing the laser points into stationary and moving sets. G iven the occupancy grid map and the current state of the vehicle, the laser impacts generated


37 from stationary or moving object s can be separated using a threshold of occupancy probability. F or driving in an urban area, other important objects that must be detected are pedestrians. U sually pedestrians are in already cluttered areas, so it is considerably difficult to detect and track them. T he means to d o that is to use multiple sensors such as vision sensors and laser scanners [43] T his research proposes lidar based detection, tracking, and classification similar to the research mentioned previously and also uses a set of Haar Like features to extract information from any given image from the vision sensor. T hen this performs a core process that is combines the objects classified by two sensors by a sum rule. T he sum decision rule depends on the prior probability of occurrence of each class and the posterior probabilities from the respective classifiers. S ensor K nowledge S tore T here is much information obtained from various sensors: height of the terrain, obstacle existence, terrain roughness, road curvature, road boundary, and so on. The S ensor knowledge store (sensor information database) was developed to represent the environment evaluated by such sensors as vision, radar, and laser scanners. W ith this inform ation, the vehicles can plan a path to get to a goal and determine the driving constraints in advance. Triangulated Irregular Network One of the reasons to use sensor data storage is to reduce the amount of data. Collecting and saving all point clouds ar e usually inevitable so to increase the efficiency of the algorithm and reduce computation time the storage is introduced. O ne method to do this is Triangulated Irregular Network, TIN, algorithm [44 ]


38 T he TIN was originally devised by Randolph Franklin at Simon Fraser University in 1973 [45] I t is a data structure used in a geographic information system for the vector based representation of a surface. TIN is a very useful idea to represent a surface, but not the best for the storage because it is irregular so it takes considerable time to find a triangle corresponding to the data and update it. C onsequently, it was proposed to map the TIN onto a static grid map [46] R tree s R trees are tree data structures that are applied to the spatial access method to index multi dimensional information [47] T his splits space with hierarchically nested, and possibly overlapping, minimum bounding rec tangles. C urrently the laser sensors can provide very dense point clouds, so to store, load, and visualize them, R tree is one of the efficient methods [48] I n this research, 4 1018 data points can be stored in Oracle, which partitions the points into fixed size blocks and stores the blocks as rows in a separate block. L oading and visualization steps can be processed in a f ixed block similar to the storing process. T he most useful aspect of the tree is indexing 3D data. T he approach shown in [49] has used a V reactive tree, which is based on combining R trees with an importance value. T he V reactive tree is a 4D R tree optimized for 3D v isualization. Quadtree A Quadtree is a tree data structure in which each internal node has exactly four children. I t is used to partition a 2D space by recursively subdividing it into four quadrants or regions. T his is firstly discussed by Raphael Finkel and J.L. Bentley in


39 197 4 [50] Also the researc h in this document used this Quadtree for the sensor knowledge store. O ne feature of a global information system is generation of the potentially large amount s of information. T he variety of potential data types is also large and extremely diverse, including nu merical, categorical, or vector valued data F or these reasons, the research in [51] had five pro posed criteria to be satisfied. First, a hierarchical data structure is required. Second, a regular hierarchy is more efficient that an irregular one. Third, it should be easy to convert from existing structures used for a geographic information system and back again. Fourth, the system should have a clear and simple coordination to the currently system of latitude and longitude. Lastly, the purposes of a geographic information system are primarily for planning, analysis, and inventorying of geographic phenomena. Generation of a largescale digital elevation model (DEM) often incorporates the Quadtree algorithm. O ne report noted that DEM is characterized by its combination of adaptive Quadtreebased processing with common iterative interpolation and filtering methods [52] T hese filtered out nonground d ata points are based on the Quadtree iterative algorithm and segmentation algorithm. P lanning and C ontrol T wo individual t asks will be introduced: a geometric path or time varying trajectory of the vehicle s motion and control task to follow the generated path. T he path should be optimized for travel time, distance, or minimum collision potential. S ometimes, all of the parameters should be optimized. T he s econd task considers the path as an input and generates input to control steering and change vehicle speed. M any researchers have tried to find a viable solution for dealing with the nonholonomic and other


40 constraints of UGVs. Although many methods were provided, there has been a tradeoff between stability, robustness, and performance. Rep lanning Algorithm S ome path planning algorithms for autonomous robots are run over and over again at a high rate. H owever, several types of algorithms make use of the previous solution trajector y from the start point to the goal location and only make minor adjustments to the trajectory when there is any change in the environment on the path. O nce a vehicle has recognized the environment successfully, it starts generating a plan to get to a goal T he work in [53] determined the path based on the graph data structure which consists of a set of nodes and edges. T o construct the graph for the plan, traversability rays in the visible range on the cost map are emitted at different angles from the location of the vehicle. T hen a cost and Exploration Bias (EB) are assigned to each cell, where the EB is zero unles s the edge that connects the node to the robot collides with an obstacle. F inally, if any edge intersects another edge, they are merged and translated in different shapes. A fter the graph is generated, the algorithm find s a path which is the object of this research. T o do this the algorithm needs to select an interim goal based on a heuristic function. T he next step is to plan a path from the current location to the interim goal by way of the shortest path. T he vehicle is perfor ming this process over and over again until it gets to the goal point. S ometime s the information about the environment is imperfect or incomplete. A s a result, a path generated with that information may be invalid or not optimal. F or this reason, it is cr ucial that the algorithm must be able to acquire new information. W hile m any algorithms are providing a solution to the problem by replanning, the algorithm may not have enough time to compute the best solution, so it needs to settle for using a


41 suboptimal solution. One of t he solutions proposed for this problem is known as the anytime algorithm [54] that start s by computing an initial, potentially highly suboptimal solution and the n improve s this solution as time allows A* based anytime algorithms show that the heuristic values used by A* provides substantial speedups at the cost of solution optimality. M any incremental algorithms such as A* and D* have been used in robotics for mobile robot navigation in unknown or dynamic environments. However, many of the researchers recognized the approach has limitations. F or example, the cos t per unit of traverse given to each cell is not continuous which means there is difficulty navigating the respective area of the environment. For this reason, a research er devised the Field D* algorithm [55] T his algorithm is also an incremental replanning algorithm but incorporates interpolated path cost. Field D* is an extension of classical grid based planners that uses linear interpolation to produce more natural paths with less cost through grids. The path planning can be decomposed into two levels: local and global. T he local planning is to avoid obstacles while driving and react to data from sensors as quickly as possible. M ethods that use a uniform sized grid map must allocate memory for regions that may never be traversed or do not contain any obstacles. T o compensate for this drawback, the framed Quadtree data structure has been suggested [56] T his data structure has been used to extend a path planner on the uniform grid cells. A regular Quadtree can provide efficient memory usage to the path planner but paths generated by Quadtree are suboptimal because it is constrained to segments between the centers of the cells. T o solve this constraint, it was modified in order that cells of the highest


42 resolution are added around the perimeter of each Quadtree region. H owever, there is still a problem in this algorithm in that in a congested area it needs more memory than a regular grid map. Probabili stic Road Map T he probabilistic RoadMap (PRM) was developed by Svestka and Overmars for single car like robots in a static environment [57] T he basic idea of PRM is to take random samples from the configuration space of the vehicles, check them for whether or not they are in the free space, and use a local planner to attempt to connect these configurations to other nearby configurations. F or this, a graph search algorithm is applied to determine a path between the vehicle and goal configuration. T he RPM planner consists of two steps: a construction phase and a query phase. I n the construction phase, the algorithm connects a configuration with its neighbors to make a complete graph; in the query phase, the path is obtained from the graph using Dijkstra s shortest path algorithm. T he research reported in [58] plans the path in high dimensional configuration spaces. D uring the construction phase, the roadmap is constructed in a probabilistic way for a given scene. T he roadmap is an undirected graph R = (N, E) The nodes in N are a set of configurations of the vehicle appropriately chose n over the free configuration space (C space). A n edge ( a, b ) corresponds to a feasible path connecting the configurations a and b This path is computed by a local path planner. S ince the computation of the path can be completed very quickly, it is not stored in the roadmap. I n the query phase, the roadmap is used to solve individual path planning problems in the input scene. Given a start configuration s and a goal configuration g the method first tries to connect s and g to two nodes a and b in N I f it is successful, it


43 searches R for a sequence of edges in E connecting a to b Finally, it transforms this sequence into a feasible path for the robot by recomputing the corresponding local paths and concatenating them. A nother important aspect is to analyze the probability of failure o f PRM as a function of all the relevant parameters [59] W hen pairs of point a and b are connected by including the upper bound of the failure probability they can be connected by some path that keeps them uniformly away from the obstacles. T he idea is to cover the path with balls overlapping to a certain degree. T he idea mentioned uses only the minimum distance from the obstacle but if this is achieved rarely, t he distance should be varied. T his research shows how to implement the PRM with a simple collision free algorithm. T here is a new PRM algorithm for single and multiple query path planning. For the algorithm, PRM does not build a roadmap of feasible paths, but rather a roadmap of paths assumed to be feasible [60] T he concept is termed, Lazy PRM which lazily evaluates the feasibility of the roadmap as planning queries are processed. T he first step of the algorithm is to build an initial roadmap with a vehicle node and a goal node. T he next step is the selection of the neighbor to be connected with a node in the roadmap. T hen it finds the shortest path between the vehi cle and the goal node. If the path is found, it checks if the path is free from obstacles Otherwise, it reports failure or starts the node enhancement step to add more nodes to the roadmap and search again. A t the node enhancement step, the algorithm generates a set of new nodes, insert s them in to the roadmap, and select s neighbors in the same way as when the roadmap was initially built. T o add the new nodes it selects a number of points, called seeds, in the roadmap and then randomly distributes a new point close to each of them. W hen


44 the planner has found a collisionfree path, it terminates and reports the path. F or the laziness, information about which nodes and edges have been checked for collision is stored in the roadmap, and as long as the obstacl es field remains the same, it uses the same roadmap when processing subsequent queries. A s several queries are processed, more and more of the roadmap will be explored, and the planner will eventually find paths via nodes and edges that have already have been checked for collision. This stored information makes the planner efficient. Potential Fields Another approach was proposed for robot path planning that consists of incrementally building a graph connecting the local minima of a potential field which was developed by Oussama Khatib [61] defined in the robot s configuration space and concurrently s earching this graph until a goal confi guration is attained [62] T his was an efficient approach that did not require any precomputation steps to be performed over the potential field function. T he process to construct num erical potential fields in the configuration space of a robot cons ists of two major steps. First potential fields, called W potentials are computed in the robot workspace. E ach W potential applies to a selected point in the robot, called a control point, and pulls this point toward its goal position by a function named the arbitration function. T he next step is to construct a good C potential in the configuration space. T he path search algorithm only considers the neighbors of a configuration that are collision free s o there may be two types of local minima: the natural minima (where the gradient is zero) and the minima located at the boundary of free configuration space. I n the potential field approach, obstacles are assumed to carry electric charges and the resulting scalar potential field is used to represent the free space. T his method


45 also can be used for local and global planner s [63] I n this research, the obstacles can be represented by specifying their surface or volumes and if a potential field representation of the free space is used, then the valleys of potential minima (MPV) define the Voronoi diagram. T he global planner in [63] selects the shortest path from the MPV graph between the start and the goal nodes with the minimum heuristic estimate of the chance of collision. O nce the global path is designated, the local planner starts by analyzin g the potential field in two ways. First, when the initial path and orientations involve potential collisions, the local planner modifies the robot s configuration to minimize that potential and attempt s to avoid the collision. Second, although the path and orientation are free of collisions, they may not be optimal in the sense of the shortest path and the minimum changes in orientation. T he local planner uses the potential field as a penalty function in a numerical algorithm that optimized the path length and orientation change. T hen if a solution is found, it will find further optimized solutions with the numerical algorithm. O ther Methods W hile p revious sections introduced several approaches to perform path planni ng for robots, many other interesting approaches still remain for the modifying path planning algorithm. T he research in [64] shows the path planning algorithm using the Vector Field Histogram (VFH) method. I t employs a twostage data reduction technique and three levels of data repres entation. T he highest level holds the detailed description of the robot s environment and this level is expressed with a 2S Cartesian histogram grid that is update d by range sensors. At the second level, a 1D polar histogram is constructed around the robot s location. The third level of data is the output of the VFH algorithm with data composed of the path for the drive and steer controllers of the robot.
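As an illustration of the second data level just described, the following is a minimal sketch of collapsing a Cartesian histogram grid into a 1-D polar histogram around the robot. The grid resolution, the number of sectors, and the distance weighting are assumptions of the sketch, not the parameters used in [64].

import math

def polar_histogram(grid, robot_row, robot_col, cell_size=0.1, sectors=72):
    """Collapse a 2-D certainty grid into a 1-D polar histogram of obstacle density.
    grid[r][c] holds the certainty value accumulated from the range sensors."""
    hist = [0.0] * sectors
    for r, row in enumerate(grid):
        for c, certainty in enumerate(row):
            if certainty <= 0:
                continue
            dy = (r - robot_row) * cell_size
            dx = (c - robot_col) * cell_size
            dist = math.hypot(dx, dy)
            if dist == 0.0:
                continue
            sector = int((math.atan2(dy, dx) % (2 * math.pi)) / (2 * math.pi / sectors))
            # Nearby, high-certainty cells contribute more obstacle density to their sector.
            hist[sector] += (certainty ** 2) / dist
    return hist

# Sectors whose density falls below a threshold are candidate free directions
# for the steering stage of the VFH algorithm.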


46 T he first step of the data reduction is to map a histogram at the first level onto the polar histogram at the second level. T he second datareduction stage computes the required steering angle. T he robot can determine the free sector, steering angle, and speed command with the histogram grid and polar histogram. A lthough this method does not generate a path, but it finds the traversable area and computes input commands. T he work described in [65] is based on the G rid D istance F ield (GDF), which is based on the W potential in [62] T he GDF has several features. F irst field strength changes linearly with path length from the goal. S econd obstacles are avoided by virtue of not falling on the trajectory path created by the gradient. N ext, the linear gradient is generated by a simple and fast algorithm that is linear based on the number of grid cells. L ast, the resulting paths do not have smoothly varying trajectories. T he path generation has two steps in this case. The GDF is computed and then a sequence of local waypoint s is extracted from the GDF. T he problem of finding the shortest path under the curvature constraint was first introduces in the pioneering work of Dubins [66] I n robotics, a Dubins car refers to a vehicle with a minimum radius that only moves forward. U sing this car, the work proposed in [67] determines a cost function, then uses it to compute individual trajectories T o do this, they introduced the Hamilton Jacobi formulation ( HJ equation) for path planning. T he optimal paths from an initial pose can be computed by tracing characteristic curves defined by the HJ equation back to the goal position. In addition the Dubins car concept is being applied to many robotics path planning areas. T he l ast useful and simple method is using a Bezier curve, a parametric curve frequently used in computer graphics that can be scaled in definitely. T hus the curve is


47 very useful to generate a smooth path. I n the paper that proposes a way to use this approach, the most important part is the selection of control points [68] T hey used a 4th order Bezier curve, so it needs five control points to generate a curve. U sually the first point and the last point for the curve are the vehicle s initial position and goal position, respectively. C ontrol points are selected that smoothly lie in the direction of the line connecting the start point and goal point. I f there is any obstacle on the generated path, the vehicle needs to generate a new one with a continuous curvature. I n this case, a new intermediate goal point is selected to find another path to avoid the obstacles. A fter that, it generates two curves. T he f irst curve uses the vehicle s position and the intermediate goal ; and the second one uses the intermediate goal and the original goal. T hen the algorithm connects the two curves to generate a complete path to avoid the obstacles. T he path planner using the Bezier curve is very simpl e and can work well but the selection of the control points is very important. Occupancy Grid Mobile robots are often designed to operate in environments that are unstructured and unknown. In these cases, the system has no a priori model or knowledge of its surroundings. Occupancy grid structures offer a means for a robot to map and rationalize this unknown space. Each cell in the array contains a probability estimate that identifies whether the space is occupied or empty. O ccupancy grids are mostly used to quickly check if there is any obstacle in some position. There are several ways to create the grid. T he research shown in [69] has proposed the two ways to build the grid: Least Median Squares (LmedS) and gradients. T he LmedS method uses a nonlinear optimization method to find the dominant plane in the point clouds and then uses this


48 plane to build the occupancy grid based on the distance of each point in the cloud from the dominant plane. T he gradient based method relies on the idea that local gradients in the point cloud data can be used to determine whether a particular cell is occupied by an obstacle T he f inal occupancy grid is generated by fusing two maps built using the two methods C ollision checking is performed by mapping the vehicle s shape onto a further occupancy grid and recursively t ransforming this shape along the predicted trajectory T he occupancy grid also can be used with probabilistic Velocity Obstacles (VO s) for dynamic obstacle avoidance [70] T he researchers suggested three steps to avoid obstacles. F irst is the cell to cell approach. E ach cell is considered occupied or nonoccupied, the velocity of the objects is already known, and the grid is relative to the robot. T he specified collision search space is reduced to the velocities reachable within the next time step with dynamic and kinematic constraints as a maximum acceleration value with a maximum and minimum velocity in each direction. S econd is to compute the probability of collision. E ach cell also stores a probabilistic estimation of its state. T he l ast step is to choose the control input. T he occupancy grid provided from the previous two steps gives a set of control inputs to the vehicle. T he obstacle s and the vehicles usually cannot be considered as invariant disk shapes. A ssuming the simple model of a rigid robot that is restricted to translation, th e configuration space obstacles can be expressed in terms of the Minkowski sum of the robot and obstacle shape [71] This approach decomposes the vehicle shape into disks and does the Minkowski sum. T hen the sum is decomposed into the axis aligned rectangles.
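The following is a minimal sketch of the kind of grid-based collision test described above, approximating the vehicle footprint by a single disk so that the Minkowski sum reduces to inflating each occupied cell by the disk radius. The grid resolution, occupancy threshold, and radius are illustrative assumptions.

import math

def collides(grid, trajectory, cell_size=0.25, robot_radius=1.2, occ_thresh=0.5):
    """Return True if any pose (x, y) along the trajectory comes within
    robot_radius of a cell whose occupancy probability exceeds occ_thresh."""
    pad = int(math.ceil(robot_radius / cell_size))
    for x, y in trajectory:
        row, col = int(y / cell_size), int(x / cell_size)
        for r in range(row - pad, row + pad + 1):
            for c in range(col - pad, col + pad + 1):
                if not (0 <= r < len(grid) and 0 <= c < len(grid[0])):
                    continue
                if grid[r][c] < occ_thresh:
                    continue
                # Distance from the pose to the center of the occupied cell.
                d = math.hypot((c + 0.5) * cell_size - x, (r + 0.5) * cell_size - y)
                if d <= robot_radius:
                    return True
    return False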


49 Control Strategies T he control task for an UGV involves the regulation of the vehicle to a predetermined motion command. T he goal is to minimize such errors as heading error s or lateral error s between the vehicle s state and a desired state. T his is implemented by commanding the vehicle plant inputs in a deliberate manner that is often specified by a mathematically defined function, or procedure. Many control methods have been developed for this purpose. T he research in [72] provides a number of examples of the control input for steering. T hey divide tracking methods into three major groups: g eometric tracking, tracking with a kinematic model, and tracking with a dynamic model. A g eometric vehicle model only needs the turning radius and wheel base distance. T he pure pursuit which is the method of geometric tracking is the most common approach to the path tracking problem for mobile robots. I t consists of geometrically calculating the curvature of a circular arc that connects the rear axle to a goal point on the path ahead of the vehicle T he goal point is determined from a look ahead distance A nother geometric tracking method is the Stanley method which is used by Stanford University s autonomous vehicle entry in the DARPA Grand Challenge. T his method is a nonlinear feedback function of the cross track error measured from the center of the front axle to the nearest path point. A he ading error is computed using the heading of the vehicle and the heading of the path at the nearest point. T he re sulting steering is a function in terms of the two errors. T he kinematic bicycle model is a common approximation used for robot motion planning, simple vehicle analysis and deriving intuitive control laws T he wheels are assumed to have no lateral slip an d only the front wheel is steerable. T he t wo previous models ignored the vehicle dynamics but that ignorance had a


50 negative impact on tracking performance as speeds were increased and path curvature varies. T o track the trajectory, the vehicle needs to o btain a reference steering angle and a reference speed for the next time interval to get back to the reference path ahead from a current deviated position. The path tracking can be separated in two ways: g lobal position feedback and separate steering and driving control [73] I f the path to be tracked is specified in Cartesian coordinates, g lobal position feedback computes the steering angle with the position error and heading error. S ince the guide point is on the center of the rear axle, steering and driving reference inputs are computed from the given path and vehicle speed, respectively in the geometric model. T he extension of [73] had been proposed in [74] T he path tracking problem is approached first by formulating a control problem with the consideration of characteristics of conventionally steered vehicles which uses three strategies. T he geometric path tracking pursues time history of position reduci ng the second problem to a steering problem. By locating the guide point on the center of the rear axle, the path determines only steering motion for tracking. T he l ast strategy is nested feedback control loops. T he steering angle can be computed by the st eering planner, the computed angle goes to the steering controller, and the controller will operate the steering by comparing the computed angle with the real current steering. I n a similar process the vehicle speed can be computed. Model predictive cont rol is an advanced technique often used to solve optimization problems under certain constraints. T he research shown in [75] presented a result


51 where a wheeled mobile robot was regulated to an arbitrary set point using a receding horizon controller. T hey dem onstrated that the vehicle stability could be achieved.
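As a concrete example of the geometric tracking methods surveyed above, the following is a minimal sketch of the pure pursuit steering law, which returns the front-wheel angle for the circular arc joining the rear axle to a look-ahead goal point. The coordinate convention, look-ahead distance, and wheelbase value are assumptions of the sketch.

import math

def pure_pursuit_steering(pose, path, lookahead=6.0, wheelbase=2.75):
    """pose = (x, y, heading) of the rear axle; path = non-empty list of (x, y) waypoints.
    Returns the steering angle in radians."""
    x, y, heading = pose
    # Pick the first path point at least one look-ahead distance away.
    goal = next((p for p in path if math.hypot(p[0] - x, p[1] - y) >= lookahead), path[-1])
    # Express the goal point in the vehicle frame (x forward, y to the left).
    dx, dy = goal[0] - x, goal[1] - y
    y_vehicle = -math.sin(heading) * dx + math.cos(heading) * dy
    ld = math.hypot(dx, dy)
    if ld < 1e-6:
        return 0.0
    curvature = 2.0 * y_vehicle / (ld * ld)      # curvature of the connecting arc
    return math.atan(curvature * wheelbase)      # front-wheel angle via the bicycle model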


52 CHAPTER 3 THEORE TICAL APPROACH Environment evaluation and path planning are challenging tasks for autonomous vehicles. T his dissertation presents novel approaches to these tasks by examining the e fficient storage of sensor information. T his chapter is divided into three functionality components : sensor evaluation, sensor storage, and path planning and control strategy. First, a component that is composed of multiple LADAR sensors called the Terrain Smart Sensor (TSS), evaluates the surrounding s of the vehicle and provides that information to the other components. In the case of urban area driving, one of the most important aspects is to find road boundar ies that the vehicle should not traver se. T here are many ideas to use properties of the LADAR beams and road boundaries. T he information sent from the TSS can be used immediately by other components and also be saved to help the path planner when the vehicle drives in the same area in the futu re. T he second section is about the sensor knowledge store component. T here are many ways to save the information, but the algorithm needs to consider efficient memory management since how far the vehicle will run is unknown. T his chapter introduces the Quadtree, a data structure algorithm for sensor knowledge store and how to save, update, and send the necessary information is also presented. Last the path planner based on the vector method is described. T he vector based planner works depending on the vehicle s behavior T here are several behaviors for the vehicle, roadway navigation that is the basic and most frequent behavior, npoint turn and parking. T he planner for each behavior runs using different approaches such as the Bezier curve and rapidly exploring random tree (RRT). This chapter explains these approaches generally


53 and the next chapter applies the description to implementation on an unmanned ground vehicle. LADAR Terrain Evaluation Traversability Grid Map T he surrounding environment of an outdoor autonomous vehicle is highly unstructured and dynamic. T he environment consists of natural obstacles that include positive obstacles such as trees and rocks, negative obstacles such as cliffs and ditches, and road boundaries such as curbs, traffic lines, and intersections. O ther obstacles can include vehicles and pedestrians. T he classic method for terrain evaluation u ses a 2D Traversability Grid representation (Figure 3 1) that is always oriented in a north e ast dir ection with specific number of rows and columns, specific size, and specific resolution. E ach cell of the grid map has a traversability value assigned by evaluating the points collected in the cell. T he lower value represents the lower traversable probabil ity and vice versa T he representation is not only able to distinguish highly nontraversable obstacles but also able to notice differences in the terrain to the vehicle, which helps the vehicle to avoid the obstacles and to adjust its speed to pass throug h rough terrain. The procedure for this grid map is as follows : 1. Transform the distance data reported from the each LADAR sensor to the global coordinate system using yaw, pitch, and roll angles of the vehicle reported from a component equipped with a GPS and an NFM. 2. Assign each transformed point into the cell with corresponding row and column. 3. Evaluate each cell by mean height, plane slope, etc. of points assigned into the cell. 4. Determine the traversability value using the predefined properties.
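A minimal sketch of steps 2 through 4 of this procedure follows, assuming the points have already been transformed into the global frame (step 1). The grid dimensions, the use of height variance as the cell evaluation, the 2-12 traversability range, and the thresholds are illustrative assumptions, not values taken from the implementation.

def build_traversability_grid(points, origin, rows=121, cols=121, res=0.5):
    """points: iterable of (x, y, z) in the global frame; origin: (x, y) of the grid corner.
    Returns a rows x cols grid of traversability values."""
    cells = [[[] for _ in range(cols)] for _ in range(rows)]
    for x, y, z in points:                               # step 2: bin points into cells
        r = int((y - origin[1]) / res)
        c = int((x - origin[0]) / res)
        if 0 <= r < rows and 0 <= c < cols:
            cells[r][c].append(z)
    grid = [[7 for _ in range(cols)] for _ in range(rows)]   # 7 = unknown / neutral
    for r in range(rows):
        for c in range(cols):
            zs = cells[r][c]
            if not zs:
                continue
            mean = sum(zs) / len(zs)                     # step 3: evaluate the cell
            var = sum((z - mean) ** 2 for z in zs) / len(zs)
            if var > 0.05:                               # step 4: map the evaluation to a value
                grid[r][c] = 2                           # rough / non-traversable
            elif var > 0.01:
                grid[r][c] = 5                           # marginal
            else:
                grid[r][c] = 12                          # smooth, traversable
    return grid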


54 T he circular buffer implementation of the Traversability Grid is very efficient in updating the grid to the new position as the vehicle moves. T he main advantage of using a circular buffer in place of a 2D array is that for a 2D array, as the vehicle moves for every new position of the grid, data from the cells in the old grid are copied into the corresponding cell indices in the new grid, but an expensive computing operation of copying the grid data every time can be avoided by using a circular buffer. Fi gure 32 shows the change in the grid position due to the movement of the vehicle. As shown in the figure, all the data corresponding to the overlapping cells in the two grids have to be copied from the old grid position to the new grid position for a 2D array representation. The circular buffer implementation of the Traversability Grid avoids this computationally expensive operation by stor ing the data in the grid as a 1D array of size equal to the number of cells in the grid. Figure 33 shows this procedure. Mapping in Urban Areas The aforementioned classic grid map approach had worked well for the 2005 DARPA Grand Challenge that was for off road competition, but it is deficien t for driving in an urban area. One problem is how to detect low height road boundaries such as curbs. Usually, the difference between the mean height of the cell with a curb boundary and road cells is not large enough to be distinguished. For this reason, the component needs extra functionality called the vector based edge finder to detect those road boundary features. Figure 34 shows the TSS component with the Edge Finder functionality. T he procedure of this functionality is as follows: 1 The data points from the LADARs that can evaluate the terrain are transformed into a global coordinate system. 2. The data points are separated into three groups ( road, road boundary, and linear object features ) by comparing each point with neighboring points.


55 3. The traversability value of the cell corresponding to each point is adjust ed depending on the confidence value of the point for each group. For example, if the point has a high confidence value for the road boundary feature, the cell should be adjusted to a non traversable value. T here could be noise from the sensors, so the a lgorithm is likely to misclassify the noisy data. T he risk of completely trust ing the classification is obvious so the confidence value is introduced. T hree different methods grid based, height var iance, and slope between points are used to find the road boundaries. I n the case where all three methods classify into the same group, the point has a high confidence value for the group. T he information about the classification is converted onto the grid map and sent to the other component via JAUS messages to be used immediately or to be saved in the Sensor Knowledge Store for the future. Sensor Knowledge Store This section describes the Quadtree algorithm which is one of the several possible methods that can be used to store an unk nown amount of terrain data. T he Quadtree algorithm is often used for visualizing large terrain. The Triangulated Irregular Network (TIN) method was recently developed but it has difficult y executing geographical operations such as neighbor finding, searching, and updating. T he purpose of using the Quadtree algorithm is to show an easy way to implement these operations. The use of a Quadtree, where each node in the structure has four children, is used to an advantage for geographic data storage I f necessary the geographic region can be divided into four subregions, to provide the resolution needed to effectively map or describe the environment. Large sections of contiguous terrain do not need to be divided, as increased geographical resol ution is not needed in these regions. Detailed


areas can be divided into four regions repeatedly to obtain the resolution needed to accurately describe the environment. In this way, arbitrarily fine resolution is theoretically attainable, while the memory required to describe the environment is minimized. Chapter 4 describes how the Quadtree approach can be used to store the data from multiple sensors, such as LADARs, GPS, and vision sensors, to model a large driving area.

The primary reason for implementing this sensor knowledge store was to make path planning faster when the vehicle drives in an area that has already been evaluated. For example, the sensors on the vehicle can estimate the environment only over limited distances, i.e., 20 m ahead for terrain evaluation and 30 m for obstacle detection, which means the path planner can make decisions based only on this short range. Using the stored sensor knowledge, the planner can extend its range to longer distances unless there is a moving obstacle. This component can also help generate intermediate waypoints between two waypoints, since it already knows the environment around them.

A Quadtree is a tree data structure in which each internal node has up to four children, representing the NE, NW, SW, and SE quadrants. A Quadtree is most often used to partition a two-dimensional space by recursively subdividing it into four quadrants or regions; in this study the regions are square. The Quadtree is able to store any type of data, including double, integer, or char. There are two ways to implement the Quadtree algorithm: top-down and bottom-up. In this component the bottom-up method is used because the size of the area that the vehicle has to traverse is unknown; using this method, the area covered by the Quadtree can be built up as needed. Figure 3-5 shows how the Quadtree is implemented. A larger quadrant is a node at a higher hierarchical level of the Quadtree; smaller quadrants appear at lower levels. The advantage of this structure is that regular decomposition provides simple and efficient data storage, retrieval, and processing. As shown in Figure 3-5, if the (n-1)th level has a 0.25 m x 0.25 m resolution, then the nth level has a 0.5 m x 0.5 m resolution.
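As a concrete illustration of the bottom-up storage scheme just described, the following is a minimal sketch (not the dissertation's actual implementation) of a point-region Quadtree whose root is enlarged whenever a point arrives outside the current coverage area. The class names, the fixed leaf resolution, and the stored cell value are illustrative assumptions.

class QuadNode:
    """A square region [x0, x0+size) x [y0, y0+size) with up to four children (NE, NW, SW, SE)."""
    def __init__(self, x0, y0, size):
        self.x0, self.y0, self.size = x0, y0, size
        self.children = {}      # quadrant name -> QuadNode
        self.value = None       # payload stored at leaf resolution (e.g., traversability)

    def contains(self, x, y):
        return self.x0 <= x < self.x0 + self.size and self.y0 <= y < self.y0 + self.size

    def quadrant(self, x, y):
        half = self.size / 2.0
        east = x >= self.x0 + half
        north = y >= self.y0 + half
        name = ('N' if north else 'S') + ('E' if east else 'W')
        cx = self.x0 + (half if east else 0.0)
        cy = self.y0 + (half if north else 0.0)
        return name, cx, cy, half

class QuadTree:
    LEAF_SIZE = 0.25            # assumed finest resolution in meters

    def __init__(self, x0=0.0, y0=0.0, size=0.25):
        self.root = QuadNode(x0, y0, size)

    def insert(self, x, y, value):
        # Bottom-up growth: wrap the root in ever larger parents until the point is covered.
        while not self.root.contains(x, y):
            old = self.root
            grow_east = x >= old.x0
            grow_north = y >= old.y0
            px = old.x0 if grow_east else old.x0 - old.size
            py = old.y0 if grow_north else old.y0 - old.size
            parent = QuadNode(px, py, old.size * 2.0)
            # The old root occupies the quadrant of the new parent that matches its own region.
            quad = ('S' if grow_north else 'N') + ('W' if grow_east else 'E')
            parent.children[quad] = old
            self.root = parent
        # Top-down descent to the leaf cell, creating children as needed.
        node = self.root
        while node.size > self.LEAF_SIZE:
            name, cx, cy, half = node.quadrant(x, y)
            if name not in node.children:
                node.children[name] = QuadNode(cx, cy, half)
            node = node.children[name]
        node.value = value      # e.g., overwrite or fuse the traversability estimate

# Example: storing one evaluated point grows the tree until (12.3, -4.7) is covered.
tree = QuadTree()
tree.insert(12.3, -4.7, 7)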


Vector-based Path Planner

A novel path planning method based on vectors gives the current grid-map-based planner the opportunity to find a better path. The current planner uses different approaches depending on the vehicle's behavior. First of all, a predefined speed profile is developed. The vehicle knows the course through which it has to pass from the Route Network Definition File (RNDF), so the algorithm can define the best speed profile for the vehicle to follow. Several elements, such as road curvature, the existence of obstacles, stop signs, and road roughness, determine the speed on each segment. While the vehicle is in roadway navigation behavior, the new planner can find the optimal path more easily by assuming that waypoints lie on the center of the traffic lane. Intuitively, if there is no obstacle and the road is smooth, an optimal path could be the path connecting each waypoint. From this optimal path, other candidate paths are generated by offsetting it laterally to the left and right in constant step distances. There are at least two difficult path planning problems in urban driving: the n-point turn and parking. Previous research in [76] based on A* for the DARPA Urban Challenge (DUC) worked very well for these problems. To improve on that algorithm, a new algorithm, the Rapidly-exploring Random Tree (RRT), is introduced for this problem. The motion planning problem for nonholonomic platforms in the environment starts


58 from the first step that defines the 2D configuration space. Random nodes to be used for RRT are generated in the space. Then the algorithm expands the tree without collision and if it is succes s ful in finding a solution following the goal criterion, the path will be smoother. Bezier Curve Fitting Bezier curves were widely publicized in 1962 by the French engineer Pierre Bezier, who used them to design automobile bodies. T he curves were first developed in 1959 by Paul de Casteljau using de Csteljau s algorithm, which become a numerically stable method to evaluate Bezier curves. A Bezier curve is a parametric curve that is sometimes used to generate a smooth path for autono mous vehicle path planning and related fields. In vector methods, Bezier curves are used to model smooth curves that can be scaled indefinitely. Vector based paths are not bound by the limits of rasterized images and can be intuitive ly modified. Bezier cur ves are also used in the time domain, particularly in animation and interface design, e.g., a Bezier curve can be used to specify the velocity over time of an object such as an icon moving from A to B, rather than simply moving at a fixed number of pixels per step. T he vector based path planning uses this concept T he curve can be implemented at different orders and requires (n + 1) control points for each order, where n is the order of the curve. Quadratic and cubic Bezier curves are most common and this dissertation also implements these two curves, since the higher the degree of curves, the more expensive it is to implement. A common adaptive method is recursive subdivision, in which a curve s control points are checked to see if the curve approximates a line segment to within a small tolerance. I f not, the


curve is subdivided parametrically into two segments that meet at an intermediate parameter value, and the same procedure is applied recursively to each half. The formula of the generalized Bezier curve is

B(t) = \sum_{i=0}^{n} b_{i,n}(t) P_i = \sum_{i=0}^{n} \binom{n}{i} t^{i} (1-t)^{n-i} P_i, \quad t \in [0,1]   (3-1)

where b_{i,n}(t) is known as a Bernstein basis polynomial of degree n. The rational Bezier curve adds adjustable weights to provide closer approximations to arbitrary shapes. The numerator is a weighted Bernstein-form Bezier curve and the denominator is a weighted sum of Bernstein polynomials; rational Bezier curves can be used to represent segments of conic sections exactly. Given n + 1 control points P_i with weights w_i, the rational Bezier curve can be described by

C(t) = \frac{\sum_{i=0}^{n} b_{i,n}(t)\, w_i P_i}{\sum_{i=0}^{n} b_{i,n}(t)\, w_i}   (3-2)

Both normal and rational curves are used for this dissertation.
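The following is a minimal sketch of evaluating Equation 3-1 with de Casteljau's algorithm, the numerically stable evaluation method mentioned above. The function names, the use of 2-D points, and the sample control points are illustrative assumptions.

def de_casteljau(control_points, t):
    """Evaluate a Bezier curve defined by (n+1) 2-D control points at parameter t in [0, 1]."""
    points = [(float(x), float(y)) for x, y in control_points]
    # Repeatedly interpolate between neighboring points until a single point remains.
    while len(points) > 1:
        points = [((1.0 - t) * x0 + t * x1, (1.0 - t) * y0 + t * y1)
                  for (x0, y0), (x1, y1) in zip(points, points[1:])]
    return points[0]

def sample_bezier_path(control_points, samples=50):
    """Discretize the curve into a list of waypoints, e.g. for a candidate local path."""
    return [de_casteljau(control_points, i / float(samples)) for i in range(samples + 1)]

# Example: a cubic (3rd-order) curve from the vehicle position toward a goal,
# with two interior control points chosen along the start and goal headings.
path = sample_bezier_path([(0.0, 0.0), (5.0, 0.0), (10.0, 3.0), (15.0, 3.0)])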


Rapidly-exploring Random Tree (RRT)

Because the robotic path planning problem has received a considerable amount of attention, many path planning methods have been proposed. The problem of driving through a complex environment is embedded in, and essential to, almost all car-like robotics applications. Since an urban area is usually a dynamic environment, moving obstacles have to be considered when planning. The motion planning problem for nonholonomic platforms in the environment starts from the first approach that defines the 2-D configuration space. This research introduces the Rapidly-exploring Random Tree (RRT) described in [77]. RRTs are constructed incrementally in a way that quickly reduces the expected distance from a randomly chosen point to the tree. Figure 3-6 shows an RRT after 1000 iterations without nonholonomic constraints; the red dot in the center of the figure represents the initial state of the tree. RRTs are particularly suited for path planning problems involving obstacles and multiple constraints. Usually an RRT alone is insufficient to solve a planning problem, so the algorithm is better considered as one part of a planner that incorporates additional constraints and methods. RRTs have several good properties:

1. The expansion of an RRT is biased toward unexplored portions of the state space.
2. The distribution of vertices in an RRT approaches the sampling distribution, leading to consistent behavior.
3. An RRT is probabilistically complete under very general conditions.
4. The RRT algorithm is relatively simple.
5. An RRT always remains connected, even though the number of edges is minimal.
6. An RRT can be considered as a path planning module that can be adapted and incorporated into a wide variety of planning systems.
7. Entire path planning algorithms can be constructed without requiring the ability to steer the system between two prescribed states.

Figure 3-7 shows how nodes in the tree are expanded: r represents a generated random point, x_near is the node in the tree closest to the random point, and x_new represents the new node expanded from x_near toward r. The approach used to adapt this algorithm to urban driving for special behaviors, such as parking and n-point turns, is discussed in the next chapter. A sketch of the basic tree-expansion step appears below.
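This is a minimal sketch of the expansion step illustrated in Figure 3-7. The fixed step size, the Euclidean distance metric, the sampling bounds, and the placeholder collision check are assumptions, and no nonholonomic constraint is applied, matching the unconstrained example of Figure 3-6.

import math, random

def rrt(start, goal, is_free, step=1.0, goal_tol=1.5, max_iter=2000, bounds=(0.0, 100.0)):
    """Grow a tree from start toward goal; return the path as a list of (x, y) points."""
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iter):
        r = (random.uniform(*bounds), random.uniform(*bounds))       # random sample
        i_near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], r))
        x_near = nodes[i_near]
        d = math.dist(x_near, r)
        if d == 0.0:
            continue
        # x_new: move a fixed step from x_near toward the random point r.
        x_new = (x_near[0] + step * (r[0] - x_near[0]) / d,
                 x_near[1] + step * (r[1] - x_near[1]) / d)
        if not is_free(x_near, x_new):                               # skip edges in collision
            continue
        nodes.append(x_new)
        parent[len(nodes) - 1] = i_near
        if math.dist(x_new, goal) < goal_tol:                        # goal criterion reached
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return list(reversed(path))
    return None

# Example with no obstacles: every candidate edge is treated as collision-free.
path = rrt((5.0, 5.0), (90.0, 90.0), is_free=lambda a, b: True)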


Figure 3-1. Traversability Grid Map
Figure 3-2. Grid Map movements


Figure 3-3. Circular Buffer
Figure 3-4. TSS diagram with Edge Finder


Figure 3-5. Hierarchical data structure Quadtree
Figure 3-6. RRT graph after 1000 iterations


Figure 3-7. RRT node expansions


65 CHAPTER 4 IMPLEMENTATION DETAI LS This chapter introduces the technical details of the current implementation of the components for the research in this dissertation. T he final objectives of the research are better sensing performance, effective and feasible storage of sensor data, and a n ovel path planning method using the sensor information for urban area driving for the autonomous vehicles. The present resources at CIMAR made this goal a reality in terms of software development and the ability to test the method on a reliable robotic tes t platform. First of all, the most recent robotic ground vehicle platform designed and built at CIMAR, the Urban NaviGator is described and hardware equipment on the vehicle is briefly explained Next, the software component to implement the components is discussed in detail. Urban NaviGator The Urb an Navigator, shown in Figure 11, is a fully autonomous sport utility vehicle that was designed and developed at CIMAR for the 2007 DARPA Urban Challe nge robotics vehicle competition and served as the robotic test platform for the implementation and testing of the components explained in the previous chapter. T he robot was built on a 2006 Toyota Highlander Hybrid (THH) chassis that was heavily modifie d to meet the requirements of the competition. T he point of the hybrid vehicle is to autonomously run on its internal electric power train and/or its internal combustion engine. Architecture The system architecture of the Urban NaviGator is shown in Figure 4 1 and is compo sed of the four main elements: p erception, p lanning, i ntelligence, and control.


66 The p erception element contains all the sensor hardware and software components that provide data about the surrounding environment to the rest of the system. This includes three GPS units, eight LADAR sensors, four vision sensors, and the software components t o perform such tasks as understanding the vehicle s position, detecting obstacles, estimating terrain, and traffic lane detection. The p lanning element runs using two files: the Mission Data File (MDF) and the Route Network Definition File (RNDF). T he MDF defines the locations the robot has to visit and the order in which the robot has to visit them. T he RNDF defines the entire drivable environment as a set of GPS based waypoints and check points to plan highlevel and midlevel trajectories. The midlevel plans are provided to the low level motion planning algorithm to calculate the goal point at which it aims its trajectory. The i ntelligence element is responsible for collect ing and analyz ing information about the vehicle, the environment, the status of the mission, and the available operation behaviors to best choose the next course of action for the vehicle. Lastly, the c ontrol element executes low level motion planning, which generates the control inputs implemented by the vehicle to navigate through its environment avoiding static and moving obstacles. Hardware The vehicle is equipped with a comprehensive sensor package used for localization, terrain estimation, and obstacle detection. This package includes three SICK LMS 291 S05 LADAR sensors, three SICK LMS 291 S14 LADAR sensors and two SICK LD LRS 1000 long range LADAR sensors for obstacle detection and terrain estimation. Also four Matrix Vision BlueFox USB color cameras are used to detect the traffic line and find the path. A Novatel GPS and two Garmin GPSs are combined with a


67 GE Aviation North Finding Module (NFM) for vehicle localization. The NFM estimates the vehicle s position and orientation in its global frame using Kalman filter estimation. F or all data processing, a computing package in the Urban NaviGator and a laptop, main computing unit, are used. The computers on the vehicle are powered by an ATX motherboard, AMD X2 46000 processors, and Ubuntu 8 04 and Windows XP operating system s. T he main laptop has an i7 core processor and Windows 7 operating system. A ll the computers can transfer data through two gigabit Ethernet network switches. Actuation of the Urban NaviGator is facilitated by several methods. For steering, an Animatics SmartMotor is attached to the steering column and receives position commands that are associated with the steering commands coming from the motion planning algorithm. Throttle and brake automation is enabled through the use of the existing driveby wire system already implemented on the vehicle. A custom controller was designed and incorporated to pass throttle and brake commands to the vehicle when it is in an fully autonomous state. Road Boundary Detection O ne of the reasons that the autonomous navigation is still a considerable chal lenge is the difficulty of describing the environment to the robot in a way that captures the variability of natural environments. For example, there are many different terrain conditions such as short or tall grass, forested areas, and trees, that are naturally described as having a 3D texture rather than a smooth surface. I n contrast to off road driving, in urban driving there are many more ground conditions in which vehicles can or can not traverse. For example, the curb which is the boundary of the street is a nondrivable condition, but for off road driving the curb is a


68 drivable area because it has little or no height. Thus urban driving needs more precise evaluations than off road driving. In this section, three ways to detect road boundaries of urban areas are introduced: the peak detector method, the slope variance method, and the grid based method. F or thes e methods, four LADAR sensors, T errain, N egative, driver side V ertical F an (VF), and passenger side VF are used shown as Figure 42. Peak Detector P eak detection, for onedimensional or multidimensional signal s, is often encountered in signal processing. Finding peaks in experimental data is a very common computing activity and many techniques have been established. In this dissertation, a 1 D signal is used along with a method that is based on a SavitzkyGolay smoothing filter [78] This peak finding technique us es the zero crossing derivatives of the smoothing, locally fit, Savitzy Golay polynomials. This is a very fast peak detector because the Savitzy Golay smoothing algorithm can be slightly altered to directly report the first derivatives. T he input signal for this method is the variance of the z height of the point clouds. T he variance of the points is computed in a group of five neighbor points that is composed of itself, the previous two points and the next two points. I n this dissertation, the group is named a unit T he important thing is that in the unit there could be invalid data points that have longer distances than the maximum distance of the sensors, so the algorithm needs to filt er these points out. One unit for the other two methods is also defined like this. First, the algorithm needs to transfer the LADAR beam into the proper coordinate frame, depending on the type of the sensor. T he input data should be in an x y


coordinate system, where y is the z height of the data points. For the Terrain and Negative sensors, the x values of the data points in the vehicle frame can be used directly, since those sensors are fixed toward the front of the vehicle. However, the VFs sweep with smart motors, so the x value of each input point has to be derived from its vehicle-frame coordinates using the smart motor position shown in Figure 4-3:

x_i = x_v \cos(\phi) - y_v \sin(\phi)   (4-1)

where \phi is the VF's angle. Once the transformation is complete, the input data can also be used for the other two methods.

The second step of this method is to smooth the first derivative of the y values, which are the z heights of the data points. The first derivative, which represents the average slope across three adjacent points, can be expressed simply as

d_i = \frac{y_{i+1} - y_{i-1}}{x_{i+1} - x_{i-1}}   (4-2)

At this point extra grouping is necessary to find local maxima. Each group has a specific number of points, 15 in this dissertation, which is the group width; the number of points should be odd so that the mid point is the center of the group. The fast smoothing is operated on each group as follows:

s_i = \frac{1}{2m+1} \sum_{j=-m}^{m} d_{i+j}   (4-3)

where m is the half width of the group.


The last step is a least-squares curve fit for the groups of points whose smoothed first derivative is higher than the slope threshold value and whose computed y value is higher than the amplitude threshold value. The curve fitting is a 2nd-order polynomial fit:

y = a x^{2} + b x + c   (4-4)

Once the coefficients a, b, and c are computed, the peak height, position, and width can be found from the quadratic coefficients and the equation for a Gaussian peak:

h = \exp\left( c - a \left( \frac{b}{2a} \right)^{2} \right)   (4-5)

x_p = -\frac{b}{2a}   (4-6)

w = \frac{2.35703}{\sqrt{2}\,\sqrt{-a}}   (4-7)

where h, x_p, and w represent the peak height, position, and width, respectively, and 2.35703 comes from the formula for a Gaussian peak. The height of the peak is necessary to distinguish road boundaries from vertical obstacles such as cars and walls: road boundaries have relatively low peak heights compared to vertical obstacles. Determining the positions of the peaks simply lets the vehicle know where the edges are. In addition, the width of a peak supplies the range over which the detection can be considered an edge; the width is needed when formulating a confidence value for the detected road boundary. The confidence value rises as more sensors detect edges at the same position using multiple methods, and to match an edge detected by this method with an edge found by the other methods, some offset around the edge is necessary, since the two points are rarely identical. For this reason, the width of the peak is added to the confidence value of the peaks. These procedures, applied to sample data, are shown in Figure 4-4.
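A minimal sketch of the peak-detection pipeline of Equations 4-2 through 4-7 follows. The array handling, the threshold values, and the choice to fit the quadratic to the logarithm of the signal (implied by the exponential in Equation 4-5) are assumptions of the sketch, not details taken from the implementation.

import numpy as np

def find_peaks(x, y, m=7, slope_thresh=0.15, amp_thresh=0.05):
    """Return (position, height, width) tuples for peaks in the signal y(x).
    x: horizontal coordinate along the scan, y: signal (e.g. z-height variance)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Equation 4-2: central-difference slope over three adjacent points.
    d = np.zeros_like(y)
    d[1:-1] = (y[2:] - y[:-2]) / (x[2:] - x[:-2])
    # Equation 4-3: moving-average smoothing with half-width m (group width 2m+1).
    kernel = np.ones(2 * m + 1) / (2 * m + 1)
    s = np.convolve(d, kernel, mode='same')
    peaks = []
    # A peak candidate is a downward zero crossing of the smoothed derivative
    # with sufficient slope and amplitude.
    for i in range(1, len(s)):
        if s[i - 1] > 0.0 >= s[i] and abs(s[i - 1] - s[i]) > slope_thresh and y[i] > amp_thresh:
            lo, hi = max(0, i - m), min(len(y), i + m + 1)
            # Equation 4-4 fitted to log(y) so that Equations 4-5 to 4-7 apply.
            a, b, c = np.polyfit(x[lo:hi], np.log(np.clip(y[lo:hi], 1e-9, None)), 2)
            if a >= 0.0:
                continue                                        # not a local maximum
            pos = -b / (2.0 * a)                                # Equation 4-6
            height = np.exp(c - a * (b / (2.0 * a)) ** 2)       # Equation 4-5
            width = 2.35703 / (np.sqrt(2.0) * np.sqrt(-a))      # Equation 4-7
            peaks.append((pos, height, width))
    return peaks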

These procedures are shown with sample data in Figure 4-4. The top three graphs are the z-height road profile, the standard deviation of five points of z-height, and the smoothed z-height using equation (4-3). The fourth figure from the top shows the curve fitted with equation (4-4) to the points inside the blue circle. The bottom figure shows the final result.

Slope Variance Method

Humans can detect most road boundaries by changes in height, color, or material. The LADAR sensors used in this dissertation, however, can only detect changes in height from the road profile; the other changes can only be detected by vision sensors. So the basic concept for detecting road boundaries starts with detecting changes in height. One thing that should be considered is that misclassification can occur when trying to detect changes in absolute height, since there will be uphill and downhill sections of road that the vehicle has to traverse. For this reason, detecting dramatic changes in the slope between the point-cloud points is very helpful for handling these variations in elevation.

This method, which uses the variance of the slope as shown in Figure 4-5, is very simple but works very well. In the figure, the red dots represent the points hitting the curb area and the black arrows are the orientations of the slope between two points. The slope S_i of each point is computed relative to the previous point. All the points are in the coordinate frame obtained in the previous section on peak detection, and the slope is calculated as

S_i = (z_i - z_{i-1}) / (x_i - x_{i-1}).  (4-8)

Filtering the invalid points out is an important step. It is also highly likely that using a single threshold value for the slope would result in misclassification. For example, if the road has a gradually declining (or inclining) profile, the vehicle can still drive on the road; however, if the road slope is higher than the threshold, it could be classified as a non-traversable edge.
The variance of the slope within a unit, as defined in the previous section, is used to prevent this. The variance indicates how different the slopes of the points in the unit are. A higher variance usually arises from rough terrain or from a unit containing LADAR beams hitting road boundaries such as curbs. The variance of the unit is expressed as

sigma^2 = (1/n) * sum_{i=1}^{n} (S_i - S_bar)^2,  (4-9)

where S_bar is the mean of the slopes in the unit and n is the number of slopes, which is one less than the number of points in the unit. What if all the points in the unit hit objects such as walls or curbs rather than the road? The variance could then be less than the threshold value and the points could be classified incorrectly. This can be prevented simply by introducing another threshold value for the absolute slope of points that are to be considered part of the road group.
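The classification of a single unit can be sketched as follows; the point values and both thresholds are hypothetical, and the production code additionally filters out-of-range returns before forming the unit:

#include <cmath>
#include <cstdio>
#include <vector>

struct Point2D { double x; double z; };   // beam hit in the transformed frame

// Classify one five-point unit as boundary or road using the slope variance of
// equation (4-9) plus the absolute-slope check discussed in the text.
static bool unitIsBoundary(const std::vector<Point2D>& unit,
                           double varThresh, double slopeThresh) {
    std::vector<double> slope;
    for (size_t i = 1; i < unit.size(); ++i) {
        double dx = unit[i].x - unit[i - 1].x;
        if (dx == 0.0) continue;                                // skip invalid returns
        slope.push_back((unit[i].z - unit[i - 1].z) / dx);      // equation (4-8)
    }
    if (slope.empty()) return false;

    double mean = 0.0;
    for (double s : slope) mean += s;
    mean /= slope.size();

    double var = 0.0;
    for (double s : slope) var += (s - mean) * (s - mean);      // equation (4-9)
    var /= slope.size();

    // High variance: rough terrain or a beam crossing a curb edge.
    if (var > varThresh) return true;

    // Low variance but every slope steep: the whole unit sits on a wall or curb.
    for (double s : slope)
        if (std::fabs(s) < slopeThresh) return false;
    return true;
}

int main() {
    // A unit whose last two returns climb a curb (hypothetical values).
    std::vector<Point2D> unit = {
        {5.00, 0.00}, {5.25, 0.01}, {5.50, 0.00}, {5.60, 0.12}, {5.65, 0.20}
    };
    std::printf("boundary: %s\n", unitIsBoundary(unit, 0.2, 0.5) ? "yes" : "no");
    return 0;
}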

Grid-based Method

The road boundaries can be detected using the previous two methods, but this may not be enough, and there can be noisy values. To minimize noisy values and increase the confidence value, a new method, the grid-based method, is introduced. This grid-based method is different from the classic fixed-size grid map method: the grid size changes with the number of valid LADAR beams and the total width that each sensor covers. This method can also use the point clouds computed in the previous sections. Experiments show that the average of the total width of the LADAR beams is enough to determine the grid size of each cell for the sensors on the roof, since the lengths of the beams of these sensors in the x direction are equal when there are no external factors such as yaw, pitch, or roll angle. However, the VFs need a variable cell size depending on the length of the beams: the longer the beam, the wider the necessary cell, and vice versa. Each grid size is computed with a simple formula,

g_i = k r_i θ,  (4-10)

where r_i is the length of beam i, θ is the angle between two points, and k is a width-tuning constant that makes the sum of the grid sizes equal to the total width of the LADAR beams. Once the grid size is decided, each point falls into a corresponding cell according to its position and the detection process starts. This method uses the number of points, the maximum height difference, and the slope change rate of the points in each cell. The points in cells that lie on road features are usually distributed laterally, so such a cell has at most two or three data points with a low height difference and slope change rate. The points in cells that lie on road boundaries (or linear obstacles) are distributed vertically, however, so such a cell has more than three points with a higher height difference and slope change rate than the road-feature cells have. Based on these properties, cells containing beams that hit road boundaries can be distinguished from drivable areas. Figure 4-6 shows examples of this method for the sensors on the roof (upper) and a VF (lower). As shown in the figure, the width of each cell is constant across all beams for the sensors on the roof, but the width of each cell varies depending on the beam length for the VF sensors.
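A small sketch of the cell sizing and the per-cell decision is shown below. The arc-length form used for equation (4-10), the beam ranges, and the classification thresholds are all assumptions made for illustration:

#include <cstdio>
#include <vector>

static const double kPi = 3.14159265358979323846;

int main() {
    // Hypothetical beam ranges (m) of one VF sweep and the angular spacing.
    std::vector<double> range = {4.0, 4.1, 4.3, 7.8, 8.0, 8.1};
    const double dTheta = 0.5 * kPi / 180.0;   // 0.5 deg between beams
    const double totalWidth = 0.20;            // lateral width covered by the sweep (m)

    // Raw widths r_i * dTheta, then scale by k so they sum to the total width,
    // as in equation (4-10).
    std::vector<double> width(range.size());
    double sum = 0.0;
    for (size_t i = 0; i < range.size(); ++i) {
        width[i] = range[i] * dTheta;
        sum += width[i];
    }
    const double k = totalWidth / sum;         // width-tuning constant
    for (double& w : width) w *= k;

    for (size_t i = 0; i < width.size(); ++i)
        std::printf("cell %zu width = %.3f m\n", i, width[i]);

    // Per-cell decision idea: a road cell holds only two or three laterally
    // spread returns with a small height spread, while a boundary cell collects
    // more returns stacked vertically with a larger height spread.
    int    hitsInCell   = 5;      // counted while binning the returns (example)
    double heightSpread = 0.18;   // max z minus min z inside the cell (m)
    bool boundary = (hitsInCell > 3) && (heightSpread > 0.10);
    std::printf("cell flagged as boundary: %s\n", boundary ? "yes" : "no");
    return 0;
}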

After extracting such road boundaries as curbs, vertical objects, or walls using these three methods, a confidence value for the edge is assigned to each point. Then the three confidence values are applied to the cell of the grid map corresponding to the point, and the traversability values of the cells are modified depending on the confidence value.

Sensor Knowledge Store

This section describes how the Quadtree approach is used to store the data from the sensors to create an accurate model of the operating environment. The algorithm is flexible and allows for the storage of virtually any data type; examples include the mapping of traffic lines, obstacles, and terrain elevation. In the next chapter, experimental results are provided that show the operation of the algorithm in realistic driving environments. The Quadtree component is composed of the Quadtree Generator and the Quadtree Extractor, as shown in Figure 4-7.

Storage of Sensor Information

For autonomous driving it is necessary to store and integrate information from several sensors, such as the obstacle detection sensor, the terrain estimation sensor, and the traffic line detection sensor. This information is placed into the appropriate node of the Quadtree corresponding to the vehicle position obtained from GPS and inertial sensors. All the sensor information is reported from the other components at an update rate different from the Quadtree generation rate, so this component could miss reported data. To prevent this, the component has a queue, so incoming data are queued in order. The Quadtree is able to store all the information from the various sensors. If the point to be stored is within the range the Quadtree covers, the procedure takes two steps, as follows:
1. The leaf node corresponding to the new information has to be found. To find this node, the algorithm traverses at most h (the height of the tree) nodes from the root node until reaching the leaf node. In order to traverse to the leaf node, the quadrant (NE, NW, SW, or SE) of the child of the current node to which the point must move is decided by comparing the center of the current node with the position of the data point to be saved, and the center of that child is then computed. The centers and positions could be stored to omit this computation, but since one requirement of this approach is to minimize storage usage, they are not stored; the positions of all the nodes can be computed from the position of the root node of the tree. For this step, all positions are represented by Universal Transverse Mercator (UTM) coordinate values transformed from geographic coordinates composed of latitude and longitude.

2. The next step is to store the new data in the node, or to update the data already in the node by comparing it with the new data. The data are composed of the roughness of the terrain, static obstacle existence, traffic line existence, and waypoint existence, packed into specific bits of a char data type as shown in Table 4-1. In addition, the measured mean height in the node, which would usually be a double, is saved as converted char-type data. There are two reasons to save these data as char-type data: one is to reduce the necessary storage memory, and the other is to increase the speed of sending the message extracted from the tree to the path planner or another component that needs the planar model information. This will be described later.

In case the new data point is outside the Quadtree range, the Quadtree needs to grow larger and taller by finding a new center position for the potential new root. This is described in the next section.

If the vehicle runs the same loop repeatedly, the data are stored during the first run and the data in the leaf nodes are updated from the second run onward. The update operations are composed of traffic line adjustment, obstacle re-detection, and terrain re-evaluation. This can act as an extra sensor by providing the information already saved during the first run when one of the sensors does not work properly due to a hardware failure, communication problems, weather problems, or dim light, and only the GPS sensor can provide the vehicle's position. Unless there are moving obstacles or new obstacles, it can work very well.
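The two steps can be sketched as follows; the node layout mirrors the roughly 18-byte node described later, while the helper names, the demo region, and the overwrite policy in step 2 are simplifications:

#include <cstdint>
#include <cstdio>

// Minimal quadtree node: four child pointers plus the packed environment
// byte and a scaled-height byte.
struct QNode {
    QNode*  child[4] = {nullptr, nullptr, nullptr, nullptr};  // NE, NW, SW, SE
    uint8_t env      = 0;     // bit layout of Table 4-1
    uint8_t height   = 0;     // scaled mean height
};

// Quadrant of (x, y) relative to a node centered at (cx, cy): 0=NE 1=NW 2=SW 3=SE.
static int quadrant(double x, double y, double cx, double cy) {
    if (y >= cy) return (x >= cx) ? 0 : 1;
    return (x >= cx) ? 3 : 2;
}

// Descend from the root to the 0.25 m leaf that contains the UTM point,
// creating missing children on the way, then store/update the data.
// 'halfSize' is half the side length of the root's square region.
static void insertPoint(QNode* root, double rootCx, double rootCy, double halfSize,
                        double x, double y, uint8_t env, uint8_t height) {
    QNode* node = root;
    double cx = rootCx, cy = rootCy, h = halfSize;
    const double leaf = 0.25;                      // leaf resolution (m)
    while (h > leaf * 0.5) {                       // stop once the leaf level is reached
        int q = quadrant(x, y, cx, cy);
        if (!node->child[q]) node->child[q] = new QNode();
        h *= 0.5;                                  // child centers are offset by h
        cx += (q == 0 || q == 3) ? h : -h;
        cy += (q == 0 || q == 1) ? h : -h;
        node = node->child[q];
    }
    node->env    = env;                            // overwrite or merge as needed
    node->height = height;
}

int main() {
    QNode root;                                    // 2 m x 2 m region for the demo
    insertPoint(&root, 100.0, 100.0, 1.0, 100.6, 99.3, /*env=*/0x08, /*height=*/120);
    std::printf("inserted one point into the quadtree\n");
    return 0;
}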

The time complexity of search and insert (or update) in this approach is O(lg n), where n is the number of nodes in the tree, since the height of the tree can be computed as lg n.

Bottom-up Quadtree

The previous section described how to insert a node and update node data when the new data are within the Quadtree range. This section, one of the core parts of this component, describes how a new tree is built when new data fall outside the current maximum range. Since the vehicle does not know the size of the area it has to traverse, the algorithm does not know how large the Quadtree should be before the vehicle finishes driving. Although the vehicle is provided a set of waypoints it has to traverse, it cannot anticipate how large an area the sensors will cover. So, if the area the vehicle has traversed grows beyond the maximum range of the current Quadtree, the current root becomes one of the children (or descendants) of a newly generated root node. If the old root becomes a descendant that is not a child, the algorithm has to create connecting nodes between it and the new root. In this approach a leaf node covers a 0.25 x 0.25 m area, and the maximum extent of a tree of height h is 0.25 x 2^(h-1) m on a side; the level of a leaf node is always 0. In addition, a smaller resolution can be used, such as changing from 0.25 to 0.125 m, although the required memory allocation will be larger.

Figure 4-8 shows how the bottom-up Quadtree is built. In the far left figure there is only one node, which is both the root and a leaf node; in this case, the root node (also a leaf node) covers 0.25 x 0.25 m. Then, as the Quadtree gets bigger (center of Figure 4-8), a new data point that has to be saved falls outside the current range, so the root node becomes one of the children of a new root.
The fact that the old root becomes a child of the new root means that the difference between the level of the old root and the level of the new root is one. The procedure from left to right in the figure represents the old root becoming one of the children of the new root. The same procedure is applied when growing from the middle of Figure 4-8 to the right figure, but in that case the level of the new root is two levels higher than the old root. The old root then becomes one of the descendants of the new root, so they need to be connected by generating new intermediate internal nodes between them. Figure 4-9 shows the algorithm for bottom-up Quadtree building. The time complexity of building a new root and connecting it to the old root is O(lg n), for the same reason as in the previous section.
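A sketch of this growth step is shown below. It grows the tree one level per iteration, so the case where the old root becomes a deeper descendant is handled implicitly by repeating the step; the quadrant convention and region bookkeeping are assumptions consistent with the earlier sketch:

#include <cstdio>

struct QNode {
    QNode* child[4] = {nullptr, nullptr, nullptr, nullptr};  // NE, NW, SW, SE
};

struct QuadTree {
    QNode* root;
    double cx, cy;      // center of the root region (UTM)
    double half;        // half of the root's side length
};

// Grow the tree one level: the old root becomes one child of a new, twice as
// large root whose center is shifted toward the out-of-range point, as in
// Figure 4-8.  Intermediate internal nodes appear automatically because each
// iteration adds exactly one level.
static void growTowards(QuadTree& t, double x, double y) {
    while (x < t.cx - t.half || x >= t.cx + t.half ||
           y < t.cy - t.half || y >= t.cy + t.half) {
        bool east  = (x >= t.cx);
        bool north = (y >= t.cy);
        QNode* newRoot = new QNode();
        // The old root sits in the quadrant opposite to the growth direction.
        int oldQuad = north ? (east ? 2 : 3) : (east ? 1 : 0);
        newRoot->child[oldQuad] = t.root;
        // The new center moves half a (new) side length toward the point.
        t.cx  += east  ? t.half : -t.half;
        t.cy  += north ? t.half : -t.half;
        t.half *= 2.0;
        t.root  = newRoot;
    }
}

int main() {
    QuadTree t{new QNode(), 100.0, 100.0, 0.125};   // a single 0.25 m root/leaf
    growTowards(t, 103.0, 97.0);                    // point well outside the root
    std::printf("root now covers %.2f m on a side\n", 2.0 * t.half);
    return 0;
}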

Extraction of Quadtree

The Quadtree component is composed of the Quadtree Generator and the Quadtree Extractor (Figure 4-7). The Quadtree Generator is the function that constructs the Quadtree using the method described in the previous sections. The other function is the Quadtree Extractor, and its concept is quite simple: it sends terrain information for a requested area to the other components. A component that needs the ground data sends a request message with the start point and goal point of the necessary area. Then the leaf nodes (or the nodes at the desired level) of the tree inside the rectangle defined by the start and goal points are extracted, and the information is stored in an array with the corresponding resolution. Before starting extraction, if the length or width of the area to be extracted is shorter than the minimum value, it is adjusted to the minimum value. With the adjusted points, the number of rows (r) and the number of columns (c) of the grid map to be reported are decided.

The first step of the extraction is to find the lowest common ancestor (LCA) node of the start point and goal point, to lower the level at which the search starts. The LCA node can be found by comparing the positions of the start and goal points relative to a node. For example, if the start and goal points fall into the same child of a node, there is a lower common node, but if they fall into different children (e.g., NW for the start point and SW for the end point), that node is the LCA. Once the LCA node is found, the extraction process starts from that node. Figure 4-10 shows the extraction of the necessary area of the built Quadtree. In the figure, the start point and the end point queried by another component are shown on a grid map, along with the corresponding nodes in the tree. The LCA node of the two points is found in the tree, and then the search process runs in the tree as shown by the purple line.

The next step is to search for the leaf node (or the node with the desired resolution) corresponding to each cell, starting from the LCA, and to bring the height and environment data in that node into the cell of the grid map. This search runs from the start point to the end point of the grid map. There could be a case where the node corresponding to a cell does not exist; in this case, the cell simply remains an unknown-data cell. The time complexity of this process is O(r * c * lg n), where r, c, and n represent the number of rows and columns of the grid map and the number of nodes, respectively.
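The LCA search can be sketched as follows; node geometry is recomputed on the fly because node centers are not stored, and the cell-by-cell fill that follows the LCA step is omitted:

#include <cstdio>

// Square region covered by one quadtree node.
struct Region { double cx, cy, half; };

static int quadrant(double x, double y, const Region& r) {
    if (y >= r.cy) return (x >= r.cx) ? 0 : 1;   // NE, NW
    return (x >= r.cx) ? 3 : 2;                  // SW, SE
}

static Region childRegion(const Region& r, int q) {
    double h = r.half * 0.5;
    double cx = r.cx + ((q == 0 || q == 3) ? h : -h);
    double cy = r.cy + ((q == 0 || q == 1) ? h : -h);
    return {cx, cy, h};
}

// Descend from the root while the start and goal points still select the same
// quadrant; the last region before they diverge corresponds to the LCA node.
static Region lowestCommonAncestor(Region root, double sx, double sy,
                                   double gx, double gy, double leafHalf) {
    Region r = root;
    while (r.half > leafHalf &&
           quadrant(sx, sy, r) == quadrant(gx, gy, r)) {
        r = childRegion(r, quadrant(sx, sy, r));
    }
    return r;
}

int main() {
    Region root{0.0, 0.0, 64.0};                       // a 128 m x 128 m tree
    Region lca = lowestCommonAncestor(root, 10.0, 12.0, 18.0, 15.0, 0.125);
    std::printf("LCA region: center (%.2f, %.2f), side %.2f m\n",
                lca.cx, lca.cy, 2.0 * lca.half);
    return 0;
}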

As shown in Figure 4-7, the path planner (or another component) queries the Quadtree Extractor with a message composed of the request ID, the start point, the goal point, and the requested resolution. The Quadtree Extractor then searches the area, picks out the information, and stores it in a char-type array. The array is converted into a message containing the request ID and the adjusted start and goal points, and the message is sent to the other components.

Figure 4-11 shows an example of the extraction. The upper figure is a display of the current Quadtree. In this case, the difference in the y coordinate between the start point and the goal point is smaller than the threshold value of 20 m, and the distance between the x coordinates is 50 m. The height of the map is set to 20 m. The nodes in the green box are to be extracted. The lower figure is the result of the extraction. The size of this map is 80 x 200 cells with 0.25 m resolution. The cells colored yellow represent the leaf nodes that hold terrain data; the other cells have null status.

The data structure of each cell of the extracted grid map is shown in Table 4-1. The first two bits are reserved for future use. Obstacle existence is located at the 5th bit and is updated every loop if the existing obstacle is a moving obstacle. The messages among the components, used to report a set of data from the LADAR sensor components to the storage component and from the storage to the other components, are shown in Table 4-2.

Save and Reload

Another feature, saving the Quadtree when the program is terminated and reloading the previously saved tree when the program is next started, is useful when the vehicle drives in an area where it has been before. In this case, using the waypoints from the Route Network Definition File (RNDF), the planner can create a path and set the vehicle speed based on the static obstacles, traffic lines, and terrain roughness that are stored in the Quadtree.
Then, when driving around the course, the vehicle acquires local environment data from the sensors, and if there is any new obstacle or environment information, such as a moving obstacle, the data in the Quadtree are updated.

Path Planner

The remaining problem of the approach in this dissertation is planning a path that drives the UGV without colliding with obstacles. Many researchers have proposed a variety of methods for this problem, such as A*, D*, graph-based methods, and so on. All of these methods have advantages and disadvantages. This section describes how to generate candidate paths and choose an optimal path from them for the UGV in urban-area driving, covering roadway navigation, parking, and n-point turns, using the sensed information described in the previous sections.

Occupied Grid

One of the critical functions of path planning is to check for potential obstacle collision. This can be done quickly by introducing a grid map: the cells occupied by obstacles are marked, and the planner checks whether the planned paths pass through those cells. If so, the path is either discarded or assigned a much higher cost. The traditional way for CIMAR to avoid obstacle collision, shown in Figure 4-12, is:

1. The vehicle is assumed to occupy one cell of the grid map; the yaw angle is not considered. In Figure 4-12, the vehicle is the black cell at the center of the grid map.
2. The cells occupied by obstacles are dilated with a constant diameter, since the vehicle is a rectangular object, not a point. The yellow cells in Figure 4-12 represent the obstacles expanded with a constant diameter.

This approach has some problems, however:
1. The vehicle is not a circular object, which means the distance from its center of gravity to its border differs with direction. For example, the distance from the center to the front bumper is longer than the distance from the center to the rear bumper.
2. Many experiments have shown that although the vehicle could pass between two obstacles, dilation with a constant diameter can prevent this maneuver. Figure 4-13 shows this problem. In Figure 4-13 (a), the car (the green rectangle) can pass between the obstacles (the orange rectangles). However, in Figure 4-13 (b), the obstacles expanded with a constant diameter make the car unable to pass between them.

To resolve these problems, a novel way to check for potential obstacle collision is introduced. This method assumes the vehicle is a rectangle, not a point, and uses the grid map to check it quickly. It predicts the position and heading angle of the vehicle at each node of the planned paths, finds the set of cells occupied by the rectangular vehicle as shown in Figure 4-14, and checks whether those cells contain obstacles. In this figure, there are five possible paths. A series of blue boxes represents the estimated vehicle position at each node of a path. The red paths and blue paths represent the paths with and without potential obstacle collision, respectively. This method can also be applied when the planner estimates the cost of the paths using terrain roughness. The path planner developed in this dissertation uses this novel method to check for potential obstacle collision.
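A sketch of this footprint check is given below. The grid geometry, vehicle dimensions, sampling step, and occupancy test are illustrative stand-ins for the actual planner code:

#include <cmath>
#include <cstdio>
#include <vector>

struct Pose { double x, y, yaw; };              // meters, meters, radians

static bool cellOccupied(const std::vector<std::vector<int>>& grid, int r, int c) {
    if (r < 0 || c < 0 || r >= (int)grid.size() || c >= (int)grid[0].size())
        return true;                            // outside the map: treat as blocked
    return grid[r][c] != 0;
}

// Sample the rectangular vehicle footprint in the vehicle frame, rotate each
// sample into the grid frame at the given path-node pose, and test the cells.
static bool footprintCollides(const std::vector<std::vector<int>>& grid,
                              double res, const Pose& p,
                              double length, double width, double rearToCg) {
    for (double lx = -rearToCg; lx <= length - rearToCg; lx += res * 0.5) {
        for (double ly = -width * 0.5; ly <= width * 0.5; ly += res * 0.5) {
            double gx = p.x + lx * std::cos(p.yaw) - ly * std::sin(p.yaw);
            double gy = p.y + lx * std::sin(p.yaw) + ly * std::cos(p.yaw);
            if (cellOccupied(grid, (int)(gy / res), (int)(gx / res)))
                return true;
        }
    }
    return false;
}

int main() {
    std::vector<std::vector<int>> grid(80, std::vector<int>(80, 0));  // 20 m x 20 m
    grid[40][46] = 1;                                       // one obstacle cell
    Pose node{10.0, 10.0, 0.3};                             // pose at a path node
    bool hit = footprintCollides(grid, 0.25, node, 4.9, 2.1, 1.2);
    std::printf("path node in collision: %s\n", hit ? "yes" : "no");
    return 0;
}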

Vector-based Path Planning for Roadway Navigation

Many proposed path planning methods work very well for off- and on-road driving. The path planner in this dissertation, however, focuses on driving in urban areas. The paths can be generated more easily for roadway navigation in urban areas by introducing two assumptions:

1. All the waypoints that the vehicle has to pass are at the center of a traffic lane.
2. The center line of the traffic lanes is smooth enough for the vehicle to traverse.

With these assumptions, the center of the candidate paths can be generated by connecting the current waypoints given by the RNDF-providing component, and if there is no obstacle on this path, it is ultimately considered the optimal path. A set of candidate paths is expanded laterally from this center path with constant spacing; the number of paths and the spacing are decided by the width of the traffic lane in which the vehicle is located. Usually, the numbers of candidate paths on the left and right sides of the center are equal, and the closer a path is to the center, the lower its cost. Figure 4-15 shows this path planner.

The path planner for roadway navigation is composed of short-range paths and long-range paths. The short-range paths provide a smooth transition from the current position to a point on a collision-free path. The length of these paths is decided by the current vehicle speed: intuitively, the vehicle needs to rotate the steering wheel more slowly at high speeds for stable driving, so the length should be longer to obtain smoother paths. The paths are generated using the cubic Bezier curve equation. The parametric form of the curve is

P(t) = (1 - t)^3 P_0 + 3(1 - t)^2 t P_1 + 3(1 - t) t^2 P_2 + t^3 P_3,  (4-11)

where t is in [0, 1] and P_0 through P_3 are control points. The selection of the control points is shown in Figure 4-16. P_0 is the vehicle's position, P_1 is the point a meters ahead, P_2 is the point b meters ahead along the vehicle direction, and P_3 is the lateral offset of the point at the look-ahead distance from the vehicle. P_0 and P_1 are invariant, while P_2 and P_3 vary depending on the index of the lateral path. The generated short-range paths are connected to long-range paths.
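A sketch of one short-range candidate path is shown below; the offsets a and b, the look-ahead distance, and the lateral shift are speed- and lane-dependent tuning values in the real planner, so the numbers here are placeholders (and only P3 is shifted laterally here, whereas the planner also varies P2 with the candidate index):

#include <cmath>
#include <cstdio>

struct Pt { double x, y; };

// Evaluate the cubic Bezier curve of equation (4-11) at parameter t.
static Pt bezier(const Pt& p0, const Pt& p1, const Pt& p2, const Pt& p3, double t) {
    double u = 1.0 - t;
    Pt p;
    p.x = u*u*u*p0.x + 3*u*u*t*p1.x + 3*u*t*t*p2.x + t*t*t*p3.x;
    p.y = u*u*u*p0.y + 3*u*u*t*p1.y + 3*u*t*t*p2.y + t*t*t*p3.y;
    return p;
}

int main() {
    double yaw = 0.1;                   // vehicle heading (rad)
    double a = 2.0, b = 6.0;            // control-point offsets along the heading
    double lookAhead = 15.0, lateral = 1.5;

    Pt p0{0.0, 0.0};                                            // vehicle position
    Pt p1{p0.x + a * std::cos(yaw), p0.y + a * std::sin(yaw)};  // a meters ahead
    Pt p2{p0.x + b * std::cos(yaw), p0.y + b * std::sin(yaw)};  // b meters ahead
    // P3: point at the look-ahead distance, shifted laterally for this candidate.
    Pt p3{p0.x + lookAhead * std::cos(yaw) - lateral * std::sin(yaw),
          p0.y + lookAhead * std::sin(yaw) + lateral * std::cos(yaw)};

    for (double t = 0.0; t <= 1.0001; t += 0.1) {
        Pt p = bezier(p0, p1, p2, p3, t);
        std::printf("t=%.1f  (%.2f, %.2f)\n", t, p.x, p.y);
    }
    return 0;
}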

Long-range paths are generated by simply connecting the end points of the short-range paths with the lateral offsets of the current waypoints. These paths are mainly used to check whether the vehicle will collide with any obstacle when it follows a given path. In addition, each section, that is, a line connecting two adjacent waypoints, has a predefined recommended speed to help path planning. This speed profile is published by considering the existence of stop signs, the terrain roughness and static obstacles reported from the Sensor Knowledge Store, traffic lines, and road curvature. Figure 4-17 shows the predefined speed profile, where the red circles represent stop signs. The thicker blue lines represent higher speeds and the lighter blue lines represent lower speeds. With this new path planning algorithm, the vehicle can generate the candidate paths and find an optimal path without much memory usage during the roadway navigation behavior, which occurs most frequently while the vehicle is driving.

Vector-based Path Planning for Parking Behavior

One of the most challenging behaviors in urban driving is parking. In this dissertation, two types of parking are considered: a typical parking lot and a parallel parking spot. To approach this problem, the Rapidly-exploring Random Tree (RRT) algorithm, introduced in Chapter 3, is used to find routes from the vehicle's position to the goal parking spot. However, the test-platform vehicle is nonholonomic, so several constraints are considered, such as the steering angle limit, the angular speed, the vehicle shape, and so on. The parking process for the two types of parking is shown in Figure 4-18, and there are three sub-behaviors: TO_PARK_LOT, FROM_PARK_LOT, and TO_EXIT.
RRT approach

The Rapidly-exploring Random Tree (RRT) is an incremental method for quickly exploring an entire configuration space. The first challenge for this algorithm is to set the configuration space. In this dissertation, the configuration space is set by a function of the vehicle's orientation (V_h), the desired goal orientation (G_h), and the distance between the vehicle and the goal (d), where the difference between V_h and G_h is called the heading variation:

C_s = f(V_h, G_h, d).  (4-12)

Using equation (4-12), when the vehicle arrives near the position where the parking algorithm has to start, the configuration space is defined. During the process of setting the space, if the heading variation or the distance is less than the stated threshold, the minimum variation or minimum distance is assigned to the function. Furthermore, if the algorithm cannot find an optimal path to the goal, the variation and distance are increased by a defined step size to make the space bigger. An example of the configuration space is shown in Figure 4-19, where the green rectangle, the light blue rectangle, and the space filled with pink points represent the vehicle, the parking spot, and the configuration space, respectively. Random nodes are generated in this space and the algorithm operates on these nodes.

The most crucial part of the algorithm is the tree extension. The proposed method is a bi-directional RRT, so there are two tree extensions: the forward tree and the backward tree. The only difference between these two trees is x_init and x_goal. The x_init for the forward tree and the backward tree are the vehicle position and the parking spot position, respectively; conversely, the x_goal for the forward tree and the backward tree are the parking spot position and the vehicle position, respectively.
Except for this difference, the trees are extended as follows. The extension can be divided into five parts: generating a random node x_rand in C_s; finding the node x_near nearest to x_rand using a k-d tree; computing a new input x_new; checking for collision; and adding x_new to the tree graph G(V, E), where V and E are the nodes and edges respectively, if it is in collision-free space. The node in G closest to x_rand is found with the k-d tree, where in this dissertation k equals two, since the path planner assumes that the vehicle is driving on a plane. The root of the k-d tree is the initial point of the vehicle. For simplicity of the proposed method, the closest node is the one with the minimum Euclidean distance.

If x_near is found successfully, it is necessary to find a new input extended from x_near. To do this, a random steering angle is generated in the range of -θ_max to θ_max with a normal distribution. Using a vehicle dynamic equation derived from [79] and the step distance, which is three meters in this research, yields a new vehicle state x_new. The next step is to check the collision-free status of x_new using the proposed occupied grid method. The final step is taken if and only if x_new is obstacle free, that is, x_new is in C_s,free: x_new is added to G and to the k-d tree, and the new edge connecting x_new and x_near is also added to G.

The above steps are terminated either when the elapsed time t_loop exceeds the minimum search time, which is 0.1 sec in this approach, and more than a minimum number of candidate paths, which is ten, have been generated, or when the elapsed time t_loop exceeds the time limit t_limit, which is 3 sec in this research. Figure 4-20 shows examples of the tree extensions (upper: forward, lower: backward).
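One extension step can be sketched as follows. The real planner keeps the nodes in a 2-D k-d tree and runs the occupied-grid collision check; here the nearest neighbour is found by brute force and the collision test is a stub, so the model constants and helper names are illustrative only:

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

struct State { double x, y, yaw; int parent; };

static bool collisionFree(const State&) { return true; }   // stand-in

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> randX(0.0, 30.0), randY(0.0, 30.0);
    std::normal_distribution<double> randSteer(0.0, 0.25);  // rad, clipped below

    const double wheelBase = 3.0, step = 3.0, maxSteer = 0.55;
    std::vector<State> tree = {{5.0, 5.0, 0.0, -1}};        // x_init

    for (int iter = 0; iter < 200; ++iter) {
        // 1) sample x_rand in the configuration space
        double rx = randX(rng), ry = randY(rng);

        // 2) nearest node by Euclidean distance (a k-d tree in the real planner)
        int nearest = 0;
        double best = 1e18;
        for (size_t i = 0; i < tree.size(); ++i) {
            double d = (tree[i].x - rx) * (tree[i].x - rx) +
                       (tree[i].y - ry) * (tree[i].y - ry);
            if (d < best) { best = d; nearest = (int)i; }
        }

        // 3) new input: a random steering angle within the steering limit
        double steer = std::max(-maxSteer, std::min(maxSteer, randSteer(rng)));

        // 4) propagate a kinematic bicycle model one 3 m step to get x_new
        State xNew = tree[nearest];
        xNew.yaw += (step / wheelBase) * std::tan(steer);
        xNew.x   += step * std::cos(xNew.yaw);
        xNew.y   += step * std::sin(xNew.yaw);
        xNew.parent = nearest;

        // 5) keep x_new only if it is collision free
        if (collisionFree(xNew)) tree.push_back(xNew);
    }
    std::printf("tree size after 200 iterations: %zu\n", tree.size());
    return 0;
}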

The candidate paths are, literally, the paths available for the vehicle to reach the parking spot. There are four criteria for deciding whether a new node can be assumed to reach the goal position: GOAL_REACH, CIRCLE_REACH, CONNECT_TREE, and INIT_POS_REACH. Intuitively, if the distance between a new node of the forward tree and the goal position is less than a threshold value, and the difference between the orientation of the new node and the desired orientation is less than a threshold value, it can be assumed that the node reaches the goal position, and it is correct to add the goal node, not the new node, into the graph (GOAL_REACH). However, from experience, a node that satisfies this goal condition may not be found within the allowed maximum search time. To relax the rule, the minimum-radius circle of the vehicle is introduced, whose radius is

r_min = L / tan(θ_max),  (4-13)

where L is the wheelbase distance and θ_max is the maximum front wheel angle. Figure 4-21 shows how to check whether a new node near the minimum-radius circle is useful or not. The left and right circles carry trajectories in the clockwise and counterclockwise directions, respectively. The procedure for this check, sketched in the code after this list, is:

1. Decide the side of the circle (left or right).
2. Form the edge connecting x_new and x_near.
3. Find the intersection point of the edge and the circle, and compute the tangential direction of the circle's path at that point in the proper sense for that circle.
4. Compare the edge direction with the tangential direction to decide whether the node and edge are useful or not.
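The following is a simplified sketch of the CIRCLE_REACH test: instead of intersecting the new edge with the minimum-radius circle, it checks whether the new node lies close to one of the goal circles and is heading roughly along the circle's tangent. The geometry conventions and tolerances are assumptions made for illustration:

#include <cmath>
#include <cstdio>

static const double kPi = 3.14159265358979323846;

struct Pose { double x, y, yaw; };

// Absolute difference between two angles, wrapped into [0, pi].
static double angleDiff(double a, double b) {
    double d = a - b;
    while (d >  kPi) d -= 2.0 * kPi;
    while (d < -kPi) d += 2.0 * kPi;
    return std::fabs(d);
}

int main() {
    const double wheelBase = 3.0, maxSteer = 0.55;
    const double rMin = wheelBase / std::tan(maxSteer);   // equation (4-13)

    Pose goal{20.0, 10.0, kPi / 2.0};                     // desired parked pose
    // Centers of the left and right minimum-radius circles attached to the goal.
    Pose leftC {goal.x - rMin * std::sin(goal.yaw), goal.y + rMin * std::cos(goal.yaw), 0.0};
    Pose rightC{goal.x + rMin * std::sin(goal.yaw), goal.y - rMin * std::cos(goal.yaw), 0.0};

    Pose xNew{goal.x - rMin, goal.y + rMin, 0.0};         // hypothetical new RRT node

    const Pose* circles[2] = {&leftC, &rightC};
    for (const Pose* c : circles) {
        double dx = xNew.x - c->x, dy = xNew.y - c->y;
        double dist = std::hypot(dx, dy);
        if (std::fabs(dist - rMin) > 0.5) continue;       // node not on this circle
        // Tangent direction of the circle at the node (either sense accepted here).
        double tangent = std::atan2(dy, dx) + kPi / 2.0;
        bool useful = angleDiff(xNew.yaw, tangent) < 0.3 ||
                      angleDiff(xNew.yaw, tangent + kPi) < 0.3;
        std::printf("circle at (%.1f, %.1f): %s\n", c->x, c->y,
                    useful ? "node can follow the arc to the goal" : "rejected");
    }
    return 0;
}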

Once the new node and edge are deemed useful (CIRCLE_REACH), the path from the tangent point to the goal point following the circle is added to the candidate paths. Another goal-reach condition is to check whether the forward tree and backward tree are connected properly, as in Figure 4-22 (CONNECT_TREE). The conditions to connect two trees are:

1. The distance between a node of the forward tree and a node of the backward tree must be shorter than the threshold distance.
2. If point B of the forward tree and point C of the backward tree are to be connected, and B was expanded from point A, the algorithm can compute the difference in orientation between edge AB and edge BC. To connect the two trees, this orientation difference must be within the vehicle's steering limit.

If conditions 1 and 2 are satisfied, the route from the vehicle position to point B of the forward tree (route 1) and the route from point C to the goal position of the backward tree (route 2) are connected, and the connected route is registered as a candidate path. To acquire route 1 and route 2, the depth-first search (DFS) algorithm in Figure 4-23 is used.

The last condition is INIT_POS_REACH, where a new node expanded from the backward tree reaches the vehicle's initial position. This condition is similar to GOAL_REACH, but the initial point of the forward tree becomes the goal point for the backward tree, and the goal point of the forward tree becomes the initial point for the backward tree.
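Recovering route 1 (or route 2) once the connection point is known can be sketched with a plain depth-first search over the tree graph; the adjacency list and node indices below are made up for the example:

#include <cstdio>
#include <vector>

// Depth-first search from the tree root to a target node, recording the path.
static bool dfsRoute(const std::vector<std::vector<int>>& children,
                     int current, int target, std::vector<int>& route) {
    route.push_back(current);
    if (current == target) return true;
    for (int c : children[current])
        if (dfsRoute(children, c, target, route)) return true;
    route.pop_back();                  // dead end: backtrack
    return false;
}

int main() {
    // Node 0 is the vehicle's initial position; node 5 plays the role of B.
    std::vector<std::vector<int>> children = {
        {1, 2},   // 0 -> 1, 2
        {3},      // 1 -> 3
        {4, 5},   // 2 -> 4, 5
        {},       // 3
        {},       // 4
        {}        // 5 (connection point B)
    };
    std::vector<int> route;
    if (dfsRoute(children, 0, 5, route)) {
        std::printf("route 1:");
        for (int n : route) std::printf(" %d", n);
        std::printf("\n");             // prints: route 1: 0 2 5
    }
    return 0;
}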

Typical parking behavior

Now the planner has a set of candidate paths, the blue lines shown in Figure 4-24. In the figure, several paths do not touch the goal point since they instead reach one of the minimum-radius circles.

The next step is to find an optimal path among them. A heuristically computed cost is assigned to each path. The factors considered in computing the cost are the length of the path, the sum of the vehicle's heading changes, and the type of goal reach. So the cost C_i of each path i can be expressed as

C_i = k_d d_i + k_h h_i + R_i,  (4-14)

where d_i and h_i are the length and total heading change of path i, k_d and k_h are heuristic gains for length and heading change, respectively, and R_i is a cost that depends on the goal reach type. In this dissertation, a path that reaches a minimum-radius circle is given a higher cost than the other types, since it is harder for the vehicle to follow a circular trajectory exactly unless it is a neutral-steering vehicle. With these costs, the optimal path is selected. The optimal path is the shortest path with the least orientation change, but it can still be worse than human driving or a path generated by other methods, since it is computed from random points. The optimal path is the white line in Figure 4-24.

For this reason, and to obtain smooth vehicle behavior, the simple algorithm in Figure 4-26 is used to smooth the path; the yellow line in Figure 4-25 represents the final smoothed optimal path. Simply stated, the algorithm finds the smallest number of straight line segments. Line 7 of the algorithm determines whether a straight segment is usable or not. To decide this, the straight line has to be collision free, and the transition between the previous line segment and the new line segment has to be drivable. For example, the left side of Figure 4-27 shows a very sharp corner between two segments, which is very difficult for the vehicle to follow. If the new segment is like that shown on the right side of Figure 4-27, one more step is needed to make the corner smooth: the arc of a circle is inserted between the segments.
For parking using the RRT, there are two options for generating a path to the goal position. One is a static path planning algorithm that generates an optimal path once, when the vehicle enters the parking area, after which the vehicle simply follows that path; the other is to plan a new path on every loop of the algorithm. Each has advantages and disadvantages. The advantage of the static approach is that it makes it easy and stable for the vehicle to follow the generated path, but its disadvantage is that the reaction time for avoiding new or moving obstacles is longer than with the replanning approach. The advantages and disadvantages of the second approach are the exact opposite of the first. After the parking behavior is completed, the vehicle may need to back out of the parking space and exit the parking area; for these movements, the same procedures as for the parking behavior are executed. In this dissertation, the replanning method is selected, but switching from one method to the other is actually very simple.

Parallel parking behavior

The other type of parking is parallel parking. When the vehicle is trying to park in a parallel parking spot, it first needs to arrive at a virtual parking spot using the RRT algorithm described in the previous sections. Figure 4-28 shows the parallel parking scenario. The yellow rectangle represents the virtual parking spot, and it is located laterally L meters (a constant offset depending on the vehicle size) away from the real parking spot.
Once the vehicle stops at the virtual parking spot, it needs to estimate the parking spot size using vision sensors or LADAR sensors. This process extracts a rectangle as the real parking spot, and the vehicle decides whether the parking spot is valid or not. If the vehicle decides the spot is valid, it starts the parking process as follows:

1. Go forward until reaching the reference position shown in Figure 4-29. This position is decided by the parking spot's size, the vehicle speed, and the steering constraints, and it can vary depending on the vehicle. In this dissertation, it is decided by the minimum turning radius of the vehicle: in the US and other countries where the driver sits on the left side of the car, parallel parking spots are on the right side of the road, so the reference position is reached when the right minimum-radius circle of the vehicle tangentially touches the left minimum-radius circle of the expected car position when it is completely parked.
2. Go backward by following the two arcs shown in red in Figure 4-29. If the vehicle can follow the arcs completely, it can park easily without the steps that follow.
3. Check for potential collisions at the rear and side of the vehicle.
4. If there is any potential collision, stop and control the steering and accelerator to get to the goal position.

During steps 1 through 4, the vehicle keeps checking whether it is completely within the parking spot. A simple feed-forward controller is used to control the vehicle's steering:

δ = k_h e_h,  (4-15)

where k_h is the gain for the heading error and e_h is the heading error. This control law uses only the heading error between the vehicle's orientation and the parking spot's orientation, since once any part of the vehicle is in the parking spot, the lateral error is small and can be ignored.
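The steering rule of equation (4-15) amounts to a one-line control law; the sketch below applies it over a few iterations with a made-up gain, vehicle response, and angle wrap:

#include <cmath>
#include <cstdio>

static const double kPi = 3.14159265358979323846;

// Wrap an angle difference into [-pi, pi].
static double wrapAngle(double a) {
    while (a >  kPi) a -= 2.0 * kPi;
    while (a < -kPi) a += 2.0 * kPi;
    return a;
}

int main() {
    const double kH = 1.5;            // heading-error gain (placeholder)
    const double spotYaw = 0.0;       // orientation of the parking spot (rad)
    double vehicleYaw = 0.35;         // current vehicle heading (rad)

    for (int step = 0; step < 5; ++step) {
        double eH = wrapAngle(spotYaw - vehicleYaw);
        double steer = kH * eH;       // equation (4-15)
        // Crude stand-in for the vehicle response while reversing.
        vehicleYaw += 0.3 * steer;
        std::printf("step %d: heading error %.3f rad, steering %.3f rad\n",
                    step, eH, steer);
    }
    return 0;
}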

When the vehicle is parked completely, it may later need to come out of the parking spot, and this operation can be done using steps 3 and 4 of the parallel parking process. When the algorithm decides that the vehicle is out of the spot, it starts planning with the RRT procedure to go to the exit of the parking area.

Path Planning for N-point Turn Behavior

Another maneuver necessary for urban driving is the n-point turn, used when the traffic lane is completely blocked or the vehicle needs to traverse a different trajectory. The classic method for this behavior is simple:

1. Turn the steering wheel to the full left position and go forward until just before the vehicle hits any obstacle or crosses the traffic line.
2. Turn the steering wheel to the full right position and go backward until just before the vehicle hits any obstacle or crosses the traffic line.
3. If necessary, repeat steps 1 and 2 until the vehicle can drive on the desired path without hitting any obstacle or crossing any line.

However, some situations can be handled with fewer steering operations, so the proposed method uses the RRT algorithm with a configuration space that is just a simple circle, the blue circle in Figure 4-30. The green points are new waypoints that the vehicle needs to pass, so a random tree is generated during the maximum search time or until it finds a proper path to move the vehicle to a new waypoint. If it cannot find a proper path, it selects the node whose orientation differs least from the desired heading, and the algorithm uses that node to decide which of the four cases shown in Figure 4-31 applies. In cases 1, 2, and 3, the tree cannot find a proper path to move the vehicle to any waypoint, since a road with two traffic lanes does not have enough space for the vehicle to turn around. Therefore, the vehicle needs to follow the path to the final node that has the minimum orientation difference from the desired heading.
When the vehicle gets to that point, the algorithm builds the tree again and moves the vehicle. This process repeats until the vehicle can drive on the new path. In case 4, the tree finds a proper path to the waypoints, so, by following that path, the vehicle can change its heading to the desired orientation.

This chapter described the specific implementation of the new sensing, storage, and path planning methods for an unmanned ground vehicle tasked with navigating dynamic urban environments. The next chapter describes the testing procedure and analyzes and discusses the results of the various tests conducted.

Figure 4-1. Urban NaviGator system architecture

Figure 4-2. LADAR sensor layout on the Urban NaviGator (photograph by Jihyun Yoon)

Figure 4-3. VF layout and vehicle coordinate frame

Figure 4-4. Peak detection procedures

Figure 4-5. Slope variance method

Figure 4-6. Grid-based method

Figure 4-7. Quadtree Sensor Knowledge Store component

Figure 4-8. Bottom-up built Quadtree

Figure 4-9. Algorithm for bottom-up Quadtree building

Figure 4-10. Quadtree extraction

Figure 4-11. Quadtree extraction example

Figure 4-12. Traditional occupied grid method of CIMAR

Figure 4-13. Example of the problem with the traditional occupied grid method

Figure 4-14. New occupied grid approach

Figure 4-15. Vector-based roadway navigation path planner

Figure 4-16. Control point selection for short-range paths

Figure 4-17. Predefined speed profile of the selected course on the Google satellite map

Figure 4-18. Parking process flow chart

Figure 4-19. Configuration space for the RRT algorithm

Figure 4-20. Tree expansion examples

Figure 4-21. Minimum radius circle reach condition

Figure 4-22. Example of tree connection

Figure 4-23. Depth-first search (DFS)

Figure 4-24. Candidate paths

Figure 4-25. Final path

Figure 4-26. Smoothing algorithm

Figure 4-27. Requirements for a smooth path

Figure 4-28. Parallel parking scenario

Figure 4-29. Parallel parking first step

Figure 4-30. Configuration space for n-point turn

Figure 4-31. Four cases for n-point turn on the Google satellite map

Table 4-1. Sensor Knowledge Store data structure
Bit              | Content                | Values
First 2 bits     | Reserved               |
3rd bit          | Edge existence         | 0: false, 1: true
4th bit          | Waypoint existence     | 0: false, 1: true
5th bit          | Obstacle existence     | 0: false, 1: true
6th bit          | Traffic line existence | 0: false, 1: true
7th and 8th bits | Terrain roughness      | 00: untraversable, 01: rough, 10: smooth, 11: unknown

Table 4-2. Messages between components
TSS to SKS: grid map size, grid map resolution, grid map center position, edge information grid map, height information grid map
Path Planner to SKS: query ID, start query point, end query point, grid map resolution
SKS to Path Planner: report ID, adjusted start report point, adjusted end report point, edge information report map, grid map resolution
CHAPTER 5
EXPERIMENTAL RESULTS

The research outlined in this document was implemented and tested extensively. Because of the complexity of maintaining a functional autonomous vehicle and the considerable resources, in terms of manpower and facilities, required to execute autonomous testing, the path planning was first tested in simulation to verify the performance of the various developed algorithms. Testing of the sensor algorithms was done with a human driver. This chapter outlines the various testing methods and the metrics used to measure the performance of these methods.

Test Plan

All the tests for the components were performed using a computer with an Intel Core i7 1.70 GHz CPU, 4.0 GB of memory, 1.0 GB of video card memory, and 100 Mbps Ethernet communication.

Road Boundary Detection

The road boundary detection algorithm could be applied to the UGV for several of the operating behaviors encountered during the DUC. This test was performed with human drivers. There are usually many road boundaries and objects in urban areas, so the test of the algorithm focuses on detecting artificial road boundaries such as fences or curbs. To check the validity of this algorithm in real time, the vehicle was driven at 20 mph, which is the speed limit in the test area, the LADARs scanned the environment, and the algorithm tried to detect the road boundaries from the point clouds transmitted by the sensors.
Figure 5-1 is an aerial map of the University of Florida campus, and the red arrows represent the route the vehicle drove. The results acquired from the areas in the yellow boxes were analyzed and compared by LADAR type, by method, and by area, since these areas have continuous road boundaries.

Sensor Knowledge Store

This component is new for the UGVs of CIMAR, so the important aspects that need to be verified are how effectively the information is saved and how fast the search time is. In addition, the time to receive the information from the other components and to send the extracted information to the other components is also verified. This test was done together with the road boundary detection test, since that component evaluates the environment of the vehicle and sends the information to this component immediately. This test was performed with a human driving. After the test was done, the results, including the number of nodes and the amount of memory used, were analyzed.

Path Planner

The tests of the new path planner algorithm for the roadway navigation behavior, the typical and parallel parking behaviors, and the n-point turn behavior were performed in simulation with the RNDF acquired from the test site at the Gainesville Raceway (Figure 5-2). This site has served as the test facility for many of CIMAR's autonomous ground vehicle projects. The site provides paved road segments used to test many of the behaviors required for the DUC, including straight and curved road segments with multiple lanes, intersections, dead-end streets, and a parking area. Sections of this area were selected for simulating the required test environments. In the figure, blue points and red circles represent waypoints and stop signs, respectively.
The first step in the tests is to set the waypoints that the vehicle has to pass and to decide its type of behavior. Figure 5-3 shows the selected course as yellow line segments. The second step is to set virtual obstacles with a simulator built in C/C++ and OpenGL to verify that the generated path enables the vehicle to avoid the obstacles by finding candidate paths and an optimal path. Figures 5-4 and 5-5 show close-up visualizations of the waypoints defining the road segments used for the parking and parallel parking tests. In those figures, the light purple polygon is the open area, the green rectangle is the vehicle, the blue rectangle is the parking spot, the green arrow is the entrance to the parking area, and the pink arrow is the exit from the parking area.

Test Metrics

A number of different performance measures were recorded to evaluate the new methods described in this dissertation. These metrics served to investigate the effectiveness of the new methods in allowing a robotic system to navigate in urban areas. The first set of metrics is associated with how well the road boundary detection algorithm can find the road boundaries depending on the sensor and the method; it was found that the rate of detecting curbs depends on the sensor type and the curb type. The second set of metrics is the memory usage and Quadtree building time complexity of the Sensor Knowledge Store, and this result is compared with storage in a classic grid map. The last set is the search rate of the path planner. For the roadway navigation behavior, the update rate of the path planning loop is found. For the parking and parallel parking behaviors, the number of nodes and the update rate of the RRT algorithm are recorded as a function of the distance to the goal point.
In addition, the error between the heading of the vehicle and the desired orientation of the parking spot is recorded. All these values were logged in a separate thread by the corresponding component so that logging did not interfere with the performance of the rest of the algorithm. The resulting data were recorded, plotted, and analyzed to determine the results of each of the tests.

Results

Road Boundary Detection

The test for this component was done with a human driving on the University of Florida campus. The original goal of this component was to find artificial or natural boundaries of the traffic road and to assign the probability that a boundary exists to the proper cell of the grid map. Once the detection process is done, the final step is to adjust the current traversability value of each cell depending on the probability of a road boundary. Figure 5-6 shows the edge grid map resulting from area 1 of Figure 5-1 and the color table for the traversability values. Except for unknown cells, the initial value for the cells was 32, a neutral value. If a cell has 0% probability of edge existence, as would be true for a road, the cell retains the neutral value. On the contrary, if it has 100% probability, which means that all four sensors detect an edge at the same point, the edge value of the cell changes to 2, which is the lowest available traversability value of the grid map. Figure 5-7 compares the classic grid map with the grid map whose traversability values have been adjusted according to the probability of edge existence. The upper figure shows the classic grid map from cell-based evaluation, and the lower figure shows the grid map produced with the new method. In this area, the vehicle was driving South (downward in the figure) and the vehicle is at the center cell of the grid map.
As previously mentioned, the traditional grid-based method was not enough to distinguish road boundaries in urban areas: the boundaries can be detected as traversability values lower than those of a smooth road, but the cells still appear drivable. The traversability values of cells containing road boundaries can be lowered further by the confidence value estimated with the new method.

Table 5-1 shows the update rate of this component depending on the situation. Intuitively, a faster update rate of the sensor algorithm gives the other components a better chance to prepare their own actions using the information, but the desired update rate is set to 40.00 Hz to prevent wasting memory. When the component only evaluates the vehicle's environment, it runs fastest, but this case is not representative, since the component has to send the information to the other components. Therefore, the update rate when sending all the messages has to be considered for the performance evaluation. The result was 33.85 Hz when sending all the messages to only one other component, which is obviously slower than the desired update rate but sufficient for the current autonomous car driving at 30 mph.

Table 5-2 through Table 5-7 show the final results acquired from area 1 through area 6 of Figure 5-1 using the three methods and four sensors in real-time testing. One sensor, the Terrain LADAR, provided the best results, since it is fixed to look ahead of the vehicle and evaluates terrain at a shorter distance and finer resolution than the other sensors. Another sensor, the Driver Vertical Fan, returned the worst results, since it was rotating on an attached motor and the road boundaries of the lane in which the vehicle was driving were usually on the passenger side.
From the results, the shorter the beam distance and the smaller the angle between the beams, the better the estimate that could be provided.

Sensor Knowledge Store

The testing for this component was performed using a TSS component test in which the component sent the edge information, terrain data, and obstacle information to the Sensor Knowledge Store component via a JAUS message. Once the SKS received the message, it unpacked it following the rules for the message, and the unpacked information was saved into the Quadtree.

Figure 5-8 shows the resulting image of the Quadtree rendered with OpenGL. The thick black line (actually a series of squares with 0.25 x 0.25 m resolution) shows the path of the vehicle, similar to that of Figure 5-1. As shown in this figure, the upper parts and the parts to the lower right of the center are not evaluated, but those parts are generated automatically once some of the lower left part is evaluated.

Figure 5-9 shows the local image when the vehicle was driving West around area 2 of Figure 5-1. In the figure, only the cells with the lowest resolution, 0.25 x 0.25 m in this approach, are painted. The yellow cells are unknown areas. The green cells and pink cells represent smooth and rough terrain, respectively. The blue cells and red cells represent cells with road boundaries and cells with obstacles, which are non-drivable areas, respectively. Figure 5-10 is an array extracted from the same area as Figure 5-9; the colors assigned to each cell represent the same characteristics as in Figure 5-9, but the internal nodes are converted to cells of the specified resolution and are given unknown traversability values.
Once the conversion is done, the array is sent with the specific messages to the other components.

Table 5-8 shows the resulting parameters obtained from the Quadtree. Of particular note is the memory usage: even though the range of the map is quite large, the size of the data is small enough to be saved. However, if the user does not pay considerable attention, the system could fail due to the computer memory limit or the hard disk storage limit, so the user has to consider which information is to be saved into the Quadtree and where the data will be stored, such as a hard disk drive or memory. In this research, the memory used for each node of the Quadtree is only 18 bytes at most, since a node only stores the pointers to its four children (16 bytes at most), the environment data (1 byte), and the scaled-height information (1 byte). The update rate of the component is distinct from the update rate of the Quadtree building, since the Quadtree building process runs in a different thread.

To verify the effectiveness of this approach, the memory usage of the Quadtree is compared with that of a fixed-size grid map. The total range of this test was 8192 x 8192 m, so if the algorithm used a fixed grid map of 0.25 m resolution, the grid map would have 32,768 rows and 32,768 columns. The algorithm saves 2 bytes of information in each cell, so it would use approximately 2,048 Mbytes. Thus the proposed algorithm requires a slightly longer time to search for the proper nodes for point clouds and to extract the necessary areas into arrays, but it reduces the memory used dramatically. From these results, it is verified that the Quadtree data structure is suitable for saving huge amounts of sensor information to be recalled in future driving.
Path Planner

This component was tested with a simulator written in C/C++ and OpenGL. The first test was the roadway navigation behavior, which occurs most frequently in autonomous driving. To test this behavior, a course composed of a set of waypoints was selected and a set of obstacles was placed to create several situations:

1. The obstacle completely blocks a single traffic lane.
2. The obstacle partially blocks a single traffic lane.
3. The obstacles block multiple lanes completely.

The path planner needs to control the vehicle depending on which case applies. Figure 5-11 shows case 1, where the single lane is completely blocked by the obstacle. In this case, the vehicle has to stop at a safe distance from the obstacle (Figure 5-11 (a)) and check whether the next lane is clear. If it is, the algorithm generates the proper paths and the vehicle follows the optimal path at the lane-change speed, which is defined as 10 mph in this dissertation (Figure 5-11 (b)). In this figure, the black square on the original lane is the projected (not the true) vehicle position on the original lane. Using the projected vehicle, the algorithm checks whether the true vehicle has passed the obstacle or not. If it has passed, the algorithm generates paths to move back to the original lane (Figure 5-11 (c)). When the two vehicle representations are very close, the algorithm returns to the original algorithm for passing the defined waypoints (Figure 5-11 (d)).

Figure 5-12 shows the result for case 2. In this case, the vehicle can avoid the obstacle without changing lanes. Each of the generated paths has a cost, and the paths that would hit the obstacle have much higher costs, so the optimal path is sought. Once the optimal path is found, the vehicle can avoid the obstacle by following that path.
123 F igure 513 shows the result s of case 3. I n this case, the vehicle can change lanes without stopping when the current lane is completely blocked by obstacles. T he procedure to do this is same as case 1 except the vehicle does not have to stop. T he next behavior that the planner has to consider is the parking behavior. A s mentioned previously, the path for the parking behavior can be found using the RRT algorithm. F igure 514 shows the result of typical parking behavior. T he vehicle approaches to an open area (Figure 514 (a)). W hen the vehicle gets to the area, the algorithm generates the path using the RRT algorithm and the vehicle follows the path (Figure 5 14 (b) ). O nce the vehicle is properly parked, the algorithm needs to make a temporary goal to get the vehicle out of the parking place (Figure 5 14 (c)). W hen the vehicle is out of the parking place completely, the algorithm generates new paths to move the vehic le to the exit of open area (Figure 514 (d)). F igure 515 shows the parametric results of the RRT algorithm. Figure 5 15 (a) (b), and (c) are the graphs of the number of nodes the number of candidate paths, and searching time, depending on the distance between the current vehicle position and goal position, respectively T he result s show that the parameters and the distance are two different things. T he number of parameters is decided depending on the minimum searching time that initially is defined as 0.1 sec in this dissertation. T here is an other type of parking behavior parallel parking. Figure 516 shows the result of the behavior. Figure s 5 16 (a) and (b) are the same procedures as a typical parking place, but the goal for this behavior is a virtual parking lot in a lateral position to the original goal point. O nce the vehicle is in the virtual parking space position, the algorithm starts moving the vehicle (Figures 5 16 (c) and (d)) in the way that was


There is another type of parking behavior: parallel parking. Figure 5-16 shows the result of this behavior. Figures 5-16(a) and (b) follow the same procedure as for a typical parking space, but the goal for this behavior is a virtual parking spot located laterally from the original goal point. Once the vehicle is in the virtual parking space position, the algorithm starts moving the vehicle (Figures 5-16(c) and (d)) in the way described in Chapter 4. Figures 5-16(e) and (f) are the same steps as described for typical parking behavior.

The last path planning test performed in this dissertation is the n-point turn behavior. The four cases mentioned in Chapter 4 are shown in Figures 5-17 through 5-20, respectively.

Figure 5-1. The University of Florida campus test area, taken from Google Maps.
Figure 5-2. Gainesville Raceway pit area with points defining test areas on the Google satellite map.
Figure 5-3. Selected course on the site for roadway navigation behavior on the Google satellite map.
Figure 5-4. Gainesville Raceway pit area for parking behavior on the Google satellite map.
Figure 5-5. Gainesville Raceway pit area for parallel parking behavior on the Google satellite map.
Figure 5-6. (a) Picture taken by Jihyun Yoon from Gale Lemerand Road on the University of Florida campus, (b) Edge result map and traversability values.
Figure 5-7. Grid map result comparison.
Figure 5-8. Full view of final results of the Sensor Knowledge Store.
Figure 5-9. Local view of final results of the Sensor Knowledge Store in area 2.
Figure 5-10. Extracted array from the area of Figure 5-9.
Figure 5-11. Lane change behavior for a single lane. a) The vehicle stops at a safe distance, b) Path generation to the next lane, c) Returning to the original lane, d) Returning to roadway navigation behavior.
Figure 5-12. Obstacle avoidance in a lane. a) Finding the optimal path to avoid the obstacle, b) Avoiding the obstacle by following the optimal path, c) Returning to roadway navigation behavior.
Figure 5-13. Lane change behavior for multiple lanes. a) Finding the optimal path to the next lane without stopping, b) Moving to the next lane by following the optimal path, c) Returning to the original lane, d) Returning to roadway navigation behavior.
Figure 5-14. Typical parking behavior. a) Initial position for the typical parking problem, b) Approaching the parking lot, c) Completed parking, d) Approaching the exit of the parking area.
Figure 5-15. Parametric results of parking behavior using RRT.
Figure 5-16. Parallel parking behavior.
Figure 5-17. An n-point turn for case 1. a) Stop at a safe distance, b) The first movement for the n-point turn, c) Path generation to change driving direction, d) Returning to roadway navigation behavior.
Figure 5-18. An n-point turn for case 2. a) Stop at a safe distance, b) Changing the vehicle direction by driving backward, c) Returning to roadway navigation behavior.
Figure 5-19. An n-point turn for case 3. a) Stop at a safe distance, b) Changing the vehicle direction by driving backward, c) Returning to roadway navigation behavior.
Figure 5-20. An n-point turn for case 4. a) Stop at a safe distance, b) Changing the vehicle direction by driving forward, c) Approaching one of the goal points, d) Returning to roadway navigation behavior.

Table 5-1. Road Boundaries Detector update rate
Situation                                      Average update rate (Hz)
Desired                                        40.00
Without sending messages                       38.52
With sending only the grid map message         37.56
With sending edge grid map and height map      36.52
With sending all messages                      33.85

Table 5-2. Test results for road boundary detection of area 1
LADAR name             Terrain    Negative    Driver VF    Passenger VF
Scan lines number      1000       1000        1000         1000
Edge lines number      896        847         512          683
Peak detector (%)      82.45      79.56       65.35        71.43
Grid method (%)        76.53      73.23       55.29        65.27
Slope method (%)       92.58      91.25       81.39        85.92

Table 5-3. Test results for road boundary detection of area 2
LADAR name             Terrain    Negative    Driver VF    Passenger VF
Scan lines number      1000       1000        1000         1000
Edge lines number      952        939         629          735
Peak detector (%)      88.50      77.92       62.19        75.20
Grid method (%)        79.37      71.51       62.21        69.31
Slope method (%)       95.81      93.57       84.70        88.20

Table 5-4. Test results for road boundary detection of area 3
LADAR name             Terrain    Negative    Driver VF    Passenger VF
Scan lines number      1000       1000        1000         1000
Edge lines number      984        977         628          803
Peak detector (%)      89.22      79.56       65.35        71.43
Grid method (%)        71.29      69.36       49.31        75.41
Slope method (%)       96.88      94.30       80.91        88.72

Table 5-5. Test results for road boundary detection of area 4
LADAR name             Terrain    Negative    Driver VF    Passenger VF
Scan lines number      1000       1000        1000         1000
Edge lines number      909        851         617          803
Peak detector (%)      79.73      71.37       60.21        69.77
Grid method (%)        71.43      79.88       52.21        59.97
Slope method (%)       89.74      87.50       80.21        83.01

Table 5-6. Test results for road boundary detection of area 5
LADAR name             Terrain    Negative    Driver VF    Passenger VF
Scan lines number      1000       1000        1000         1000
Edge lines number      979        925         385          671
Peak detector (%)      87.40      82.87       61.09        75.98
Grid method (%)        82.50      78.09       59.44        71.24
Slope method (%)       93.25      90.47       80.19        88.26

Table 5-7. Test results for road boundary detection of area 6
LADAR name             Terrain    Negative    Driver VF    Passenger VF
Scan lines number      1000       1000        1000         1000
Edge lines number      922        891         698          776
Peak detector (%)      88.53      80.92       59.31        70.22
Grid method (%)        80.29      77.93       55.29        65.27
Slope method (%)       95.37      88.39       76.58        81.96

Table 5-8. Test results for the Sensor Knowledge Store
Name                                 Value
Component update rate (Hz)           39.75
Quadtree building rate (Hz)          19.85
Driving distance (kilometers)        9.27
Average vehicle speed (mph)          20
Tree height                          14
Tree range from center (m)           4096 x 4096
Global center of quadtree X (m)      369584.000094
Global center of quadtree Y (m)      3282071.391349
Covered area size (m^2)              426190.00
Total number of nodes                2290450
Total number of leaf nodes           1704760
Memory usage (MB)                    39.318


CHAPTER 6
CONCLUSIONS AND FUTURE WORK

This dissertation describes new and novel methods, for both existing and newly developed components, to support autonomous driving. This final chapter continues the discussion of these methods by first presenting several areas of potential future work that arose during their development and testing. It then draws conclusions from the theory and implementation discussed in Chapters 3 and 4 and from the test results of the validation process described in Chapter 5. It also outlines the main contributions of the proposed methods to the field of robotics.

Future Work

The results described in Chapter 5 show that these approaches can provide better sensing performance, store large amounts of information effectively, and give the vehicle more options for path planning, but there is still room to improve the solutions to the problems discussed in this dissertation. For road boundary detection, a better and more precise sensing method is necessary. The tests in this dissertation were performed at 20 mph, for which the current update rate was sufficient to sense the environment. The vehicle will need to run at higher speeds in the future, however, so the update rate will have to increase to provide sufficient information about the environment. In addition, the test results were collected and analyzed for artificial road boundaries; since there are various other types of boundaries, the tests should be repeated and verified with those types as well.

Because its range of application is so wide, the Sensor Knowledge Store algorithm is quite useful, although several areas still need improvement. The user may want to save more kinds of data types. For this reason, the algorithm needs to store the information using more efficient data structures.


As the user wants to save more kinds of data types, the required memory will grow beyond the current size, so developers must provide better storage and use it effectively. The last point for improvement is extraction speed. Extraction is the necessary step of sending information for the required areas at the proper size, but the process takes slightly longer than with a general grid map because the corresponding nodes must be found one by one. The developer must therefore find a way to extract the array from the tree directly; a sketch of one such approach is given at the end of this section. Some of these issues can be addressed with more efficient programming techniques and others with faster hardware and greater storage capacity. Immediate future work will deal with testing the algorithm on a real vehicle in cases where moving obstacles are introduced into the environment.

The methods proposed in this dissertation are vector based, while the previously presented method is a grid-based A* algorithm. There is no definitive answer yet to the question of which one is better, so a new method that combines the two algorithms may be a good solution to the problem. For the parking behavior, this dissertation presents a navigation algorithm that incorporates a rapidly exploring random tree method. To reduce the search time, many constraints were used in the algorithm. The feasibility of this proposed method has been demonstrated via simulation, using performance parameters obtained from an existing autonomous vehicle system.
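
As a rough illustration of the direct-extraction idea mentioned above, the following C++ sketch fills an output grid by recursing through a quadtree and writing each leaf's value over its whole block of cells at once, instead of looking up the corresponding node for every cell individually. QuadNode, its NW/NE/SW/SE child order, and the square power-of-two extraction window are simplifying assumptions, not the Sensor Knowledge Store's actual data structures.

#include <cstddef>
#include <cstdint>
#include <vector>

struct QuadNode {
    bool      isLeaf;
    uint8_t   value;     // stored traversability value for a leaf
    QuadNode* child[4];  // NW, NE, SW, SE; null children mean no data
};

// Fill the grid cells covered by `node`, whose block starts at (row, col) and
// spans `size` x `size` cells of the output array.
void fillFromNode(const QuadNode* node, int row, int col, int size,
                  std::vector<uint8_t>& grid, int gridWidth)
{
    if (node == nullptr) return;  // missing subtree: cells keep the default value

    if (node->isLeaf || size == 1) {
        // One write per cell of the block, but no further tree traversal.
        for (int r = row; r < row + size; ++r)
            for (int c = col; c < col + size; ++c)
                grid[static_cast<std::size_t>(r) * gridWidth + c] = node->value;
        return;
    }

    const int half = size / 2;
    fillFromNode(node->child[0], row,        col,        half, grid, gridWidth); // NW
    fillFromNode(node->child[1], row,        col + half, half, grid, gridWidth); // NE
    fillFromNode(node->child[2], row + half, col,        half, grid, gridWidth); // SW
    fillFromNode(node->child[3], row + half, col + half, half, grid, gridWidth); // SE
}

// Example: extract a sideCells x sideCells array from a subtree assumed to
// cover exactly that region (sideCells is an illustrative parameter).
std::vector<uint8_t> extractArray(const QuadNode* root, int sideCells = 256)
{
    std::vector<uint8_t> grid(static_cast<std::size_t>(sideCells) * sideCells, 0);
    fillFromNode(root, 0, 0, sideCells, grid, sideCells);
    return grid;
}

Because large uniform leaves are written as whole blocks, the traversal cost depends on the number of quadtree nodes in the window rather than on the number of output cells, which is the intended gain over the cell-by-cell extraction described above.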


Conclusions

Sensing and path planning tasks are by no means trivial, even after decades of research and development. The ability of a robot to sense obstacles and generate an optimal path that moves it from its current state to its goal state without colliding with any objects is very important. Moving obstacles in the robot's environment obviously make these problems much more complicated. The 2007 DARPA Urban Challenge, in which unmanned ground vehicles were required to navigate urban environments while interacting with other moving vehicles, including other unmanned vehicles, is a good example of the need for moving object detection.

The research presented in this dissertation provides new and novel approaches to sensing and path planning. The first chapter provides an introduction to the sensing and path planning problems in general, while Chapter 2 recalls previous work that proposed several approaches to these problems. This is followed by a discussion of the theory behind the newly presented methods in Chapter 3 and an outline of their current implementation on the Urban NaviGator in Chapter 4. A description of the validation procedure and a summary of testing results are then provided in Chapter 5. The methods discussed in this dissertation represent a particularly novel approach to the presented research, and the test results described in Chapter 5 provide abundant evidence that these new methods can improve autonomous driving not only for unmanned ground vehicles but also for other types of unmanned vehicles and other robotic systems. Combined in future research, the new methods can also provide new potential applications and solutions to existing problems in current robotic systems.


BIOGRAPHICAL SKETCH

Jihyun Yoon was born and raised in Jinhae, Kyungnam, South Korea. He received his B.S. degree in automotive engineering from Kookmin University in Seoul, South Korea, in February 2006. He is currently working as a graduate research assistant and completing a doctoral degree at the Center for Intelligent Machines and Robotics (CIMAR) at the University of Florida. His research focuses on unmanned ground vehicle LADAR sensing and path planning. He plans to continue his career as a research engineer at Samsung Electronics in South Korea.