VECTOR PURSUIT PATH TRACKING
FOR AUTONOMOUS GROUND VEHICLES
JEFFREY S. WIT
A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY
UNIVERSITY OF FLORIDA
The author would like to convey his appreciation to his supervisory committee
(Dr. Carl Crane, Dr. Joseph Duffy, Dr. Paul Mason, Dr. John Schueller, and Dr. Antonio
Arroyo) for their support and guidance. Special thanks go to Dr. Crane who gave the
author the opportunity to work on the autonomous vehicle project and continually
provided important insight and advice.
This work would not have been possible without the support of the Air Force
Research Laboratory at Tyndall Air Force Base, Florida. Thanks go to Al Neese and the
rest of his staff.
The author's work presented in this dissertation focuses on only part of the tasks
required for autonomous navigation. Other project members have addressed the
remaining tasks. Therefore, thanks go to those who have worked on the autonomous
navigation project, both past and present, at the Center for Intelligent Machines and
Robotics. Individual thanks go to the project manager, David Armstrong, for his
invaluable input, and to office mate David Novick, for his unending programming advice.
Finally, special thanks go to the author's wife, Jennifer Lisa Wit, who provided
continuous encouragement and inspiration needed to finish this dissertation.
TABLE OF CONTENTS

ACKNOWLEDGMENTS
ABSTRACT
INTRODUCTION
    Problem Statement
    Project Background
        History of Vehicles Automated at CIMAR
        Evolution of the NTV's Architecture
    Research Motivation
    Research Objective
REVIEW OF THE LITERATURE
    Autonomous Ground Vehicle Applications
        Planetary Rovers
        Agricultural Vehicles
        Cleaning Vehicles
        Passenger Vehicles
        Military Vehicles
        Security Vehicles
        Inspection Vehicles
    Autonomous Ground Vehicle Navigation Architecture
        Behavioral Architecture
        Hierarchical Architecture
        Hybrid Architecture
VECTOR PURSUIT PATH TRACKING
    Screw Theory Basics
    Vector Pursuit
        Defined Coordinate Systems
        Method 1
        Method 2
        Desired Vehicle Velocity State
EXECUTION CONTROL
    Fuzzy Controller
        Inference Mechanism
        Defuzzification
    Fuzzy Reference Model Learning Control
    Vehicle Linear Velocity FRMLC
    Vehicle Angular Velocity FRMLC
RESULTS
    Method for Evaluating Path Tracking
    Navigation Test Vehicle (NTV)
        Simulation Model
        Throttle Model
        Steering Model
        Simulation Results
        Implementation Results
        Navigating the NTV in Reverse
    Cybermotion K2A Implementation Results
    All-Purpose Remote Transport System (ARTS) Implementation Results
CONCLUSIONS AND FUTURE WORK
    Future Work
APPENDIX A MAX INTERFACE SPECIFICATION
APPENDIX B NTV SIMULATION RESULTS
APPENDIX C NTV EXPERIMENTAL RESULTS
LIST OF REFERENCES
BIOGRAPHICAL SKETCH
Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy
VECTOR PURSUIT PATH TRACKING
FOR AUTONOMOUS GROUND VEHICLES
Jeffrey S. Wit
Chairman: Dr. Carl D. Crane III
Major Department: Mechanical Engineering
The Air Force Research Laboratory at Tyndall Air Force Base, Florida, has
contracted the University of Florida to develop autonomous navigation for various
ground vehicles. Autonomous vehicle navigation can be broken down into four tasks.
These tasks include perceiving and modeling the environment, localizing the vehicle
within the environment, planning and deciding the vehicle's desired motion, and finally,
executing the vehicle's desired motion. The work presented here focuses on the tasks of
deciding the vehicle's desired motion and executing that motion.
The third task above involves planning the vehicle's desired motion as well as
deciding the vehicle's desired motion. In this work it is assumed that a planned path
already exists and therefore only a technique to decide the vehicle's desired motion is
required. Screw theory can be used to describe the instantaneous motion of a rigid body,
i.e., the vehicle, relative to a given coordinate system. The concept of vector pursuit is to
calculate an instantaneous screw that describes the motion of the vehicle from its current
position and orientation to a position and orientation on the planned path. Once the
desired motion is determined, a controller is required to track this desired motion.
The fourth task for autonomous navigation is to execute the desired motion. In
order to accomplish this task, two fuzzy reference model learning controllers (FRMLCs)
are implemented to execute the vehicle's desired turning rate and speed. The controllers
are designed to be dependent on certain vehicle characteristics, such as the maximum
vehicle speed and maximum turning rate. This is done to facilitate the transfer of these
controllers to different vehicles.
The vector pursuit path-tracking method and the FRMLCs were first tested in
simulation by modeling the Navigation Test Vehicle (NTV) developed by the Center for
Intelligent Machines and Robotics (CIMAR) at the University of Florida. In addition to
testing in simulation, vector pursuit path tracking and the FRMLCs were implemented on
the NTV. Results show that vector pursuit is more robust with respect to disturbances
and to different vehicle speeds compared with other geometric path-tracking techniques.
An autonomous vehicle is one that is capable of automatic navigation. It is self-acting
and self-regulating; therefore, it is able to operate in and react to its environment
without outside control. The process of automating vehicle navigation can be broken
down into four steps: 1) perceiving and modeling the environment, 2) localizing the
vehicle within the environment, 3) planning and deciding the vehicle's desired motion
and 4) executing the vehicle's desired motion. There has been much interest and
research in each of these areas in the past decade. The research presented here
focuses on deciding the vehicle's desired motion and then executing that desired motion.
Given is a path made up of two or more waypoints that an Autonomous Ground
Vehicle (AGV) must track. It is assumed that the AGV has a path planner, a
positioning system, and a vehicle control unit that conform to the interface
specification of the MAX architecture currently being developed at the
University of Florida (see Appendix A).
The goal is a path-tracking algorithm that enables an AGV to navigate a given path
accurately at speeds up to 4.5 meters per second (~10 mph). This is the principal
task of the mobility control unit in the MAX architecture and can be broken down
into two subtasks. First, develop an algorithm that determines the current desired
motion of the AGV that causes it to track the given path. Second, develop a
control algorithm that executes this desired motion.
The Center for Intelligent Machines and Robotics (CIMAR) began working with
autonomous vehicles in 1990 and has continued working with them to the present day.
The Air Force Research Laboratory, located at Tyndall Air Force Base, Florida, sponsors this work.
History of Vehicles Automated at CIMAR
In 1991, CIMAR completely automated its first vehicle. A Kawasaki MULE 500
all-terrain vehicle was modified for computer control and currently serves as a
Navigation Test Vehicle (NTV) at the University of Florida. Computer control of the
vehicle was accomplished by mounting motors and encoders on the vehicle's steering
wheel, throttle, brake and transmission. An integrated inertial navigation unit (INU) and
differential global positioning system (DGPS) provided real-time vehicle position and
velocity data for feedback. An array of sonar sensors was mounted on the front of the
vehicle to detect any unexpected obstacle in the vehicle's path. The NTV has undergone
several revisions over the years as technology has advanced. Figure 1.1 shows a picture
of the NTV as it is today.
Figure 1.1: Navigation Test Vehicle.
The technology developed on the NTV has been used to automate several other
vehicles. Figure 1.2 shows a John Deere Gator that was automated to serve as an
autonomous survey vehicle (ASV). It was designed to survey various Department of
Defense (DOD) facilities that contain buried unexploded ordnance (UXO). The John
Deere Gator tows a sensor package, which is composed of a magnetometer array and
ground-penetrating radar, over the entire area to be surveyed. As the ASV navigates, it
collects and stores time-tagged position data and data from the sensor package. This data
can then be postprocessed to determine the location of possible buried UXO.
Figure 1.2: Autonomous Survey Vehicle.
A John Deere Excavator also was automated using the technology developed on
the NTV. The John Deere Excavator, shown in Figure 1.3, was automated in order to
navigate to the location of buried UXO. After navigating to the location of the buried
UXO, an operator was able to dig up and remove the UXO through a tele-remote operation.
The technology developed on the NTV also was used to automate a D7G
bulldozer for the Marines. Figure 1.4 shows the D7G bulldozer outfitted with a mine
plow and explosive netting. Its mission was to clear a 50x50-yard area of mines and other
obstructions in order to create a landing area for the deployment of the Marines and their equipment.
Figure 1.3: Autonomous John Deere Excavator.
Figure 1.4: Autonomous D7G Bulldozer.
The latest vehicle to use the technology developed on the NTV is the All-Purpose
Remote Transport System (ARTS) shown in Figure 1.5. ARTS is a commercially
available vehicle outfitted with a tele-remote package developed by Applied Research
Associates, Inc. of Tyndall Air Force Base, FL. This vehicle was automated for a
demonstration during the October 1999 Joint Architecture for Unmanned Ground
Vehicles (JAUGS) working group meeting held at the University of Florida.
Figure 1.5: Autonomous ARTS.
Evolution of the NTV's Architecture
The original NTV architecture was a blackboard approach. An area in memory
was created to which each system had read and write access, allowing each system to
communicate with the others. This approach has the advantage of allowing a system
the ability to share its resultant data easily and immediately with other systems running in
parallel. This architecture was implemented on the NTV with a VME chassis with
multiple 68030 CPU boards. Shared memory was created to allow the systems running
in parallel on different CPU boards to communicate their results via the VME backplane.
There are two major problems with this blackboard implementation that make it
difficult to maintain and upgrade. First, debugging system software can be very difficult.
For example, system A may have a memory leak that overwrites data in shared memory
but appears to be operating correctly. System B now uses this data not knowing it has
been overwritten by system A. By simply looking at its results, system B would appear
to have a software bug in it and system A would not. To make things worse, different
programmers may be responsible for different systems, where each programmer may
require changes to variables in shared memory. This has the possibility of quickly
becoming a debugging nightmare with each programmer blaming another.
A second problem with this blackboard implementation is the difficulty in
transferring only one system to another application or replacing an existing system with a
different one. Take for example a system that provides position feedback for the AGV.
Suppose the positioning system on AGV 1 was tested fully and known to operate
correctly. Now, it is desired to use this positioning system on a newly developed AGV 2.
In order for this transfer to work, both the hardware and software on AGV 2 must be
identical to AGV 1. That is, AGV 2 also must have a VME chassis and the exact same
shared memory structure. Obviously this is not always the case, and substantial
hardware and software changes must be made in order to use the positioning system on
AGV 2.
Because of these problems a new architecture was designed. Based on experience
from previous work, one main requirement was specified for this new architecture. The
architecture must allow systems to be self-contained submodules, where only the
interface of each submodule is defined rigorously. The effect of this requirement benefits
both the developer and the user. The developer now has a great amount of freedom in
choosing specific hardware and software for his or her system. And, the user now has the
ability to scale his or her AGV's functionality by combining different submodules.
Developing an architecture that meets this requirement is a two-step process
accomplished by first determining a list of submodules required to automate a vehicle
and then determining their interface. The Modular Architecture eXperimental (MAX),
currently being developed at the University of Florida, attempts to meet this requirement.
MAX currently consists of the following submodules: Position System (POS),
Vehicle Control Unit (VCU), Path Planner (PLN), Detection and Mapping System
(DMS) and Mobility Control Unit (MCU). The modular structure of MAX is shown in
Figure 1.6. The interface of each submodule, defined by MAX (see Appendix A),
allows communication with other submodules and/or the user.
Figure 1.6: MAX submodule structure.
As indicated in the problem statement, there are two tasks considered in this
research. The first task of this research is to develop an algorithm to determine the
current desired motion of the AGV that causes it to track a given path. Currently various
methods exist that are based on the geometry of some look-ahead point on the path
relative to a vehicle coordinate system. The distance to this look-ahead point is used as a
tuning parameter. Unfortunately there is a tradeoff in setting the look-ahead distance.
For accurate path tracking it is desirable to have a look-ahead distance that is small so
that the lateral error is reduced quickly. On the other hand, a large look-ahead distance is
desirable when considering system stability. These methods only consider the position of
the look-ahead point and not the orientation of the path at that point. The motivation
behind this part of the research is to allow for smaller look-ahead distances without
giving up system stability.
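To make the geometry concrete, the best known of these look-ahead methods, classic pure pursuit, can be sketched as follows. This is a generic illustration, not the algorithm developed in this work; the function and parameter names are illustrative.

```python
import math

def pure_pursuit_curvature(pose, goal, lookahead):
    """Curvature of the arc that carries the vehicle through a look-ahead point.

    pose: (x, y, heading) of the vehicle in world coordinates (heading in radians).
    goal: (x, y) of the look-ahead point on the path.
    lookahead: distance from the vehicle to the look-ahead point.
    """
    x, y, theta = pose
    gx, gy = goal
    # Express the look-ahead point in the vehicle coordinate system.
    dx, dy = gx - x, gy - y
    lateral = -math.sin(theta) * dx + math.cos(theta) * dy
    # Pure pursuit: curvature = 2 * lateral offset / (look-ahead distance)^2.
    # A small look-ahead distance makes the commanded curvature large,
    # which is exactly the accuracy-versus-stability tradeoff described above.
    return 2.0 * lateral / lookahead ** 2
```

Note that only the position of the look-ahead point enters the calculation; the orientation of the path at that point is ignored.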
The second task of this research is to develop a control algorithm that executes the
AGV's desired motion. There are two main motivations for this work. The first
motivation is to have the ability to operate the NTV under various conditions and speeds.
Operating conditions most likely change as new applications are established for the
technology developed on the NTV. Some possible changes in operating conditions
include the weight of the payload, towing a trailer, the desired vehicle speed, and the type
of ground on which it is operating (e.g., asphalt, grass, or sand). All of these
conditions affect the ability of the NTV to navigate a path accurately. Currently, if the
operating conditions are too different, the NTV must be re-tuned to achieve an acceptable
level of performance.
Using the MAX architecture, it is desired to develop an MCU that has the ability
to operate under these various conditions without the need to re-tune it. This suggests
that the MCU must have the ability to adapt to its current operating conditions.
The second motivation for this part of the research is to reduce the amount of time
required to transfer the technology to different vehicles. One of the main reasons for
developing a modular architecture is the ability to transfer a module from one vehicle
to another, or to use differently constructed modules on the same
vehicle. This makes sense for a POS module since it is, for the most part, independent of
the vehicle it is on. For example, one positioning system could be made up of GPS and
INS units while another positioning system could be made up of just a GPS unit. Since
by using MAX the interfaces between the two positioning systems are the same, they can
easily be switched on the same vehicle or transferred to a new vehicle.
The ability to switch or transfer modules becomes much more difficult when
dealing with the MCU module. Without the MAX architecture, control of a ground
vehicle is typically accomplished by commanding a throttle position and steering wheel
angle for a car-like vehicle or commanding left track and right track velocities for a
tracked vehicle. Obviously the commands depended highly on the type of vehicle. By
using MAX, the commands to control the vehicle are now the same, a propulsive wrench
and a resistive wrench. Additionally, ground vehicles typically will use the same
components of the propulsive wrench and the resistive wrench. The component Fx is used to
control the vehicle's linear speed, and the component Mz is used to control the vehicle's
turning rate.
Having the commands to control an AGV be the same for most ground vehicles
makes the idea of being able to switch out or transfer the MCU more feasible. Therefore,
the second motivation for this part of the research is to develop an MCU that can be
transferred to different vehicles with few or no changes to the MCU. This suggests that
the MCU must have the ability to adapt not only to different operating conditions but also
to different vehicles.
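As a sketch of what such a vehicle-independent command might look like in software (the type, field, and function names below are hypothetical and are not taken from the MAX interface specification in Appendix A):

```python
from dataclasses import dataclass

@dataclass
class Wrench:
    """Six components of a wrench: a force and a moment."""
    fx: float = 0.0  # drives the vehicle's linear speed
    fy: float = 0.0
    fz: float = 0.0
    mx: float = 0.0
    my: float = 0.0
    mz: float = 0.0  # drives the vehicle's turning rate

def mcu_command(speed_effort: float, turn_effort: float) -> Wrench:
    """Pack the MCU's speed and turning-rate efforts into a propulsive wrench."""
    return Wrench(fx=speed_effort, mz=turn_effort)
```

Because the same two components are meaningful for car-like, tracked, and synchronous-drive vehicles, an MCU built around such a command can, in principle, be moved between vehicles unchanged.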
The objective of this research is to develop an adaptive control algorithm for the
NTV to track a given path accurately at speeds up to 4.5 meters per second. This task is
broken down into two subtasks. First, develop an algorithm to determine the current
desired motion of the AGV that causes it to track the given path. Second, develop an
adaptive control algorithm that executes the AGV's desired motion.
The remainder of this dissertation is outlined as follows: Chapter 2 is a broad
overview of different AGVs and their navigation architectures. Chapter 3 introduces a
new path-tracking algorithm that gives the vehicle's desired motion based on the current
vehicle position and orientation relative to a path. Chapter 4 presents a fuzzy
reference model learning controller (FRMLC) to track the AGV's desired motion. Chapter 5
presents the development of a simulation of the NTV and presents the results of using the
simulation to test the new path-tracking algorithm and the adaptive control algorithm. It
also presents the test results from implementing the algorithms on the NTV. Chapter 5
concludes by presenting the test results from implementing the path-tracking algorithm
and adaptive control algorithm on a synchronous drive vehicle and a tracked vehicle.
And finally, Chapter 6 presents some conclusions and future work.
REVIEW OF THE LITERATURE
Within the past two decades, there has been much research in the
area of autonomous mobile robots. The reason for this growing interest in autonomous
mobile robots is the advancement of supporting technology. Both sensor and computing
technology have improved greatly: sensors are more accurate and give more information
about the current state of the robot and its environment, and computers are faster and
have more memory to run larger, more complicated programs. The advancement of
these two areas has made possible the idea of autonomous mobile robots. Today,
autonomous mobile robots consist of air, land, and sea vehicles. This chapter focuses on
the research done on autonomous mobile land vehicles, or autonomous ground vehicles
(AGVs). First we consider some of the current applications of AGVs. Then we review
the current research on various navigation architectures.
Autonomous Ground Vehicle Applications
There are many applications for autonomous ground vehicles. The motivations
for automating different vehicles are typically to reduce risk of human life or injury in
hazardous areas, to relieve human operators from overly monotonous tasks, or to increase
the precision of navigation. Some of these applications are discussed below.
Green et al. present an algorithm that achieves path tracking and obstacle
avoidance for a planetary rover [2,3]. Path tracking is accomplished through the
feedback of position and orientation errors relative to the planned path. The position and
orientation of the rover is estimated using an inertial navigation unit integrated with an
odometer. The rover avoids obstacles by creating an artificial potential field from the
data received from a range sensor. An obstacle avoidance error is calculated from this
artificial potential field. Both the tracking and the obstacle avoidance errors are used as
inputs to a linear-feedback steering controller. Simulated results of the controller are presented.
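The artificial potential field used for obstacle avoidance can be sketched generically as below. This is a standard repulsive-field formulation offered for illustration, not Green et al.'s actual controller; the influence radius and gain are assumed values.

```python
import math

def repulsive_gradient(robot, obstacles, influence=2.0, gain=1.0):
    """Sum the repulsive-field gradients from a set of sensed obstacle points.

    Obstacles beyond the influence radius contribute nothing; closer
    obstacles push the robot away with rapidly growing magnitude.
    """
    gx = gy = 0.0
    rx, ry = robot
    for ox, oy in obstacles:
        d = math.hypot(ox - rx, oy - ry)
        if 0.0 < d < influence:
            # Repulsion grows as the obstacle distance shrinks.
            mag = gain * (1.0 / d - 1.0 / influence) / d ** 2
            gx += mag * (rx - ox) / d
            gy += mag * (ry - oy) / d
    return gx, gy
```

An obstacle-avoidance error derived from such a gradient can then be combined with the path-tracking error as a second controller input.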
Boissier presents the work done by the French Space Agency on planetary rovers
for the IARES Eureka project. The IARES mobile robot has six independently
steerable wheels, three rotating axles, wheel and walking modes, passive adaptation to
obstacles along the transversal axis and mixed passive/active longitudinal deformation,
active wheel loading equalization on slopes and maximum speeds of 0.10 m/s or 0.35
m/s. It has a SAGEM inertial unit for localization that uses zero velocity updates to
minimize the amount of drift in position. The IARES mobile robot also has stereovision
in order to create a digital terrain model that is used to navigate the vehicle. It was
evaluated successfully in different terrain conditions for both predictive tele-remote
operation and autonomous navigation.
O'Connor et al. at Stanford University rely solely on Carrier Phase Differential
GPS (CPGPS) to provide position and attitude feedback to control the position of
agricultural equipment relative to a preplanned path. The position and attitude are
calculated using four single-phase GPS antennas on the vehicle. The test platform used
by O'Connor et al. to test autonomous navigation is a John Deere 7800 tractor. A hybrid
controller is used to control the vehicle's heading. For large heading errors, a
"bang-bang" control technique is used. Otherwise, for small heading errors, a Linear Quadratic
Regulator is used. Tests showed the lateral position standard deviation to be less than 2.5
cm and the heading standard deviation to be less than 1 degree.
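The switching structure of such a hybrid heading controller can be sketched as below; a proportional law stands in for the Linear Quadratic Regulator in the small-error region, and the threshold, gain, and saturation limit are assumed values, not those of O'Connor et al.

```python
def hybrid_steering(heading_error, threshold=0.3, max_cmd=1.0, gain=2.0):
    """Hybrid heading control: bang-bang for large errors, linear for small.

    heading_error: desired heading minus actual heading, in radians.
    """
    if abs(heading_error) > threshold:
        # Large error: bang-bang control saturates the steering command.
        return max_cmd if heading_error > 0 else -max_cmd
    # Small error: linear state feedback (an LQR reduces to a linear gain).
    return gain * heading_error
```

The bang-bang region drives large errors down quickly, while the linear region gives smooth, well-damped tracking near the path.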
Another group interested in autonomous agriculture vehicles is from the Silsoe
Research Institute in Bedford, UK. Marchant et al. present a row-following,
vision-guided autonomous agricultural vehicle. They use image analysis and odometer
data to localize the vehicle. A proportional controller is used to track the desired path.
Marchant tested the vehicle on four fields of cauliflower. The control error for these runs
was determined to be less than 20 mm RMS.
Hofner and Schmidt present MACROBE, an autonomous floor-cleaning and
inspecting robot [7,8]. Navigation is achieved by executing one of five preprogrammed
motion macros. A planner on MACROBE uses its current knowledge of the workspace
to generate a serpentine path made up of these motion macros. If an unexpected obstacle
is encountered, MACROBE adds it to its knowledge of the workspace and then plans a new path.
Ulrich et al., from the Swiss Institute of Technology in Lausanne, Switzerland,
present an autonomous vacuum cleaner. A Koala robot is used as the platform for the
autonomous vacuum cleaner. The robot is equipped with a 2-DOF arm that is used to
facilitate the cleaning process. The arm also is used tactilely to sense unknown objects
and then classify them as legs, walls, corners or unknowns. Through the use of the object
data along with compass and odometer data, the robot builds a map of its workspace. An
algorithm to clean the workspace begins by attempting to travel the perimeter of the
workspace. This allows the robot to build an initial map of its workspace. After the
perimeter is traversed, the robot attempts to clean the interior part of the workspace by
traveling back and forth between known walls. Ulrich tested the robot in a 2-3 square
meter area that was covered with sawdust. The robot was able to clean 95% of the area
in its internal map.
Nolfi uses a recently developed technique to evolve the desired behavior of an
autonomous vehicle to collect garbage and remove it from an arena. The platform
chosen is a Khepera robot, developed at EPFL in Lausanne, Switzerland. It is a
wheeled vehicle controlled by two DC motors with incremental encoders. The Khepera
robot also is equipped with a gripper module that has 2-DOF and eight infrared proximity
sensors. The robot is automated through the use of a neural controller. The neural
network chosen is made up of seven sensory neurons, 16 motor neurons and no internal
neurons. A genetic algorithm is used to evolve this neural network to perform various
tasks such as exploring the environment, locating and picking up target objects and
removing the objects from the arena. As the network evolves, the number of successful
pickup and release tasks increases and the number of crashes decreases.
Two areas of research for the development of an Automated Highway System
(AHS) are vehicle longitudinal control and lateral control. Longitudinal control typically
involves controlling the vehicle's throttle and brake. Spooner and Passino present their
results of two fuzzy longitudinal controllers for vehicle following. The controllers
they use are a direct adaptive controller and an indirect adaptive controller that use
Takagi-Sugeno fuzzy systems. Performance results of their controllers in simulation are presented.
Huang and Ren also have done work on vehicle longitudinal control. Their
work deals with a switching strategy between the throttle and brakes. They compute a
control signal for the throttle and a control signal for the brake. Each signal is optimized
by a learning algorithm in order to meet a tracking criterion. These two signals then
are used to determine brake and throttle positions. Results from simulations are presented.
Vehicle lateral control, on the other hand, involves controlling the vehicle's
steering. Unyelioglu et al. present their design and stability analysis of a controller for
lane following. Their objective is to steer a vehicle so that it stays in the middle of
the lane. This is accomplished by defining a reference line in the middle of the lane and a
look-ahead point on the vehicle's longitudinal axis at a given distance in front of the
vehicle. The controller uses the offset distance between the look-ahead point and the
point on the reference line closest to the look-ahead point. Using the Routh-Hurwitz
stability criterion, they prove that for a given range of speeds the system is stable,
provided a sufficiently large look-ahead distance is chosen. Simulation results are given to
demonstrate the performance of their controller.
O'Brien et al. also address the lateral motion control of automated highway
vehicles. They designed an H∞ controller to track the center of the current lane on
both curved and straight highways. Considering performance requirements in the
controller design results in a controller that is robust to model uncertainty. The
controller's robustness to different speeds, road conditions and wind gusts are examined.
The controller is tested in simulation for various conditions. For each condition tested,
the lateral offset is less than 20 centimeters and the yaw angle error is less than 0.01
Two other areas of research dealing with passenger vehicles are active steering
assistance and parallel parking. The concept behind active steering assistance is to
monitor the driver's actions and to intervene when needed. Hsu et al. developed a system
named cooperative copilot that keeps a vehicle safely in its lane. The copilot
generates bounds of feasible steering angles and determines whether a correction should
be applied. The steering angle bounds are determined from the current road curvature,
vehicle motion and road width. A driving simulator is used to test the performance of the
copilot and to determine how it works with a human driver.
Parallel parking can be a difficult task for many people. Therefore automating
this procedure would be very useful and appreciated. Gorinevsky et al. developed an
automated parking control system that uses artificial neural network technology.
The neural network is used to generate a trajectory and to control the automated car. The
design is based on a radial basis function architecture to calculate the reference trajectory
and a feedback-feedforward controller to track the reference trajectory. The design is
tested in simulation for different parking situations.
Paromtchik and Laugier present an iterative algorithm for parallel parking based
on ultrasonic range data [17,18]. They use sinusoidal reference functions to control the
steering angle and the vehicle's velocity. The control scheme is implemented in a
reactive scheme in order to avoid obstacle collisions. They experimentally verify their
algorithm on a LIGIER electric autonomous vehicle.
There are many areas where the military is researching the use of AGVs. One
area is in a project for the United States Army that involves automatic target acquisition
(ATA). A typical mission involves a scout driving from a secondary observation
point to a main observation point. This allows the vehicle to record a path using position
data from an integrated inertial navigation system and a differential global positioning
system. A remote operator then takes over and the ATA mission begins. The operator is
alerted to any possible target by the ATA, at which point the operator can request
additional data. At any point during the mission the operator has the option to command
the vehicle to return to the secondary observation point. The vehicle then autonomously
drives back to the secondary observation point. Murphy and Legowik from the National
Institute of Standards and Technology present their work on the mobility system that
controls the vehicle during autonomous navigation for this project. They use a pure
pursuit algorithm to track the recorded path and a gain-scheduling algorithm to track a
commanded speed. Results on performance of the autonomous navigation are not given.
Another area in which the military has shown an interest in AGVs is the Defense
Advanced Research Program Agency's (DARPA) program for Tactical Mobile Robots
(TMR). The main goal of the TMR program is to develop the technology for small
robots that can be deployed easily in urban environments. This places some unique
requirements on system size, navigation capabilities, communication capabilities and
operator interface. The size restrictions they are trying to achieve are a maximum size of
24" x 20" x 8" and a maximum weight of 20-25 pounds. This allows the robot to be
deployed and controlled at the platoon or squad level. The TMR robots must be able to
navigate in urban environments. This requires the robot to be able to open and close
doors, to navigate over rubble, and up and down stairs. The environment may not be
communication-friendly, but each robot must keep in contact with its operator and other
TMR robots in the area. Finally, the TMR robots must be able to operate with a
minimum level of intuitive operator direction. This project currently is scheduled for
completion by the year 2002.
There are many applications for both indoor and outdoor security AGVs.
ROBART III is an indoor, nonlethal autonomous security response robot presented by
Ciccimaro et al. It is designed to operate in a previously unexplored area with little
support required from the operator. It is capable of detecting intruders through the use of
eight passive-infrared motion detectors. The infrared motion detectors are validated
partially by a Doppler microwave motion detector. A black-and-white video surveillance
camera mounted to the robot's head is used for further assessment of possible intruders.
The nonlethal response capabilities include a Gatling gun and three sirens. The Gatling
gun is a six-barreled pneumatically powered gun capable of firing tranquilizer darts. A
visible laser is used to facilitate the accuracy of the gun when it is operated remotely.
The three sirens are capable of an ear-piercing 103 decibels that can alert those nearby
and disorient the intruder.
Pastore et al. present their work on the Mobile Detection Assessment and
Response System-Exterior (MDARS-E), an outdoor security AGV. Robotics
Systems Technology developed the MDARS-E. Navigation is accomplished by
combined inputs from differential GPS, a fiber-optic gyro, a wheel odometer, and
landmark recognition. Obstacle avoidance is achieved with a two-tier layered approach.
Long-range sensors are used to provide first-alert obstacle detection from 0 to 100 feet.
Short-range sensors are used to provide higher resolution data for precise obstacle
avoidance. The sensors that are used for obstacle detection include radar, laser ranging,
ultrasonic ranging, and stereovision. Two sensors are used for intruder detection, vision
and radar, to achieve a high probability of detection and to minimize false detections.
AIRIS 21 is an underwater inspection robot presented by Koji. The specific
task for the AIRIS 21 robot is to inspect the outside surface of a reactor pressure vessel of
nuclear power stations. It performs a nondestructive inspection of welds in the reactor
pressure vessel shell from the inside. The AIRIS 21 uses thrusters to provide a chamber
underneath it with negative pressure. This allows it to be sucked securely onto the
reactor pressure vessel's wall. Two drive wheels and one idle wheel enable it to
maneuver on the wall. Positioning of the robot is accomplished with a depth gauge, an
optical beam, a gravity sensor, and an encoder. The depth gauge is used to determine the
elevation of the robot. The optical beam is used to locate a known structure relative to
the robot. Then, a map of the operating environment is used to locate the robot. The
gravity sensor is used to determine the direction of travel while the encoder keeps track
of the distance traveled.
A wheeled, multi-articulated robot that operates in a sewage system is presented
by Cordes et al. The objective behind this project is to be able to inspect Germany's
360,000-km long public sewage system. Germany's public sewage system is over 25
years old and possibly could be polluting the soil and ground water. The robot is
required to operate wirelessly, to navigate 90-degree turns and steps of 0.3 meters high,
and to operate in pipes with a diameter of 20 to 80 centimeters. The design looks like a
wheeled snake that consists of different modules. These modules include sensor, drive,
and power supply modules. This allows the driving and the sensing modules to be
interchanged as needed.
Autonomous Ground Vehicle Navigation Architecture
In general, current navigation architectures are labeled as behavioral, hierarchical
or a hybrid of behavioral and hierarchical. Behavioral architectures, also known as
reactive architectures, direct the AGV to execute a particular behavior in response to the
current sensor readings. The behaviors are defined in such a way that they cause the AGV to
tend toward completing its task. This allows the vehicle to navigate reliably with quick
response in a dynamic environment. However, as the complexity of the AGV's task or
its operating environment increases, the number of behaviors usually increases as well.
This makes it very difficult to predict the behavior of the AGV, and it makes it more
difficult for the designer to determine the correct behavior for all possible sensor
readings. Also, behavioral architectures do not guarantee the best solution since they
consider only the current sensor readings.
Hierarchical, or top-down, architectures break down the AGV's task into subtasks
and create functions to achieve these subtasks. This allows for the design of a
straightforward approach to accomplishing the task. Hierarchical architectures typically
maintain a model of the operating environment. They use this model along with
sophisticated planners to determine the best course of action in order to achieve a task.
Unfortunately, such sophisticated planning tends to be complex, resulting in a
slow response to changing environments.
Hybrid architectures attempt to combine behavioral and hierarchical architectures
in order to attain the desirable qualities of both architectures while overcoming their
weaknesses.
Some of the recent methods used to implement behavioral architectures include
potential fields, fuzzy logic [26-32], neural networks [33-35] and genetic algorithms
[36,37]. Some researchers have combined one or more of these methods in an attempt to
overcome the weaknesses of a particular method with the strengths of another. Some of
these combinations are fuzzy-neural networks [38-42], fuzzy-genetic algorithms,
fuzzy potential fields [44,45] and fuzzy-neural-network-genetic algorithms.
Song and Sheen present a fuzzy-neural controller for obstacle avoidance of a
differentially driven vehicle. The operating environment is assumed to be completely
unknown, and the vehicle is required to maneuver to a target location. Heuristic rules
are combined with a neural network to map input from sonar sensors to the left and right
motor velocities. Two behaviors are implemented for vehicle navigation: avoid
obstacle and danger. The avoid obstacle behavior attempts to navigate the vehicle in the
direction of the target unless impeded by an obstacle. The danger behavior is used to
escape from any undesirable situations. When the danger behavior is activated, the
vehicle spins around to find a direction of escape. The danger behavior takes priority
over the avoid obstacle behavior. Results are shown graphically of a robot navigating to
a target while avoiding walls and a box-shaped obstacle.
A sensory-based navigation scheme is presented by Tani and Fukumura. The
navigation architecture consists of two levels, a control level and a navigation level. The
control level incorporates a potential method in order to limit the desired trajectories so
that each one is smooth and avoids obstacles. This leaves the task of the navigation level
to decide the direction of travel at branches in the task space. A recurrent neural network
is used to accomplish this task. The network is trained through the supervision of a
trainer who knows the optimal path.
The mobile robot YAMABICO is used to test this navigation technique. The
experiment involves navigating the task space by alternating between a figure 8 route and
a figure 0 route. At a specific branch in the task space, the vehicle must switch between
the two different routes by deciding the direction of travel. Results of this test are shown
graphically where for the most part, the navigation level chose the correct direction of
travel at the various branches in the task space.
Hoffmann and Pfister present a fuzzy logic controller to navigate a vehicle to a
goal point while avoiding obstacles. The fuzzy logic controller is used to map the
perceived input to an appropriate control action. This fuzzy logic controller is designed
automatically through the use of a genetic algorithm. The genetic algorithm uses an
objective function to select the best individuals for reproduction of offspring. The fuzzy
logic controller's performance is measured with respect to the two tasks of reaching the
goal and avoiding obstacles. If the vehicle collides with an obstacle the controller is
given a reward proportional to the number of steps prior to the collision. If the vehicle
does not collide but does not reach the goal in the allotted steps, an additional reward is
given depending on how close the vehicle is to the goal. If the vehicle is within a given
distance to the goal, the controller receives a third reward. The method was applied
successfully and the results are shown graphically.
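The staged reward scheme described above can be sketched as a fitness function for the genetic algorithm. The weights, the inverse-distance form of the closeness credit, and all parameter names below are assumptions of this sketch, not details from the paper:

```python
def controller_fitness(collided, steps_survived, max_steps,
                       final_distance, goal_radius):
    """Staged fitness for evaluating a candidate fuzzy controller.

    collided        -- whether the run ended in a collision
    steps_survived  -- steps completed before the run ended
    final_distance  -- distance from the vehicle to the goal at the end
    goal_radius     -- distance within which the goal counts as reached
    (All weights below are illustrative assumptions.)
    """
    # Reward 1: proportional to the number of steps prior to a collision.
    fitness = steps_survived / max_steps
    if collided:
        return fitness
    # Reward 2: no collision -- add credit that grows as the vehicle
    # ends closer to the goal.
    fitness += 1.0 / (1.0 + final_distance)
    # Reward 3: a further fixed bonus for ending within the goal radius.
    if final_distance <= goal_radius:
        fitness += 1.0
    return fitness
```

Under this sketch a crashing controller scores below one that survives but misses the goal, which in turn scores below one that reaches it, so selection pressure favors both tasks.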
Hierarchical architectures typically involve either a path-tracking or trajectory-
tracking algorithm. Since the work done here involves path tracking, a more detailed
review of hierarchical architectures is warranted. Desired paths or trajectories can be
generated in real-time based on current sensor readings or generated once based on a map
of the operating environment. The method used, either real-time or not, to generate the
paths or trajectories generally depends on whether the operating environment is known a
priori and if it is static. Once the path or trajectory is known, there are several different
techniques used to track the path or trajectory. Some of these techniques include
Proportional-Integral-Derivative (PID) [47-53], pure pursuit [54-56], sliding-mode
[57,58], state feedback [59-66], fuzzy logic [67,68], neural networks [69-73] and fuzzy
neural networks [74,75].
PID techniques calculate errors based on the path or trajectory and the current
vehicle pose and velocity. These errors, and possibly their derivative and integral, are
multiplied by gains to determine the controlled input to the system. The first method
used to control the NTV, called follow-the-carrot, is a PID technique. The follow-the-
carrot path tracking method comes from the idea of holding a carrot in front of a farm
animal in order to coax the animal to move in a desired direction. With this in mind, the
follow-the-carrot method calculates a desired heading from the current vehicle position to
a look-ahead point called the carrot. The look-ahead point is a point on the path that is a
given distance in front of the orthogonal projection of the current vehicle position onto
the path. A PID controller takes the error between the vehicle's current heading
and the desired heading as its input and outputs the commanded steering wheel angle. This
method works well for straight paths but has problems with curved paths. By having the
look-ahead point a certain distance in front of the vehicle on the path, the desired heading
causes the vehicle to cut corners. Even if the vehicle were able to track the desired
heading with no errors, the vehicle would still have errors in its position.
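The follow-the-carrot step described above can be sketched as follows. Representing the path as a list of waypoints, using the closest waypoint as a stand-in for the orthogonal projection, and the function and parameter names are all assumptions of this sketch:

```python
import math

def follow_the_carrot(path, pose, lookahead, kp):
    """One proportional control step of the follow-the-carrot method.

    path      -- list of (x, y) waypoints approximating the desired path
    pose      -- (x, y, heading) of the vehicle, heading in radians
    lookahead -- distance ahead of the projected point to place the carrot
    kp        -- proportional gain (a full PID would add I and D terms)
    """
    x, y, heading = pose
    # Stand-in for the orthogonal projection: the closest waypoint.
    i = min(range(len(path)),
            key=lambda j: math.hypot(path[j][0] - x, path[j][1] - y))
    # Walk forward along the path until the look-ahead distance is consumed.
    d = 0.0
    while i + 1 < len(path) and d < lookahead:
        d += math.hypot(path[i + 1][0] - path[i][0],
                        path[i + 1][1] - path[i][1])
        i += 1
    carrot = path[i]
    # The desired heading points from the vehicle position to the carrot.
    desired = math.atan2(carrot[1] - y, carrot[0] - x)
    error = math.atan2(math.sin(desired - heading), math.cos(desired - heading))
    return kp * error
```

Note that the corner-cutting weakness is visible here: the commanded heading always aims straight at the carrot, so on a curve the chord from vehicle to carrot lies inside the path.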
Kanayama and Fahroo propose a new steering function as a line tracking method
for nonholonomic vehicles. The current state of a ground vehicle can be represented
by its current linear speed, v, and its current path curvature, κ = 1/r. Therefore, their
controller is designed to determine the optimal change in path curvature in order to track
a given line. They choose to control the vehicle's path curvature because it is related
more directly to vehicle control, and it is independent of the global coordinate system.
The steering function they propose is:

dκ/ds = -aκ - b(θ - θt) - cΔd,   (2.1)

where a, b and c are positive constants, κ is the vehicle's current path curvature, (θ - θt) is
the vehicle's heading error and Δd is the vehicle's position error. Immediately, it is
apparent that there is a problem of mixed units in their proposed steering function.
Unfortunately, Kanayama and Fahroo did not address this issue. By requiring that the
magnitude of (θ - θt) be less than π/2, they determined that the relationship between the
constants should be, a = 3k, b = 3k² and c = k³, for the controller to be stable. The term k is
the gain of the steering function and controls how fast or how slow the vehicle converges
to the line. This technique was tested in simulation as well as on the autonomous vehicle
Yamabico. The results of these tests are shown graphically for different values of the
steering function gain k.
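A discrete sketch of this steering function and a simple line-tracking simulation follow. The Euler integration, the unicycle update, and the step sizes are assumptions of this sketch; the gains are chosen so the linearized error dynamics have a triple pole at -k:

```python
import math

def steering_update(kappa, heading_err, lateral_err, k, ds):
    """One Euler step of the steering function dκ/ds = -aκ - bΔθ - cΔd,
    with a = 3k, b = 3k², c = k³ (triple pole at -k).  The discrete step
    size ds is an artifact of this sketch, not of the paper."""
    a, b, c = 3.0 * k, 3.0 * k ** 2, k ** 3
    return kappa + (-a * kappa - b * heading_err - c * lateral_err) * ds

def track_line(y0, theta0, k=1.0, v=1.0, dt=0.01, steps=2000):
    """Simulate a unicycle converging to the line y = 0 under the
    steering function; returns the final lateral offset."""
    y, theta, kappa = y0, theta0, 0.0
    for _ in range(steps):
        kappa = steering_update(kappa, theta, y, k, v * dt)
        theta += v * kappa * dt          # curvature integrates to heading
        y += v * math.sin(theta) * dt    # heading integrates to position
    return y
```

Because the controller acts on curvature rather than heading, the commanded quantity maps directly to the steering mechanism and does not depend on the global coordinate frame, as the authors note.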
Egerstedt et al. present the autonomous navigation of a car-like robot by tracking
a reference point. As long as the vehicle's position and heading errors relative to the
reference point are small, the reference point moves along the path as the vehicle follows
it. If the errors are too large, the reference point may stop to wait for the vehicle.
Therefore, they call the reference point a virtual vehicle. The location of the virtual
vehicle depends on both the vehicle's current speed and position. Once the location of
the virtual vehicle is determined, the steering is controlled by the proportional controller:
δf = -k(φ - φd),   (2.2)

where δf is the steering angle, φ is the vehicle heading, φd is the desired heading and k is
chosen based on the vehicle's maximum steering angle. This technique was tested on a
modified radio-controlled car and a Nomad 200. Results for both vehicles are shown
graphically and considered satisfactory.
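The two ingredients of this scheme, the proportional steering law of Eq. (2.2) and the stop-and-wait motion of the virtual vehicle, can be sketched as below. The discrete update and the fixed error threshold are assumptions of this sketch; the paper couples the reference-point speed to the error continuously:

```python
import math

def steering_angle(phi, phi_d, k):
    """Proportional steering law of Eq. (2.2): δf = -k(φ - φd),
    with the heading difference wrapped into (-π, π]."""
    err = math.atan2(math.sin(phi - phi_d), math.cos(phi - phi_d))
    return -k * err

def advance_virtual_vehicle(s, tracking_err, v_ref, dt, err_max):
    """Advance the virtual vehicle (arc length s along the path) at
    speed v_ref only while the tracking error is small; otherwise it
    stops and waits for the real vehicle.  err_max is an assumed
    threshold introduced for this sketch."""
    return s + v_ref * dt if tracking_err < err_max else s
```

The gain k would be chosen, as in the paper, so that the commanded δf never exceeds the vehicle's maximum steering angle over the expected range of heading errors.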
A geometric path-tracking control of a differential drive vehicle that takes into
account the kinematic and dynamic properties of the vehicle is proposed by DeSantis.
The vehicle has rear differentially driven wheels and a front castor wheel. A
reference frame is placed at the center of the rear wheel's axle. Using this reference
frame, differential equations of the vehicle's dynamic model are derived. Then, this
model is simplified by assuming no slip in either the lateral or longitudinal directions. A
path is assumed to be defined by a set of continuous functions of position and orientation
that the guide point must track. It is assumed also that both velocity and acceleration
profiles of the path are given and described by continuous functions. A path-tracking
controller is designed then in terms of the heading, lateral, and velocity errors. Assuming
the errors are kept sufficiently small, the vehicle's controller can be decentralized
allowing separate controllers for speed and steering. It turns out that the speed controller
is in the form of a PI controller and the steering controller is in the form of a PID
controller. Therefore, the gains of the controllers are determined through the use of
classical PID techniques. An example of applying this control technique to a wheelchair
is given, but no results are given of its accuracy.
Lee and Williams present a control method for a differentially driven autonomous
mobile robot. The control structure is made up of two loops. In the vehicle
controller loop, a trajectory generator first provides the desired displacement and rate.
Then, the errors between the desired and actual are used as input to a PID controller that
converts them to a desired torque. The second loop calculates an error between a desired
posture and an actual posture. The desired posture is determined using the desired
displacement and rate along with a kinematic model of the vehicle. Similarly, the actual
posture is determined with the measured displacement and rate along with a kinematic
model of the vehicle. The error in posture is used then to calculate a torque in order to
drive the error to zero. The total commanded torque is the sum of the torque calculated
from the vehicle controller and the torque computed from the error in posture.
This navigation technique was tested both in simulation and experimentally.
Experimental results are shown graphically of the controller's ability to handle an initial
lateral error of 1 cm, initial longitudinal errors of 0.5, 1 and 2 cm, and initial heading
errors of 1, 2 and 3 degrees. The lateral error converged almost to zero in approximately
six seconds. The longitudinal and heading errors were able to converge to zero in a
similar amount of time.
Choi presents an adaptive controller for the lateral position of a vehicle for the
Intelligent Vehicle Highway System (IVHS). The lateral error is measured using
look-down sensing which can be realized using electrified wires, radar reflection or
buried permanent magnets. Using the lateral error as input, a PD type controller is
presented. This results in the possibility of a steady state error. In order to deal with this,
the PD controller is modified by adding an unknown lateral disturbance force. This
lateral force is used to model unmeasured disturbances such as wheel misalignment,
unbalanced tire pressure, side wind, and offset errors on the steering actuator or its
sensor. This unknown lateral force is updated continually based on Lyapunov criterion.
The controller was tested on a track that is 330 meters long and 5 meters wide.
Permanent magnets, 2.2 cm in diameter and 10.2 cm long, were placed every meter. At a
low speed of 10 m/s, the vehicle followed the center of the track with a maximum lateral
error of 0.1 meters. The controller was tested also at a higher speed of 22 m/s and again
the maximum lateral error was 0.1 meters.
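One control step of a PD controller augmented with an adapted disturbance estimate, in the spirit of Choi's scheme, can be sketched as below. The gain names, the simple gradient form of the update law, and the step size are assumptions of this sketch, not details from the paper:

```python
def adaptive_pd_step(e, e_dot, w_hat, kp, kd, gamma, dt):
    """One step of a PD lateral controller with disturbance adaptation.

    e, e_dot -- lateral error and its derivative
    w_hat    -- running estimate of the unknown lateral disturbance
    gamma    -- adaptation rate (an assumed tuning parameter)
    Returns the control action and the updated estimate.
    """
    u = -kp * e - kd * e_dot - w_hat   # PD action plus disturbance cancellation
    w_hat = w_hat + gamma * e * dt     # Lyapunov-motivated update of the estimate
    return u, w_hat
```

For first-order error dynamics with a constant disturbance, the estimate converges to the true bias and the steady-state error that a plain PD controller would leave is driven to zero, which is the point of the modification described above.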
A control technique for high-speed autonomous navigation of a full-size outdoor
vehicle is presented by Shin et al. This technique separates the control of the
vehicle speed and steering by choosing the center of the rear axle as the point on the
vehicle to control. The desired speed of the vehicle is determined by factors such as the
current path curvature and the vehicle's distance to nearby obstacles. To control the
vehicle's steering, a feedforward module that incorporates the vehicle's dynamics is used
in conjunction with a feedback controller. The control input then takes the form:

ui = Ri + Kei,   (2.3)

where Ri is the feedforward compensation and Kei is the feedback error ei multiplied by
the gain K.
The dynamic model of the feedforward compensator considers only the latency of
the steering. The latency is considered the dominant characteristic of the vehicle's
dynamics. It is modeled using a lumped system of first-order lag. The feedforward
compensator, in effect, sends commands in advance so that the steering maneuver starts
before a turn is encountered.
The feedback controller uses the vehicle's position, heading, and curvature errors.
Using the geometry of the errors, a quintic polynomial function is determined that
converges to zero at a specified look-ahead distance. Then, the variation of the steering
angle is determined from this polynomial. The look-ahead distance is used to adjust the
sensitivity of the system and is a function of the current vehicle speed.
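The combined structure of Eq. (2.3) can be sketched as follows. Modeling the feedforward term as a time-advanced lookup of the desired curvature is an assumption of this sketch; it captures only the "command early to beat the steering lag" idea, not the full first-order-lag compensator:

```python
def control_input(r_ff, error, k):
    """Eq. (2.3): u = R + K·e, feedforward compensation plus
    gain-weighted feedback error."""
    return r_ff + k * error

def feedforward_command(desired_curvature, t, delay):
    """Issue the steering command 'delay' seconds early so the maneuver
    begins before the turn is reached.  desired_curvature is a function
    of time along the planned path; the lookup form is an assumption."""
    return desired_curvature(t + delay)
```

With the feedforward compensation time of 0.5 seconds that Shin et al. later identify for Navlab, `feedforward_command` would begin steering half a second before each curve.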
Testing of this autonomous navigation technique was accomplished in simulation
and through experiments. In simulation, the technique was tested using an open-loop
controller, just the feedback controller, just the feedforward controller, and finally with
both the feedback and feedforward controller. The best results were obtained using the
feedback with the feedforward controller. The results of this technique had a position
error of 0.1 meters with a standard deviation of 0.1 meters, and a velocity error of 2.8
meters per second with a standard deviation of 4.8 meters per second.
Shin et al. used the autonomous vehicle Navlab as a test bed. The desired path
consisted of a 20-meter straight line ending with a 5-meter lateral jump and then followed
by an additional 80-meter straight line. Results are shown graphically for various
feedforward compensation times. With these results the feedforward compensation time
of Navlab is determined to be 0.5 seconds. Using this time, the navigation technique is
tested on a path that is over 500 meters in length at speeds up to 10 meters per second.
Results of this test are shown graphically and are considered acceptable.
Jagannathan et al. present the path planning and control of a nonholonomic
vehicle. A path planner that considers the nonholonomic constraints generates a
desired trajectory. The control structure consists of an inner feedback linearizing loop to
eliminate the nonlinearities in their equation to model the vehicle dynamics. A second
feedback linearization loop is required after converting the path trajectories to a local
vehicle coordinate system. Finally, Lyapunov techniques are used to design an outer
control loop to guarantee that the vehicle follows the desired trajectory. This selection of
the control law yields a PD controller.
The path planning and control proposed by Jagannathan et al. is tested in
simulation. The width of the vehicle is assumed to be 10 cm and the radius of its wheels
is assumed to be 3 cm. The position and velocity gains for the outer loop PD controller
are set to 100 and 20, respectively, for a critically damped system. Several tests are done
where an initial position and orientation are specified, as well as a goal position and
orientation. Results of these tests are shown graphically.
Murphy presents a simple vehicle and path following model for vehicle
navigation at highway speeds. A military HMMWV was modified by attaching
motors to the steering wheel, brake, and throttle. In addition, a video camera was
mounted on the vehicle in order to determine its lateral position on the road. Pure pursuit
is used to determine the instantaneous curvature of the vehicle's path. Using the models
developed, it is proven that the system's stability increases by reducing the controller
delay and decreases by increasing the vehicle speed. In order to compensate for the
computational delay of the vision, Murphy suggests using an inertial navigation sensor.
Ollero and Heredia present their stability analysis of a pure pursuit path tracking
technique that is applied to a computer controlled HMMWV. Kinematic equations
of the vehicle's motion are determined in terms of the vehicle's speed and angular
velocity. The vehicle's angular velocity is modeled by a first order differential equation.
The vehicle's desired turning radius is calculated using pure pursuit:

r = L²/(2x),   (2.4)

where L is the look-ahead distance and x is the lateral error. This is a proportional
controller where the look-ahead distance determines the gain to be applied to the lateral
error. Assuming a small lateral error and a small angle between the vehicle heading and
the heading from the vehicle position to the look-ahead position, they derive the
condition for stability to be L > VT, the product of the vehicle speed and the steering
time constant.
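The pure pursuit relation itself, r = L²/(2x), can be sketched directly; the curvature form below makes the proportional-controller interpretation explicit (the function names are assumptions of this sketch):

```python
def pure_pursuit_radius(lookahead, lateral_error):
    """Pure pursuit turning radius r = L²/(2x): the radius of the arc
    joining the vehicle position to the look-ahead point on the path."""
    return lookahead ** 2 / (2.0 * lateral_error)

def pure_pursuit_curvature(lookahead, lateral_error):
    """Equivalent curvature form κ = 2x/L²: a proportional controller on
    the lateral error whose gain, 2/L², is set by the look-ahead
    distance, as noted above."""
    return 2.0 * lateral_error / lookahead ** 2
```

Halving the look-ahead distance quadruples the effective gain, which is why shorter look-ahead distances track more aggressively but are more easily destabilized by steering lag.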
Next, the stability is analyzed by assuming a time delay, τ, of the steering
command due to computing and communication delays. Conditions for stability are
derived and shown graphically by plotting the nondimensional quantity τ/T against L/(VT),
where T is the steering time constant and V is the vehicle velocity.
To determine the accuracy of their stability analysis, experimental data is taken of
the computer controlled HMMWV at speeds of 3, 6 and 9 meters per second. For each
speed, the minimum and maximum look-ahead distance that results in a stable system is
determined. The results are displayed graphically by plotting the stable look-ahead
distance determined by the analysis without delay and with delay as a function of velocity
and plotting the experimental results on the same plot. The experimental results require a
slightly larger look-ahead distance than the stability analysis with delay requires. This is
accounted for by the fact that nonlinear terms are not considered in the vehicle model.
Ku and Tsai present an autonomous navigation of an indoor vehicle that follows a
person. The navigation technique presented is broken down into seven steps. First,
acquire an image. An image of the environment in front of the vehicle is captured using a
CCD camera that is mounted on the vehicle. In order to reduce the time needed to detect
the person to follow, a rectangular shape is attached to the person's back. The second step involves
detecting feature points of this rectangular shape. Third, transform the feature points
from the image coordinate system to a 3-dimensional space coordinate system and
determine the location of the person. Fourth, using a sequential pattern recognition
technique, determine if the person is walking straight or turning. Step five calculates the
speed of the person from the location of the person in consecutive cycles. Step six
calculates a desired turning radius of the vehicle using pure pursuit. Finally, step seven
controls the speed of the vehicle using a fuzzy control technique. This method is tested
using an autonomous vehicle and results are shown graphically. Successful and smooth
navigation is claimed while a person walks in different directions.
Balluchi et al. present a path-tracking controller designed according to sliding-
mode techniques for Dubin's cars, i.e., cars that can only move forward with curvature
bounds. They assume the forward velocity is given, and therefore consider only the
lateral stabilization of the vehicle to the desired path. The input of their controller
consists of the lateral and heading errors, the sign of the path curvature and the current
vehicle speed. Note that only the sign of the path's curvature is used and not its
magnitude. This is a result of assuming that the path shape is not known a priori. Using
the sliding-mode design technique an equivalent control is derived. This result did not
satisfy the minimum turning radius constraint of their Dubin's car. A control law similar
in form to the equivalent control is proposed instead. This control law converges to the
reference path while satisfying the constraints provided the initial position and heading
errors are small. This technique is tested in simulation and the results are shown
graphically.
State feedback techniques generally use kinematic equations to model the
vehicle's motion. Then, these equations are converted, and possibly linearized, into state-
space equations. Using various methods, a feedback gain matrix is determined to control
the vehicle.
Aguilar et al. present a path-following controller for differential drive mobile
robots. It is assumed that a path exists whose curvature is both continuous and
bounded. A moving reference frame is defined with the origin located at the orthogonal
projection of the vehicle's position onto the reference path and oriented with the
tangent of the path at that point in the direction to follow. Differential equations of the
position and heading errors are derived based on the location of the vehicle's reference
frame relative to the moving reference frame. Using these differential equations and
assuming a nonzero linear velocity, a state feedback controller is presented to control the
vehicle's angular velocity that drives the position and heading errors to zero.
Two constraints on the system are required for guaranteeing exponential stability.
The first constraint requires that the distance from the vehicle to the path be less than the
inverse of the current reference path curvature. This is required in order to be able to define the
reference frame uniquely. A second constraint is a result of dealing with discontinuities
with the path curvature. This constraint limits the distance the vehicle can be from the
path as a function of the current velocity.
The control laws are implemented on a robot of the Hilare family. The robot's
position and orientation are determined by integrating the variation of each wheel. Two
different paths made up of line segments and arcs are used to test the controller. Results
of these two tests are presented graphically.
Hemami et al. present their work on the path tracking control of a mobile robot
with front steering. Only the kinematic equations of the system are considered, as the
vehicle is intended to operate at low speeds. The equations derived are based on a
coordinate system at the center of mass. With these equations, a state feedback controller
is designed to minimize the control input as well as the position and heading errors. The
performance index used to accomplish this is:
J = ∫ (q1 ed^2 + q2 θ^2 + r tan^2 δ) dt,   (2.5)
where ed is the position error, θ is the heading error, δ is the steering angle, and q1, q2,
and r are weighting factors. The state feedback gain matrix is derived as functions of
known variables and of the weighting factors. Examples are presented that calculate the
state feedback gain matrix at different forward velocities. No results of its accuracy to
track paths are given from real experimental data or simulation.
Guldner et al. present a controller for the automatic steering of passenger cars
. Some of the performance requirements of their design include being robust with
changing road adhesion due to different weather conditions, limiting the lateral
displacement to 0.15 meters with good road adhesion and 0.3 meters with poor road
adhesion, and keeping the passenger comfort similar to a manually steered vehicle. Their
control design considers a lookdown reference system where sensors to measure the
lateral offsets of the vehicle are placed on the front and rear bumpers. Dynamic
equations are derived in terms of the front and rear lateral displacements and their
derivatives. In order to deal with the performance requirements, the parameter space
approach in an invariance plane is used to determine a state feedback controller.
The controller is tested on a Pontiac 6000 STE Sedan. A 2-kilometer test track is
made up of straight sections as well as left and right turns with a turning radius of 800
meters. Magnets are placed every 1.2 meters over the entire track. The vehicle has a
gyroscope and accelerometer to record the motion of the vehicle, as well as
magnetometers on the front and rear bumpers. Results of the experiments are shown
graphically where the steady state error in the curves is approximately 0.2 meters for
good road adhesion and approximately 0.5 meters for poor road adhesion.
Behringer and Müller present an autonomous vehicle based on vision that is able
to navigate on public roads in normal traffic . One of the requirements of this vehicle
is to be able to recognize intersections and then to navigate the vehicle in the right
direction. In addition to the vision, a dead-reckoning system, made up of an odometer
and gyros, is used to measure the current state of the vehicle. Separate feedback
controllers are used to control the vehicle's lateral and longitudinal movements. The
longitudinal controller is based on lookup tables to actuate the vehicle's brake and
throttle. The lateral controller uses state feedback where the states are defined to be the
lateral offset, yaw angle rate, yaw angle, slip angle, and steering angle.
The autonomous navigation is tested on a track that includes curves of constant
radii of 40, 50 and 100 meters, as well as curves with approximately clothoid shape. The
results of these tests are shown graphically. The steering algorithm is claimed to be
sufficiently reliable such that the operation on arbitrary intersections is assumed to work.
The tracking control of a mobile robot, using a time-varying state feedback
controller based on the backstepping technique, is presented by Jiang and Nijmeijer .
Local and global controllers are presented based on a kinematic model of the vehicle. In
addition, another controller is presented based on a simplified dynamic model.
Simulations in MATLAB were carried out to test the local and global controllers. The
results of their simulation showed that the local controller performs better for small initial
tracking errors and that the global controller was able to handle large initial tracking errors.
Astolfi presents a controller for chained systems with two control inputs using a
discontinuous state feedback control law and applies it to a car-like vehicle .
The kinematic model of the car is given by
x' = v1 cos θ,
y' = v1 sin θ,   (2.6)
θ' = v1 tan φ,
φ' = v2,
where x and y are the location of the vehicle with heading θ, φ is the steering wheel angle,
and v1 and v2 are the vehicle velocity and steering wheel velocity, respectively. This
system is put into a chained form using the state transformation:
x1 = x,
x2 = sec^3 θ tan φ,   (2.7)
x3 = tan θ,
x4 = y,
and input change:
v1 = u1 / cos θ,   (2.8)
v2 = -3 sin^2 φ tan θ sec θ u1 + cos^2 φ cos^3 θ u2.
Results of this controller, which was tested in simulation with different initial conditions,
are presented graphically.
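As an illustrative check (not part of the original text), the transformation above can be verified numerically. The sketch below assumes a unit wheelbase and the standard chained-form convention x1' = u1, x2' = u2, x3' = x2 u1, x4' = x3 u1; all function names are chosen here for illustration:

```python
import math

def car_model(state, v1, v2):
    # kinematic car (2.6) with unit wheelbase: derivatives of [x, y, theta, phi]
    x, y, th, ph = state
    return [v1 * math.cos(th), v1 * math.sin(th), v1 * math.tan(ph), v2]

def to_chained(state):
    # state transformation (2.7)
    x, y, th, ph = state
    return [x, math.tan(ph) / math.cos(th) ** 3, math.tan(th), y]

def input_change(th, ph, u1, u2):
    # input change (2.8): chained inputs (u1, u2) -> car inputs (v1, v2)
    v1 = u1 / math.cos(th)
    v2 = (-3 * math.sin(ph) ** 2 * math.tan(th) / math.cos(th) * u1
          + math.cos(ph) ** 2 * math.cos(th) ** 3 * u2)
    return v1, v2
```

A small Euler step through the car model should then reproduce the chained-form derivatives to first order.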
Mouri and Furusho compare the results of using a PD controller versus using a
state feedback controller that was developed using linear quadratic (LQ) control for
navigating a vehicle on a highway . The PD controller uses the lateral error to
determine a steering command. Up to a certain point, the proportional gain can be
increased to achieve the desired response while an appropriate derivative gain maintains
convergence. Beyond that point, further increasing the proportional gain makes it
impossible to construct a controller that provides both good response and convergence.
Because of this
fact, a state feedback controller is developed using LQ control, where the lateral velocity
and the lateral deviation are chosen as states.
These two methods were tested on a vehicle with a speed of 80 km/h. The lateral
offset was determined from a magnetic sensor on the front bumper of the car that was
able to detect magnetic markers buried in the road. The PD control had large overshoots
when attempting to improve the system's time response. The system was also more
susceptible to noise. The gains for the LQ control could be increased by a factor of 10
over the PD controller gains, which gave the desired response while still achieving the
desired lateral convergence.
Rekow et al. present an adaptive steering controller for tractors using a
differential global positioning system . The following vehicle model is used:
y' = vx φ + p2 Ω,
φ' = Ω,
Ω' = -p3 Ω + vx p4 δ,   (2.9)
δ' = ω,
ω' = -p5 ω + p5 u,
where y is the lateral error, φ is the heading error, Ω is the yaw rate, δ is the steering
angle, ω is the steering slew rate, vx is the forward velocity, and p2 through p5 are unknown
vehicle parameters. A least mean square algorithm is used to identify the unknown
parameters. A linear Kalman filter is used to estimate the unmeasured states required by
the least square algorithm. Finally, a feedback controller uses the estimated parameters
to calculate linear quadratic regulator control gains.
The control algorithm is tested using a tractor equipped with a carrier-phase
differential global positioning system that provides position data to within 2 cm and
attitude data to within 0.1 degrees. Results of these tests are shown graphically.
Additionally, the average lateral error is claimed to be 2.55 cm with a standard deviation
of 3.1 cm.
One of the more recent techniques of path or trajectory tracking is fuzzy logic.
One of the main attractions to using fuzzy logic is the ability to develop a controller
without the need of a precise vehicle model. Baxter and Bumby present a fuzzy logic
navigation controller for an autonomous vehicle in the presence of obstacles . Five
principles are used to develop fuzzy sets and rules to navigate to a desired location with a
desired orientation. First, if the vehicle is a large distance from the goal, then steer the
vehicle to have a heading that goes to the goal. Second, if the vehicle is a medium
distance from the goal, then steer the vehicle to have a heading that goes to the goal and
has the same orientation as the goal orientation. Third, if the vehicle is a small distance
from the goal, then steer so that the current orientation goes directly to the goal position
and equals the desired goal orientation. Fourth, if the third step is unattainable, then steer
away from the goal for a new approach. And fifth, if the vehicle is almost on top of the
goal position, then steer to achieve desired goal orientation. Obstacle avoidance is
achieved by adding rules that inhibit the vehicle from steering in certain directions. By
using rules that inhibit motion, the number of possible active outputs is reduced. The
navigation control is tested in simulation and experimentally at a constant speed of 0.1
m/s. Results of these tests are shown graphically.
Sánchez et al. present an adaptive fuzzy control for autonomous navigation .
The inputs to the fuzzy controller are the vehicle's distance from the goal point, the
vehicle's velocity, the difference between the vehicle's heading and the path heading, and
the vehicle's curvature. The outputs of the controller are the vehicle's required curvature
and velocity. The controller attempts to adapt to the current system and operating
conditions by using a learning function that estimates the values of the center and width
of membership functions of the input vector and the values of the singleton output vector.
The learning function uses measured data of the controller's input and the measured data
of the controller's output that an expert provides during a learning stage.
This control technique is applied to the autonomous mobile robot Romeo 3R.
Romeo 3R was developed by adapting a conventional tricycle electric vehicle. The
controller was trained first from data obtained in experiments performed with a human
driver. Results of the path-tracking algorithm with an initial position error are given graphically.
Another more recent technique to track paths or trajectories is neural networks.
Neural networks can be used to determine the controlled inputs to the plant based on
current measurements, or they can be used to estimate model parameters. Yang et al. present
a predictive control approach to path tracking . The basic concept of their predictive
controller is first to estimate the future location and orientation of the vehicle based on
the current location and orientation and the current control inputs. An error is then
calculated based on this prediction by comparing it to the desired path. Finally, an
optimization technique is used to determine the output of the controller for the next time
step.
The predictive controller uses a kinematic model of the vehicle that is dependent
on the current vehicle velocity and steering wheel angle. The vehicle velocity is modeled
by a simple linear system. The model of the vehicle steering, on the other hand, is
determined by using a neural network. Unfortunately, using a neural network to identify
the steering model is computationally intensive. Therefore, tuning this model must be
done off-line.
Yang et al. apply their predictive controller to a four-wheel outdoor vehicle,
THMR-III. Results of the vehicle's ability to track a given path are shown graphically
and considered quite satisfactory.
Fierro and Lewis present a controller that is designed to deal with trajectory
tracking, path tracking and stabilizing about a point [69-71]. The controller requires no
knowledge of the vehicle's dynamics. The task of the neural network is to learn the
vehicle dynamics on-line and a kinematic controller is used to determine the controlled
input to the system. The control scheme presented is valid as long as the velocity control
inputs are small, smooth and bounded, and the disturbances are bounded also.
The neural network control scheme is tested in simulation and compared to a
controller that assumes perfect velocity tracking, and a controller that assumes complete
knowledge of the vehicle's dynamics. The performance of each controller is shown
graphically. The performance of the controller assuming perfect velocity tracking is
considered poor. It is noted that the controller that assumes to know the vehicle's
dynamics requires exact knowledge in order to work properly. The neural network
controller's response is considered to be improved compared to the previous two controllers.
A guidance controller for automated transit vehicles is presented by Rajagopalan
and Minano . The controller is based on a feedforward neural network with the back
propagation algorithm for learning. The back propagation network is used because of its
capability to learn constantly through nonlinear mapping. The neural network takes the
current position and heading error as inputs and then generates the steering angle
command. This command is used by a kinematic model to determine the desired
velocities of the left and right wheels. The controller is tested in simulation where it is
able to reduce tracking errors quickly and minimize overshoot for vehicle speeds up to
Hybrid architectures [76-85] combine the methods described in the previous two
sections, and therefore are mentioned here only briefly. Hybrid architectures typically are used
to accomplish path tracking or trajectory tracking, as well as obstacle avoidance. This is
accomplished by combining a technique that uses behavioral architecture for obstacle
avoidance, and a technique that uses hierarchical architecture for path tracking.
Therefore, some arbitration is then required to decide whether to track the path or
trajectory or to avoid the obstacle.
VECTOR PURSUIT PATH TRACKING
This chapter presents a new geometric path-tracking method for navigating AGVs
with nonholonomic constraints. This method uses the theory of screws that was
introduced by Sir Robert S. Ball in 1900 . Screw theory can be used to describe the
instantaneous motion of a moving rigid body relative to a given coordinate system. It
therefore is natural and appropriate to use screw theory to represent the instantaneous
desired motion of an AGV, i.e., a rigid body, from its current position and orientation to a
desired position and orientation that is on a given path. Before developing the new path-
tracking method, a brief overview of screw theory used in these methods is presented.
Screw Theory Basics
A screw consists of a centerline that is defined in a given coordinate system and a
pitch. The motion of a rigid body at any instant can be represented as if it were attached to
a screw and rotating about that screw at some angular velocity.
One way to define the centerline of a screw is by using Plücker line coordinates.
Two points given by the vectors r1 and r2 in a given coordinate system define a line as
shown in Figure 3.1. This line can also be defined as a unit vector, S, in the direction of
the line and a moment vector, S0, of the line about the origin. From Figure 3.1 we see that:
S = (r2 - r1) / |r2 - r1|,   (3.1)
S0 = r1 × S.   (3.2)
Figure 3.1: Line Defined by Two Points
The vectors (S ; S0) are the Plücker line coordinates of this line. By defining S =
[L, M, N]^T and S0 = [P, Q, R]^T, and noting that r1 = [x1, y1, z1]^T and r2 = [x2, y2, z2]^T, we see that:
L = (x2 - x1) / sqrt((x2 - x1)^2 + (y2 - y1)^2 + (z2 - z1)^2),   (3.3)
M = (y2 - y1) / sqrt((x2 - x1)^2 + (y2 - y1)^2 + (z2 - z1)^2),   (3.4)
N = (z2 - z1) / sqrt((x2 - x1)^2 + (y2 - y1)^2 + (z2 - z1)^2),   (3.5)
P = y1 N - z1 M,   (3.6)
Q = z1 L - x1 N,   (3.7)
R = x1 M - y1 L.   (3.8)
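As a minimal sketch, equations (3.1) through (3.8) can be collected into a single routine; the function and variable names below are illustrative only:

```python
import math

def plucker_line(r1, r2):
    """Plucker coordinates (S; S0) of the line through points r1 and r2:
    S is the unit direction (3.3)-(3.5) and S0 = r1 x S gives (3.6)-(3.8)."""
    d = [b - a for a, b in zip(r1, r2)]
    n = math.sqrt(sum(c * c for c in d))
    S = [c / n for c in d]                      # [L, M, N]
    x1, y1, z1 = r1
    L, M, N = S
    S0 = [y1 * N - z1 * M,                      # P
          z1 * L - x1 * N,                      # Q
          x1 * M - y1 * L]                      # R
    return S, S0
```

Note that S0 is always perpendicular to S, since it is the moment of the line about the origin.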
Figure 3.2 depicts the instantaneous motion of a rigid body rotating with an
angular velocity, ω, about a screw, $, that has a centerline defined by (S ; S0) and that has
a pitch, h. The velocity of any point on the rigid body is equal to the velocity due to the
rotation plus the translational velocity due to the pitch of the screw. The velocity of the
rigid body can be quantified by:
ω$ = (ωS; ωS0h),   (3.9)
where
S0h = S0 + hS = r × S + hS,   (3.10)
and r is any vector from the origin to the centerline of the screw. The instantaneous
velocity of a point in the rigid body that is coincident with the origin of the coordinate
system is given by:
v0 = ωS0h.   (3.11)
Figure 3.2: Instantaneous Motion About a Screw.
Two specific screws are used in developing the path-tracking algorithms in this
chapter: translation screws and rotation screws. The motion about a screw with an
infinite pitch models pure translation of a rigid body at a velocity v along the direction S.
In the limit, as the pitch goes to infinity, (3.9) simplifies to:
v$ = (0;vS), (3.12)
which is a screw that has a centerline at infinity.
On the other hand, the motion about a screw whose pitch is equal to zero models
pure rotation of a rigid body. By substituting a pitch, h, equal to zero, (3.9) simplifies to:
ω$ = (ωS; ωS0).   (3.13)
In addition to using rotation and translation screws, a property of instantaneous
screws that proves to be very useful is that they are additive. Note that the units of (3.12)
and (3.13) are the same even though (3.12) is a translation screw and (3.13) is a rotation
screw.
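This additive property is easy to exercise if a screw is stored as a six-component vector (direction; moment). The following sketch is illustrative only; the helper names are not from the text:

```python
def translation_screw(v, S):
    # (3.12): pure translation at speed v along the unit direction S
    return [0.0, 0.0, 0.0, v * S[0], v * S[1], v * S[2]]

def rotation_screw(w, S, S0):
    # (3.13): pure rotation at rate w about the line (S; S0)
    return [w * S[0], w * S[1], w * S[2], w * S0[0], w * S0[1], w * S0[2]]

def add_screws(a, b):
    # instantaneous screws are additive component-wise
    return [x + y for x, y in zip(a, b)]
```

Because both screws share the same units after scaling by v and ω, their sum is again a valid instantaneous screw.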
Vector pursuit is a new geometric path-tracking method that uses the theory of
screws. This is a new technique that is developed here and which represents one of the
contributions of this dissertation. It is similar to other geometric methods in that a look-
ahead distance is used to define a current goal point, and then geometry is used to
determine the desired motion of the vehicle. On the other hand, it is different from
current geometric path-tracking methods, such as follow-the-carrot or pure pursuit, which
do not use the orientation at the look-ahead point. Proportional path tracking is a
geometric method that does use the orientation at the look-ahead point. This method
adds the current position error multiplied by some gain to the current orientation error
multiplied by some gain, and therefore becomes geometrically meaningless since terms
with different units are added. Vector pursuit uses both the location and orientation of
the look-ahead point while remaining geometrically meaningful.
The first step in vector pursuit calculates two instantaneous screws. The first
instantaneous screw, $t, accounts for the translation from the current vehicle position to
the location of the look-ahead point, while the second instantaneous screw, $r, accounts
for the rotation from the current vehicle orientation to the desired orientation at the look-
ahead point. The second step uses the additive property of instantaneous screws to
calculate $d, the sum of $t and $r, which defines the desired instantaneous motion of the
vehicle. Two different methods are considered to calculate the two screws, $t and $r.
The first method initially ignores the nonholonomic constraints of the vehicle to calculate
$t and $r, and then deals with the constraints after adding the two instantaneous screws.
Conversely, the second method does not ignore the nonholonomic constraints to calculate
$t and $r. It turns out, for this method, that the sum of $t and $r also does not violate the
nonholonomic constraints. Finally, the last step calculates a desired turning radius, or a
desired turning rate if the current vehicle velocity is considered, from $d.
Defined Coordinate Systems
Before developing the screw theory based path-tracking methods, a few
coordinate systems must first be defined. First, the world coordinate system is defined
where the x-axis points north, the z-axis points down and the y-axis points east to form a
right hand coordinate system. The origin of the world coordinate system defined here is
determined by the conversion from a geodetic coordinate system to a UTM coordinate
system. It is assumed that the desired path is given, or can be converted to, the world
coordinate system. The world coordinate system can be seen in Figure 3.3.
In addition to the world coordinate system, a moving coordinate system and the
vehicle coordinate system are also shown in Figure 3.3. The moving coordinate system is defined
where the origin is a point on the planned path, the look-ahead point, which is a given
distance called the look-ahead distance, L, in front of the orthogonal projection of the
vehicle's position onto the planned path. Its x-axis is oriented in the direction of the
planned path at that point, i.e., the direction from the previous waypoint wi-1 to the current
waypoint wi; the z-axis is down and the y-axis is defined to form a right hand coordinate
system. Since the moving coordinate system's origin is located at the look-ahead point,
this coordinate system will be referred to as the look-ahead coordinate system. The
selection of the distance L will be discussed later.
Figure 3.3: Defined Coordinate Systems.
Finally, the vehicle coordinate system is defined where the x-axis is in the
forward direction of the vehicle, the z-axis is down and the y-axis forms a right hand
coordinate system. The origin of the vehicle coordinate system depends on the type of
vehicle. For nonholonomic vehicles, it is defined in a way that decouples the control of
the linear and angular velocities. For example, on a car-like vehicle with rear wheel
drive, the origin is defined to be the center of the rear axle. With these three coordinate
systems defined, the development of vector pursuit path tracking is presented now.
A method is required to indicate the coordinate system in which a vector is expressed,
since more than one coordinate system was defined here. Therefore, vectors are written
with a leading superscript indicating the coordinate system to which they are referenced.
Recall that this first method initially ignores the nonholonomic constraints of the
vehicle. With this in mind and using (3.12), $t is defined to be:
W$t = kt (0, 0, 0; (WxL - Wxv)/d, (WyL - Wyv)/d, 0),   (3.14)
where d is the distance from the look-ahead point to the vehicle position, (WxL,WyL) are the
coordinates of the look-ahead point in the world coordinate system, and (Wxv, Wyv) are the
coordinates of the vehicle position in the world coordinate system. The term k, is a
weighting factor that will be dealt with later. Similarly, using (3.13), $r is defined to be:
W$r = kr (0, 0, 1; Wyv, -Wxv, 0),   (3.15)
where kr is a weighting factor. Note that the axis of rotation is chosen to pass through the
origin of the vehicle coordinate system so that no translation is associated with $r. Now the
desired instantaneous screw, $d, is calculated to be:
W$d = W$t + W$r   (3.16)
= (0, 0, kr; kr Wyv + kt (WxL - Wxv)/d, -kr Wxv + kt (WyL - Wyv)/d, 0).
The weighting factors kt and kr are used to control how much the desired
instantaneous screw is influenced by $t and $r, respectively. To determine these
weighting factors it is noted from (3.12) and (3.13) that kt is a linear velocity and kr is an
angular velocity. Assuming the vehicle travels on the screw defined by $t at some
velocity, kt = v, the time required for the vehicle to reach the look-ahead point would be:
tt = d / v.   (3.17)
Using the same line of reasoning, if the vehicle travels on the screw defined by $r
at some angular velocity, kr = ω, the time required for the vehicle to rotate from its
current orientation to the orientation at the look-ahead point would be:
tr = (θL - θv) / ω,   (3.18)
where θL is the angle from the x-axis of the world coordinate system going clockwise to
the x-axis of the look-ahead coordinate system, θv is the angle from the x-axis of the
world coordinate system going clockwise to the x-axis of the vehicle coordinate system,
and their difference must be in the interval (-π, π]. Next, the assumption is made that the
relationship between tt and tr can be defined by:
tr = k tt,   (3.19)
where k is some positive constant. Therefore, the weighting factors can
now be determined from:
kt = v,   (3.20)
kr = (θL - θv) / tr = (θL - θv) / (k tt) = v (θL - θv) / (k d),   (3.21)
where again, the difference θL - θv must be in the interval (-π, π].
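A minimal sketch of (3.20) and (3.21), including the required normalization of θL - θv into (-π, π], might look as follows (all names are illustrative):

```python
import math

def wrap_angle(a):
    # map an angle into the interval (-pi, pi]
    a = math.fmod(a, 2.0 * math.pi)
    if a <= -math.pi:
        a += 2.0 * math.pi
    elif a > math.pi:
        a -= 2.0 * math.pi
    return a

def weighting_factors(v, k, d, theta_L, theta_v):
    # (3.20): k_t is the vehicle velocity
    kt = v
    # (3.21): k_r follows from the assumption t_r = k * t_t
    kr = v * wrap_angle(theta_L - theta_v) / (k * d)
    return kt, kr
```

The wrap keeps kr well defined when the raw heading difference falls outside one revolution.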
In order to determine the desired motion of the vehicle defined by this
instantaneous screw, the location of its centerline must be determined in the vehicle
coordinate system. To do this, the location of the desired instantaneous screw's
centerline is determined first in the world coordinate system by:
Wx$d = Wxv - kt (WyL - Wyv) / (kr d),   (3.22)
Wy$d = Wyv + kt (WxL - Wxv) / (kr d).   (3.23)
Note that equations (3.22) and (3.23) are valid only if kr, i.e. θL - θv, is nonzero. If
kr is nonzero, the location of the desired instantaneous screw's centerline in the vehicle
coordinate system is determined by:
Vx$d = (Wx$d - Wxv) cos(θv) + (Wy$d - Wyv) sin(θv),   (3.24)
Vy$d = -(Wx$d - Wxv) sin(θv) + (Wy$d - Wyv) cos(θv).   (3.25)
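Equations (3.22) through (3.25) can be combined into one routine. The following sketch assumes kr is nonzero; the function and variable names are illustrative:

```python
import math

def desired_screw_center(xv, yv, theta_v, xL, yL, kt, kr):
    """Centerline of the desired screw of (3.16), per (3.22)-(3.25).
    Vehicle pose (xv, yv, theta_v) and look-ahead point (xL, yL) are in
    world coordinates; valid only when kr is nonzero."""
    d = math.hypot(xL - xv, yL - yv)
    # (3.22)-(3.23): centerline location in the world frame
    xs = xv - kt * (yL - yv) / (kr * d)
    ys = yv + kt * (xL - xv) / (kr * d)
    # (3.24)-(3.25): transform that location into the vehicle frame
    c, s = math.cos(theta_v), math.sin(theta_v)
    vx = (xs - xv) * c + (ys - yv) * s
    vy = -(xs - xv) * s + (ys - yv) * c
    return vx, vy
```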
Otherwise, if kr is zero, equation (3.16) reduces to equation (3.14), which is a
screw whose centerline in the vehicle coordinate system is located at infinity in a
direction perpendicular to the line that connects the vehicle position and the look-ahead
point.
desired screw's centerline is determined in the vehicle coordinate system. An example of
a desired instantaneous screw and its associated desired motion is shown graphically in
Figure 3.4. In this figure, one instantaneous screw is executed continually over time to
exaggerate the desired vehicle motion. From Figure 3.4, it is noted that the initial desired
motion from the current vehicle location is a translation along the vehicle's negative y-
axis and a rotation clockwise. This motion is not possible for a vehicle that is constrained
to translational motion only in the direction of its current orientation. In other words, in
order for the vehicle in Figure 3.4 to translate in the direction of the current vehicle's
negative y-axis, it must first rotate counter-clockwise. This is opposite of the desired
rotation defined by the instantaneous screw. Therefore, it is noted that the possibility
exists where the vehicle may be unable to execute the motion defined by the desired
instantaneous screw defined in equation (3.16) because of the motion constraints of the
vehicle.
Figure 3.4: Vehicle Motion if Desired Instantaneous Screw is Continually Executed.
Nonholonomic constraints exist when the motion orthogonal to the vehicle's
forward direction is not possible. In other words, using the vehicle's coordinate system
defined earlier, motion is restricted at any instant to a direction parallel to the vehicle's x-
axis. Therefore, the velocity along the vehicle's y-axis must be equal to zero. This can
be expressed as an equation in the world coordinate system through a simple coordinate
transformation:
Wx'v sin(θv) - Wy'v cos(θv) = 0.   (3.26)
In order to deal with these constraints, a new desired screw, $d', is calculated based
on the previously calculated desired screw, $d. The new desired screw is determined by
first obtaining a new look-ahead point that is a distance L from the vehicle's position
along an arc defined by the desired screw (see Figure 3.5). A circle can then be obtained
that passes through both the new look-ahead point and the vehicle point and that is
tangent to the vehicle direction. The new desired screw, $d', with its corresponding
desired screw, $d, can be seen in Figure 3.5.
Figure 3.5: Desired Screw, W$d, and New Desired Screw, W$d'.
Unfortunately, this could place a restriction on the location of $d's centerline in
order for the new look-ahead point to exist. The distance from the vehicle position to the
centerline of $d must be greater than L/2. When this restriction is violated, the vehicle
simply needs to turn around, and therefore the location of the new desired screw,
$d', in the vehicle's reference frame can be determined by the vehicle's minimum turning
radius rmin using:
Vx$d' = 0,   (3.27)
Vy$d' = rmin,   (3.28)
if the direction of the desired screw's centerline is in the positive z-direction, or:
Vx$d' = 0,   (3.29)
Vy$d' = -rmin,   (3.30)
if the direction of the desired screw's centerline is in the negative z-direction.
If the distance to the centerline of $d is greater than L/2, then two points exist on a
circle, whose center is on the centerline of $d and whose radius is the distance to the vehicle
position, that are a distance L away from the vehicle position. This can be seen in Figure
3.6. In order to determine the location of these two points in the vehicle coordinate
system, the angle from the x-axis of the vehicle coordinate system to the centerline of $d
is determined first by:
α = atan2(Vy$d, Vx$d).   (3.31)
Next, it is noted through symmetry that the angle between the line from the
vehicle position to p1 and the line from the vehicle position to $d's centerline is equal to
the angle between the line from the vehicle position to p2 and the line from the vehicle
position to the desired screw's centerline. Through simple geometry, the magnitude of
this angle can be determined by:
β = acos( L / (2 sqrt(Vx$d^2 + Vy$d^2)) ),   (3.32)
where β must be in the interval (0, π/2] radians. Now the angle from the x-axis of the
vehicle coordinate system to both p1 and p2 can be determined by:
γ = α ∓ β.   (3.33)
Figure 3.6: Possible Look-ahead Points p1 and p2.
Only one of these two points is used as the new look-ahead point, so to determine
which point to use, the direction of $d's centerline is considered. The new look-ahead
point is defined to be the point that is encountered first by traveling along the arc defined
by $d starting from the vehicle position. Therefore, if the direction of $d's centerline is in
the positive z-axis of the vehicle coordinate system, then the angle from the vehicle
coordinate system's x-axis to the look-ahead point is:
γ = α - β.   (3.34)
This is the case of the desired screw, $d, shown in Figure 3.6, where p1 is determined now
to be the new look-ahead point. Similarly, if the direction of the desired screw's
centerline is in the negative z-axis of the vehicle reference frame, the angle from the x-
axis of the vehicle coordinate system to the new look-ahead point is:
γ = α + β.   (3.35)
Since the angle from the x-axis to the new look-ahead point is determined in the
vehicle coordinate system, the location of the new look-ahead point in the vehicle
coordinate system can be calculated by:
VxL = L cos(γ),   (3.36)
VyL = L sin(γ).   (3.37)
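Steps (3.31) through (3.37) can be sketched as a single routine (illustrative names; the caller must ensure the centerline distance exceeds L/2):

```python
import math

def new_lookahead(vx_sd, vy_sd, L, center_dir_positive):
    """New look-ahead point in the vehicle frame per (3.31)-(3.37).
    (vx_sd, vy_sd): desired screw centerline in the vehicle frame;
    center_dir_positive: True if the centerline points along +z."""
    alpha = math.atan2(vy_sd, vx_sd)                 # (3.31)
    rho = math.hypot(vx_sd, vy_sd)                   # distance to centerline
    beta = math.acos(L / (2.0 * rho))                # (3.32)
    # (3.34) or (3.35), depending on the centerline direction
    gamma = alpha - beta if center_dir_positive else alpha + beta
    return L * math.cos(gamma), L * math.sin(gamma)  # (3.36)-(3.37)
```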
Now that the location of the new look-ahead point is known in the vehicle's
coordinate system, the location of the new desired screw's centerline can be located in
the vehicle's coordinate system. Assuming p = p1 or p = p2, from Figure 3.7 we see that
the location of the new desired screw's centerline is on the vehicle's y-axis at a distance
R from the x-axis. From Figure 3.7,
a^2 + Vxp^2 = R^2,   (3.38)
where
a = R - Vyp,   (3.39)
Vxp^2 = L^2 - Vyp^2,   (3.40)
and solving for R gives:
R = L^2 / (2 Vyp).   (3.41)
Therefore, the new desired screw's centerline is located at:
Vx$d' = 0,   (3.42)
Vy$d' = L^2 / (2 Vyp).   (3.43)
Figure 3.7: Locating the Centerline of V$d'.
The direction of the new desired screw's centerline can be determined by the
location of the new look-ahead point in the vehicle's coordinate system. The direction of
the commanded screw's centerline depends on which quadrant of the vehicle's coordinate
system the new look-ahead point is located. This is summarized in Table 3.1.
Table 3.1: Desired Screw's Centerline Direction.
sign of Vxp    sign of Vyp    Direction of Screw's Centerline Along the z-axis
Positive       Positive       Positive
Positive       Negative       Negative
Negative       Positive       Negative
Negative       Negative       Positive
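Equations (3.41) through (3.43), together with the sign rule of Table 3.1, can be sketched as follows (illustrative names; Vyp is assumed nonzero):

```python
def new_screw_centerline(vxp, vyp, L):
    """Centerline of the new desired screw in the vehicle frame.
    (vxp, vyp): new look-ahead point in the vehicle frame, vyp != 0;
    L: look-ahead distance. Returns (x, y, z_direction)."""
    # (3.42)-(3.43): the centerline lies on the vehicle's y-axis
    y_center = L * L / (2.0 * vyp)
    # Table 3.1: direction is along +z when vxp and vyp share the same sign
    direction = 1 if (vxp > 0) == (vyp > 0) else -1
    return 0.0, y_center, direction
```

Note that the signed value of y_center already places the centerline on the correct side of the vehicle.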
Note that when the new look-ahead point's x-value is negative, the vehicle's
velocity would have to be negative, or in other words, the vehicle direction would have to
change from forward to reverse. In order to keep the vehicle direction from changing, the
x-value of the look-ahead point must be greater than zero; otherwise the vehicle is
commanded simply to turn around. Equations (3.27) and (3.28) or equations (3.29) and
(3.30) are used again to calculate the location of the commanded screw's centerline in
the vehicle's coordinate system.
Finally, it is important to note that the look-ahead distance, L, and the constant k,
are free choices and as such represent parameters that must be selected in order to
optimize or tune the vehicle's performance.
The second method developed to calculate $t and $r takes into account the
vehicle's nonholonomic constraints. In order to satisfy the constraints, the centerlines of
the instantaneous screws must be on the vehicle's y-axis and a distance from the x-axis
greater than or equal to the vehicle's minimum turning radius. The requirement that the
instantaneous screws' centerlines be a distance greater than or equal to the vehicle's
minimum turning radius from the x-axis is ignored initially. It is ignored at first because
of the fact that some vehicles, e.g., a differentially driven vehicle, with nonholonomic
constraints have no minimum turning radius. Therefore, the only initial constraint placed
on the location of the centerlines of the instantaneous screws is that they must be on the
vehicle's y-axis. With this in mind, the screw to correct the translational error, $_, was
selected as the center of a circle that passes through the origins of the vehicle coordinate
system and the look-ahead coordinate system and which is tangent to the vehicle's
current orientation, i.e. the x-axis of the vehicle coordinate system. (See Figure 3.8)
Hence, $t is defined to be:

{}^w \$_t = k_t \left( 0, 0, 1;\ w_{yv} + \frac{d^2}{2\, {}^v y_L} \cos\theta_v,\ -w_{xv} + \frac{d^2}{2\, {}^v y_L} \sin\theta_v,\ 0 \right),  (3.44)
where d is the distance from the origin of the vehicle coordinate system to the origin of the look-ahead coordinate system (where the look-ahead coordinate system is defined as before), (v_xL, v_yL) are the coordinates of the look-ahead coordinate system's origin in the vehicle coordinate system, (w_xv, w_yv) are the coordinates of the vehicle position in the world coordinate system, and θ_v is the angle from the x-axis of the world coordinate system to the x-axis of the vehicle coordinate system. The term k_t is used again as a weighting factor that will be dealt with later. Equation (3.44) is valid only if the term v_yL is nonzero. Otherwise, $t is determined by:
{}^w \$_t = k_t \left( 0, 0, 0;\ \frac{w_{xL} - w_{xv}}{d},\ \frac{w_{yL} - w_{yv}}{d},\ 0 \right).  (3.45)
The instantaneous screw, $r, is defined to be:
{}^w \$_r = k_r \left( 0, 0, 1;\ w_{yv},\ -w_{xv},\ 0 \right),  (3.46)
which is the same as equation (3.15), but the weighting factor kr is determined differently.
Figure 3.8: Instantaneous Screw for Translating to Look-ahead Point.
Now the desired instantaneous screw is determined as either

{}^w \$_d = {}^w \$_t + {}^w \$_r = \left( 0, 0, k_t + k_r;\ k_r w_{yv} + k_t \left( w_{yv} + \frac{d^2}{2\, {}^v y_L} \cos\theta_v \right),\ -k_r w_{xv} + k_t \left( -w_{xv} + \frac{d^2}{2\, {}^v y_L} \sin\theta_v \right),\ 0 \right),  (3.47)

if the term v_yL is nonzero, or

{}^w \$_d = {}^w \$_t + {}^w \$_r = \left( 0, 0, k_r;\ k_r w_{yv} + k_t \frac{w_{xL} - w_{xv}}{d},\ -k_r w_{xv} + k_t \frac{w_{yL} - w_{yv}}{d},\ 0 \right),  (3.48)

if the term v_yL is zero.
The weighting factors k_t and k_r are used again to control how much the desired instantaneous screw is influenced by $t and $r, respectively. These two weighting factors
are related again by the time required to translate to the look-ahead point and rotate to the
desired orientation. Assuming that the term v_yL is nonzero, note that while the instantaneous screw defined in equation (3.44) describes a motion to translate the vehicle from its current location to the look-ahead point, it also describes a motion that rotates the vehicle. This can easily be seen in Figure 3.9. Therefore, from equation (3.13), the weighting factor k_t is now an angular velocity instead of a linear velocity. The amount of rotation, θ, can be determined by:
\theta = \operatorname{atan2}\left( 2\, {}^v y_L^2 - d^2,\ 2\, {}^v x_L\, {}^v y_L \right) - \operatorname{atan2}\left( -d^2\, {}^v y_L,\ 0 \right),  (3.49)

where θ must be in the interval (0, 2π] radians. It is noted that the last term of equation (3.49) will always be ±π/2 radians, depending only on the sign of v_yL.
Figure 3.9: Rotation Defined by the $t Instantaneous Screw.
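As a sketch of equation (3.49), the rotation θ can be computed with the two-argument arctangent and wrapped into (0, 2π]; the function name is illustrative, and atan2 here takes its arguments in the (y, x) order that matches the equation as written:

```python
import math

def rotation_angle(x_L: float, y_L: float) -> float:
    """Rotation theta described by the translation screw (equation 3.49),
    for a look-ahead point (x_L, y_L) in vehicle coordinates with y_L != 0;
    the result is wrapped into (0, 2*pi]."""
    d2 = x_L**2 + y_L**2
    theta = math.atan2(2*y_L**2 - d2, 2*x_L*y_L) - math.atan2(-d2*y_L, 0.0)
    theta %= 2*math.pi                  # wrap into [0, 2*pi)
    return theta if theta > 0.0 else 2*math.pi

# A look-ahead point at (1, 1) lies a quarter of the way around the circle
# that is tangent to the vehicle's x-axis, so the screw rotates by pi/2.
assert abs(rotation_angle(1.0, 1.0) - math.pi/2) < 1e-9
```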
The time required to translate from the current vehicle position to the look-ahead point, assuming that k_t = ω_t, some angular velocity, is determined by:

t_t = \frac{\theta}{\omega_t}.  (3.50)
The time required to rotate from the current vehicle orientation to the orientation at the look-ahead point must also account for the rotation, θ, due to $t. This will either increase or decrease the time needed to rotate. Assuming k_r = ω_r, some angular velocity, this time can be determined by:

t_r = \frac{\theta_L - \theta_v - \theta}{\omega_r}.  (3.51)
Again, the assumption is made that the relationship between t_t and t_r can be expressed as:

t_r = k\, t_t,  (3.52)

where k is some positive constant. Therefore, the weighting factors can now be determined from:

k_t = \omega_t,  (3.53)

k_r = \omega_r = \frac{\theta_L - \theta_v - \theta}{t_r} = \frac{\theta_L - \theta_v - \theta}{k\, t_t} = \frac{\omega_t (\theta_L - \theta_v - \theta)}{k\, \theta}.  (3.54)
Using equation (3.47), the centerline of the desired screw can be determined in
the world coordinate system by:
{}^w x_{\$_d} = w_{xv} - \frac{k_t}{k_t + k_r} \left( \frac{d^2}{2\, {}^v y_L} \right) \sin\theta_v,  (3.55)

{}^w y_{\$_d} = w_{yv} + \frac{k_t}{k_t + k_r} \left( \frac{d^2}{2\, {}^v y_L} \right) \cos\theta_v.  (3.56)
Note that the above calculations of the weighting factors assumed that v_yL was nonzero. If, on the other hand, v_yL is zero, then from equation (3.12), the weighting factor k_t is a linear velocity. The amount of time to translate from the current vehicle position to the look-ahead point at some velocity, k_t = v_t, can be determined by:

t_t = \frac{d}{v_t}.  (3.57)
The time required to rotate from the current vehicle orientation to the orientation
at the look-ahead point can be calculated using equation (3.51) where θ is now zero. Therefore, assuming k_r = ω_r, some angular velocity, this time can be determined by:

t_r = \frac{\theta_L - \theta_v}{\omega_r}.  (3.58)
Using equation (3.52) for the relationship between the two times, the weighting
factors can be determined from:
k_t = v_t,  (3.59)

k_r = \omega_r = \frac{\theta_L - \theta_v}{t_r} = \frac{\theta_L - \theta_v}{k\, t_t} = \frac{v_t (\theta_L - \theta_v)}{k\, d}.  (3.60)
Using equation (3.48), the centerline of the desired screw can be determined in
the world coordinate system by:
{}^w x_{\$_d} = w_{xv} - \frac{k_t}{k_r} \left( \frac{w_{yL} - w_{yv}}{d} \right),  (3.61)

{}^w y_{\$_d} = w_{yv} + \frac{k_t}{k_r} \left( \frac{w_{xL} - w_{xv}}{d} \right).  (3.62)
Finally, using equations (3.24) and (3.25), the centerline of the desired
instantaneous screw can be determined in the vehicle coordinate system to determine the
desired motion of the vehicle. Recall that the vehicle's nonholonomic constraints were
considered when calculating $t and $r but that the minimum turning radius was ignored. This has the nice result that ^v x_$d will always equal zero, which does not violate the nonholonomic constraints. In order to comply with the minimum turning radius constraint, the magnitude of ^v y_$d must be greater than or equal to the minimum turning radius. If it is less than the minimum turning radius, equations (3.27) and (3.28) or
equations (3.29) and (3.30) are used again to calculate the location of the desired screw's
centerline in the vehicle coordinate system.
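A minimal sketch of the second method's centerline computation follows, combining equations (3.55) and (3.56) for the case where v_yL is nonzero with equations (3.61) and (3.62) for the case where it is zero; the function and parameter names are illustrative, and the minimum-turning-radius check described above is omitted:

```python
import math

def desired_screw_centerline(w_xv, w_yv, theta_v, x_L, y_L, k_t, k_r):
    """World-frame centerline of the desired screw for the second method:
    equations (3.55)-(3.56) when y_L (the look-ahead point's y-value in
    vehicle coordinates) is nonzero, equations (3.61)-(3.62) when it is zero."""
    d = math.hypot(x_L, y_L)
    if y_L != 0.0:
        r = (k_t / (k_t + k_r)) * d * d / (2.0 * y_L)
        return (w_xv - r * math.sin(theta_v), w_yv + r * math.cos(theta_v))
    # y_L == 0: express the look-ahead point in world coordinates first
    w_xL = w_xv + x_L * math.cos(theta_v)
    w_yL = w_yv + x_L * math.sin(theta_v)
    return (w_xv - (k_t / k_r) * (w_yL - w_yv) / d,
            w_yv + (k_t / k_r) * (w_xL - w_xv) / d)
```

For a vehicle at the world origin with zero heading, both branches place the centerline on the vehicle's y-axis, as the nonholonomic constraints require.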
As in the first method, the direction of the desired screw's centerline is
determined by the location of the look-ahead point in the vehicle's coordinate system and
Table 3.1. Again, when the look-ahead point's x-value is negative, the vehicle direction
would have to change from forward to reverse. In order to keep the vehicle direction
from changing, the x-value of the look-ahead point must be greater than zero, otherwise
the vehicle is commanded to turn around. Equations (3.27) and (3.28) or equations (3.29)
and (3.30) are used again to calculate the location of the commanded screw's centerline
in this situation.
Finally, it is important to note again that the look-ahead distance, L, and the
constant k, are free choices in this method too and as such represent parameters that must
be selected in order to optimize or tune the vehicle's performance.
Desired Vehicle Velocity State
The desired velocity-state of the AGV for it to track the given path can now be
determined from the final desired screw calculated from either method 1 or method 2.
The velocity-state is made up of two vectors, a linear velocity vector, v = [v_x, v_y, v_z]^T, and an angular velocity vector, ω = [ω_x, ω_y, ω_z]^T, that can represent the motion of any rigid
body in three-dimensional space. In the vehicle coordinate system the linear velocity of
the AGV is limited to the x-axis and the angular velocity is limited to rotation about the
z-axis because of the nonholonomic constraints. Therefore, only the terms v_x and ω_z need to be determined. The desired linear velocity, v_x, is determined by the desired speed to
follow the path. The user, based on the current mission of the AGV, typically decides
this. The desired angular velocity, ω_z, is calculated based on the current location of the
desired screw's centerline and the current velocity of the AGV. The desired angular
velocity is calculated by:
\omega_z = \frac{v_{current}}{{}^v y_{\$_d}},  (3.63)

where ^v y_$d is equal to ^v y_$ for the first method.
Recall that the task of an AGV to accurately track a given path was broken down
into two steps. The first step is to determine the AGV's desired motion, or velocity state,
which was accomplished in this chapter. The second step is to execute the desired
velocity-state. This is the topic of the next chapter.
Execution control is the task of executing the AGV's desired velocity-state as
determined in Chapter 3. After considering the motion constraints of an AGV, the only
two components of an AGV's velocity-state that can influence the system are v_x and ω_z. By carefully choosing the origin of the vehicle's coordinate system, v_x and ω_z can be decoupled, allowing for the design of separate controllers. There are a number of different control techniques that would work here, and therefore a design choice must be made.
Some of the more conventional control techniques include classical control,
proportional-integral-derivative control (PID), adaptive control, and state space methods.
These techniques require a relatively accurate model of the system in order to develop a
satisfactory controller. In addition, these techniques typically restrict the complexity of
the system model (e.g., linearity). Some of the newer control techniques that could be
used here include fuzzy logic control and neural networks. One drawback with neural
network controllers is that they typically require a long learning time where they are
"taught" how to control a system, before they can be effectively used. Fuzzy logic
controllers, on the other hand, do not require a model of the system and do not require a
long learning time. Instead, they rely on the knowledge of an expert on controlling the
particular system. Therefore, with all of this in mind, the proposed controllers of v_x and ω_z are both chosen to be fuzzy controllers.
This chapter begins with an introduction to the fuzzy controller and a general
introduction to the fuzzy reference model learning controller (FRMLC), which is a
direct adaptive controller. It concludes by presenting the designed FRMLC for executing
the AGV's desired linear and angular velocities, respectively.
Before designing any controller, the inputs and outputs of the process must be
determined. The input variables to the controller are used to determine how to control the
process. The output variables of the controller must therefore have some impact on the process. A feedback fuzzy controller, shown in Figure 4.1, has three steps: fuzzification,
inference and defuzzification. The fuzzification step takes the crisp inputs of the process
and converts them to linguistic variables. The inference step uses these linguistic
variables to decide the best course of action based on the knowledge of an expert. The
expert's knowledge is stored in a rule-base made up of a set of if-then statements. The
defuzzification step takes the linguistic results of the inference step and converts them to a crisp output.
Figure 4.1: Feedback Fuzzy Controller.
Fuzzification is the process of taking a crisp value and converting it to a linguistic
variable. This is accomplished by using membership functions. Membership functions
take a crisp value and map it to a linguistic variable with a value between 0 and 1. For
example, Figure 4.2 shows graphically the membership functions that convert the crisp
value of height, h, to the linguistic variables short, medium and tall. Figure 4.2 shows the
very common triangular membership function with saturated boundaries. From Figure
4.2, a height equal to 5.8 feet gives the linguistic variable "tall" a membership value of 0.8, or μ_tall(5.8) = 0.8. Similarly, the linguistic variables medium and short would have membership values of 0.2 and 0.0, respectively, or μ_medium(5.8) = 0.2 and μ_short(5.8) = 0.0.
Figure 4.2: Height Membership Functions.
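A minimal sketch of triangular fuzzification follows; the breakpoints below are assumptions read off Figure 4.2, chosen so that the stated values μ_tall(5.8) = 0.8 and μ_medium(5.8) = 0.2 are reproduced, and the saturation at the boundaries is omitted:

```python
def tri_membership(x: float, left: float, center: float, right: float) -> float:
    """Membership value of crisp input x for a triangular membership
    function rising from `left` to `center` and falling back to `right`."""
    if x <= left or x >= right:
        return 0.0
    if x <= center:
        return (x - left) / (center - left)
    return (right - x) / (right - center)

# Height example of Figure 4.2 (assumed breakpoints):
mu_tall = tri_membership(5.8, 5.0, 6.0, 7.0)    # ~0.8
mu_medium = tri_membership(5.8, 4.0, 5.0, 6.0)  # ~0.2
mu_short = tri_membership(5.8, 3.0, 4.0, 5.0)   # 0.0
```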
A membership function is not limited to being triangular. This can be seen by
other examples of membership functions in Figure 4.3. The choice of the membership
function depends on the application and the designer or the expert. Fuzzification is
therefore a highly subjective process where two different designers may quantify the
same variable differently and both be considered correct.
Figure 4.3: Possible Fuzzy Membership Functions.
The inference mechanism is made to imitate the expert's decision process as if he
or she were controlling the process directly. In other words, it interprets the current state of the process and then uses its knowledge of the plant to decide the best way to control it.
The knowledge of how to control the plant is represented by creating a rule-base of if-then statements. Take, for example, the inverted
pendulum problem shown in Figure 4.4a. It is desired to balance the pendulum in a
vertical position by controlling the force F. Suppose that the angular error, 0, from the
vertical and its derivative are measured and used as inputs. Using the membership
functions shown in Figure 4.4b, one rule may be, if the error is "Positive Small" (PS) and
the change in error is "Negative Large" (NL), then the force is "Positive Medium" (PM).
A second rule may be if the error is "Zero" (Z) and the change in error is "Positive
Small" (PS), then the force is "Negative Small" (NS). A rule for each possible
combination of error and change in error can be determined similarly. If the number of
inputs is small, around two or three, a convenient way to store the rules is in a tabular
form as shown in Table 4.1.
[Figure 4.4b shows triangular membership functions NL, NM, NS, Z, PS, PM, and PL for the error (peaks from -30 to 30 degrees), the change in error (peaks from -15 to 15 degrees/sec), and the force (peaks from -8 to 8 lbf).]
Figure 4.4: Pendulum Example.
Table 4.1: Rule Base for Inverted Pendulum.

Force            Change in Error
          NL  NM  NS  Z   PS  PM  PL
      NL  PL  PL  PL  PL  PM  PS  Z
      NM  PL  PL  PL  PM  PS  Z   NS
      NS  PL  PL  PM  PS  Z   NS  NM
Error Z   PL  PM  PS  Z   NS  NM  NL
      PS  PM  PS  Z   NS  NM  NL  NL
      PM  PS  Z   NS  NM  NL  NL  NL
      PL  Z   NS  NM  NL  NL  NL  NL
The inference step is simply the conclusions determined by the rule-base. In
order to determine the conclusions of the rule-base, the premise must first be quantified.
Typically, the premise contains two or more linguistic terms that are combined by the
"and" logical operator. Two common ways to define the "and" operator are the minimum and the product of the operands. This can be easily seen through an example. Consider
again the example of the inverted pendulum. Suppose that the current angular error was
-6 degrees and the current change in error was -4 degrees/second. From Table 4.1, one
of the rules is, if the error is "Negative Small" (NS) and the change in error is "Negative
Small" (NS), then the force is "Positive Medium" (PM). Using the membership
functions from Figure 4.4b, the membership of linguistic variable negative small for the
-6 degrees error is 0.6 and the membership of linguistic variable negative small for the -4
degree/second change in error is 0.8. The premise can now be quantified for this rule by
finding the minimum or the product of these values, i.e. premise = minimum[0.6,0.8] =
0.6, or premise = (0.6)(0.8) = 0.48. The value premise is a measure of how applicable this
rule is to the current system state. This process is done for each rule, and the results
where premise > 0 are considered the conclusions of the rule-base.
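The premise quantification above can be sketched as follows (function name illustrative):

```python
def rule_premise(mu_a: float, mu_b: float, method: str = "min") -> float:
    """Quantify a rule's premise from the memberships of its two antecedent
    terms, using either the minimum or the product definition of "and"."""
    return min(mu_a, mu_b) if method == "min" else mu_a * mu_b

# Inverted-pendulum example: the error is NS with membership 0.6 and the
# change in error is NS with membership 0.8.
assert rule_premise(0.6, 0.8, "min") == 0.6
assert abs(rule_premise(0.6, 0.8, "product") - 0.48) < 1e-12
```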
Defuzzification is the step where the conclusions from the inference step are
converted to a crisp output. Two of the more popular defuzzification techniques are the
center of gravity (COG) and center average methods. The COG method calculates the
crisp output by:
u_{crisp} = \frac{\sum_i b_i \int \mu_{(i)}}{\sum_i \int \mu_{(i)}},  (4.1)

where b_i is the center of the membership function of the consequent of rule i. The calculation of the term \int \mu_{(i)} is simplified greatly when the output membership functions are triangular and symmetric. For this case, it can be calculated by:

\int \mu_{(i)} = w \left( \mu_{premise(i)} - \frac{\mu_{premise(i)}^2}{2} \right),  (4.2)

where w is the width of the base of the triangle.
The center average method calculates the crisp output by:

u_{crisp} = \frac{\sum_i b_i\, premise_{(i)}}{\sum_i premise_{(i)}}.  (4.3)
Continuing with the inverted pendulum example where the angular error was -6
degrees and the change in error was -4 degrees/second, using equation (4.1) and using
the minimum function to quantify the premise of each rule gives a crisp output of:

u_crisp = ((4)(1.68) + (2)(0.72) + (2)(1.28) + (0)(0.72)) / (1.68 + 0.72 + 1.28 + 0.72) = 2.44.
Finally, using equation (4.3) and the minimum function to quantify the premise of each
rule gives a crisp output of:
u_crisp = ((4)(0.6) + (2)(0.2) + (2)(0.4) + (0)(0.2)) / (0.6 + 0.2 + 0.4 + 0.2) = 2.57.
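Both defuzzification methods and the worked example can be sketched as follows; the function names are illustrative, and the centers and premises are those of the four active rules in the example above:

```python
def cog(centers, premises, w):
    """Center-of-gravity defuzzification (equations 4.1 and 4.2) for
    symmetric triangular output membership functions of base width w."""
    areas = [w * (p - p * p / 2.0) for p in premises]
    return sum(b * a for b, a in zip(centers, areas)) / sum(areas)

def center_average(centers, premises):
    """Center-average defuzzification (equation 4.3)."""
    return sum(b * p for b, p in zip(centers, premises)) / sum(premises)

# Four active rules: consequent centers 4, 2, 2, 0 lbf, premises
# 0.6, 0.2, 0.4, 0.2, and base width w = 4.
print(round(cog([4, 2, 2, 0], [0.6, 0.2, 0.4, 0.2], 4.0), 2))       # 2.44
print(round(center_average([4, 2, 2, 0], [0.6, 0.2, 0.4, 0.2]), 2)) # 2.57
```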
Fuzzy Reference Model Learning Control 
There are two general techniques for adaptive control, direct and indirect. Direct
adaptive control, shown in Figure 4.5, monitors a system's response and then modifies
the controller in order to achieve a specified desired performance. On the other hand,
indirect adaptive control monitors a system's response in order to identify parameters of
the system's model. The controller is designed as a function of these model parameters
to achieve a specified desired performance. A block diagram of an indirect adaptive
controller is shown in Figure 4.6. Fuzzy reference model learning control (FRMLC) is a
direct adaptive controller.
Figure 4.5: Direct Adaptive Controller.
Figure 4.6: Indirect Adaptive Controller.
The main parts of a FRMLC are the fuzzy controller, the plant, the learning
mechanism, and the reference model. The fuzzy controller has already been discussed in
the previous section, and the plant is simply the system to be controlled. The reference
model gives the desired system response based on the current input. The learning
mechanism uses the outputs of the plant and of the reference model in order to calculate
an error between the desired and actual response. This error is used then to decide how to
modify the rule-base of the fuzzy controller in order to drive the error to zero. A block
diagram of the FRMLC is given in Figure 4.7.
The reference model is used to specify the desired performance of the system.
The main constraint on the reference model is that it must be reasonable. It is not
reasonable to expect a system to achieve a better performance than what the system is
capable of achieving. Every system has its limitations, and these limitations must be
considered when choosing the reference model.
Figure 4.7: FRMLC Block Diagram.
Once the reference model is determined, a discrete error signal is calculated by:
e(kT) = y_m(kT) - y(kT),  (4.4)
where e(kT) is the current error, ym(kT) is the output of the reference model, y(kT) is the
output of the system, and T is the sample time. Depending on the system characteristics,
it may also be useful to calculate the discrete change in error by:
c(kT) = \frac{e(kT) - e(kT - T)}{T},  (4.5)
where c(kT) is the change in error, e(kT) is the current error from equation (4.4), and
e(kT-T) is the error calculated on the previous time sample. Then, these results and any
other system data are used to determine the necessary changes to the process inputs,
p(kT), by the learning mechanism.
The learning mechanism is made up of a fuzzy inverse model and a rule-base
modifier. The purpose of the fuzzy inverse model is to take the calculations e(kT) and
c(kT) and determine how to change the process input, u(kT), in order to drive e(kT) to
zero. The output of the fuzzy inverse model is the desired change in process input and is
represented by p(kT). First, the inputs are fuzzified by membership functions specified
by the designer. The inference mechanism then uses rules such as, if the error is
"positive small" and the change in error is "zero," then the change in process input is
"negative small." It is referred to as the fuzzy inverse model because these rules typically
depend on the plant dynamics. Finally, the output, p(kT), is defuzzified by the COG,
center-average or some other defuzzification technique. Then the output, p(kT), is used
to modify the controller's rule-base.
The fuzzy controller's rule-base is modified by first determining which rules are
active. In other words, determine which rules' certainty is greater than zero:

\mu_{premise(i)} > 0.  (4.6)

Then, for all the rules that are active, the center of the mth output membership function is modified by:

b_m(kT) = b_m(kT - T) + p(kT),  (4.7)
where bm(kT) is the current center of the mth output membership function, bm(kT-T) is the
center of the mth output membership function at the previous time sample, and p(kT) is
the desired change in process input that was calculated by the inverse model.
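The rule-base modification of equations (4.6) and (4.7) can be sketched as follows; the function name and flat data layout are illustrative:

```python
def modify_rule_base(centers, premises, p):
    """FRMLC rule-base modification (equations 4.6 and 4.7): shift the
    output membership center of every active rule (premise > 0) by the
    fuzzy inverse model's output p(kT)."""
    return [b + p if mu > 0.0 else b for b, mu in zip(centers, premises)]

# Two of four rules are active, so only their centers move by p = 0.5:
assert modify_rule_base([4.0, 2.0, 0.0, -2.0],
                        [0.6, 0.0, 0.3, 0.0], 0.5) == [4.5, 2.0, 0.5, -2.0]
```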
Vehicle Linear Velocity FRMLC
The first task in designing a controller is to determine its inputs and outputs.
Under the MAX architecture, the propulsive and resistive wrenches are used to control the AGV's motion. Each wrench is made up of a force vector, f = [f_x, f_y, f_z]^T, and a moment vector, m = [m_x, m_y, m_z]^T. The propulsive wrench is used to propel the AGV in the direction of the force or about the axis of the moment. Since, by the careful selection of the vehicle's reference frame, the only term that has an effect on the linear velocity is f_x, it is chosen to be the linear velocity's controller output. One of the inputs to the controller is obviously the desired linear velocity, v_xd. A second input to the controller is the vehicle pitch, θ_y, since it can have a substantial effect on the AGV's linear velocity.
A block diagram of the FRMLC for the linear velocity is given in Figure 4.8.
Figure 4.8: Discrete Linear Velocity FRMLC Block Diagram.
From Figure 4.8, the controller's input v_xd(kT) is the desired linear velocity, and the controller's input θ_y(kT) is the vehicle's pitch. The gains, g_v and g_θ, are used to normalize the inputs. By doing this, both inputs are fuzzified using the membership functions given in Figure 4.9. Therefore, the gain g_v is chosen to be 1/v_max, where v_max is the maximum velocity of the AGV, and the gain g_θ is chosen to be 1/θ_y,max, where θ_y,max is the maximum allowable pitch. Both of these terms, the maximum AGV velocity and the maximum allowable pitch, are available from the VCU configuration message under the MAX architecture.
[Triangular membership functions N5, N4, N3, N2, N1, Z, P1, P2, P3, P4, and P5, centered at -1.0, -0.8, -0.6, -0.4, -0.2, 0.0, 0.2, 0.4, 0.6, 0.8, and 1.0.]
Figure 4.9: Normalized Input Membership Functions.
The controller's output in Figure 4.8, f_x(kT), is the first term in the propulsive wrench. Using the output membership functions shown in Figure 4.10, the output of the inference mechanism is also normalized. The gain g_f is used to scale this output to allow the controller to command the entire range of the term f_x. In the MAX architecture, the term f_x has the range from -100 to 100 percent, and therefore the gain g_f is chosen to be 100.
[Triangular membership functions N5, N4, N3, N2, N1, Z, P1, P2, P3, P4, and P5, centered at -1.0, -0.8, -0.6, -0.4, -0.2, 0.0, 0.2, 0.4, 0.6, 0.8, and 1.0.]
Figure 4.10: Normalized Output Membership Functions.
Now that the inputs and the output of the fuzzy controller are defined, the rule-
base for the inference mechanism must be defined. Typically, if little or nothing is
known about the plant's characteristics, each rule's consequent is initialized to the
linguistic variable "zero." This requires the controller to completely learn the system it is
trying to control. By using the MAX architecture, an important conclusion about the
plant's characteristics can be made. This conclusion is that increasing the term f_x should have the general characteristic of increasing v_x, and decreasing the term f_x should have the general characteristic of decreasing v_x. With this in mind, and using the membership functions defined in Figures 4.9 and 4.10, the rule base for the fuzzy controller is
initialized with the rules given in Table 4.2.
Table 4.2: Linear Velocity Initial Rule-Base.

Force             Desired Linear Velocity
           N5  N4  N3  N2  N1  Z   P1  P2  P3  P4  P5
       N5  N5  N4  N3  N2  N1  Z   P1  P2  P3  P4  P5
       N4  N5  N4  N3  N2  N1  Z   P1  P2  P3  P4  P5
       N3  N5  N4  N3  N2  N1  Z   P1  P2  P3  P4  P5
       N2  N5  N4  N3  N2  N1  Z   P1  P2  P3  P4  P5
       N1  N5  N4  N3  N2  N1  Z   P1  P2  P3  P4  P5
Pitch  Z   N5  N4  N3  N2  N1  Z   P1  P2  P3  P4  P5
       P1  N5  N4  N3  N2  N1  Z   P1  P2  P3  P4  P5
       P2  N5  N4  N3  N2  N1  Z   P1  P2  P3  P4  P5
       P3  N5  N4  N3  N2  N1  Z   P1  P2  P3  P4  P5
       P4  N5  N4  N3  N2  N1  Z   P1  P2  P3  P4  P5
       P5  N5  N4  N3  N2  N1  Z   P1  P2  P3  P4  P5
It is assumed in Table 4.2 that the pitch has no effect on the control of the AGV's
linear velocity. This assumption is made initially because there is not enough
information about the plant's characteristics to make a conclusion on how the pitch will
affect the control of the AGV's linear velocity. Therefore, the controller must learn how
to control the plant for different vehicle pitches.
The reference model takes the desired linear velocity as input and outputs an
estimate of what the vehicle linear velocity should be. The model implemented here is a
simple first-order model. This was chosen for its simplicity, where only one model variable needs to be determined: the time constant. This time constant is set to the system's average response time to various f_x commands.
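One step of such a discrete first-order reference model might look like the following sketch; the exponential (zero-order-hold) discretization and the names are assumptions, since the dissertation does not specify the discretization used:

```python
import math

def reference_model_step(y_m_prev, v_desired, tau, T):
    """One step of a discrete first-order reference model with time
    constant tau and sample time T: the output decays exponentially
    toward the desired value."""
    a = math.exp(-T / tau)
    return a * y_m_prev + (1.0 - a) * v_desired

# The model output approaches the desired velocity with time constant tau.
y_m = 0.0
for _ in range(100):                   # 10 seconds at T = 0.1 s, tau = 1 s
    y_m = reference_model_step(y_m, 1.0, 1.0, 0.1)
assert abs(y_m - 1.0) < 1e-4           # settled after ten time constants
```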
The learning mechanism uses the linear velocity calculated by the reference
model and the current AGV linear velocity to calculate an error, e(kT) and change in
error, ce(kT). The error is scaled by the gain g_e and the change in error is scaled by g_ce in order to use the membership functions given in Figure 4.9 for fuzzification. These gains are determined by the maximum possible errors. Therefore g_e is set to 1/v_desired and g_ce is set to T/v_desired, where v_desired is the desired tracking speed and T is the time interval. The
rules used by the inference mechanism are given in Table 4.3. The conclusions of the
rule-base are defuzzified using the COG and the membership function in Figure 4.10.
And finally, the gain g_p is used to control how fast the system adapts and is left as a tuning parameter.
Table 4.3: Learning Mechanism Rule-Base.

Change in              Change in error
process input  N5  N4  N3  N2  N1  Z   P1  P2  P3  P4  P5
         N5    N5  N5  N5  N5  N5  N5  N4  N3  N2  N1  Z
         N4    N5  N5  N5  N5  N5  N4  N3  N2  N1  Z   P1
         N3    N5  N5  N5  N5  N4  N3  N2  N1  Z   P1  P2
         N2    N5  N5  N5  N4  N3  N2  N1  Z   P1  P2  P3
         N1    N5  N5  N4  N3  N2  N1  Z   P1  P2  P3  P4
Error    Z     N5  N4  N3  N2  N1  Z   P1  P2  P3  P4  P5
         P1    N4  N3  N2  N1  Z   P1  P2  P3  P4  P5  P5
         P2    N3  N2  N1  Z   P1  P2  P3  P4  P5  P5  P5
         P3    N2  N1  Z   P1  P2  P3  P4  P5  P5  P5  P5
         P4    N1  Z   P1  P2  P3  P4  P5  P5  P5  P5  P5
         P5    Z   P1  P2  P3  P4  P5  P5  P5  P5  P5  P5
Vehicle Angular Velocity FRMLC
The angular velocity FRMLC uses the block diagram given in Figure 4.11, which is very similar to the linear velocity FRMLC block diagram. Here the controller reference input, ω_zd(kT), is the current desired angular velocity, and the input v(kT) is the vehicle's current linear velocity. The linear velocity is chosen as an input since it is expected that more slip will occur between the vehicle tires and the ground at higher speeds, and therefore affect the vehicle's angular velocity. The gains, g_ω and g_v, are used again to normalize the inputs. The gain g_ω is chosen to be 1/ω_z,max, where ω_z,max is the vehicle's maximum angular velocity. Similarly, the gain g_v is chosen to be 1/v_max, where v_max is the vehicle's maximum linear velocity. Again, the information required in order to calculate these gains is given either by the Vehicle Control Unit (VCU) configuration report or measured by the Position System (POS).
Figure 4.11: Angular Velocity FRMLC Block Diagram.
For vehicles with a nonzero minimum turning radius, ω_z,max turns out to be a function of the AGV's current linear velocity and its minimum turning radius, r_min:

\omega_{z,max} = \frac{v_{current}}{r_{min}}.  (4.8)

Note that when the current linear velocity is equal to zero, the gain g_ω for vehicles with a nonzero minimum turning radius is infinite. Since it is impossible for such a vehicle to turn unless the linear velocity is nonzero, the gain g_ω is set to zero if the linear velocity is zero. This is done so that the controller does not attempt to adapt for this case.
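The gain selection described above, including the zero-velocity special case, can be sketched as follows (function and parameter names are illustrative):

```python
def angular_input_gain(v_current, r_min, omega_max=None):
    """Normalization gain g_w for the angular velocity input. For a
    vehicle with a nonzero minimum turning radius r_min, equation (4.8)
    gives w_z,max = v_current / r_min; the gain is set to zero when the
    vehicle is stopped, since it cannot turn in place."""
    if r_min > 0.0:
        if v_current == 0.0:
            return 0.0                 # suppress adaptation while stopped
        return r_min / abs(v_current)  # 1 / w_z,max
    return 1.0 / omega_max             # e.g., a differentially driven vehicle

assert angular_input_gain(0.0, 2.0) == 0.0   # stopped: no adaptation
assert angular_input_gain(4.0, 2.0) == 0.5   # w_z,max = 2 rad/s -> g_w = 1/2
```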
The controller's output in Figure 4.11, m_z(kT), is the last term of the propulsive wrench. Using the output membership functions shown in Figure 4.10, the output of the inference mechanism is normalized. The gain g_m is used to scale this output to allow the controller to command the entire range of the term m_z. In the MAX architecture, the term m_z also has the range from -100 to 100 percent, and therefore the gain g_m is chosen to be 100.
Just as the MAX architecture provided information for the linear velocity
controller, it also provides some information about the angular velocity in order to
initialize the rule-base of its controller. It is expected that by increasing the term m_z in the propulsive wrench, the AGV's angular velocity will increase, and by decreasing the term m_z in the propulsive wrench, the AGV's angular velocity will decrease. This is, of course, with the exception of when the linear velocity is equal to zero, as mentioned earlier. With this information, and using the membership functions defined in Figures 4.9 and 4.10, the rule-base for the angular velocity controller is initialized with the rules given in Table 4.4.
Table 4.4: Angular Velocity Initial Rule-Base.

Moment             Desired Angular Velocity
            N5  N4  N3  N2  N1  Z   P1  P2  P3  P4  P5
        N5  N5  N4  N3  N2  N1  Z   P1  P2  P3  P4  P5
        N4  N5  N4  N3  N2  N1  Z   P1  P2  P3  P4  P5
        N3  N5  N4  N3  N2  N1  Z   P1  P2  P3  P4  P5
        N2  N5  N4  N3  N2  N1  Z   P1  P2  P3  P4  P5
        N1  N5  N4  N3  N2  N1  Z   P1  P2  P3  P4  P5
Linear  Z   N5  N4  N3  N2  N1  Z   P1  P2  P3  P4  P5
Vel.    P1  N5  N4  N3  N2  N1  Z   P1  P2  P3  P4  P5
        P2  N5  N4  N3  N2  N1  Z   P1  P2  P3  P4  P5
        P3  N5  N4  N3  N2  N1  Z   P1  P2  P3  P4  P5
        P4  N5  N4  N3  N2  N1  Z   P1  P2  P3  P4  P5
        P5  N5  N4  N3  N2  N1  Z   P1  P2  P3  P4  P5
It is assumed in Table 4.4 that the linear velocity has no effect on the control of the AGV's angular velocity. This assumption is made initially because there is not enough information about the plant's characteristics to make a conclusion on how the linear velocity will affect the control of the AGV's angular velocity. Therefore, the
controller must learn how to control the plant for different linear velocities.
The reference model here takes the desired angular velocity as input and outputs
an estimate of what the current vehicle angular velocity should be. The model
implemented, like the linear velocity controller, is also a simple first-order model. Again, this was chosen for its simplicity, where only one model variable needs to be determined: the time constant. This time constant is set to the system's average response time to various m_z commands.
The learning mechanism uses the angular velocity calculated by the reference
model and the current AGV angular velocity to calculate an error, e(kT) and change in
error, ce(kT). The error is scaled by the gain ge and the change in error is scaled by gce in
order to use the membership functions given in Figure 4.9 for fuzzification. These gains
are determined again by the maximum possible errors. Therefore g_e is set to 1/ω_desired and g_ce is set to T/ω_desired, where ω_desired is the desired angular velocity and T is the time interval. The rules used by the inference mechanism are given in Table 4.3. The
conclusions of the rule-base are defuzzified using the COG and the membership function
in Figure 4.10. And finally, the gain gp is used to control how fast the system adapts and
is again left as a tuning parameter.
The new path-tracking algorithm developed in this dissertation, vector pursuit, is
a geometric technique. Geometric techniques use a look-ahead point, which is on the
path at a distance L ahead of the orthogonal projection of the vehicle's position onto the
path, to determine the desired motion of the vehicle. Unfortunately, there is a tradeoff in
determining the distance L. Increasing L tends to dampen the system leading to a stable
system with less oscillation. On the other hand, increasing L also tends to cause the
vehicle to cut corners of a path. Therefore, it is desirable to have a small look-ahead
distance in order to accurately navigate the path, but out of necessity, a large value
typically is used to achieve a stable system with little oscillation.
A factor that must be considered when choosing the look-ahead distance is the
vehicle speed. As the vehicle speed increases, the look-ahead distance typically needs to
be increased, too. Having a look-ahead distance greater than zero allows the vehicle to
start turning before it actually reaches a curve in the path. Starting the turn early is
desirable because of the fact that a certain amount of time is required for the vehicle to
execute a commanded turning rate. The faster the vehicle is going, the sooner the vehicle
needs to start its turn.
Ideally then, a geometric path-tracking technique would allow small look-ahead
distances to accurately track the given path, and not be sensitive to small changes in
vehicle speed. This chapter presents the results of tests done to determine vector
pursuit's ability to track paths accurately with different look-ahead distances and at
different speeds. For comparison, tests are done using follow-the-carrot and pure pursuit.
Follow-the-carrot is the original path-tracking technique used on the Navigation Test
Vehicle (NTV), and pure pursuit is currently a popular technique.
Another factor that must be considered when choosing the look-ahead distance is
the anticipated vehicle position and heading errors. These errors are preferably small;
unfortunately, this is not always the case. One example where large position and
heading errors may be expected is if an unexpected obstacle is encountered. Large errors
may exist once the vehicle navigates around the obstacle and then continues to track the
desired path. This chapter also presents results of tests where a jog in the middle of the
desired path is used to simulate a jump in the desired position and heading. Again,
follow-the-carrot and pure pursuit path-tracking techniques are used for comparison.
The NTV developed by CIMAR at the University of Florida was the main tool
used to test the vector pursuit path-tracking algorithm as well as the FRMLC controllers
developed in Chapters 3 and 4, respectively. Before actually implementing the path-
tracking algorithms and the controllers on the vehicle, they were tested in simulation with
a simple model of the vehicle. After achieving positive results from simulation, the new
path-tracking algorithm and the FRMLC controllers were implemented on the NTV for
further testing. In addition, they were also implemented on a K2A robot developed by
Cybermotion, Inc., of Roanoke, Virginia
(See Figure 5.1), and on an All-Purpose Remote Transport System (ARTS) (See Figure
5.2), which is a vehicle used by the United States Air Force Research Laboratory for
research and design. Before presenting the results of these tests, the method used for
evaluating the path-tracking algorithms is described.
Figure 5.1: Cybermotion K2A.
Figure 5.2: All-Purpose Remote Transport System.
Method for Evaluating Path Tracking
In order to evaluate the path-tracking algorithm, two errors are measured. A
position error and a heading error are computed each time new position data arrive from the POS
module. These errors are calculated relative to a coordinate system that is defined to
have its origin located at the perpendicular projection of the current vehicle location onto
the planned path. Its x-axis is oriented with the path direction at that point, its z-axis is
down, and its y-axis completes a right-handed coordinate system. This coordinate system,
referred to as the perpendicular coordinate system, as well as the position and heading
errors, are shown graphically in Figure 5.3. Then the position error, e, is defined to be:
e = Pyv, (5.1)
where (Pxv, Pyv) are the coordinates of the vehicle position in the perpendicular coordinate
system defined above. Note that by the definition of this coordinate system, Pxv always
equals zero. Next, the heading error, θe, is defined to be:
θe = θp − θv, (5.2)
where θp is the angle from the x-axis of the world coordinate system to the x-axis of the
perpendicular coordinate system, θv is the angle from the x-axis of the world coordinate
system to the x-axis of the vehicle coordinate system, and θe is in the interval (−π, π].
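For illustration, the two error measures can be computed as follows, assuming the vehicle's position has already been expressed in the perpendicular coordinate system. This is a sketch, not the code used with the POS module; note the heading error is wrapped into (−π, π].

```python
import math

def tracking_errors(p_yv, theta_p, theta_v):
    """Position and heading errors in the perpendicular coordinate system.
    p_yv is the vehicle's y-coordinate in that frame (the x-coordinate is
    zero by construction), so the position error is simply e = p_yv.
    The heading error theta_p - theta_v is wrapped into (-pi, pi]."""
    e = p_yv
    theta_e = theta_p - theta_v
    while theta_e <= -math.pi:
        theta_e += 2.0 * math.pi
    while theta_e > math.pi:
        theta_e -= 2.0 * math.pi
    return e, theta_e
```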
Navigation Test Vehicle (NTV)
This section first presents the results of testing the vector pursuit path-tracking
algorithm and the fuzzy reference model learning controllers in simulation and then
presents the implementation results. It concludes with the results of tests done where the
NTV is driving backwards.
Figure 5.3: Defining Position and Heading Errors.
Using the world and vehicle coordinate systems defined in Chapter 3, a kinematic
model of the NTV is given by the following equations:
Wẋv = v cos(θv), (5.3)
Wẏv = v sin(θv), (5.4)
θ̇v = (v / W) tan(φ), (5.5)
where Wxv and Wyv give the vehicle position, θv is the vehicle heading, v is the vehicle
speed, φ is the steering wheel angle, and W is the vehicle's wheelbase.
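A simple Euler integration of this kinematic model, of the kind a simulation might use, can be sketched as follows. The bicycle-model form of the heading equation and the parameter values in the example are assumptions for illustration, not the dissertation's simulation code.

```python
import math

def ntv_kinematics_step(x, y, theta, v, phi, W, dt):
    """One Euler-integration step of the kinematic model: the position
    rates are v*cos(theta) and v*sin(theta), and the heading rate is
    (v / W) * tan(phi), with phi the steering wheel angle and W the
    wheelbase."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += (v / W) * math.tan(phi) * dt
    return x, y, theta
```

With zero steering angle the vehicle simply advances along its heading; for example, one 0.1 s step at 1 m/s from the origin moves the vehicle to x = 0.1.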
Recall that the outputs of the controllers designed in Chapter 4 are the percent
force along the vehicle's x-axis and the percent moment about the vehicle's z-axis. On
the NTV, the magnitude of percent force maps directly to the percent of the maximum
throttle position and the percent moment maps to the percent of the maximum steering
wheel angle. Assuming that there is no slip between the tires and the ground, mapping
the percent moment to the percent steering wheel angle results in a linear relationship, at
a given speed, between the percent steering wheel angle and the current angular velocity.
On the other hand, mapping the percent force to the percent throttle results in a nonlinear
relation between the throttle position and the current vehicle speed.
In order to simulate the NTV's speed, a look-up table was created that gives the
vehicle speed based on the current throttle position. The results given in Table 5.1 are the
average speeds of the NTV after it had started moving. The results are specified as being
after the NTV had started moving because a larger percent throttle position was required
to get the NTV moving than was required to keep it moving. In an attempt to make the
simulation more realistic, a minimum throttle position was chosen before any motion
would occur. Once the NTV began moving, the look-up table was used to determine the
vehicle speed but with a limit on the acceleration.
Table 5.1: Mapping of Percent Throttle to Average Vehicle Speed.
Percent Throttle Vehicle Speed (m/s)
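The simulated speed update described above (no motion below a minimum throttle, a look-up table once the vehicle is moving, and a limit on acceleration) can be sketched as follows. The table entries, minimum throttle, and acceleration limit in the example are hypothetical, since the actual values of Table 5.1 are not reproduced here.

```python
def simulated_speed(v_prev, throttle, table, min_throttle, a_max, dt):
    """Advance the simulated vehicle speed by one time step.
    `table` maps percent throttle to average speed (m/s); below
    `min_throttle` a stopped vehicle stays stopped, and the change in
    speed per step is limited by the maximum acceleration a_max."""
    if v_prev == 0.0 and throttle < min_throttle:
        return 0.0  # not enough throttle to get the vehicle moving
    # Linear interpolation in the (throttle, speed) look-up table.
    pts = sorted(table.items())
    v_target = pts[0][1]
    for (t0, s0), (t1, s1) in zip(pts, pts[1:]):
        if t0 <= throttle <= t1:
            v_target = s0 + (s1 - s0) * (throttle - t0) / (t1 - t0)
            break
    else:
        if throttle > pts[-1][0]:
            v_target = pts[-1][1]
    # Acceleration limit: clamp the speed change for this step.
    dv = max(-a_max * dt, min(a_max * dt, v_target - v_prev))
    return v_prev + dv
```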
With the NTV's linear and angular velocity determined as functions of the current
throttle and steering wheel positions, models are required for the NTV's throttle and
steering wheel. The models for the NTV's throttle and steering presented in the next two
sections were developed from data taken from the NTV.
In order to develop a simple model of the NTV's throttle, data was taken of the
response of the throttle to various commanded step inputs. With this data, it was
determined initially that a first-order model would be sufficient. In the end, a limit on the
throttle's velocity was required in order for the model to be more accurate. This
saturation point of the throttle's velocity was determined experimentally. Some of the
results of this model compared to the actual data are given in Figures 5.4 and 5.5.
Figure 5.4: 40 Percent Throttle Step Input.
The model for the steering wheel was developed in a similar manner as the
throttle. Data was taken of the steering wheel's response to various commanded step
inputs. Again, with this data, it was determined initially that a first-order model would be
sufficient with a limit on the steering wheel's velocity. But, a limit on the steering
wheel's acceleration was required also in order for the model to be more accurate. Both
of these saturation points of the steering wheel's velocity and acceleration were
determined experimentally. Some of the results of this model compared to the actual data
are given in Figures 5.6 and 5.7.
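Both actuator models (a first-order response with a velocity limit for the throttle, plus an additional acceleration limit for the steering wheel) can be captured by a single rate- and acceleration-limited first-order update. This is a sketch with hypothetical parameter values, not the model fitted from the NTV data; setting a_max very large recovers the throttle model, which has no acceleration limit.

```python
def actuator_step(pos, vel, command, tau, v_max, a_max, dt):
    """One step of a first-order actuator model with velocity and
    acceleration saturation. The unconstrained first-order response
    would drive pos toward command with time constant tau."""
    v_des = (command - pos) / tau
    v_des = max(-v_max, min(v_max, v_des))               # velocity saturation
    dv = max(-a_max * dt, min(a_max * dt, v_des - vel))  # acceleration limit
    vel += dv
    pos += vel * dt
    return pos, vel
```

For a large step command, the actuator velocity saturates at v_max and the position ramps linearly, which is the behavior observed in the step-response data that motivated the limits.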
Figure 5.5: 100 Percent Throttle Step Input.
Figure 5.6: 40 Percent Steering Step Input.
Figure 5.7: 100 Percent Steering Step Input.
Three different paths are used to test the new geometric path-tracking algorithm.
A "U" shape path is used to test going from a straight section into a curve, and from a
curve back into a straight section. A figure eight path is used to test going from a right
curve into a left curve, and from a left curve into a right curve. And finally, a straight
path with a jog in the middle is used to test jumping from a small error in position and
orientation to large errors. For comparison, follow-the-carrot and pure pursuit path-
tracking methods are implemented and tested using the same paths in simulation. In
order to focus on each path-tracking technique's sensitivity to the look-ahead distance at
various speeds, the constant k for vector pursuit methods 1 and 2 was chosen first through
some initial experiments. The constant was chosen to be 4.0 and 1.5 for methods 1 and 2,
respectively. On account of the large number of tests, the results are shown graphically
in Appendix B.
The first tests, shown in Figures B.1 through B.4, use a "U" shape path with a
tracking speed of 1.5 mps and a look-ahead distance of 3 meters. Each method was
capable of navigating this path with relatively small position and heading errors. Next,
using the same path, the look-ahead distance was increased to 5 meters, and the tracking
speed was increased to 3.0 mps for the tests shown in Figures B.5 through B.8. Again,
each method navigated the path with small position and heading errors. The last group of
tests using the "U" shape path is shown in Figures B.9 through B.12. The tracking speed
for these tests was increased to 4.5 mps and the look-ahead distance was increased to 7
meters. The follow-the-carrot path-tracking method was unable to execute this path without
large oscillations coming out of the curved section. The other path-tracking techniques,
pure pursuit and vector pursuit methods 1 and 2, were able to execute the path with small
position and heading errors.
The next path used to test the different path-tracking techniques is a figure eight
path. Just as before, the tracking speed and the look-ahead distance were varied. Figures
B.13 through B.16 show the results of the path-tracking techniques with a tracking speed
of 1.5 mps and a look-ahead distance of 3 meters. Each technique was able to navigate
the path with small position and heading errors. Next, in Figures B.17 through B.20, the
tracking speed was increased to 3.0 mps and the look-ahead distance was increased to 5
meters. Again, all path-tracking techniques tested were able to navigate the path with
relatively small position and heading errors. Figures B.21 through B.24 show the results
of increasing the tracking speed to 4.5 mps and the look-ahead distance to 7 meters. Just
as for the "U" shape path at 4.5 mps, the follow-the-carrot method is no longer able to track
the path without large oscillations, while the remaining techniques were able to navigate
the figure eight path with small position and heading errors.
Finally, a path with a sudden jog in the middle is used to test the path-tracking
techniques. Initially, the tracking speed is set to 1.5 mps and the look-ahead distance is
set to 3 meters. Each path-tracking technique is tested with the distance of the jog
varying from 2 to 6 meters. These results are shown in Figures B.25 through B.44. Both
the follow-the-carrot method and vector pursuit method 1 result in oscillations after the jog.
Both pure pursuit and vector pursuit method 2 are able to navigate the path without
resultant oscillations. It is noticed that the pure pursuit method converges slightly faster
than vector pursuit method 2. This characteristic is the result of vector pursuit method 2
considering the orientation of the look-ahead point as well as its position.
In Figures B.45 through B.64, the tracking speed is now set to 3.0 mps and the
look-ahead distance is set to 5 meters. Similar results are obtained after the jog from the
follow-the-carrot method and vector pursuit method 2 as were obtained at the slower
tracking speed. The pure pursuit path-tracking technique results in large position errors, but
still no oscillations. Vector pursuit method 2 results in a much smaller position error than
pure pursuit, and also does not exhibit the oscillations, whereas follow-the-carrot and
vector pursuit method 1 do exhibit oscillations.
Finally, the tracking speed is set to 4.5 mps and the look-ahead distance is set to 7
meters. The results of these tests of the paths with a jog in the middle are given in
Figures B.65 through B.84. Again, follow-the-carrot and vector pursuit method 1 result
in large oscillations about the path. Pure pursuit results in some oscillation for the
smaller jogs and large position errors for the larger jogs. Vector pursuit method 2, on the
other hand, results in a very smooth transition from the path before the jog to the path
after the jog.