Vision-Based Map Building and Trajectory Planning to Enable Autonomous Flight through Urban Environments

Material Information

Vision-Based Map Building and Trajectory Planning to Enable Autonomous Flight through Urban Environments
Watkins, Adam Scott
Place of Publication:
[Gainesville, Fla.]
University of Florida
Publication Date:
Physical Description:
1 online resource (132 p.)

Thesis/Dissertation Information

Doctorate ( Ph.D.)
Degree Grantor:
University of Florida
Degree Disciplines:
Mechanical Engineering
Mechanical and Aerospace Engineering
Committee Chair:
Lind, Richard C.
Committee Members:
Crane, Carl D.
Dixon, Warren E.
Slatton, Kenneth C.
Graduation Date:


Subjects / Keywords:
Aircraft ( jstor )
Algorithms ( jstor )
Covariance ( jstor )
Genetic mapping ( jstor )
Landmarks ( jstor )
Navigation ( jstor )
Robotics ( jstor )
Sensors ( jstor )
State estimation ( jstor )
Trajectories ( jstor )
Mechanical and Aerospace Engineering -- Dissertations, Academic -- UF
autonomous, map, particle, slam, trajectory, unmanned
bibliography ( marcgt )
theses ( marcgt )
government publication (state, provincial, territorial, dependent) ( marcgt )
born-digital ( sobekcm )
Electronic Thesis or Dissertation
Mechanical Engineering thesis, Ph.D.


The desire to use unmanned air vehicles (UAVs) in a variety of complex missions has motivated the need to increase the autonomous capabilities of these vehicles. This research presents autonomous vision-based mapping and trajectory planning strategies for a UAV navigating in an unknown urban environment. It is assumed that the vehicle's inertial position is unknown because GPS is unavailable due to environmental occlusions or jamming by hostile military assets. Therefore, the environment map is constructed from noisy sensor measurements taken at uncertain vehicle locations. Under these restrictions, map construction becomes a state estimation task known as the Simultaneous Localization and Mapping (SLAM) problem. Solutions to the SLAM problem endeavor to estimate the state of a vehicle relative to concurrently estimated environmental landmark locations. The presented work focuses specifically on SLAM for aircraft, denoted as airborne SLAM, where the vehicle is capable of six-degree-of-freedom motion characterized by highly nonlinear equations of motion. The airborne SLAM problem is solved with a variety of filters based on the Rao-Blackwellized particle filter. Additionally, the environment is represented as a set of geometric primitives that are fit to the three-dimensional points reconstructed from gathered onboard imagery. The second half of this research builds on the mapping solution by addressing the problem of trajectory planning for optimal map construction. Optimality is defined in terms of maximizing environment coverage in minimum time. The planning process is decomposed into two phases of global navigation and local navigation. The global navigation strategy plans a coarse, collision-free path through the environment to a goal location that will take the vehicle to previously unexplored or incompletely viewed territory. 
The local navigation strategy plans detailed, collision-free paths within the currently sensed environment that maximize local coverage. The local path is converted to a trajectory by incorporating vehicle dynamics with an optimal control scheme which minimizes deviation from the path and final time. Simulation results are presented for the mapping and trajectory planning solutions. The SLAM solutions are investigated in terms of estimation performance, filter consistency, and computational efficiency. The trajectory planning method is shown to produce computationally efficient solutions that maximize environment coverage. ( en )
General Note:
In the series University of Florida Digital Collections.
General Note:
Includes vita.
Includes bibliographical references.
Source of Description:
Description based on online resource; title from PDF title page.
Source of Description:
This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Thesis (Ph.D.)--University of Florida, 2007.
Adviser: Lind, Richard C.
Statement of Responsibility:
by Adam Scott Watkins.

Record Information

Source Institution:
Rights Management:
Copyright Watkins, Adam Scott. Permission granted to the University of Florida to digitize, archive and distribute this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
LD1780 2007 ( lcc )



in microprocessors, battery power, and sensor miniaturization, however, have produced

smaller UAVs capable of flight through more complex environments. The ability of smaller

UAVs to aggressively maneuver through cluttered environments indicates that they are

ideal platforms for completing low-altitude missions, as depicted in Figure 1-4B, that

occur outside of an operator's line-of-sight. As vehicle missions require autonomous flight

through increasingly elaborate, cluttered environments, a higher degree of environmental

awareness is required. Therefore, two critical requirements for autonomous unmanned

vehicles are the ability to construct a spatial representation of the sensed environment

and, subsequently, plan safe trajectories through the vehicle's surroundings.


Figure 1-4. Unmanned air vehicle mission profiles. A) Current missions occur at high
altitude, therefore, not requiring rapid path planning and environmental
awareness. B) Future missions will be low altitude assignments necessitating
quick reaction to environmental obstacles.

This dissertation concentrates on vision-based environmental sensing. Computer

vision has been identified as an attractive environmental sensor for small UAVs due to the

large amount of information that can be gathered from a single light-weight, low-power

camera. The sensor selection is a result of the decrease in vehicle size, which severely constrains the available payload both in terms of size and weight along with power consumption. Therefore, autonomous behaviors for UAVs must be designed within the

framework of vision-based environmental sensing. In particular, two critical capabilities


Figure 7-5.

The vision-based flight simulation environment for testing the SLAM
algorithm. A) The urban simulation environment is composed of geometrically
regular obstacles representing buildings. A precomputed trajectory generated
from a nonlinear aircraft model is shown circling the environment. B) As the
aircraft navigates along the precomputed trajectory sensor information is
gathered. The gathered sensor information is restricted to a sensor cone defined
by a maximum depth of sensing and horizontal and vertical fields of view.
Three-dimensional feature points are distributed on the building faces and
planes are fit to the pointwise data.





Similarly, time-parameterized rigid body motions, such as trim primitives, can be written

in the form

g(t) = [ R(t)  T(t) ]
       [  0      1  ]   (6-8)

which can be written in the exponential form g(t) = exp(ηt), where η is called the twist.

Optimal trajectories are found by solving the Nonlinear Program (NLP)

(w*, τ*) = arg min J(w, τ) = Σ_{i=1}^{N+1} ( Γ_{M,w_i} + γ_{T,w_i} τ_i )

s.t.:  g_0 Π_{i=1}^{N+1} exp(η_{w_i} τ_i) M_{w_i} = g_goal
       w ∈ L(MA)
       τ_i > 0, ∀i ∈ {1, ..., N+1}   (6-9)

where the cost function J is defined by a summation of all maneuver costs Γ_{M,w_i} and primitive costs γ_{T,w_i}, which are scaled linearly by their duration of execution τ_i. The

first constraint ensures the system is driven to a goal state and the second constraint assures

that the solution maneuver exists in the set of all admissible maneuver sequences L(MA)

defined by the MA.

The posed optimization becomes computationally intractable as the length of the

maneuver sequence and the number of available motion primitives increases. Therefore,

the problem is decomposed into a search for the optimal maneuver sequence w* and an

optimization to determine the optimal coasting times r*. This formulation, however, does

not include constraints for obstacle avoidance. For obstacle avoidance, the MA motion

planning strategy is coupled with sampling-based methods [94] (Section 6.4).

6.3 Decoupled Trajectory Planning

The aforementioned direct trajectory planning methods employ strategies that plan

directly in the state space of the system. Decoupled, or indirect methods, plan paths in

the vehicle's configuration space and convert the resulting collision-free path q(s) into a

trajectory. Therefore, this method can be used to find the time-optimal trajectory that



Figure 7-7. Average estimation error for the three SLAM filters and the nominal trajectory for 50 Monte Carlo runs. A) Estimation error in the North direction. B) Estimation error in the East direction. C) Estimation error in altitude. D) Estimation error in roll angle. E) Estimation error in pitch angle. F) Estimation error in heading angle.



Adam Scott Watkins was born in Orlando, Florida on November 11, 1980. After

graduating from Lake Mary High School, Adam attended the University of Florida

receiving a Bachelor of Science in Mechanical Engineering in 2003. Adam continued

his education at the University of Florida receiving a Master of Science in Mechanical

Engineering in 2005. Upon obtaining his MS, Adam pursued a doctoral degree under

the advisement of Dr. Richard Lind. Adam's research involves the control of unmanned

systems and he is prepared to graduate with his Ph.D. in the summer of 2007. Upon

graduation Adam will begin work at the Johns Hopkins University Applied Physics Lab

continuing his research in the field of unmanned systems.

5.3 Filtering for SLAM

The airborne SLAM problem addressed in this dissertation is subject to difficulties

exceeding ground vehicle SLAM with respect to computational complexity and high

nonlinearity in both vehicle dynamics and environmental measurements [19]. The airborne

SLAM problem has been investigated with the Extended Kalman Filter (EKF) [18]

and the Unscented Kalman Filter (UKF) [20]. The EKF approach has been shown to

be somewhat successful in implementation for a very limited number of landmarks.

Additionally, the UKF approach has proven to be superior to the EKF in terms of estimation

accuracy, but suffers from increased computational complexity. This dissertation

investigates three additional SLAM filters based on the Rao-Blackwellized Particle Filter

(RBPF). The goal is to find a solution to airborne SLAM that is more computationally

efficient than both the EKF and UKF approaches and handles the nonlinearities as well as

the UKF. The complete formulation of these filters, including the equations necessary for

implementation, is described in this section.

The SLAM estimation task is framed as a probabilistic process with the goal of

calculating the joint posterior distribution of the vehicle and landmark states

p(x_k, m | Z_{0:k}, U_{0:k}, x_0)   (5-7)

where Z_{0:k} is the history of all landmark observations, U_{0:k} is the history of all control inputs, and x_0 is the known initial vehicle state. The following three filters estimate this distribution, solving the airborne SLAM problem.

5.3.1 Rao-Blackwellized Particle Filter

The RBPF was introduced as an alternative to EKF SLAM in an approach referred

to in literature as FastSLAM [23, 87]. The RBPF is a sampling-based approach which

factors the joint SLAM posterior distribution according to the observation that landmark

positions can be estimated independently if the vehicle path is known. The resulting

Rao-Blackwellized form contains a sampled term representing the probability of vehicle

5.2.1 Vehicle Model

The addressed airborne SLAM problem assumes the vehicle is a fixed-wing aircraft

as depicted in Figure 5-3. Additionally, it is assumed that vehicle linear velocity [u, v, w]

and angular velocity [p, q, r] information is provided via an IMU or vision-based state

estimation algorithm (Sections 2.3.2 and 2.4). Therefore, the input to the vehicle model is

selected to be u = [u, v, w, p, q, r]^T.


Figure 5-3. Fixed-wing aircraft model definitions. The aircraft linear and angular rates are
defined with respect to a body-fixed basis B. The inertial position and
orientation of the vehicle is measured with respect to a North East Down
(NED) inertial frame E.

The vehicle model f(·) accepts noisy estimates of vehicle velocities as inputs. Position and orientation state information is generated according to the aircraft kinematic equations

ẋ = u cos θ cos ψ + v(−cos φ sin ψ + sin φ sin θ cos ψ) + w(sin φ sin ψ + cos φ sin θ cos ψ)

ẏ = u cos θ sin ψ + v(cos φ cos ψ + sin φ sin θ sin ψ) + w(−sin φ cos ψ + cos φ sin θ sin ψ)

ż = −u sin θ + v sin φ cos θ + w cos φ cos θ

φ̇ = p + tan θ (q sin φ + r cos φ)

θ̇ = q cos φ − r sin φ

ψ̇ = (q sin φ + r cos φ)/cos θ
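As an illustrative sketch (not code from the dissertation), the kinematic propagation above can be written as a short Python function. The state ordering [x, y, z, φ, θ, ψ] and the simple Euler integration step are assumptions made for this example:

```python
import numpy as np

def kinematics(state, u_in):
    """Aircraft kinematics: body-frame velocities -> inertial position and
    Euler-angle rates (sketch of the equations above)."""
    x, y, z, phi, theta, psi = state
    u, v, w, p, q, r = u_in
    cphi, sphi = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cpsi, spsi = np.cos(psi), np.sin(psi)
    xdot = u*cth*cpsi + v*(-cphi*spsi + sphi*sth*cpsi) + w*(sphi*spsi + cphi*sth*cpsi)
    ydot = u*cth*spsi + v*(cphi*cpsi + sphi*sth*spsi) + w*(-sphi*cpsi + cphi*sth*spsi)
    zdot = -u*sth + v*sphi*cth + w*cphi*cth
    phidot = p + np.tan(theta)*(q*sphi + r*cphi)
    thetadot = q*cphi - r*sphi
    psidot = (q*sphi + r*cphi)/cth
    return np.array([xdot, ydot, zdot, phidot, thetadot, psidot])

def propagate(state, u_in, dt):
    # simple forward-Euler integration of the kinematic model
    return state + dt * kinematics(state, u_in)
```

For straight-and-level flight (all angles and rates zero, forward speed u), the model reduces to pure motion along the inertial x axis, which makes a convenient sanity check.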

to generate a spatial model. In addition, dimensionality reduction can filter the effects of

sensor noise by creating a smooth representation of noisy data.

Robotic navigation can be divided into different methodologies depending on the

level of knowledge afforded the vehicle concerning its surroundings and relative position

within the environment. The most general case requires that the vehicle estimate both

a map of the environment and its location within that map simultaneously [10, 60, 61].

This case is known as the Simultaneous Localization and Mapping (SLAM) problem as

discussed in detail in Chapter 5. The SLAM framework requires that distinguishable and

re-observable landmarks be present in the environment. Landmarks allow the vehicle

to ascertain its relative location to the environment through sensing operations. The

only discernible characteristic of a 3D feature point is its location in the environment

which makes 3D points a poor choice for landmarks. A geometric landmark, however,

can provide additional information in terms of orientation and size. Additionally, feature

points that are visible from a given vantage point are not necessarily visible from another

location due to occlusion and lighting variations. Geometric maps define landmarks that

are robustly observable with respect to camera pose.

Many missions envisioned for autonomous vehicles require human interaction in

terms of high-level mission planning. Surveilling targets in an environment such as a

specific building or roadway depends upon the ability to properly designate the known

objective. Similarly, reconnaissance of an area of interest in the environment requires that

the region be clearly specified. Such high-level decisions will be made by a human operator

who must designate the targets or regions of interest in some manner. In this capacity, a

spatial model is advantageous as it provides an intuitive interface for target selections as

opposed to a pointwise model which is an ambiguous representation that requires further

interpretation of the data.

The need to enable motion planning, construct compact map representations, define

distinguishable landmarks, and assist high-level mission planning motivates the generation

6-9 Collision detection for the global path

6-10 Interior to boundary test

6-11 Collision detection for the local path

6-12 Motion planning for coverage strategies

6-13 Environmental coverage metric

7-1 Result of the plane fitting algorithm without image noise

7-2 Angle of incidence constraint

7-3 Results of the plane fitting algorithm in the presence of image noise

7-4 Result of the plane fitting algorithm when nonplanar obstacles are included

7-5 The vision-based flight simulation environment for testing the SLAM algorithm

7-6 Example localization result for the airborne SLAM simulation

7-7 Average estimation error for the three SLAM filters and the nominal trajectory for 50 Monte Carlo runs

7-8 Filter consistency results

7-9 An example result for the complete airborne SLAM solution using the URBPF

7-10 Example paths for environment coverage

7-11 Comparison of environment coverage results

7-12 Resultant trajectories from the path tracking optimization

[26] Choset, H., Lynch, K., Hutchinson, S., Kantor, G., Burgard, W., Kavraki, L., and
Thrun, S., "Principles of Robot Motion: Theory, Algorithms, and Implementations,"
MIT Press, Cambridge, MA, 2005.

[27] Kavraki, L. E., Svestka, P., Latombe, J. C., and Overmars, M. H., "Probabilistic
Roadmaps for Path Planning in High-Dimensional Configuration Spaces," IEEE
Transactions on Robotics and Automation, Vol. 12, No. 4, 1996, pp. 566-580.

[28] Hsu, D., "Randomized Single-Query Motion Planning in Expansive Spaces," Ph.D.
thesis, Department of Computer Science, Stanford University, 2000.

[29] Hsu, D., Latombe, J. C., and Motwani R., "Path Planning in Expansive
Configuration Spaces," International Journal of Computational Geometry and
Applications, Vol. 9, No. 4-5, 1998, pp. 495-512.

[30] Hsu, D., Kindel, R., Latombe, J. C., and Rock S., "Randomized Kinodynamic
Motion Planning with Moving Obstacles," International Journal of Robotics Re-
search, Vol. 21, No. 3, 2002, pp. 233-255.

[31] Kuffner, J. J. and LaValle, S. M., "RRT-Connect: An Efficient Approach to
Single-Query Path Planning," In IEEE International Conference on Robotics
and Automation, 2000.

[32] LaValle, S. M. and Kuffner, J. J., "Randomized Kinodynamic Planning," Interna-
tional Journal of Robotics Research, Vol. 20, No. 5, 2001, pp. 378-400.

[33] Dubins, L. E., "On Curves of Minimal Length with a Constraint on Average
Curvature, and with Prescribed Initial and Terminal Positions and Tangents,"
American Journal of Mathematics, Vol. 79, No. 3, July, 1957, pp. 497-516.

[34] Shkel, A. and Lumelsky, V., "Classification of the Dubins Set," Robotics and
Autonomous Systems, Vol. 34, 2001, pp. 179-202.

[35] Frazzoli, E., "Robust Hybrid Control of Autonomous Vehicle Motion Planning,"
Ph.D. Thesis, MIT, June 2001.

[36] Frazzoli, E., Dahleh, M., and Feron, E., "Robust Hybrid Control for Autonomous
Vehicle Motion Planning," Technical Report, LIDS-P-2468, 1999.

[37] Frazzoli, E., Dahleh, M., and Feron, E., "Maneuver-based Motion Planning for
Nonlinear Systems With Symmetries," IEEE Transactions on Robotics, Vol. 21, No.
6, December 2005.

[38] Schouwenaars, T., Mettler, B., Feron, E., and How, J., "Robust Motion Planning
Using a Maneuver Automaton with Built-in Uncertainties," Proceedings of the IEEE
American Control Conference, June 2003, pp. 2211-2216.

[39] Bryson, A. E., "Dynamic Optimization," Addison Wesley Longman, Inc., 1999.

follows a specified path in the configuration space [40, 41]. This dissertation employs a decoupled trajectory planning strategy in order to incorporate a more complicated cost function into the planning process than can be employed efficiently in a direct method. Additionally, the full vehicle dynamics are used. The main drawback of this method is that the coverage cost is not included in the optimization. However, since the environment is unknown, the planning strategy must be implemented in a receding horizon (RH) fashion, so optimality can never be guaranteed.

6.4 Sampling-Based Path Planning

Decoupled trajectory planning requires that a path be planned in the vehicle's

configuration space. A popular methodology for efficiently solving the path planning

problem in high dimensional spaces is sampling-based planning. Sampling-based

planners randomly select collision-free configurations from the vehicle's configuration space

and connect the samples with kinematically feasible paths to form a solution. It should

be noted that the MA can be combined with sample-based planning to directly produce

dynamically feasible trajectories.

The first sampling-based path planner to be developed was the Probabilistic

Roadmap (PRM) [27]. The PRM algorithm constructs a roadmap in the workspace

consisting of nodes and connecting arcs as shown in Figure 6-3. The nodes are vehicle

configurations sampled from the vehicle's free configuration space Qfree. The connecting

arcs are collision-free paths generated by a local planner A that connects two sampled

configurations and satisfies the kinematic constraints of the vehicle. Therefore, A contains

a collision detection step alerting the planner if a collision-free path cannot be constructed.

The roadmap is intended to capture the connectivity of the vehicle's configuration space

in order to efficiently plan feasible paths without an exhaustive search. The PRM process

involves a learning phase in which the roadmap is constructed and a query phase in which

multiple path planning problems are solved with the constructed roadmap. Therefore, the

roadmap only needs to be constructed once.
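To make the learning and query phases concrete, the sketch below implements a toy PRM in a 2-D unit-square workspace. The circular obstacles, the straight-line local planner Δ, and the connection radius are illustrative assumptions, not details taken from the dissertation:

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(0)

def collision_free(q, obstacles):
    # configuration is free if it lies outside every circular obstacle (center, radius)
    return all(np.linalg.norm(q - c) > r for c, r in obstacles)

def segment_free(q1, q2, obstacles, steps=20):
    # local planner Delta: straight-line connection checked at sampled points
    return all(collision_free(q1 + t*(q2 - q1), obstacles)
               for t in np.linspace(0.0, 1.0, steps))

def build_prm(n_nodes, obstacles, radius=0.4):
    # learning phase: sample free configurations, connect nearby pairs into a roadmap
    nodes = []
    while len(nodes) < n_nodes:
        q = rng.uniform(0, 1, size=2)
        if collision_free(q, obstacles):
            nodes.append(q)
    edges = {i: [] for i in range(n_nodes)}
    for i in range(n_nodes):
        for j in range(i + 1, n_nodes):
            if (np.linalg.norm(nodes[i] - nodes[j]) < radius
                    and segment_free(nodes[i], nodes[j], obstacles)):
                edges[i].append(j)
                edges[j].append(i)
    return nodes, edges

def query(start_idx, goal_idx, edges):
    # query phase: breadth-first search over the precomputed roadmap
    frontier, parent = deque([start_idx]), {start_idx: None}
    while frontier:
        i = frontier.popleft()
        if i == goal_idx:
            path = []
            while i is not None:
                path.append(i)
                i = parent[i]
            return path[::-1]
        for j in edges[i]:
            if j not in parent:
                parent[j] = i
                frontier.append(j)
    return None
```

Because the roadmap is built once in the learning phase, repeated calls to `query` reuse the same graph, which is the efficiency argument made above.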

Therefore, the overall structure of the environment is not considered and many incorrect

associations can be made when the measurements are noisy.

The nearest neighbor approach uses Mahalanobis distance as a statistical measure of

similarity for an observed landmark and a predicted landmark. The Mahalanobis distance

is a distance metric that is scaled by the statistical variation of the data points. For the

case of data association, the squared Mahalanobis distance is calculated as follows

M_ij = (z_i − ẑ_j)^T (P_ij)^{-1} (z_i − ẑ_j)   (5-64)

where P_ij is the covariance of the ith observation z_i with respect to the observation estimate ẑ_j of the jth landmark. In the Gaussian case the Mahalanobis distance follows a χ² distribution, so the gate value depends on the dimension of the innovation vector and the desired percentage of accepted associations. For a 4-dimensional innovation vector and an acceptance percentage of 95%, the Mahalanobis gate value for which an association will be accepted is 9.5. An additional step is added to the nearest neighbor search which checks the distance between the estimated and observed plane boundaries and rejects associations with boundaries that are too distant from each other.

The data association step is used in the preceding SLAM filters whenever a

measurement residual is calculated. The residual is a function of the actual measurement

and the predicted measurement, but the mapping between which actual measurement

relates to which prediction is unknown. Therefore, the true value and predicted value are

associated as the minimum arguments to Equation 5-64 for all possible pairs of true and

predicted measurements. All associated pairs are used to update the SLAM filter.
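A minimal sketch of the gated nearest neighbor association described above, assuming 4-dimensional innovations and the χ² gate of about 9.49 (rounded to 9.5 in the text); the helper names are hypothetical:

```python
import numpy as np

CHI2_GATE_4DOF_95 = 9.488  # chi-square 95% gate, 4 degrees of freedom (text rounds to 9.5)

def mahalanobis_sq(z_obs, z_pred, P):
    """Squared Mahalanobis distance (Eq. 5-64) between an observation and a
    predicted landmark measurement with innovation covariance P."""
    nu = z_obs - z_pred
    return float(nu @ np.linalg.solve(P, nu))

def nearest_neighbor_associate(observations, predictions, covariances):
    # for each observation, take the minimum-distance prediction that passes the gate
    pairs = []
    for i, z in enumerate(observations):
        best = min(((mahalanobis_sq(z, zh, P), j)
                    for j, (zh, P) in enumerate(zip(predictions, covariances))),
                   default=(np.inf, None))
        if best[0] < CHI2_GATE_4DOF_95:
            pairs.append((i, best[1]))
    return pairs
```

Observations whose best match falls outside the gate remain unassociated and would be treated as candidate new landmarks.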

5.5 Map Management

The observations in the described SLAM formulation consist of partially perceived

planes. New observations must be incrementally fused into the global map as the

environment is continuously surveyed. Incremental map building is accomplished by

mapping the boundary of an observation, described by a convex hull H, to the inertial

[13] Hayet, J.B., Lerasle, F., and Devy, M., "A Visual Landmark Framework for Indoor
Mobile Robot Navigation," IEEE International Conference on Robotics and Automation,
Washington, DC, May, 2002.

[14] Davison, A., "Real-Time Simultaneous Localization and Mapping with a Single
Camera," IEEE International Conference on Computer Vision, Nice, France, 2003.

[15] Molton, N., Davison, A., and Reid I., "Locally Planar Patch Features for Real-Time
Structure from Motion," British Machine Vision Conference, Kingston University,
London, 2004.

[16] Davison, A., Cid, Y., and Kita, N., "Real-Time 3D SLAM with Wide-Angle Vision,"
IFAC/EURON Symposium on Intelligent Autonomous Vehicles, Instituto Superior
Tecnico, Lisboa, Portugal, 2004.

[17] Davison, A., and Kita, N., "3D Simultaneous Localization and Map-Building Using
Active Vision for a Robot Moving on Undulating Terrain," IEEE Computer Society
Conference on Computer Vision and Pattern Recognition, Kauai, Hawaii, 2001.

[18] Kim, J. H. and Sukkarieh, S., "Airborne Simultaneous Localisation and Map
Building," IEEE International Conference on Robotics and Automation, Taipei,
Taiwan, 2003.

[19] Kim, J. and Sukkarieh, S., "Real-Time Implementation of Airborne Inertial-SLAM,"
Robotics and Autonomous Systems, Vol. 55, pp. 62-71, 2007.

[20] Langelaan, J. and Rock, S., "Passive GPS-Free Navigation for Small UAVs," IEEE
Aerospace Conference, Big Sky, Montana, 2005.

[21] Langelaan, J. and Rock, S., "Navigation of Small UAVs Operating in Forests,"
AIAA Guidance, Navigation, and Control Conference, Providence, RI, 2004.

[22] Langelaan, J. and Rock, S., "Towards Autonomous UAV Flight in Forests," AIAA
Guidance, Navigation, and Control Conference, San Francisco, CA, 2005.

[23] Montemerlo, M., Thrun, S., Koller, D., and Wegbreit, B., "FastSLAM: A Factored
Solution to the Simultaneous Localization and Mapping Problem," Proceedings of
the AAAI National Conference on Artificial Intelligence, 2002.

[24] Weingarten, J., and Siegwart, R., "EKF-based 3D SLAM for Structured
Environment Reconstruction," In Proceedings of IROS, Edmonton, Canada, August
2-6, 2005.

[25] Brunskill, E., and Roy, N., "SLAM using Incremental Probabilistic PCA and
Dimensionality Reduction," IEEE International Conference on Robotics and
Automation, April 2005.

Figure 6-2. An example Maneuver Automaton. A Maneuver Automaton defined by a
maneuver library of three trim trajectories and nine maneuvers. An initial
state transition defines how the aircraft enters the directed graph.

Maneuvers M define transitions between trim trajectory operating conditions, with an initial state transition by which the aircraft enters the directed graph.

The MA defines a set of dynamically admissible motion primitives for a given

vehicle which can be concatenated to form a trajectory. To generate a motion plan, an

optimization is performed which searches for the optimal sequence of maneuvers and

the associated time duration for each connecting primitive. Therefore, the optimization

searches for a trajectory defined by the sequence

(w, τ) → A_{w_1}(τ_1) M_{w_1} A_{w_2}(τ_2) M_{w_2} ··· A_{w_{N+1}}(τ_{N+1}),   τ_i > 0, ∀i ∈ {1, ..., N+1}   (6-6)

where w is a vector determining the maneuver sequence and τ is a vector of coasting times that define how long the trim primitives are executed. The trim primitives are uniquely defined by the maneuver sequence and the constraint on τ ensures that all

coasting times are positive. The trim trajectories and maneuvers can be multiplied in

this fashion because they represent transformations of the vehicle state written in a

homogeneous representation. Therefore, general rigid-body motion denoted by a rotation

R and translation T, such as a maneuver, is written in the form

g = [ R  T ]
    [ 0  1 ]   (6-7)
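The multiplication of primitives in homogeneous form can be illustrated with a small Python sketch. The specific maneuver (a 90-degree heading change) and the trim speed are made-up values for the example, not primitives from the dissertation's maneuver library:

```python
import numpy as np

def se3(R, T):
    """Pack a rotation R and translation T into a homogeneous transform (Eq. 6-7 form)."""
    g = np.eye(4)
    g[:3, :3] = R
    g[:3, 3] = T
    return g

def rot_z(psi):
    # rotation about the vertical axis by heading angle psi
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def trim(tau):
    # toy trim primitive: straight flight at 10 m/s for duration tau
    return se3(np.eye(3), np.array([10.0 * tau, 0.0, 0.0]))

# toy maneuver primitive: an instantaneous 90-degree heading change
maneuver = se3(rot_z(np.pi / 2), np.zeros(3))

# concatenating primitives as in the sequence of Eq. 6-6: trim, maneuver, trim
g_final = trim(2.0) @ maneuver @ trim(1.0)
```

Because each primitive is a rigid-body transform, the net displacement of any candidate sequence is obtained by plain matrix multiplication, which is what makes the NLP constraint in Equation 6-9 easy to evaluate.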




Figure 3-5. Dimensionality reduction of feature point data. A) Pointwise data from the
SFM algorithm. B) The dimensionally-reduced dataset consists of geometric
primitives in the form of planar features.

not modified by multiplying the essential matrix by a nonzero constant. Estimation to a

scale-factor is an inherent ambiguity in the SFM problem stemming from the inability to

distinguish between the motion through one environment and twice the motion through a

scene twice as large, but two times further away.

2.2.4 Euclidean Reconstruction

Once camera motion is known, the final step in the SFM algorithm involves

estimating the three-dimensional location of the tracked two-dimensional feature

points. The calculation of pointwise scene structure is accomplished by triangulating

the correlated feature points. Recall that the three-dimensional location of a feature point

is denoted p = [p_1, p_2, p_3]^T. Therefore, the triangulation step can be formulated by relating the unknown depths of the ith feature point, β_{i,m} and β_{i,n}, measured with respect to two different camera poses, C_m and C_n, yielding

β_{i,n} f_{i,n} = β_{i,m} R f_{i,m} + γ T   (2-12)

where γ is the unknown translational scale-factor and f_{i,m} denotes the ith feature point projected onto the image plane of the mth camera pose.

Since β_{i,m} and β_{i,n} represent the same depth measured with respect to different camera frames, Equation 2-12 can be rewritten in terms of a single unknown depth as

β_{i,m} f̂_{i,n} R f_{i,m} + γ f̂_{i,n} T = 0   (2-13)

where f̂_{i,n} denotes the skew-symmetric matrix implementing the cross product with f_{i,n}. Equation 2-13 can be solved for j correspondence pairs simultaneously by expressing the problem in the form

M y = 0   (2-14)

where y = [β_{1,m}, β_{2,m}, ..., β_{j,m}, γ]^T is a vector of the unknown feature point depths and the unknown translational scale-factor, and

M = [ f̂_{1,n} R f_{1,m}        0          ...        0          f̂_{1,n} T ]
    [        0          f̂_{2,n} R f_{2,m} ...        0          f̂_{2,n} T ]
    [       ...                ...         ...       ...            ...    ]
    [        0                  0          ... f̂_{j,n} R f_{j,m} f̂_{j,n} T ]   (2-15)
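Under this formulation, the depths and scale can be recovered as the null space of M, for example via an SVD. The sketch below assumes calibrated, normalized image coordinates and is an illustration of the linear triangulation step rather than the dissertation's implementation:

```python
import numpy as np

def skew(f):
    # skew-symmetric matrix so that skew(a) @ b == np.cross(a, b)
    return np.array([[0.0, -f[2], f[1]],
                     [f[2], 0.0, -f[0]],
                     [-f[1], f[0], 0.0]])

def triangulate_depths(f_m, f_n, R, T):
    """Recover the feature depths and translational scale (up to a common factor)
    by solving M y = 0 with an SVD; y stacks the depths and the scale."""
    j = len(f_m)
    M = np.zeros((3 * j, j + 1))
    for i, (fm, fn) in enumerate(zip(f_m, f_n)):
        M[3*i:3*i+3, i] = skew(fn) @ R @ fm   # depth column for feature i
        M[3*i:3*i+3, -1] = skew(fn) @ T       # shared translational-scale column
    _, _, Vt = np.linalg.svd(M)
    y = Vt[-1]                                # right singular vector of smallest singular value
    return y / y[-1]                          # normalize so the scale entry is 1
```

With noise-free correspondences the smallest singular value of M is zero and the normalized null vector returns the true depths exactly; with noise, the same SVD gives the least-squares solution.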

where the mean of the probability distribution of the current observation is

ẑ_k = Σ_{i=0}^{2L} W_i^{(m)} Z_{i,k|k-1}   (4-30)

with corresponding covariance

P_{zz,k} = Σ_{i=0}^{2L} W_i^{(c)} [Z_{i,k|k-1} − ẑ_k][Z_{i,k|k-1} − ẑ_k]^T   (4-31)

The Kalman gain for the UKF is calculated according to

P_{xz,k|k-1} = Σ_{i=0}^{2L} W_i^{(c)} [X_{i,k|k-1} − x̂_{k|k-1}][Z_{i,k|k-1} − ẑ_k]^T
K_k = P_{xz,k|k-1} (P_{zz,k})^{-1}   (4-32)

Finally, the state estimate and covariance matrix can be updated to generate the current state estimate

x̂_{k|k} = x̂_{k|k-1} + K_k (z_k − ẑ_k)
P_{k|k} = P_{k|k-1} − K_k P_{zz,k} K_k^T   (4-33)
Therefore, the state estimate of the system has been computed without requiring

linearization of the process and measurement models.
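The sigma-point machinery behind these updates can be summarized in a compact unscented transform. The scaling parameters (α, β, κ) follow common conventions and are assumptions of this sketch, not values from the dissertation:

```python
import numpy as np

def unscented_transform(x, P, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (x, P) through a nonlinear function f using 2L+1
    sigma points; returns the transformed mean and covariance (the machinery
    behind Eqs. 4-30 and 4-31)."""
    L = len(x)
    lam = alpha**2 * (L + kappa) - L
    S = np.linalg.cholesky((L + lam) * P)       # matrix square root of scaled covariance
    sigma = np.vstack([x, x + S.T, x - S.T])    # 2L+1 sigma points (rows)
    Wm = np.full(2 * L + 1, 1.0 / (2 * (L + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (L + lam)                     # central mean weight
    Wc[0] = Wm[0] + (1 - alpha**2 + beta)       # central covariance weight
    Y = np.array([f(s) for s in sigma])         # push each sigma point through f
    y_mean = Wm @ Y
    diff = Y - y_mean
    y_cov = (Wc[:, None] * diff).T @ diff
    return y_mean, y_cov
```

For a linear f the transform is exact, which gives a simple correctness check; the payoff is that no Jacobians of the process or measurement models are ever formed.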

4.2.4 The Square-Root Unscented Kalman Filter

The UKF approach to state estimation has been shown to produce superior results to

the EKF for nonlinear systems [83]. The UKF, however, is a computationally expensive

algorithm in comparison to the EKF. This expense is because the sigma point calculation,

Equation 4-17, requires 2L + 1 evaluations of the nonlinear process model and matrix

square roots of the state covariance. The Square-Root Unscented Kalman Filter

(SR-UKF) is an efficient variant of the UKF in which the matrix square root required to

generate the sigma points is eliminated by propagating the square root of the covariance

matrix in lieu of the full covariance [84]. Therefore, the sigma point calculation, replacing

Equation 4-17, is

X_{k-1|k-1} = [ x̂_{k-1|k-1}   x̂_{k-1|k-1} + √(L+λ) S_{k-1|k-1}   x̂_{k-1|k-1} − √(L+λ) S_{k-1|k-1} ]   (4-34)

are split in the direction of the first principal component p_1 as described by vector r, whose magnitude is a function of the component's singular value σ_1, as given by

r = √(2σ_1/π) p_1   (3-4)

The expression in Equation 3-4 is equal to the splitting distance employed in the G-means algorithm [73]. After the centroids have been split or removed, k-means is rerun on the remaining data set. This process is performed repeatedly until the number of remaining data points falls below a threshold tol_f. This tolerance is typically set as 10% of the total number of feature points.

Algorithm 2 Plane fitting algorithm
1: initialize a single cluster centroid (k = 1)
2: initialize observation counter j = 0
3: repeat
4: perform k-means on the dataset returning centroid locations {c_1, c_2, ..., c_k}
5: for i = 1 to k clusters do
6: perform PCA on cluster c_i
7: extract third singular value σ_3
8: if σ_3 < σ_t then
9: increment observation counter j ← j + 1
10: project all feature points onto PCA basis Y = XV
11: find all feature points that fit the planar definition
12: compute convex hull H of associated feature points
13: store H and PCA result as sensor observation z_j
14: remove centroid c_i and associated feature points from data set
15: else
16: split centroid c_i along first principal component p_1
17: end if
18: end for
19: until number of fpts < tol_f
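Steps 6-13 of Algorithm 2 amount to a PCA planarity test followed by a projection onto the principal basis, which can be sketched as follows; the tolerance value and the normalization of the third singular value are illustrative assumptions:

```python
import numpy as np

def fit_plane_pca(points, sigma_tol=0.1):
    """PCA planarity test: if the third singular value is small, the cluster lies
    near the plane spanned by the first two principal components (sketch of
    steps 6-13 of Algorithm 2)."""
    X = points - points.mean(axis=0)
    # rows of Vt are principal directions; singular values measure spread along them
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    sigma3 = s[2] / np.sqrt(len(points))   # RMS thickness along the plane normal
    if sigma3 > sigma_tol:
        return None                        # cluster is not planar: split it instead
    normal = Vt[2]                         # third principal direction = plane normal
    Y = X @ Vt.T                           # project feature points onto the PCA basis
    return normal, Y
```

In the full algorithm, the projected points `Y` would then be filtered against the planar definition and their convex hull stored as the sensor observation.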

The result of the plane fitting algorithm is shown in Figure 3-5. The algorithm

transforms the pointwise data provided by SFM into a set of geometric primitives.

Concatenating these observations will create an incorrect geometric map due to drift in

vehicle state estimates. Chapter 5 will present an algorithm that fuses the vehicle state

estimates and environment observations in order to create a spatially consistent map.


m_k = m_{k-1} + K_k (z_k - \hat{z}_k)  (5-62)

U_k = K_k S_{zz,k}, \qquad S_{m,k|k} = \mathrm{cholupdate}\!\left(S_{m,k|k-1},\, U_k,\, -1\right)  (5-63)

The SR-URBPF procedure is more computationally efficient than the URBPF

approach. A comparison of all three filters is given in Chapter 7.

5.4 Data Association

Several important implementation issues were not addressed in the previous filter

development section. The first to be discussed is data association, the process of relating

existing map landmarks to the current environmental observations. Data association

is a critical step in the SLAM problem with incorrect associations often leading to

a catastrophic failure of the state estimation process. The RBPF mitigates the data

association problem by allowing for multiple association hypotheses because each particle

contains a separate estimate of the environment map. Additionally, defining landmarks

as geometric primitives further alleviates the problem by greatly reducing the number of

landmarks and making the landmarks more distinguishable by incorporating orientation information.

The vast majority of SLAM implementations deal with a large number of landmarks

defined by inertial location. These situations lend themselves to batch gating which

exploits the geometric relationships between landmarks by considering multiple landmark

associations simultaneously [78]. The observed landmark information provided by the

dimensionality reduction step, however, is sparse and distinguishable. Furthermore,

texture variations on building facades can lead to multiple observations of the same

landmark during a single sensing operation. Therefore, a nearest neighbor approach

is used to allow for multiple associations to the same target. It should be noted that

individual gating techniques such as the nearest neighbor method fail in environments

that are not sparsely populated or structured. This failure is a result of the individual

gating method's investigation of a single landmark and sensor measurement pair at a time.
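A minimal sketch of the individual nearest neighbor gating described above, using a Mahalanobis-distance gate. The landmark observation predictions, innovation covariances, and the chi-square gate value are assumed inputs, and the multiple-association bookkeeping is omitted:

```python
import numpy as np

def nearest_neighbor_gate(z, predictions, innovation_covs, gate=9.21):
    """Nearest neighbor data association with an individual chi-square gate
    (9.21 is roughly the 99% gate for a 2-DOF observation).  Returns the index
    of the associated landmark, or None to flag a new landmark."""
    best, best_d2 = None, gate
    for j, (zhat, S) in enumerate(zip(predictions, innovation_covs)):
        v = z - zhat                            # innovation
        d2 = float(v @ np.linalg.solve(S, v))   # squared Mahalanobis distance
        if d2 < best_d2:
            best, best_d2 = j, d2
    return best
```

Because each landmark-measurement pair is scored independently, this sketch also makes plain why individual gating degrades in dense, unstructured maps.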

The sampling of a reduced state space increases the computational efficiency of the

RBPF by decreasing the dimension of the problem. This approach is applied to SLAM by

partitioning the state space so that the vehicle pose is sampled and the map's landmark

states are determined analytically. This application is described further in Chapter 5 where

the RBPF is combined with the aforementioned EKF, UKF, and SR-UKF to produce

three filters for solving the SLAM problem.


8.1 Conclusion

This research has presented a complete framework for solving the problem of

autonomous airborne mapping of unknown environments. The problem was decomposed

into two parts: map building in the presence of noisy measurements and uncertain vehicle

location and the planning of trajectories for minimum-time environment coverage.

The focus of the SLAM formulation is to generate consistent maps of the environment

in order to enable higher-level control decisions. Three filters were investigated in

terms of estimation performance, consistency, and efficiency. The results show that

the URBPF is superior to RBPF and SR-URBPF with respect to estimation performance

and consistency. The URBPF suffers due to its inefficiency in comparison to the other

filters. This problem can be alleviated by reducing the number of particles and reducing

the rate at which the SLAM algorithm updates the aircraft's INS estimates. Regardless, it

has been shown that it is possible to build an environment map from noisy sensor data

gathered by a vision-based UAV.

The trajectory planning portion of this dissertation showed that it is possible

to efficiently plan paths for sensor coverage of an unknown environment. The map

construction process was used as feedback for obstacle avoidance and the paths were

planned incrementally as knowledge of the environment increased. By definition, however,

optimality cannot be proven for this process since the structure of the environment is

unknown prior to the planning task. Presented results show that the trajectory planning

process planned local paths that increased the gained knowledge of the environment with

respect to a purely greedy exploration strategy. Additionally, it was found that an optimal

path following approach to incorporating dynamics is feasible, but can be computationally

inefficient. However, an MA approach could also be easily incorporated into the framework

to possibly produce more computationally efficient results.


C^i = (2\pi)^{-d/2}\, |P_z^i|^{-1/2} \exp\!\left(-\tfrac{1}{2}(z_k - \hat{z}_k^i)^T (P_z^i)^{-1} (z_k - \hat{z}_k^i)\right)  (5-13)

P_z^i = \nabla h_x \Sigma_q \nabla h_x^T + \nabla h_m P_{m,k-1}^i \nabla h_m^T + \Sigma_r  (5-14)

and \nabla h_x is the Jacobian of the observation model with respect to vehicle pose

evaluated at \hat{x}_{k|k-1} and m_{k-1}, \nabla h_m is the Jacobian of the observation model with

respect to landmark location evaluated at the same estimates, and P_{m,k-1} is the covariance
of the map states.

3. Calculate the proposal distribution and draw N samples x_k^i defining the updated

vehicle state where the distribution is Gaussian with parameters:

\hat{x}_{k|k}^i = \hat{x}_{k|k-1}^i + P_{x,k}^i \nabla h_x^T (B_k^i)^{-1} (z_k - \hat{z}_k^i)  (5-15)

P_{x,k}^i = \left(\nabla h_x^T (B_k^i)^{-1} \nabla h_x + \Sigma_q^{-1}\right)^{-1}  (5-16)

where B_k^i is defined as

B_k^i = \Sigma_r + \nabla h_m P_{m,k-1}^i \nabla h_m^T  (5-17)

4. Update definitions of observed landmarks according to the EKF measurement update:

K_k = P_{m,k-1}^i \nabla h_m^T (B_k^i)^{-1}  (5-18)

m_k^i = m_{k-1}^i + K_k (z_k - \hat{z}_k^i)  (5-19)

P_{m,k}^i = (I - K_k \nabla h_m) P_{m,k-1}^i  (5-20)

If a new landmark is observed, its covariance is initialized to

P_{m,k}^i = \nabla h_m \Sigma_r \nabla h_m^T  (5-21)
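The per-landmark EKF update of step 4 (Equations 5-18 to 5-20) can be sketched in NumPy as follows; the Jacobian and noise matrices are assumed to be supplied by the caller, and this is a generic sketch rather than the dissertation's code:

```python
import numpy as np

def ekf_landmark_update(m, P, z, zhat, Hm, R):
    """EKF measurement update for one landmark (cf. Equations 5-18 to 5-20).
    m, P: landmark mean and covariance; z, zhat: actual and predicted
    observation; Hm: observation Jacobian wrt the landmark; R: noise cov."""
    B = R + Hm @ P @ Hm.T                        # innovation covariance
    K = P @ Hm.T @ np.linalg.inv(B)              # Kalman gain
    m_new = m + K @ (z - zhat)                   # mean update
    P_new = (np.eye(len(m)) - K @ Hm) @ P        # covariance update
    return m_new, P_new
```

Since each particle applies this update only to landmarks it actually observed, the per-step cost scales with the number of observations rather than the full map size.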

From the above FastSLAM algorithm it can be seen that another advantage of

the RBPF is that only observed landmarks are updated. Therefore, the RBPF is more

reasons, map representation is a vital aspect of any SLAM algorithm. Traditionally, SLAM

algorithms define landmarks as three-dimensional points of interest in the environment

[10]. The practice of using point landmarks often yields environment models containing

millions of features [23]. Environment models of this size are infeasible due to

the data storage requirements and computational power required to incorporate large

numbers of landmarks into the SLAM calculation. Additionally, the data association

problem becomes intractable because the relative measurements to landmarks cannot

be correlated as a result of the large number of possible associations combined with

uncertainty in vehicle location. Several SLAM formulations have constructed geometric

environment maps for highly structured environments by fitting geometric primitives

to the gathered pointwise data [24, 25]. The benefits of geometric primitives include the

reduction of data required to store an environment map and landmarks that are more

easily distinguished because they contain structural information. This dissertation employs

a similar dimensionality reduction step in order to construct a complete environment map

consisting of planar features.

In order to autonomously and rapidly construct an environment map, this dissertation

presents a trajectory planning algorithm. Ideally, the trajectory planning problem

would be posed as an optimal control problem in order to find a globally optimal

solution. An optimal solution, however, is computationally intractable because the

problem is non-convex and occurs in a high-dimensional space. For this reason, several

approximations to the full optimal control solution have been proposed which included

Mixed Integer Linear Programming (MILP) and Maneuver Automatons (MAs).

The MILP solution poses the full nonlinear optimization as a linear optimization

[90, 91]. Therefore, the solution is only capable of handling a linearized dynamic aircraft

model. The linearization restriction severely limits the efficacy of the MILP solution due

to the reduction of available maneuvers. A linear model cannot properly characterize

the aggressive maneuvers necessary to navigate in a cluttered environment. Additionally,

for a given environment, it can be repeatedly used to find paths from a given initial

configuration to a desired goal configuration. Navigation in unknown environments,

however, does not require full exploration of the configuration space since new sensor

information must be incrementally incorporated into the planning process. Therefore,

extensions of PRMs such as the Expansive-Spaces Trees (ESTs) [28-30] and the

Rapidly-Exploring Random Trees (RRTs) [31, 32] planners are employed. The EST and

RRT planning strategies are more efficient since they are designed to find feasible paths

between two specific configurations, rather than provide a general planning solution like

the PRM. This dissertation employs a variant of the RRT to efficiently plan collision-free

paths through the environment.

An abundance of research has been performed in the areas of SLAM and trajectory

planning. This dissertation extends previous research efforts in both fields to provide a

methodology for autonomous map building in highly-structured environments.

1.4 Contributions

The main contribution of this work is a complete solution for autonomous map

building by a UAV in an unknown, urban environment. The solution incrementally

constructs a map from noisy, relative sensor measurements without knowledge of the

vehicle's location. The map is used as feedback to generate collision-free trajectories that

maximize environment coverage.

In the process of developing the main contribution of this work, the following

contributions were made:

* A plane fitting algorithm is developed to reduce the dimensionality of gathered
three-dimensional, pointwise sensor data.

* A method for incorporating planar features into the SLAM algorithm is developed.

* A new SLAM filter based on the Rao-Blackwellized particle filter and the Square-
Root Unscented Kalman filter is developed.

* Three SLAM filters are investigated in relation to the airborne SLAM problem.

[79] Bar-Shalom, Y., Li, X.R., and Kirubarajan T., Estimation with Applications to
Tracking and Navigation, John Wiley and Sons, 2001.

[80] Kalman, R. E., "A New Approach to Linear Filtering and Prediction Problems,"
Transactions of the ASME Journal of Basic Engineering, 1960, pp. 34-45.

[81] "Algorithms for Multiple Target Tracking," American Scientist, Vol. 80, No. 2, 1992,
pp. 128-141.

[82] Arulampalam, M. S., Maskell, S., Gordon, N., and Clapp, T., "A Tutorial on
Particle Filters for Online Nonlinear/Non-Gaussian Bayesian Tracking," IEEE Transactions on Signal Processing, Vol. 50, No.
2, 2002, pp. 174-188.

[83] Julier, S. and Uhlmann, J., "A New Extension of the Kalman Filter to Nonlinear
Systems," SPIE AeroSense Symposium, April 21-24, 1997.

[84] van der Merwe, R. and Wan., E., "The Square-Root Unscented Kalman Filter for
State and Parameter Estimation," International Conference on Acoustics, Speech,
and Signal Processing, Salt Lake City, Utah, May 2001.

[85] Liu, J. and Chen, R., "Sequential Monte Carlo Methods for Dynamic Systems,"
Journal of the American Statistical Association, Vol. 93, 1998, pp. 1032-1044.

[86] Murphy, K., "Bayesian Map Learning in Dynamic Environments," Advances in
Neural Information Processing Systems, Vol. 12, 2000, pp. 1015-1021.

[87] Montemerlo, M., Thrun, S., Koller, D., and Wegbreit, B., "FastSLAM 2.0: An
Improved Particle Filtering Algorithm for Simultaneous Localization and Mapping
that Provably Converges," International Joint Conference on Artificial Intelligence,
Acapulco, Mexico, 2003.

[88] Li, M., Hong, B., Cai, Z., and Luo, R., "Novel Rao-Blackwellized Particle Filter for
Mobile Robot SLAM Using Monocular Vision," Journal of Intelligent Technology,
Vol. 1, No. 1, 2006.

[89] Richards, A. and How, J., "Aircraft Trajectory Planning With Collision Avoidance
Using Mixed Integer Linear Programming," Proceedings of the American Control
Conference, Anchorage, AK, May 2002.

[90] Kuwata, Y. and How, J., "Three Dimensional Receding Horizon Control for UAVs,"
AIAA Guidance, Navigation, and Control Conference and Exhibit, Providence,
Rhode Island, August 2004.

[91] Bellingham, J., Richards, A., and How, J., "Receding Horizon Control of
Autonomous Aerial Vehicles," Proceedings of the American Control Conference,
Anchorage, AK, May 2002.

ambiguity associated with SFM. The SFM approach proposed by Prazenica mitigates

errors inherent in SFM by incorporating vehicle dynamics into the Kalman filter. This

approach adds robustness to the SFM calculation by producing state estimates even

when the SFM calculation fails. The scale-factor can be resolved by a gradual refinement

of estimates made by incorporating information from the physical model of the vehicle

[53, 54].
The Kalman filtering approach defines both a state transition model f(\cdot) and a

measurement model h(\cdot), written as

x_k = f(x_{k-1}, u_{k-1}, \kappa) + q_{k-1}, \qquad q_{k-1} \sim \mathcal{N}(0, \Sigma_q)
y_k = h(x_k, \kappa) + r_k, \qquad r_k \sim \mathcal{N}(0, \Sigma_r)  (2-20)

where x denotes the complete state vector including vehicle states and three-dimensional

feature point locations. The vector y denotes the projection of the feature points onto

the image plane, u represents the control inputs to the vehicle equations of motion, \kappa

is the camera pose relative to the body-fixed coordinate system, and the additive noise

parameters q and r are zero-mean Gaussian random vectors with covariances \Sigma_q and

\Sigma_r, respectively. The notation a \sim p(b) denotes that random variable a is drawn from a

probability distribution defined by p(b).

Therefore, f(.) is a nonlinear function that updates the vehicle states and estimated

feature point locations according to the previous system state and vehicle control input.

The nonlinear function h(.) predicts the projection of the three-dimensional feature points

onto the image plane given the estimated inertial feature point locations and the camera

pose. Consequently, the measurement model is the epipolar constraint (Section 2.2.3).

The filtering approach solves the SFM problem by estimating the state of the model

given by Equation 2-20. The estimation process is computed with an Extended Kalman

Filter (EKF). The EKF first predicts the state and covariance of the system according to

\hat{x}_{k|k-1} = f(\hat{x}_{k-1|k-1}, u_{k-1}, \kappa)  (2-21)
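The prediction step of Equation 2-21, together with its covariance counterpart, can be sketched generically; here `f` and its Jacobian `F` stand in for the combined vehicle-and-feature model, which in the dissertation also depends on the camera pose \kappa:

```python
import numpy as np

def ekf_predict(x, P, u, f, F, Q):
    """EKF prediction (cf. Equation 2-21 plus the covariance propagation).
    f(x, u): process model; F: its Jacobian at (x, u); Q: process noise.
    Generic sketch; the dissertation's model also carries the camera pose."""
    x_pred = f(x, u)                 # state prediction
    P_pred = F @ P @ F.T + Q         # covariance prediction
    return x_pred, P_pred
```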


4.1 Introduction
4.2 The Kalman Filter
4.2.1 The Extended Kalman Filter
4.2.2 The Unscented Transform
4.2.3 The Unscented Kalman Filter
4.2.4 The Square-Root Unscented Kalman Filter
4.3 The Particle Filter
4.4 The Rao-Blackwellized Particle Filter

5.1 Introduction
5.2 Airborne SLAM Models
5.2.1 Vehicle Model
5.2.2 Measurement Model
5.3 Filtering for SLAM
5.3.1 Rao-Blackwellized Particle Filter
5.3.2 Unscented Rao-Blackwellized Particle Filter
5.3.3 Square-Root Unscented Rao-Blackwellized Particle Filter
5.4 Data Association
5.5 Map Management

6 TRAJECTORY PLANNING

6.1 Introduction
6.2 Direct Trajectory Planning Methods
6.2.1 Mixed-Integer Linear Programming
6.2.2 Maneuver Automaton
6.3 Decoupled Trajectory Planning
6.4 Sampling-Based Path Planning
6.4.1 Expansive-Spaces Tree Planner
6.4.2 Rapidly-Exploring Random Tree Planner
6.5 The Dubins Paths
6.5.1 Extension to Three Dimensions
6.5.2 Collision Detection
Global path collision detection
Local path collision detection
6.6 Environment Coverage Criteria
6.7 Optimal Path Tracking
6.8 Trajectory Planning Algorithm

7 DEMONSTRATIONS AND RESULTS

7.1 Introduction

1.2 Motivation

In recent years, autonomous vehicles have become increasingly ubiquitous. Unmanned

vehicles are employed in a variety of situations where direct human involvement is

infeasible. Such situations include tasks occurring in environments that are hazardous to

humans, missions that humans are incapable of performing due to physical limitations,

or assignments considered too mundane or tedious to be reliably performed by a human.

Examples of autonomous vehicles currently in use, shown in Figure 1-2, include the

Packbot EOD which assists in Explosive Ordnance Disposal (EOD) [1], the Mars

Exploration Rovers (MERs) Spirit and Opportunity used for planetary exploration [2],

and the Roomba vacuum robot [3].


Figure 1-2. Examples of unmanned vehicles. A) The JPL Mars Exploration Rover Spirit.
B) The iRobot Packbot EOD. C) The iRobot Roomba.

The unmanned vehicles depicted in Figure 1-2 exemplify the current level of

autonomy reliably achieved by unmanned systems. The Packbot EOD requires constant

guidance from a human operator and does not perform any truly autonomous tasks.

The MER autonomously monitors its health in order to maintain safe operation and

tracks vehicle attitude to aid in safe navigation over uneven terrain. Preloaded and

communicated instructions, however, are executed by the MER after significant human

analysis and interaction. The Roomba displays a higher level of autonomy by sensing

the MILP solution cannot prove optimality because the problem is non-convex and

the optimization must be implemented in a Receding Horizon (RH) format to be

computationally efficient.

The MA is a method which employs a discrete representation of the vehicle dynamics

in order to plan trajectories [35-38]. The dynamics of the vehicle are discretized into a set

of trim trajectories, or steady-state motions, and maneuvers that connect the trim states.

The MA planning process is decomposed into a combinatorial problem which searches for

the best maneuver sequence and an optimization that finds the optimal time durations

for the trim trajectories. Unlike MILP, the discretized dynamics are based on a nonlinear

model and can incorporate aggressive maneuvers. The MA solution, however, can become

computationally intractable when solving for large trajectories requiring a long maneuver

sequence. This issue is exacerbated if the discretization is not restricted to a small set

of vehicle motions or if obstacles are considered. Therefore, the MA is combined with

sampling-based planning methods to produce a solution in which optimality cannot be guaranteed.


This dissertation employs a decoupled trajectory-planning strategy [40, 41] in which

a feasible collision-free path is planned and optimal control is employed to track the path

in minimal time. The path is subject to kinematic constraints and incorporates metrics

for selecting paths that provide maximum coverage with respect to other paths generated

during the planning process. The decoupled approach allows for the use of the complete

nonlinear aircraft model; however, optimality in terms of coverage is not guaranteed since

the coverage criterion is only included in the path planning task. The path to be tracked is

generated with a sampling-based planning strategy [26].

Sampling-based algorithms were introduced by a methodology called Probabilistic

Roadmaps (PRMs) [27]. PRMs construct a graph that captures the connectivity of the

configuration space of a robot by randomly sampling configurations and connecting

them to the existing map with kinematically feasible paths. Once a PRM is constructed

of view is selected as q_L. A local RRT is planned to this goal from the initial configuration

q_init. Unlike the global RRT, the local RRT is grown in a less greedy fashion until N

candidate paths have been generated. The path that maximizes the sensor

variance \sigma_n^2 is selected as the local path q_L(s). The local RRT branching strategy is

biased towards regions that maximize variance. Therefore, the space within the sensor

field of view is efficiently searched for informative paths. The paths are restricted to the

field of view so that the detailed planning occurs within the region of lowest uncertainty.

Lastly, the local path is converted to a trajectory by solving the NLP given in

Equation 6-24. The flexibility of the planning framework allows for the optimal control

formulation to be substituted with an MA implementation as described in Section 6.2.2. Both

methodologies would incorporate dynamics into the final solution. The final planned

trajectory is implemented and the planning process is repeated when the vehicle arrives at the goal location.


This chapter has presented a framework that plans aircraft trajectories for complete

sensor coverage of an unknown, uncertain environment. Results of the planning strategy

are presented in Chapter 7.

5.3.3 Square-Root Unscented Rao-Blackwellized Particle Filter

This dissertation introduces a new option for efficiently solving the SLAM problem
based on the Square-Root Unscented Kalman Filter (SR-UKF) (Section 4.2.4). The
SR-UKF is an efficient variant of the UKF in which the matrix square root required
to generate the sigma points is eliminated by propagating the square root of the
covariance matrix in lieu of the full covariance. Similarly, the Square-Root Unscented
Rao-Blackwellized Particle Filter (SR-URBPF) is formulated by replacing the UKF-based
calculations in the URBPF with the SR-UKF approach.
1. Generate N vehicle pose samples \hat{x}_k^i and associated observations \hat{z}_k^i according to
the SR-UKF prediction rule:

\mathcal{X}_{k-1|k-1}^i = \left[\hat{x}_{k-1|k-1}^i,\;\; \hat{x}_{k-1|k-1}^i \pm \sqrt{L+\lambda}\, S_{x,k-1|k-1}^i\right]  (5-40)

\mathcal{X}_{k|k-1,t}^i = f(\mathcal{X}_{k-1|k-1,t}^i, u_k), \qquad \hat{x}_{k|k-1}^i = \sum_{t=0}^{2L} W_t^{(m)} \mathcal{X}_{k|k-1,t}^i  (5-41)

S_{x,k|k-1}^i = \mathrm{qr}\!\left(\left[\sqrt{W_1^{(c)}}\left(\mathcal{X}_{1:2L,k|k-1}^i - \hat{x}_{k|k-1}^i\right),\; \sqrt{\Sigma_q}\right]\right)  (5-42)

S_{x,k|k-1}^i = \mathrm{cholupdate}\!\left(S_{x,k|k-1}^i,\; \mathcal{X}_{0,k|k-1}^i - \hat{x}_{k|k-1}^i,\; W_0^{(c)}\right)  (5-43)

\mathcal{Z}_{k,t}^i = h(\mathcal{X}_{k|k-1,t}^i, m_{j,k-1}^i), \qquad \hat{z}_k^i = \sum_{t=0}^{2L} W_t^{(m)} \mathcal{Z}_{k,t}^i  (5-44)

S_{zz,k}^i = \mathrm{qr}\!\left(\left[\sqrt{W_1^{(c)}}\left(\mathcal{Z}_{1:2L,k}^i - \hat{z}_k^i\right),\; \sqrt{\Sigma_r}\right]\right)  (5-45)

S_{zz,k}^i = \mathrm{cholupdate}\!\left(S_{zz,k}^i,\; \mathcal{Z}_{0,k}^i - \hat{z}_k^i,\; W_0^{(c)}\right)  (5-46)

2. Calculate importance weights w_k^i and resample to promote particle diversity:

w_k^i = w_{k-1}^i C^i  (5-47)

where the constant C^i is the likelihood of all observations z_k (the subscript j has
been dropped to show that the following is performed for all landmarks in a batch
fashion) given the estimated position of the vehicle. The likelihood is calculated as

C^i = (2\pi)^{-d/2}\, |P_z^i|^{-1/2} \exp\!\left(-\tfrac{1}{2}\left(z_k - \hat{z}_k^i\right)^T \left(P_z^i\right)^{-1} \left(z_k - \hat{z}_k^i\right)\right)  (5-48)
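The qr and cholupdate operations appearing in Equations 5-43 and 5-46 can be illustrated with a plain rank-1 Cholesky update routine. NumPy has no built-in cholupdate, so a textbook version is sketched here; it is an illustrative helper, not the filter code itself:

```python
import numpy as np

def cholupdate(S, v, sign=1.0):
    """Rank-1 update (sign=+1) or downdate (sign=-1) of an upper-triangular
    Cholesky factor S with S.T @ S = P, returning the factor of
    P + sign * v v^T.  Textbook sweep of Givens-style rotations."""
    S, v = S.copy(), v.astype(float).copy()
    n = len(v)
    for k in range(n):
        r = np.sqrt(S[k, k] ** 2 + sign * v[k] ** 2)
        c, s = r / S[k, k], v[k] / S[k, k]
        S[k, k] = r
        if k + 1 < n:
            S[k, k + 1:] = (S[k, k + 1:] + sign * s * v[k + 1:]) / c
            v[k + 1:] = c * v[k + 1:] - s * S[k, k + 1:]
    return S
```

Because the update touches only the triangular factor, it costs O(n^2) per observation instead of the O(n^3) of refactoring the full covariance, which is the source of the SR-URBPF's efficiency.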

Figure 6-4. Expansive-Spaces Tree planning. The EST involves constructing a tree T by
extending the tree from a randomly sampled configuration q in the tree. A
random sample q_rand local to q is connected to the tree by a local planner.
Both a successful connection to q_rand and an unsuccessful connection to q'_rand are shown.

The last step of the EST planning process is to connect the tree to the goal location

Pgoal. This connection is accomplished by checking for connections with the local planner

A. If no connection is made, further expansion of the tree is required. Another variation

involves simultaneously growing a tree Tgoal rooted at pgoal and proceeding with a

connection process between q E Tinit and q E Tgoal to merge the trees.

6.4.2 Rapidly-Exploring Random Tree Planner

The RRT planner, like the EST, was developed to efficiently cover the space between

qinit and qgoal. The RRT differs from the EST with respect to the tree expansion process

as described in Algorithm 5.

Instead of extending the tree from randomly selected points in the tree T, as in the

EST approach, the RRT extends the tree towards randomly selected points qrand in the

free configuration space Qfree. Tree construction begins by selecting qrand and finding the

nearest configuration q_near in T to the sample. The tree is extended from q_near a distance \delta

toward q_rand. The extension is performed by a local operator \Delta terminating at q_new. If the

constructed path is collision-free, it is added to T. Tree construction, depicted in Figure
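The extension step just described can be sketched for a planar configuration space; the `collision_free` callback stands in for the local operator's feasibility check, and the tuple-based tree representation is an assumption for illustration:

```python
import math

def rrt_extend(tree, q_rand, delta, collision_free):
    """One RRT extension: find the nearest node q_near in the tree and step a
    distance delta toward the random sample q_rand (Section 6.4.2).  2-D
    sketch; `collision_free` stands in for the local operator's check."""
    q_near = min(tree, key=lambda q: math.dist(q, q_rand))
    d = math.dist(q_near, q_rand)
    if d == 0.0:
        return None
    t = min(1.0, delta / d)
    q_new = (q_near[0] + t * (q_rand[0] - q_near[0]),
             q_near[1] + t * (q_rand[1] - q_near[1]))
    if not collision_free(q_near, q_new):
        return None
    tree.append(q_new)
    return q_new
```

Sampling q_rand uniformly over the free space biases growth toward unexplored regions, which is what distinguishes the RRT from the EST's tree-centered expansion.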


6.2 Direct Trajectory Planning Methods

Direct trajectory planning methods generate motion plans directly in the state space

of the system. Several methods for direct trajectory planning include artificial potential

functions [92], optimal control [89], and sampling-based methods [93]. The latter two

methods have been directly applied to the aircraft motion planning problem and are

described below.

6.2.1 Mixed-Integer Linear Programming

MILP is the minimization of a linear function subject to linear constraints in which

some of the variables are constrained to have integer values. The integer valued variables

are key because they allow the incorporation of obstacle avoidance into the optimization.

MILP has been selected to solve the trajectory optimization problem because of its ability

to handle obstacle constraints and due to recent improvements in the computational

efficiency of commercially available software for solving MILP problems. The general form

of a mixed integer program for the variable vector x E R" with m constraints can be

written as
x^* = \arg\min_x \; J(c, x) = c^T x
\text{s.t.}\;\; Cx \le b, \quad l \le x \le u, \quad x_i \in \mathbb{Z} \text{ for some indices } i  (6-1)

where J is the objective function to be minimized, c \in \mathbb{R}^n and b \in \mathbb{R}^m are vectors of

known coefficients, C \in \mathbb{R}^{m \times n} is a constraint matrix, u is the upper bound of the solution

vector, and l is the corresponding lower bound.
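As a toy illustration of the general form in Equation 6-1, a small mixed-integer program can be solved with `scipy.optimize.milp` (assuming SciPy 1.9 or later). This is a stand-in instance with made-up coefficients, not the aircraft trajectory formulation:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Toy instance of the general form in Equation 6-1:
#   minimize c^T x  subject to  Cx <= b,  0 <= x <= 10,  x integer.
# (Coefficients here are illustrative, not the trajectory problem.)
c = np.array([-1.0, -2.0])                                     # maximize x1 + 2*x2
cons = LinearConstraint(np.array([[1.0, 1.0]]), -np.inf, 3.5)  # x1 + x2 <= 3.5
res = milp(c, constraints=cons, integrality=np.ones(2), bounds=Bounds(0, 10))
# res.x holds the optimal integer solution and res.fun the objective value
```

The integrality flags are what allow obstacle-avoidance logic to be encoded, typically through big-M constraints attached to binary variables.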

The following formulation of the MILP problem for vehicle navigation is a simplified

version for a two-dimensional vehicle [91], but can be extended to three dimensions

[90]. The goal of the trajectory planning problem is to compute a control history

\{u_{k+i} \in \mathbb{R}^2 : i = 0, 1, \ldots, L-1\} that defines a trajectory \{x_{k+i} \in \mathbb{R}^2 : i = 1, \ldots, L\}.

The optimization seeks to find a minimum-time trajectory that terminates at a specified

goal location x_g. This minimum-time, terminally-constrained problem can be formulated


4.1 Introduction

Chapter 3 described a method for extracting planar features from a highly structured

environment. These features will be used in Chapter 5 to incrementally construct an

environment map with a filtering technique known as Simultaneous Localization and

Mapping (SLAM). The SLAM task is a state estimation approach in which the state

of the system is a combination of the vehicle state, defined in terms of position and

orientation, and the state of the map, designated by the position and orientation of

environmental landmarks.

State estimation is commonly addressed as a filtering problem in which estimates

are calculated recursively by incrementally incorporating sensor information [79]. This

chapter describes several filters that follow a similar predictor-corrector structure depicted

in Figure 4-1. The filtering process begins with the prediction of a noisy estimate of the

system state according to a state transition model. Measurements of the system are predicted

as if the true system state is equal to the state prediction. The measurement prediction

is compared to an actual, noisy system measurement and the state estimate is updated

according to a weighted measurement residual. The weight of the measurement residual

represents the confidence of the measurement with respect to the model-based state


Two filters that form the foundation of the majority of SLAM solutions are the

Kalman filter [80] and particle filter [82]. This chapter describes the general form of these

filters and several variants that will be used in the SLAM filtering approaches described in

Chapter 5.

4.2 The Kalman Filter

The Kalman filter is an estimator for linear systems that yields minimum mean

square error (MMSE) estimates. Kalman filtering estimates the state x \in \mathbb{R}^n of a dynamic

It should be noted that all segment lengths in these Dubins set calculations that

represent angular distance are constrained to be on the interval [0, 27). An example of the

Dubins set is shown in Figure 6-7.


1.5 ... ..x.
1 4 '" '


-1- q*()
-1.5 -

O R SR :
-2- x LSR
-2.5- RLR ** **
3 I I I I I I I I I
-2 -1 0 1 2 3 4 5 6 7

Figure 6-7. The Dubins set of paths for two configurations. Notice that the LRL path
does not give a feasible solution for this set. The time-optimal path is denoted
q*(s).
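Two of the six Dubins words can be sketched geometrically: for LSL and RSR the two turn circles have equal radii, so the straight segment is parallel to the line joining the circle centers. This is an illustrative sketch under that assumption, not the full Dubins set computation:

```python
import math

def dubins_word_length(q0, q1, r, word="LSL"):
    """Length of the LSL or RSR word of the Dubins set between configurations
    q = (x, y, heading) with minimum turn radius r.  The remaining words
    (LSR, RSL, RLR, LRL) need separate tangent constructions."""
    x0, y0, t0 = q0
    x1, y1, t1 = q1
    s = 1.0 if word == "LSL" else -1.0           # +1: left turns, -1: right turns
    c0 = (x0 - s * r * math.sin(t0), y0 + s * r * math.cos(t0))  # first turn center
    c1 = (x1 - s * r * math.sin(t1), y1 + s * r * math.cos(t1))  # second turn center
    dx, dy = c1[0] - c0[0], c1[1] - c0[1]
    straight = math.hypot(dx, dy)                # tangent segment length
    phi = math.atan2(dy, dx)                     # heading along the tangent
    arc0 = (s * (phi - t0)) % (2 * math.pi)      # turn from t0 onto the tangent
    arc1 = (s * (t1 - phi)) % (2 * math.pi)      # turn from the tangent onto t1
    return r * (arc0 + arc1) + straight
```

The time-optimal planar path is then the minimum over all six word lengths, with arc angles kept on [0, 2π) as noted above.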

6.5.1 Extension to Three Dimensions

The Dubins set determines the shortest curvature-constrained path between two

configurations in a planar environment. Aircraft are not constrained to in-plane motion

so altitude variations must be included to properly characterize the vehicle kinematics.

A recent application of the Dubins set to three dimensions assumes a linear variation

of altitude limited by the maximum climb rate of the vehicle [97]. A method similar to

the MA (Section 6.2.2) plans an initial path which places the vehicle at a position and

orientation that forms a plane with the final position and orientation [98]. A 2D Dubins

path is then planned between the two in-plane configurations.

Because the dynamics of the vehicle will be incorporated after the path planning

task, the 3D Dubins solution employed in this dissertation assumes that changes in flight

* A method for rapid path planning in an unknown environment based on local and
global navigation tasks is developed.

* An environment coverage criterion is developed which does not require discretization of
the environment.

* An optimization problem is posed for tracking the planned path in minimal time.

subject to uncertainty due to sensor noise, modeling errors, and external disturbances.

Additionally, the constructed kinematically feasible path is, in general, not dynamically

feasible due to instantaneous changes in heading rate and climb rate. Therefore, the

trajectory will not exactly follow the planned path. To account for these uncertainties, the

local path collision checking algorithm ensures safe navigation by increasing the required

distance between the vehicle and obstacles.

The local collision checking algorithm represents the path as a cylinder of radius R.

The radius of the cylinder is a function of the various uncertainties in the problem. The

radius must be as large as the greatest variance in the vehicle's and obstacles' inertial

location. Additionally, the radius must account for deviations of the trajectory from the

prescribed path which can be determined from simulation of the optimal control problem

presented in Section 6.7.

Therefore, the local collision checking algorithm searches for intersections between

the piecewise linear path segments represented by finite length cylinders and obstacles

represented by bounded planes (see Figure ). This search is accomplished by first

finding the intersection point p_int between the central axis of the cylinder and the obstacle

using Equation 6-18. The intersection between the cylinder and plane is an ellipse with

center p_int. As shown in Figure , the major axis of the ellipse will be R/cos(\rho), where \rho is
the angle between the plane's normal vector and the cylinder's axis. The minor axis of the

ellipse has a value R. The orientation of the ellipse is determined by the major axis, which

is oriented in the plane defined by the plane's normal vector and the cylinder's axis. The

discretized ellipse can be tested for being interior to the plane boundary with Equation
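The axis lengths of the intersection ellipse described above can be sketched as follows; unit direction vectors are assumed, and this returns only the semi-axes implied by the cylinder radius and the normal/axis angle:

```python
import numpy as np

def intersection_ellipse_axes(R, axis, normal):
    """Semi-axes of the ellipse where a cylinder of radius R with unit
    direction `axis` intersects a plane with unit vector `normal`:
    minor axis R, major axis R / cos(rho), rho = angle(normal, axis)."""
    cos_rho = abs(float(np.dot(axis, normal)))
    if cos_rho < 1e-9:
        raise ValueError("cylinder axis lies in the plane; no bounded ellipse")
    return R / cos_rho, R
```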


6.6 Environment Coverage Criteria

Sections 6.4 and 6.5.2 described a method for constructing collision-free paths in a

cluttered environment. This section defines a method for selecting paths with the goal of

constructing a complete environment map in minimum time. For this situation, optimality









Figure 7-6. Example localization result for the airborne SLAM simulation. A) The
complete localization result with constructed map and estimated trajectory.
B) The constructed map after a single loop through the environment. C) The
localization result without the constructed map. D) A top-down view of the
actual vehicle trajectory, trajectory predicted from measured vehicle states
only, and the trajectory estimated by the SLAM algorithm.

measured trajectory is calculated from the noisy state measurements without updates
from the vision sensor information. The estimated trajectory quickly converges to the
ground-truth trajectory and is considerably more accurate than the model-based prediction
over the entire trajectory. This shows the versatility of the SLAM algorithm in that it can
be employed, if the structure of the environment is known, to provide accurate estimates of
vehicle location despite very noisy sensor measurements.

To my parents for their support, encouragement, and, most of all, love
for a son who never calls nearly enough.

To my brother for always setting the bar so high that I have to stretch
and for always reminding me that, someday, I need to graduate.

To Jessica for making the last long years in Gainesville fun, exciting,
and sometimes even more stressful than they should have been.

To Allie for making sure that I get up every morning
and put one foot in front of the other,
if only to go for a short walk.

4.3 The Particle Filter

The Kalman filter and its nonlinear variants (EKF,UKF,SR-UKF) operate under

the assumption that the probability distribution of the state estimate is Gaussian, or

able to be parameterized by a mean and covariance. The particle filter is a Monte Carlo

(MC) method for state estimation that represents the probability distribution as a set of

weighted random samples [82]. This formulation allows the estimation process to handle

non-Gaussian and multi-modal probability density functions.

Like the Kalman filtering approaches, the particle filter operates in a recursive

fashion. The state is predicted according to a process model and, subsequently, updated

by comparing sensor measurements to measurement predictions calculated with a

measurement model. However, while the Kalman filter propagates a normal distribution

through the nonlinear models, the particle filter propagates a set of weighted samples

called particles. Particle filters employ a set of N particles defined as

S_k = {x_k^i, w_k^i}   ∀ i = 1, . . . , N                                        (4-41)

where x_k^i are random samples of the state's probability density function p(x_k | x_{k−1}, u_k)

and w_k^i are associated importance weights calculated according to environmental

observations. The particles differ from the sigma points employed by the UKF in that

they are randomly selected and not deterministically calculated. Additionally, the particles

are weighted in order to maintain a general probability distribution while the sigma points

are used to reconstitute a Gaussian distribution.

The particle filtering algorithm, outlined in Algorithm 3, begins by drawing state

samples from a probability density defined as

x_k ∼ p(x_k | x_{k−1}, u_k) = f(x_{k−1}, u_k, q_k)                               (4-42)

which is the same nonlinear process model definition used in the EKF (Equation 4-10).
The notation a ∼ b denotes that the random variable a has the probability distribution b.
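A minimal bootstrap particle filter for a scalar system illustrates the predict-weight-resample cycle; the process and measurement models here are stand-in assumptions, not the aircraft models of Chapter 5:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, u, z, q_std=0.5, r_std=1.0):
    """One predict-weight-resample cycle of a bootstrap particle filter."""
    # Predict: sample each particle through the process model x_k = x_{k-1} + u + q
    particles = particles + u + rng.normal(0.0, q_std, size=particles.shape)
    # Weight: compare the measurement prediction h(x) = x against the observation z
    likelihood = np.exp(-0.5 * ((z - particles) / r_std) ** 2)
    weights = weights * likelihood
    weights /= np.sum(weights)
    # Resample when the effective sample size collapses
    n_eff = 1.0 / np.sum(weights ** 2)
    if n_eff < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# Track a drifting scalar state through noisy measurements
particles = rng.normal(0.0, 1.0, 200)
weights = np.full(200, 1.0 / 200)
truth = 0.0
for _ in range(20):
    truth += 1.0
    z = truth + rng.normal(0.0, 1.0)
    particles, weights = particle_filter_step(particles, weights, 1.0, z)

estimate = np.sum(weights * particles)
```

The weighted mean of the particles serves as the point estimate; a multi-modal posterior would simply appear as clusters of particles.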

global planner is employed, represents traditional planning strategies that do not account

for a directed sensor. The second case is the complete planning strategy which accounts

for a directed sensor by tracking vehicle poses and selecting paths which maximize the

variance in sensing vantages.

For the example result, each case was run for an equal amount of time. Since the

global planner selects more direct paths through the map, its resultant path traverses a

larger portion of the environment. The path that results from including the local planning

strategy follows the same trend as the global path, but is more circuitous due to the goal

of maximizing the variance in sensor vantage. Both paths follow the same trend because

the first global goal location, the north-east corner of the environment, is identical in both

cases. The benefit of this methodology can be seen in Figure 7-11 which compares both

methods in terms of total obstacle surface area viewed. The result shows that adding

the local planning strategy increases environment coverage despite reducing the overall

distance traversed to the first global goal location.


Figure 7-11. Comparison of environment coverage results. The amount of obstacle surface
area viewed during the transversal of paths given in Figure 7-10.


Figure 2-3. Epipolar geometry. The epipolar constraint describes the geometric
relationship between camera motion and the projection of a feature point onto
the image plane.

The epipolar constraint relates the image projections, fl and f2, of the

three-dimensional point, p, to the relative position, T, and orientation, R, of the two

camera locations. Since these three vectors are coplanar their scalar triple product is zero

as follows

f_2^T (T × R f_1) = 0                                                            (2-10)

The relative position and orientation of the camera are typically encoded as a single

entity known as the essential matrix, E ∈ ℝ^{3×3},

E = [T]_× R                                                                      (2-11)

where [T]_× is the skew-symmetric matrix such that [T]_× v = T × v for any vector v.
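A numeric sketch of Equations 2-10 and 2-11 with a synthetic rotation, translation, and feature point (all values illustrative):

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix so that skew(t) @ v equals np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Camera motion: a small rotation about the optical axis plus a translation
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
T = np.array([1.0, 0.2, 0.0])
E = skew(T) @ R                      # essential matrix, Equation 2-11

# Project a 3-D point into both (normalized) camera views
p = np.array([0.5, -0.3, 4.0])
f1 = p / p[2]                        # first view
p2 = R @ p + T
f2 = p2 / p2[2]                      # second view

residual = f2 @ E @ f1               # epipolar constraint, Equation 2-10
# residual vanishes up to floating-point error for any coplanar triple
```

Scaling T by any nonzero constant leaves the residual at zero, which is exactly the scale ambiguity discussed below.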

The core calculation of the SFM algorithm is estimation of the essential matrix using

a set of tracked feature points and the epipolar constraint given by Equation 2-10. A

method for estimating the essential matrix is described in Section 2.3.1. Once the essential

matrix is estimated, it can be decomposed into the relative rotation and translation of the

camera. The relative translation, however, can only be estimated up to a scale factor since

Equation 2-10 is homogeneous with respect to the essential matrix (Equation 2-11) and is

7.2 Plane Fitting Results
    7.2.1 Effects of Feature Point Noise
    7.2.2 Nonplanar Obstacles
7.3 SLAM Results
    7.3.1 Simulation Environment
    7.3.2 Localization Result
    7.3.3 State Estimation Error
    7.3.4 Filter Consistency
    7.3.5 Filter Efficiency
    7.3.6 Example SLAM Result
7.4 Trajectory Planning Results
    7.4.1 Environment Coverage
    7.4.2 Optimal Control Results

8 CONCLUSIONS

    8.1 Conclusion
    8.2 Future Directions

REFERENCES

BIOGRAPHICAL SKETCH


ẑ_k = Σ_i W_i^s Z_{k|k−1,i}                                                      (5-28)

P_{zz,k} = Σ_i W_i^c [Z_{k|k−1,i} − ẑ_k][Z_{k|k−1,i} − ẑ_k]^T + Σ_r              (5-29)

P_{xz,k} = Σ_i W_i^c [X_{k|k−1,i} − x̂_{k|k−1}][Z_{k|k−1,i} − ẑ_k]^T             (5-30)

3. Calculate the proposal distribution and draw N samples to define the new vehicle
state x_k, where the distribution is Gaussian with parameters:

x̂_k = x̂_{k|k−1} + K(z_k − ẑ_k)                                                  (5-31)

P_{x,k|k} = P_{x,k|k−1} − K P_{zz,k} K^T                                         (5-32)

and the Kalman gain is

K = P_{xz,k} (P_{zz,k})^{−1}                                                     (5-33)

4. Calculate sigma points for the observed landmarks and transform according to the
observation model:

M = [m_{k−1},  m_{k−1} ± (√((L + λ)P_{m,k−1}))_i]                                (5-34)

Y_i = h(x_k, M_i),   ∀ i = 0, . . . , 2L                                         (5-35)

P_{yy,k} = Σ_i W_i^c [Y_i − ŷ_k][Y_i − ŷ_k]^T                                    (5-36)

5. Update definitions of observed landmarks according to the UKF update rule:

K_{m,k} = P_{m,k|k−1} P_{yy,k}^T (P_{yy,k} P_{m,k|k−1} P_{yy,k}^T + Σ_r)^{−1}    (5-37)

m_k = m_{k−1} + K_{m,k}(z_k − ẑ_k)                                               (5-38)

P_{m,k|k} = (I − K_{m,k} P_{yy,k}) P_{m,k|k−1}                                   (5-39)

information which is limited to 30 Hz. The observation updates in the SLAM process

occur at 3 Hz to allow for sufficient parallax for robust triangulation in creating

three-dimensional feature points. The three-dimensional feature points are generated

from the triangulation of two-dimensional image points; therefore, noise can be reduced

by allowing for a larger camera translation between calculations. If the translation

is too large, however, two-dimensional feature points will leave the image plane and

no three-dimensional information will be calculated. This problem can be alleviated

by testing SFM algorithms on the actual flight data to determine the optimal rate

for triangulation or by instituting an adaptive step where triangulation occurs at the

maximum camera motion that retains the largest number of tracked feature points.

7.3.2 Localization Result

A localization result of the SLAM algorithm is shown in Figure 7-6. The result,

obtained with the URBPF approach, simulates a known environment by initializing the

landmarks to their actual locations. This is equivalent to using the SLAM framework

strictly for localizing the vehicle in the presence of noisy state measurements and noisy

sensor measurements. The noise values, defined in Equations 5-3 and 5-4, were given the

following standard deviations

σ_u = 5 ft/s        σ_{n1} = 0.1
σ_v = 5 ft/s        σ_{n2} = 0.1
σ_w = 5 ft/s        σ_{n3} = 0.1
σ_p = 0.05 rad/s    σ_d = 5 ft
σ_q = 0.05 rad/s
σ_r = 0.05 rad/s

The localization result (Figure 7-6) shows the filter is capable of producing very

good localization results in the presence of considerable noise. The estimated trajectory

given by the SLAM algorithm is defined by the particle with the largest weight and the

Figure 6-10. Interior to boundary test. A point that intersects with the infinite plane is
within the boundary defined by the convex hull H if the sum of the interior
angles is equal to 2π.

plane boundary. Chapter 3 defined each planar boundary by a convex hull H =

{h_1, h_2, . . . , h_n} ⊂ ℝ³ of n boundary points. Therefore, the test for determining if a
collision has occurred becomes determining if the intersection point pint is interior to the

three-dimensional boundary H. This determination is accomplished by inspecting the two

dimensional projections onto the planar landmark of p_int and H, denoted by p̄_int ∈ ℝ² and

H̄. As shown in Figure 6-10, p̄_int can be connected with every point in H̄. If the

sum of the angles between the connecting vectors is equal to 2π then the point is interior

to the boundary. This constraint can be written for n boundary points as

Σ_{i=1}^{n} cos^{−1} [ ((h_i − p_int) · (h_{i+1} − p_int)) / (|h_i − p_int| |h_{i+1} − p_int|) ] = 2π     (6-19)

where the value h_{n+1} is equal to h_1.

Local path collision detection

The above formulation for collision checking is well-suited for the global planning

task. It is a computationally efficient solution that can quickly search a large number of

lengthy paths for collision with a large environment map. The method assumes that the

vehicle is a point and that there are no uncertainties in vehicle and obstacle location. The

local planning strategy, however, requires a more detailed approach to account for the

various uncertainties in the problem. The current map and relative vehicle position are

state information that is subject to drift over time. In Chapter 5 this information will

be fused with onboard state estimates and the external environment measurements in a

Simultaneous Localization and Mapping (SLAM) algorithm to reduce drift in vehicle state

estimates and concurrently decrease map errors.

[66] Elfes, A., "Using Occupancy Grids for Mobile Robot Perception and Navigation,"
Computer, Vol. 22, Issue 6, 1989, pp. 46-57.

[67] Castellanos, J., Tardos, J., and Schmidt, G., "Building a Global Map of the
Environment of a Mobile Robot: The Importance of Correlations," IEEE Interna-
tional Conference on Robotics and Automation, Albuquerque, New Mexico, April,

[68] Surmann, H., Nuchter, A., and Hertzberg, J., "An Autonomous Mobile Robot
with a 3D Laser Range Finder for 3D Exploration and Digitalization of Indoor
Environments," Robotics and Autonomous Systems, Vol. 45, 2003, pp. 181-198.

[69] Liu, Y., Emery, R., Chakrabarti, D., Burgard, W., and Thrun, S., "Using EM
to Learn 3D Models with Mobile Robots," In Proceedings of the International
Conference on Machine Learning, 2001.

[70] Martin, C., and Thrun, S., "Online Acquisition of Compact Volumetric Maps with
Mobile Robots," In IEEE International Conference on Robotics and Automation,
Washington, DC, 2002.

[71] Jolliffe, I., "Principal Component Analysis," Springer, ISBN 0387954422, 2002.

[72] MacQueen, J.B., "Some Methods for Classification and Analysis of Multivariate
Observations," Proceedings of the 5th Berkeley Symposium on Mathematical Statis-
tics and Probability, 1967, pp. 281-297.

[73] Hammerly, G., and Elkan, C., "Learning the k in k-means," Neural Information
Processing Systems 15, 2004.

[74] Stentz, A., "Map-Based Strategies for Robot Navigation in Unknown
Environments," Proceedings of AAAI Spring Symposium on Planning with In-
complete Information for Robot Problems, 1996.

[75] Bailey, T. and Durrant-Whyte, H., "Simultaneous Localization and Mapping
(SLAM): Part I The Essential Algorithms," Robotics and Automation Magazine,
June, 2006.

[76] Guivant, J., and Mario, E., "Optimization of the Simultaneous Localization and
Map-Building Algorithm for Real-Time Implementation," IEEE Transactions on
Robotics and Automation, Vol. 17, No. 3, June 2001.

[77] Williams, S., Newman, P., Dissanayake, G., and Durrant-Whyte, H.F., "Autonomous
Underwater Simultaneous Localization and Map Building," Proceedings of the IEEE
International Conference on Robotics and Automation, San Francisco, CA, April,
2000.

[78] Bailey, T. and Durrant-Whyte, H., "Simultaneous Localization and Mapping
(SLAM): Part II State of the Art," Robotics and Automation Magazine, September,
2006.

6.4.1 Expansive-Spaces Tree Planner

The EST planner was developed to efficiently cover the space between qinit and qgoal

without needlessly exploring the entire Qfree. Instead of building a complete roadmap, like

the PRM approach, the EST incrementally constructs a tree T that is guided toward the

goal location. The tree construction process is outlined in Algorithm 4.

Algorithm 4 Expansive-Spaces Tree Construction
1: initialize tree T as a set of nodes V = {q_0} and edges E = ∅
2: for i = 1 to n number of tree expansions do
3:   randomly select q from T with probability π_T(q)
4:   select a random collision-free configuration q_rand near q
5:   if Δ finds a collision-free connection between q and q_rand then
6:     V ← V ∪ {q_rand}
7:     E ← E ∪ {(q, q_rand)}
8:   end if
9: end for
10: return T

The standard form of the EST constructs a tree Tinit rooted at qinit by incrementally

adding branches to the tree in the form of nodes and connecting arcs. The tree

construction process begins by randomly selecting a node q from the tree. A random

sample qrand is then selected from the neighborhood of q and the local planner Δ attempts

to connect the two configurations. If the connection is successful then qrand and the

connecting arc are added to the tree. This process, depicted in Figure 6-4, continues until a

termination criterion such as the maximum number of nodes in the tree or nodal proximity

to the goal is satisfied.

The most important aspect of the EST is the ability to direct the tree growth

according to a prescribed metric. Guided sampling of Qfree is achieved by assigning a

probability density function π_T to the nodes of the tree. The probability density function

biases the selection of tree nodes from which to extend the tree. This bias enables tree

growth toward specified regions such as areas that are sparsely populated by nodes or the

goal location.
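The construction loop of Algorithm 4 can be sketched as follows; the uniform node selection and the always-successful local planner are simplifying assumptions standing in for π_T and Δ:

```python
import random

random.seed(1)

def build_est(q_init, q_goal, n_expand=200, step=0.5, goal_tol=0.6):
    """Grow an expansive-spaces tree from q_init toward q_goal in free 2-D space."""
    nodes = [q_init]
    edges = []
    for _ in range(n_expand):
        # Select a node to extend (uniform here; a real EST biases via pi_T)
        q = random.choice(nodes)
        # Sample a nearby configuration (collision-free by assumption)
        q_rand = (q[0] + random.uniform(-step, step),
                  q[1] + random.uniform(-step, step))
        # Local planner Delta: always succeeds in this obstacle-free sketch
        nodes.append(q_rand)
        edges.append((q, q_rand))
        # Terminate on proximity to the goal
        if (q_rand[0] - q_goal[0]) ** 2 + (q_rand[1] - q_goal[1]) ** 2 < goal_tol ** 2:
            break
    return nodes, edges

nodes, edges = build_est((0.0, 0.0), (3.0, 3.0))
# every node except the root enters the tree through exactly one edge
```

Biasing the node-selection probability toward sparsely populated regions, or toward the goal, is what steers the tree in the full algorithm.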

number of landmarks that must be stored and incorporated into the estimation process.

Additionally, the geometric landmarks are easily distinguishable which aids the association

of sensor observations with map features.

This chapter builds on the overview of recursive filters that are important to the

SLAM estimation process presented in Chapter 4. Section 5.2 introduces the vehicle and

measurement models developed for the presented SLAM formulation. Section 5.3 describes

three specific filters that were investigated in relation to the airborne SLAM problem.

Lastly, Sections 5.4 and 5.5 describe the data association process and map management

procedures employed to incrementally incorporate sensed geometric primitives into a

global map.
5.2 Airborne SLAM Models

The goal of the airborne SLAM algorithm is to concurrently estimate the state of an

aircraft, x ∈ ℝ⁶, defined by position and attitude, and a relative map consisting of j static

environmental landmarks m ∈ ℝ^{6j}. The landmarks in this formulation are distinct planar

features defined by a centroidal location and a unit normal direction m_j = [p_j, n̂_j]^T.

As described in Chapter 4, state estimation requires the definition of process and

measurement models that describe the system of interest. The general process and

measurement model definitions for the SLAM problem are probabilistically defined as the

following distributions

x_k ∼ p(x_k | x_{k−1}, u_k) = f(x_{k−1}, u_{k−1}, q_{k−1})                       (5-1)

z_{k,j} ∼ p(z_{k,j} | x_k, m_j) = h(x_k, m_j, r_k)                               (5-2)

where f(·) is a nonlinear state transition model that calculates the current state of

the system x_k based on the previous state x_{k−1}, control input u_{k−1}, and process noise

q_{k−1} with covariance Σ_q. The nonlinear measurement model h(·) generates a sensor

measurement z_{k,j} for each landmark m_j based on the vehicle state x_k and measurement

noise r_k with covariance Σ_r.



ACKNOWLEDGMENTS

LIST OF TABLES

LIST OF FIGURES

ABSTRACT

1 INTRODUCTION

    1.1 Problem Statement
    1.2 Motivation
    1.3 Historical Perspective
    1.4 Contributions

2.1 Introduction
2.2 Structure from Motion Overview
    2.2.1 Pinhole Camera Model
    2.2.2 Feature-Point Tracking
    2.2.3 Epipolar Geometry
    2.2.4 Euclidean Reconstruction
2.3 Calculating Structure from Motion
    2.3.1 Eight-Point Algorithm
    2.3.2 Recursive Structure from Motion
2.4 Optical Flow State Estimation
2.5 Image Processing for Environment Mapping

3 ENVIRONMENT REPRESENTATION

3.1 Introduction
3.2 Environment Models
    3.2.1 Topological Maps
    3.2.2 Occupancy Maps
    3.2.3 Geometric Maps
3.3 Dimensionality Reduction
    3.3.1 Principal Components Analysis
    3.3.2 Data Clustering
    3.3.3 Plane Fitting Algorithm

Equation 2-1 shows that only the depth of the three-dimensional feature point is

required to resolve the location of the point from the image projection.

2.2.2 Feature-Point Tracking

Feature-point tracking is a well-established image processing operation and is

commonly a prerequisite for solving the SFM problem. Essentially, a feature-point

tracking algorithm tracks neighborhoods of pixels, referred to as feature points, through

a sequence of images. Feature points are defined as any distinguishable image region that

can be reliably located within an image, commonly corners and edges of objects.

An example of tracked feature points is shown in Figure 2-2.


Figure 2-2. Feature-point tracking result from an image sequence. A) Frame 1. B) Frame 2.
The Lucas-Kanade feature-point tracker [45] is a standard algorithm for small baseline

camera motion. The algorithm solves for the displacement of all feature points in an image

where the displacement of a feature point is measured in the horizontal and vertical image

coordinates denoted by μ and ν, respectively. For a series of two images, the displacement

of a feature point, d = [d_μ, d_ν]^T, can be described as the translation of a group of image

pixels over a window W as follows

I_1(f)|_W = I_2(f + d)|_W                                                        (2-2)
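As an illustration of Equation 2-2, the displacement d of a window can be recovered by brute-force search over a synthetic image pair (a stand-in for the gradient-based Lucas-Kanade solution):

```python
import numpy as np

rng = np.random.default_rng(0)

def match_window(I1, I2, center, half=4, search=6):
    """Find the displacement d minimizing the SSD between a window of I1 and I2."""
    r, c = center
    w1 = I1[r-half:r+half+1, c-half:c+half+1].astype(float)
    best, best_d = np.inf, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            w2 = I2[r+dr-half:r+dr+half+1, c+dc-half:c+dc+half+1].astype(float)
            ssd = np.sum((w1 - w2) ** 2)  # sum of squared differences
            if ssd < best:
                best, best_d = ssd, (dr, dc)
    return best_d

# Synthetic pair: the second frame is the first shifted by (2, 3) pixels
I1 = rng.integers(0, 255, (64, 64))
I2 = np.roll(np.roll(I1, 2, axis=0), 3, axis=1)
d = match_window(I1, I2, center=(32, 32))
# d recovers the (row, col) displacement of the window
```

The Lucas-Kanade tracker reaches the same answer far more cheaply by linearizing Equation 2-2 with image gradients, which is why it is preferred for small baselines.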

Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy



Adam S. Watkins

August 2007

Chair: Richard C. Lind, Jr.
Major: Mechanical Engineering

The desire to use Unmanned Air Vehicles (UAVs) in a variety of complex missions has

motivated the need to increase the autonomous capabilities of these vehicles. This research

presents autonomous vision-based mapping and trajectory planning strategies for a UAV

navigating in an unknown urban environment.

It is assumed that the vehicle's inertial position is unknown because GPS is

unavailable due to environmental occlusions or jamming by hostile military assets.

Therefore, the environment map is constructed from noisy sensor measurements taken

at uncertain vehicle locations. Under these restrictions, map construction becomes a

state estimation task known as the Simultaneous Localization and Mapping (SLAM)

problem. Solutions to the SLAM problem endeavor to estimate the state of a vehicle

relative to concurrently estimated environmental landmark locations. The presented work

focuses specifically on SLAM for aircraft, denoted as airborne SLAM, where the vehicle

is capable of six degree of freedom motion characterized by highly nonlinear equations

of motion. The airborne SLAM problem is solved with a variety of filters based on the

Rao-Blackwellized particle filter. Additionally, the environment is represented as a set

of geometric primitives that are fit to the three-dimensional points reconstructed from

gathered onboard imagery.

In general, the UT defines a set of 2L + 1 sigma points X and associated weights for

an L-dimensional random variable with mean μ and covariance P as follows

X_0 = μ_{k−1}                                   W_0^s = λ/(L + λ)
                                                W_0^c = λ/(L + λ) + (1 − α² + β)
X_i = μ_{k−1} + (√((L + λ)P_{k−1|k−1}))_i       W_i^s = W_i^c = 1/(2(L + λ))
X_{i+L} = μ_{k−1} − (√((L + λ)P_{k−1|k−1}))_i                                    (4-17)

λ = α²(L + κ) − L

where the subscript of the square root denotes the ith column of the matrix square root

and a, 3, and K are tuning parameters for controlling the placement and weighting of the

sigma points. The sigma points are transformed by the nonlinear function

Y_i = f(X_i),   ∀ i = 1, . . . , 2L + 1                                          (4-18)

and the new mean and covariance are the weighted average and weighted outer product of

the transformed points
μ_k = Σ_i W_i^s Y_i                                                              (4-19)

P_k = Σ_i W_i^c [Y_i − μ_k][Y_i − μ_k]^T                                         (4-20)
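The sigma-point construction and moment recovery of Equations 4-17 through 4-20 can be sketched as follows (the test function and moments are illustrative assumptions):

```python
import numpy as np

def unscented_transform(mu, P, f, alpha=0.1, beta=2.0, kappa=0.0):
    """Propagate mean mu and covariance P through f using 2L+1 sigma points."""
    L = len(mu)
    lam = alpha**2 * (L + kappa) - L
    S = np.linalg.cholesky((L + lam) * P)     # matrix square root, column-wise
    X = np.vstack([mu, mu + S.T, mu - S.T])   # 2L+1 sigma points
    Ws = np.full(2 * L + 1, 1.0 / (2.0 * (L + lam)))
    Wc = Ws.copy()
    Ws[0] = lam / (L + lam)
    Wc[0] = lam / (L + lam) + (1.0 - alpha**2 + beta)
    Y = np.array([f(x) for x in X])           # transformed points
    mu_y = Ws @ Y                             # weighted mean, Equation 4-19
    d = Y - mu_y
    P_y = (Wc[:, None] * d).T @ d             # weighted outer product, Equation 4-20
    return mu_y, P_y

mu = np.array([1.0, 0.5])
P = np.diag([0.01, 0.02])
mu_y, P_y = unscented_transform(mu, P, lambda x: np.array([x[0]**2, x[0] * x[1]]))
# for f_0 = x0^2 the UT mean captures the bias term: E[x0^2] = mu0^2 + var0
```

Unlike a first-order linearization, the transform recovers the variance-induced shift of the mean exactly for quadratic functions.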

4.2.3 The Unscented Kalman Filter

The unscented transform is applied to the EKF, replacing the prediction and update

equations of Equation 4-16, to create the UKF. This derivation is accomplished with

an augmented state vector x' ∈ ℝ^{n+q}, where q is the dimension of the noise vector q,

defined as

x' = [x^T  q^T]^T                                                                (4-21)

The process model, Equation 4-10, is rewritten to accept the augmented state vector

as input,

x_k = f(x_{k−1}, u_{k−1}, q_{k−1})                                               (4-22)

as the following MILP

u* = argmin_u J(b_g(u), t) = Σ_{i=1}^{L} t_i b_{g,i}

s.t.  x_g − K(1 − b_{g,i}) ≤ x_{k+i} ≤ x_g + K(1 − b_{g,i})                      (6-2)

      Σ_{i=1}^{L} b_{g,i} = 1

where b_g ∈ B^L is a set of binary decision variables, B = {0, 1}, that define the point where

the goal location is reached and K is a large positive number. When the goal is reached at

point x_{k+i} the value of b_{g,i} is set to 1; otherwise, a value of 0 makes the first constraint

inactive. The second constraint ensures that the goal location is reached only once. The

objective function J guarantees that the goal is reached in minimum time.

Vehicle dynamics, written in terms of position x and velocity x, are included as a

constraint on the optimization in the linearized form

[x_{k+1}; ẋ_{k+1}] = A [x_k; ẋ_k] + B u_k                                        (6-3)

where A is the vehicle state transition matrix, B is the vehicle input matrix, and u is

the vehicle control input. The vehicle dynamic model is subject to limits on velocity and

control input

ẋ_k ≤ ẋ_max,   u_k ≤ u_max
Lastly, collision avoidance is included in the optimization as a constraint stating that

trajectory points must not reside within an obstacle. The following form of the constraint

is written for rectangular obstacles

o_high − K[b_in,1, b_in,2]^T ≤ x_{k+i} ≤ o_low + K[b_in,3, b_in,4]^T

Σ_{j=1}^{4} b_in,j ≤ 3

where K is a large positive value, b_in ∈ B^4 is a binary vector that must have at least one

active constraint (b_in,j = 0) to be outside a given obstacle, and the obstacle upper right

Figure 6-9. Collision detection for the global path. The collision detection algorithm
searches for any intersections between the piecewise linear paths of a tree
planner and the planar environment obstacles.

The first step of the collision detection algorithm is to search for intersections between

each piecewise linear segment of the path and all infinite planes in the current map. The

intersection point pint is calculated by

α = −(p_1 · n̂ + D) / (v · n̂)
                                                                                 (6-18)
p_int = p_1 + α v

where pi is the first point of the path segment, v is the vector of the path segment

originating at pi, fi is the normal direction of the plane, and D is the intercept of the

plane from the plane equation Ax + By + Cz + D = 0. If the scalar value α is on the

interval [0, 1] then an intersection exists along the finite path segment. Alternatively, if

the denominator v · n̂ is zero the path segment is parallel to the plane and may be

contained within it. These cases can be differentiated by checking if one of the end

points satisfies the plane equation Ax + By + Cz + D = 0.
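The segment-plane test of Equation 6-18, including the parallel case, can be sketched as follows (the plane and segment values are illustrative):

```python
import numpy as np

def segment_plane_intersection(p1, v, n, D, eps=1e-9):
    """Return the intersection of the segment p(a) = p1 + a*v, a in [0, 1],
    with the plane n . p + D = 0, or None if there is no crossing."""
    denom = np.dot(v, n)
    if abs(denom) < eps:
        # Segment parallel to the plane; contained only if p1 satisfies the plane
        return p1 if abs(np.dot(p1, n) + D) < eps else None
    a = -(np.dot(p1, n) + D) / denom
    if 0.0 <= a <= 1.0:
        return p1 + a * v
    return None

# Plane z = 2 (n = [0, 0, 1], D = -2); segment rising from z = 0 to z = 4
p_int = segment_plane_intersection(np.array([0.0, 0.0, 0.0]),
                                   np.array([1.0, 0.0, 4.0]),
                                   np.array([0.0, 0.0, 1.0]), D=-2.0)
# the segment crosses the plane at a = 0.5
```

The returned point would then be passed to the interior-to-boundary test to decide whether a collision actually occurred.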

Once the intersections between the path segments and the infinite planar obstacles

have been calculated a search for collisions can be completed. A collision exists only

if the intersection point for each segment-plane pair is contained within the specified

where [x, y, z]^T denotes the inertial position measured in a NED frame and [φ, θ, ψ]^T are the

vehicle Euler angles defining roll, pitch, and yaw. The errors in linear and angular velocity

qk are assumed to be Gaussian with a covariance

Σ_q = diag(σ_u², σ_v², σ_w², σ_p², σ_q², σ_r²)                                   (5-4)

5.2.2 Measurement Model

The measurement model h(.) maps planes defined in the inertial coordinate system

to a body-fixed definition so that predicted measurements z can be compared with actual

sensor readings z. The definition of a landmark mj consists of an inertially defined

centroid and normal direction m_j = [p_j, n̂_j]^T. The sensor observations consist of planes

defined by a normal direction n̂ and a perpendicular distance d from the plane to the

sensor. Therefore, the observation model h(.) is defined as

n̂_{k,j} = R_b n̂_j
                                                                                 (5-5)
d_{k,j} = n̂_{k,j} · R_b (p_j − p_k)

where Rb is the rotation matrix from the inertial frame to the body-fixed frame at time k

and p_k is the current vehicle position. The sensor-induced errors for the perceived normal

direction n̂ = [n_1, n_2, n_3]^T and perpendicular distance are characterized by

Σ_r = diag(σ_{n1}², σ_{n2}², σ_{n3}², σ_d²)                                      (5-6)
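A sketch of the measurement prediction of Equation 5-5 for a single planar landmark; the rotation convention and numeric values here are illustrative assumptions:

```python
import numpy as np

def predict_plane_measurement(R_ib, p_vehicle, p_landmark, n_landmark):
    """Map an inertially defined plane (centroid, unit normal) into the
    body frame as a normal direction and perpendicular distance."""
    n_body = R_ib @ n_landmark                       # rotated normal
    d = n_body @ (R_ib @ (p_landmark - p_vehicle))   # perpendicular distance
    return n_body, d

# Vehicle yawed 90 degrees, 10 ft from a wall whose normal points north
yaw = np.pi / 2
R_ib = np.array([[np.cos(yaw), np.sin(yaw), 0.0],
                 [-np.sin(yaw), np.cos(yaw), 0.0],
                 [0.0, 0.0, 1.0]])
n_hat, d = predict_plane_measurement(R_ib,
                                     p_vehicle=np.array([0.0, 0.0, 0.0]),
                                     p_landmark=np.array([10.0, 0.0, 0.0]),
                                     n_landmark=np.array([1.0, 0.0, 0.0]))
# d equals the 10 ft separation regardless of vehicle yaw,
# while the predicted normal rotates with the body frame
```

The innovation z − ẑ formed from this prediction is what drives the filter updates of Section 5.3.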

Algorithm 6 Trajectory Planner for Environment Coverage
1: repeat
2:   calculate covariance Σ_pos of all vehicle locations in the trajectory
3:   select the global goal location q_G that maximizes σ_pos
4:   plan greedy RRT to q_G using Algorithm 5
5:   select the shortest distance path q_G(s) from the global RRT
6:   select local goal location q_L on q_G(s) inside sensor field of view
7:   repeat
8:     extend exploratory RRT towards q_L using Algorithm 5
9:   until N desired candidate paths q(s) have been generated
10:  calculate sensor covariance Σ_sen along each q(s)
11:  select q(s) that maximizes σ_sen
12:  convert local path to trajectory using Equation 6-24
13:  execute path to gather sensor data and build environment map
14: until σ_sen,t+1 − σ_sen,t < tol_sen

The planning process begins with the determination of the vehicle's global goal

location q_G. The global goal is selected so that its addition to the vehicle position

history maximizes the variance of vehicle position Σ_pos. This process greedily drives the

vehicle to locations in the environment that have not been explored. For example, under

this strategy a vehicle starting in one corner of a rectangular environment will select a

goal location in the opposite corner. Therefore, the vehicle will cross the environment

diagonally performing a greedy search.
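The greedy goal-selection step can be sketched by scoring each candidate goal by the position variance obtained after appending it to the visited history (the candidate set and history are illustrative):

```python
import numpy as np

def select_global_goal(history, candidates):
    """Pick the candidate whose addition maximizes total position variance."""
    best_goal, best_score = None, -np.inf
    for q in candidates:
        pts = np.vstack([history, q])
        score = np.trace(np.cov(pts.T))  # sum of per-axis position variances
        if score > best_score:
            best_goal, best_score = q, score
    return best_goal

# Visited positions march along one diagonal of the environment
history = np.array([[0.0, 0.0], [50.0, 20.0], [100.0, 40.0]])
candidates = [np.array([120.0, 60.0]),
              np.array([400.0, 300.0]),
              np.array([60.0, 30.0])]
goal = select_global_goal(history, candidates)
# the far corner maximizes the spread of visited positions
```

Because the score rewards spread rather than distance to a frontier, the vehicle naturally crosses toward the least-visited corner of the environment.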

An RRT is planned from the initial vehicle configuration q_init to the goal location

q_G. The global RRT is extended in a greedy fashion in order to rapidly generate a path

solution to the goal location. The tree expansion is made greedy by sampling qG with a

high probability. The fact that other randomly generated configurations are sampled for

connection to the tree differentiates the greedy RRT from an artificial potential function

planner. This differentiation is important because it prevents the planning strategy from

becoming mired in a local minimum. The solution path is denoted q_G(s).

Next, the local goal configuration qL is selected. The configuration qL is found by

selecting the first point of intersection between the sensor field of view and q_G(s). A

specified distance from the intersection point along q_G(s) towards the interior of the field

the onboard camera in order to ensure that the planner incorporates the most recently

sensed environmental information. This ensures that the local path is collision free.


Figure 7-10. Example paths for environment coverage. A) Example result for the global

planning strategy. B) Example result for the complete planning strategy. C)

Top-down view of the global planning strategy result. D) Top-down view of

the complete planning strategy result.

In order to quantify the benefit of the overall planning strategy, Figure 7-10 presents

trajectory results for cases when the planning process is restricted to the global planner

and when the complete planning strategy is employed. The first case, where just the


Figure 7-9. An example result for the complete airborne SLAM solution using the
URBPF. A) The complete SLAM result with constructed map and estimated
trajectory. B) The constructed map after a single loop through the
environment. C) The SLAM result without the constructed map. D) A top
down view of the actual vehicle trajectory, trajectory predicted from measured
vehicle states only, and the trajectory estimated by the SLAM algorithm.

a region has been reached it has been sensed. This, however, is not correct for a directed

sensor such as an onboard camera. In this case, direct paths through the environment will

not provide the needed information to build a complete map.

The local planning strategy incorporates the sensor physics by tracking vehicle

poses and selecting paths which maximize the variance in sensing vantages. Therefore,

paths are selected according to the amount of new information that will be provided by

traversing the path. The local planning process occurs within the vision cone defined by

[40] Pfeiffer, F. and Johanni, R., "A Concept for Manipulator Trajectory Planning,"
IEEE Journal of Robotics and Automation, Vol. 3, No. 2, 1987, pp. 115-123.

[41] Slotine, J.-J. and Yang, S., "Improving the Efficiency of Time-Optimal
Path-Following Algorithms," IEEE Transactions on Robotics and Automation,
Vol. 5, No. 1, 1989, pp. 118-124.

[42] DeSouza, G. and Kak, A., "Vision for Mobile Robot Navigation: A Survey," IEEE
Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 2, February
2002.

[43] Ma, Y., Soatto, S., Kosecka, J., and Sastry, S., "An Invitation to 3-D Vision: From
Images to Geometric Models," Springer-Verlag New York, Inc., 2004.

[44] Oliensis, J., "A Critique of Structure-from-Motion Algorithms," Computer Vision
and Image Understanding, Vol. 80, pp. 172-214, 2000.

[45] Tomasi, C. and Kanade, T., "Shape and Motion from Image Streams Under
Orthography," International Journal of Computer Vision, Vol. 9, No. 2, 1992,
pp. 137-154.

[46] Prazenica, R., Watkins, A., Kurdila, A., Ke, Q., and Kanade T., "Vision-Based
Kalman Filtering for Aircraft State Estimation and Structure from Motion,"
AIAA Guidance, Navigation, and Control Conference and Exhibit, San Francisco,
California, August 2005.

[47] Longuet-Higgins, H.C., "A Computer Algorithm for Reconstructing a Scene from
Two Projections," Nature, Vol. 293, Sept. 1981, pp. 133-135.

[48] Hartley, R., "In Defense of the Eight-Point Algorithm," IEEE Transactions on
Pattern Analysis and Machine Intelligence, Vol. 19, No. 6, June 1997.

[49] Broida, T. and Chellappa, R., "Estimation of Object Motion Parameters from Noisy
Images," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 8,
No. 1, Jan. 1986, pp. 90-99.

[50] Matthies, L., Szeliski, R., and Kanade, T., "Kalman Filter-based Algorithms for
Estimating Depth from Image Sequences," International Journal of Computer
Vision, Vol. 3, 1989, pp. 209-236.

[51] Chiuso, A., Favaro, P., Jin, H., and Soatto, S., "Structure from Motion Causally
Integrated Over Time," IEEE Transactions on Pattern Analysis and Machine
Intelligence, Vol. 24, No. 4, April 2002.

[52] Thomas, J. and Oliensis, J., "Dealing with Noise in Multiframe Structure from
Motion," Computer Vision and Image Understanding, Vol. 76, No. 2, November
1999, pp. 109-124.

corner and lower left corner are given by ohigh and olow, respectively. The extension of this

constraint to three dimensions requires geometrically regular polygonal obstacles.

The above formulation for minimum time travel to a goal location is computationally

intractable for large-scale trajectories and high-dimensional systems with more complex

dynamic models and a greater number of obstacles. Therefore, the MILP trajectory

planning problem is implemented in a Receding Horizon (RH) fashion to force a

computationally efficient solution which, by construction, is not provably optimal. This is

true for all RH implementations because the problem is solved incrementally and does

not account for the full scope of the problem at once. The MILP solution, like most

optimization-based approaches, is sensitive to initial conditions and there is no guarantee

that the solution is globally optimal as opposed to locally optimal. Additionally, a vast

number of objective functions and constraints cannot be expressed linearly and thus fall

outside the umbrella of admissible MILP problems.

6.2.2 Maneuver Automaton

The MA motion planning strategy is a direct motion planning strategy which realizes

a computationally efficient solution by discretizing the system dynamics and controls

into a maneuver library. The maneuver library is a finite set of motion primitives that

can be concatenated to create a complete trajectory. The motion primitives consist of

a set of trim trajectories and maneuvers. The trim trajectories are time-parameterized

vehicle motions that are characterized by constant control input and body-axis velocities.

The maneuvers are finite time transitions that connect trim trajectories. The MA can be

depicted as a directed graph as shown in Figure 6-2.
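As a toy sketch of this structure (the trim names, transition times, and plan format below are invented for illustration and are not taken from the dissertation's vehicle model), the maneuver library can be stored as a directed graph whose nodes are trim trajectories and whose arcs carry the duration of the connecting maneuver:

```python
# Hypothetical three-trim maneuver automaton for constant-altitude flight.
TRIMS = {"steady_level", "left_turn", "right_turn"}

# maneuvers[(from_trim, to_trim)] = duration (s) of the finite-time transition
MANEUVERS = {
    ("steady_level", "left_turn"): 1.5,
    ("left_turn", "steady_level"): 1.5,
    ("steady_level", "right_turn"): 1.5,
    ("right_turn", "steady_level"): 1.5,
}

def trajectory_duration(plan):
    """Total time of a trajectory given as [(trim, coast_time), ...]:
    the coasting time spent in each trim plus the maneuver transition
    times between consecutive trims."""
    total = sum(coast for _, coast in plan)
    for (a, _), (b, _) in zip(plan, plan[1:]):
        if (a, b) not in MANEUVERS:
            raise ValueError(f"no maneuver connects {a} -> {b}")
        total += MANEUVERS[(a, b)]
    return total

plan = [("steady_level", 4.0), ("left_turn", 2.0), ("steady_level", 3.0)]
print(trajectory_duration(plan))  # 4 + 1.5 + 2 + 1.5 + 3 = 12.0
```

Concatenating motion primitives in this way reduces trajectory planning to a search over paths in the graph, which is the source of the MA's computational efficiency.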

The graph consists of a set of nodes and

connecting arcs. The nodes represent the trim trajectories that define equilibrium states

for the vehicle. For example, the three trim trajectory model shown could represent an

aircraft operating at constant altitude with one trim trajectory representing steady level

flight and the two other trim trajectories representing left and right turns. The maneuvers


2.1 Introduction

Computer vision is an attractive sensor option for autonomous vehicle navigation

tasks due to the wealth of information that can be inferred from digital imagery [42]. In

addition, the low cost, small payload requirements, and passive nature of vision have made

it the primary sensor for many small Unmanned Air Vehicles (UAVs) [4]. This study

assumes that computer vision is the only available onboard sensor capable of measuring

the surrounding environment.

Mapping an unknown environment with a moving, monocular camera requires

that three-dimensional information be inferred from a sequence of two-dimensional

images. This process of three-dimensional reconstruction from imagery is an actively

researched area in the image processing community commonly referred to as Structure

from Motion (SFM) [43, 44]. This chapter presents an overview of SFM in Section 2.2.

Specific solutions to the SFM problem are then discussed in Sections 2.3.1 and 2.3.2.

Section 2.4 describes a vehicle state estimation task similar to SFM which calculates

vehicle velocity states. Subsequently, Chapters 3 and 5 describe how the vision-based state

estimates and environmental structure information are employed in the robotic mapping


2.2 Structure from Motion Overview

The class of algorithms known as SFM endeavors to reconstruct the pose of a moving

camera as well as the three-dimensional structure of its environment. Calculating the

motion of the camera typically begins with an operation known as feature-point tracking,

as described in Section 2.2.2, which calculates correspondences between points of interest

in successive images. Section 2.2.3 demonstrates that by exploiting epipolar geometry

between correspondence pairs the relative rotation and translation between the two

camera views can be estimated. The calculated motion of the camera can then be used

Figure 6-13. Environmental coverage metric. Environment coverage is quantified by the
variance of the position of the vehicle and sensing locations over the entire
path. Sensing locations are defined by the vector originating from the end of
the sensor field of view oriented back towards the sensor.

where Σpos is the sample covariance of all vehicle position vectors. The same equation

can compute the variance of the sensor pose vectors σ²sen given the covariance of the

sensor pose vectors Σsen.

The variance of the vehicle position σ²pos is used to determine the global goal location

by selecting the point in the environment that, when added to the current vehicle

location history, maximizes σ²pos. This calculation is computed on a coarse grid of the

vehicle environment for the purpose of computational efficiency. The global path planner

constructs a path to this location with a greedy search and a coarse discretization of the


The local path is determined by maximizing the variance of the sensor pose vectors

σ²sen over all candidate paths. As opposed to the quick, greedy search used for global path

planning, the local sampling-based strategy generates a tree that more completely explores

the environment. When a specified number of candidate paths have been found, the path that

maximizes σ²sen is selected.

The Dubins paths for the entire Dubins set are:

1. Dubins path LSL

ν1 = −ψi + tan⁻¹[(cos ψg − cos ψi)/(d + sin ψi − sin ψg)]

ν2 = √(2 + d² − 2 cos(ψi − ψg) + 2d(sin ψi − sin ψg))

ν3 = ψg − tan⁻¹[(cos ψg − cos ψi)/(d + sin ψi − sin ψg)]

2. Dubins path RSR

ν1 = ψi − tan⁻¹[(cos ψi − cos ψg)/(d − sin ψi + sin ψg)]

ν2 = √(2 + d² − 2 cos(ψi − ψg) + 2d(sin ψg − sin ψi))

ν3 = −ψg + tan⁻¹[(cos ψi − cos ψg)/(d − sin ψi + sin ψg)]

3. Dubins path LSR

ν2 = √(−2 + d² + 2 cos(ψi − ψg) + 2d(sin ψi + sin ψg))

ν1 = −ψi + tan⁻¹[(−cos ψi − cos ψg)/(d + sin ψi + sin ψg)] − tan⁻¹(−2/ν2)

ν3 = −ψg + tan⁻¹[(−cos ψi − cos ψg)/(d + sin ψi + sin ψg)] − tan⁻¹(−2/ν2)

4. Dubins path RSL

ν2 = √(d² − 2 + 2 cos(ψi − ψg) − 2d(sin ψi + sin ψg))

ν1 = ψi − tan⁻¹[(cos ψi + cos ψg)/(d − sin ψi − sin ψg)] + tan⁻¹(2/ν2)

ν3 = ψg − tan⁻¹[(cos ψi + cos ψg)/(d − sin ψi − sin ψg)] + tan⁻¹(2/ν2)

5. Dubins path RLR

ν2 = cos⁻¹[(6 − d² + 2 cos(ψi − ψg) + 2d(sin ψi − sin ψg))/8]

ν1 = ψi − tan⁻¹[(cos ψi − cos ψg)/(d − sin ψi + sin ψg)] + ν2/2

ν3 = ψi − ψg − ν1 + ν2

6. Dubins path LRL

ν2 = cos⁻¹[(6 − d² + 2 cos(ψi − ψg) + 2d(sin ψg − sin ψi))/8]

ν1 = −ψi + tan⁻¹[(cos ψg − cos ψi)/(d + sin ψi − sin ψg)] + ν2/2

ν3 = ψg − ψi − ν1 + ν2








Figure 6-12. Motion planning for coverage strategies. Traditional motion planning
strategies for environmental coverage are not well suited for vehicles with
dynamic constraints and directional sensors. A) Robotic coverage tasks
assume the environment is known and decompose the free space into
manageable search spaces. B) Sensor-based coverage strategies often assume
omnidirectional sensors.

robot capable of nonholonomic motion [100, 101] (see Figure 6.6). Therefore, current

sensor-based coverage algorithms do not address the problems inherent in mapping a

three-dimensional environment with a dynamically constrained vehicle equipped with

a directional sensor.

The coverage methodology used in this dissertation affects both the global and local

planning strategies. The strategy is based on tracking all locations to which the vehicle

has traveled and the attitudes from which individual locations have been sensed. The

goal of the planner is to maximize the variance of these individual quantities. Therefore,

the discrete vehicle locations and the vectors defining location and orientation of sensed

regions are tracked over time as shown in Figure 6-13. The vectors defining sensed regions

are constructed from the attitude of the vehicle during the sensing operation and the

center of the field of view of the sensor.

The total variance of the vehicle position is calculated as the trace of the covariance

of the vehicle positions Σpos as given by

σ²pos = tr(Σpos) (6-20)
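A minimal sketch of the metric of Equation 6-20, assuming the visited positions are stored as rows of an array (the function name and the sample data are illustrative, not the dissertation's code):

```python
import numpy as np

def total_position_variance(positions):
    """Coverage metric of Equation 6-20: the trace of the sample
    covariance of the visited position vectors (rows of `positions`)."""
    sigma = np.cov(np.asarray(positions, dtype=float), rowvar=False)
    return float(np.trace(sigma))

# Spread-out visits score higher than repeated visits near the same spot.
loop = [[0, 0, 10], [100, 0, 10], [100, 100, 10], [0, 100, 10]]
hover = [[0, 0, 10], [1, 0, 10], [0, 1, 10], [1, 1, 10]]
print(total_position_variance(loop) > total_position_variance(hover))  # True
```

The same computation applied to the sensor pose vectors yields σ²sen, so a single helper serves both the global and local planning metrics.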




[92] Henshaw, C., "A Unification of Artificial Potential Function Guidance and Optimal
Trajectory Planning," Advances in the Astronautical Sciences, Vol. 121, 2005, pp.

[93] Barraquand, J. and Latombe, J. C., "Nonholonomic Multibody Mobile Robots:
Controllability and Motion Planning in the Presence of Obstacles," Algorithmica,
Vol. 10, 1993, pp. 121-155.

[94] Frazzoli, E., Dahleh, M., and Feron, E., "Real-Time Motion Planning for Agile
Autonomous Vehicles," AIAA Guidance, Navigation, and Control Conference and
Exhibit, Denver, CO, August 2000.

[95] Tang, Z. and Ozguner, U., "Motion Planning for Multitarget Surveillance with
Mobile Sensor Agents," IEEE Transactions on Robotics, Vol. 21, No. 5, October

[96] Laumond, J.-P., Jacobs, P., Taix, M., and Murray, R., "A Motion Planner for
Nonholonomic Mobile Robots," IEEE Transactions on Robotics and Automation,
Vol. 10, No. 5, October 1994.

[97] Hwangbo, M., Kuffner, J., and Kanade, T., "Efficient Two-phase 3D Motion
Planning for Small Fixed-wing UAVs," IEEE International Conference on Robotics
and Automation, Roma, Italy, April 2007.

[98] Shanmugavel, M., Tsourdos, A., Zbikowski, R., and White B.A., "3D Dubins Sets
Based Coordinated Path Planning for Swarm UAVs," AIAA Guidance, Navigation,
and Control Conference and Exhibit, Keystone, Colorado, August 2006.

[99] Choset, H., "Coverage for Robotics - A Survey of Recent Results," Annals of
Mathematics and Artificial Intelligence, Vol. 31, No. 1-4, 2001, pp. 113-126.

[100] Choset, H., Walker, S., Eiamsa-Ard, K., and Burdick, J., "Sensor-Based Exploration:
Incremental Construction of the Hierarchical Generalized Voronoi Graph," The
International Journal of Robotics Research, Vol. 19, No. 2, February 2000, pp.

[101] Taylor, C. J. and Kriegman, D. J., "Vision-Based Motion Planning and Exploration
Algorithms for Mobile Robots," IEEE Transactions on Robotics and Automation,
Vol. 14, No. 3, June 1998.

[102] Holmstrom, K., "TOMLAB - An Environment for Solving Optimization Problems in
MATLAB," Proceedings of the Nordic MATLAB Conference '97, Stockholm, Sweden,

[103] Bailey, T., Nieto, J., and Nebot, E., "Consistency of the FastSLAM Algorithm,"
IEEE International Conference on Robotics and Automation, 2006.

[104] Bailey, T., Nieto, J., and Nebot, E., "Consistency of the EKF-SLAM Algorithm,"
IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006.



Figure 6-11. Collision detection for the local path. The local collision detection algorithm
incorporates a safety region which encompasses the path segments radially.
A) The path segment represented by a cylinder with radius R intersects an
obstacle described as a plane with normal vector n̂. B) A two-dimensional
view rotated orthogonal to the plane containing the major axis of the
intersecting ellipse.

cannot be proven because initially the environment is entirely unknown. Therefore, an

antagonistic example can be constructed for any planner to provide a non-optimal result.

The robotic coverage problem refers to optimal coverage of an environment with

known obstacles [99]. Solutions to the robotic coverage problem generally involve

decomposing the free space of the environment into connected regions. Each region of

the environment is sensed with a back and forth lawnmower, or boustrophedon, pattern as

shown in Figure 6.6. The boustrophedon path is optimal for a two-dimensional holonomic

vehicle under the assumption that passing through a small region of the environment is

equivalent to sensing the region. The space decomposition process reduces the planning

task to finding the optimal route for which to connect the individual paths.

When the environment is unknown and the planning task must be directed by

gathered sensor information, the problem is called sensor-based coverage or sensor-based

exploration. Sensor-based coverage tasks typically operate under several strict assumptions

including a two-dimensional environment, an omnidirectional range sensor, and a

Algorithm 5 Rapidly-Exploring Tree Construction
1: initialize tree T as a set of nodes V = {q0} and edges E = ∅
2: for i = 1 to n number of tree expansions do
3: randomly select qrand from Qfree with probability distribution πQ
4: assign qnear to be the closest neighbor in T to qrand
5: find path P between qnear and qrand with Λ
6: extract sub-path p along P terminating at qnew a distance δ from qnear
7: if p is collision-free then
8: V ← V ∪ {qnew}
9: E ← E ∪ {(qnear, qnew)}
10: end if
11: end for
12: return T

6-5, continues until a termination criterion such as the maximum number of nodes in the

tree or nodal proximity to the goal is satisfied.
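A minimal runnable sketch of Algorithm 5 follows, with two simplifying assumptions: a straight-line extension stands in for the local planner Λ, and a user-supplied sampler stands in for drawing qrand from Qfree with distribution πQ:

```python
import math
import random

def rrt(q_init, n, delta, collision_free, sample):
    """Sketch of Algorithm 5. Straight-line extension replaces the local
    planner Lambda; `sample` replaces sampling from Q_free with pi_Q."""
    V, E = [q_init], []
    for _ in range(n):
        q_rand = sample()
        q_near = min(V, key=lambda q: math.dist(q, q_rand))  # nearest node in T
        d = math.dist(q_near, q_rand)
        if d == 0.0:
            continue
        step = min(delta, d)                                 # clip growth to delta
        q_new = tuple(a + step * (b - a) / d
                      for a, b in zip(q_near, q_rand))
        if collision_free(q_near, q_new):                    # keep only free edges
            V.append(q_new)
            E.append((q_near, q_new))
    return V, E

random.seed(0)
V, E = rrt((0.0, 0.0), n=200, delta=1.0,
           collision_free=lambda a, b: True,                 # obstacle-free toy world
           sample=lambda: (random.uniform(0, 10), random.uniform(0, 10)))
print(len(V), len(E))  # the tree has one more node than edges: the root
```

In a full implementation the `collision_free` predicate would query the incrementally constructed map, and termination would also test nodal proximity to the goal.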




Figure 6-5. Rapidly-Exploring Tree planning. The RRT involves constructing a tree T by
extending the tree toward a randomly sampled configuration qrand in Qfree.
The tree is extended a distance δ toward qrand, terminating at qnew.

Guided sampling can be incorporated into the RRT in a similar fashion to the EST.

To guide sampling, qrand is sampled according to a probability distribution πQ. Therefore,

the main difference between the EST and RRT planners is the mechanism for guiding the

sampling. The EST guides the sampling by selecting attractive nodes of the tree to extend

while the RRT expands by selecting regions in the environment to move towards. Though


5.1 Introduction

Trajectory planning for a vehicle navigating an unknown environment motivates the

need to incrementally construct a spatial map of the vehicle's surroundings. Motion

planning strategies that react only to current environmental sensor measurements

produce inefficient results since no knowledge outside the current sensor field of view

is incorporated into the planning process [74]. Time-dependent missions, such as target

acquisition and tracking, require a map to produce time efficient solutions. Additionally,

the majority of motion planning strategies presuppose a known environment map [26].

Therefore, map construction becomes a vitally important task for efficient motion


This research is concerned with navigation in GPS-denied environments, meaning

the vehicle's inertial location is not known. Therefore, navigation must rely

on relative vehicle position estimates which are subject to drift due to model uncertainties

and external disturbances. Additionally, sensor measurements of the external environment

are susceptible to noise. The combination of errors incurred from uncertain vehicle

location and noisy sensor measurements produces spatially inconsistent maps as shown in

Figure 5-1.

The issues incurred by placing a vehicle at an unknown location in an unknown

environment and requiring it to incrementally build a spatially consistent map while

concurrently computing its location relative to the map are known as the Simultaneous

Localization and Mapping (SLAM) problem [75]. Research into the SLAM problem

has produced a variety of solutions based on probabilistic recursive algorithms with

implementations on a variety of platforms including ground vehicles, aircraft, and

underwater vehicles [19, 76, 77]. All successful SLAM solutions follow a similar overall

estimation strategy as depicted in Figure 5-2.

The Dubins set can be calculated analytically for any pair of configurations qinit

and qgoal. These general configurations are first transformed to the coordinate system

pictured in Figure 6-6. The transformed configurations have the form qinit = [0, 0, ψi]T and

qgoal = [d, 0, ψg]T where qinit is located at the origin and qgoal is located on the x-axis.


Figure 6-6. The coordinate system for calculating the paths in the Dubins set D. The
initial configuration qinit is translated to the origin and rotated so that qgoal
is located at (d, 0) on the x-axis.

The Dubins paths are the concatenation of three canonical transformations scaled so

that ρmin = 1 and the vehicle moves with unit speed. This formulation allows the general

solution to be scaled by the vehicle's ρmin because the maximum heading rate is equal to

1/ρmin. The transformations are written in terms of the distance of motion ν along the path
segments for an arbitrary configuration q = [x, y, ψ]T as follows

Lν(x, y, ψ) = [x + sin(ψ + ν) − sin ψ, y − cos(ψ + ν) + cos ψ, ψ + ν]T

Rν(x, y, ψ) = [x − sin(ψ − ν) + sin ψ, y + cos(ψ − ν) − cos ψ, ψ − ν]T

Sν(x, y, ψ) = [x + ν cos ψ, y + ν sin ψ, ψ]T


The total length of a Dubins path is the sum of the individual segment lengths

ν1, ν2, ν3 that transform the initial configuration [0, 0, ψi]T to the final configuration

[d, 0, ψg]T,

L = ν1 + ν2 + ν3 (6-11)
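As one concrete instance, the segment lengths for the LSL word can be evaluated directly from the closed-form Dubins solution in the transformed frame of Figure 6-6; this sketch assumes ρmin = 1 and the standard formulas (it is illustrative, not the dissertation's implementation):

```python
import math

def dubins_lsl(psi_i, psi_g, d):
    """Segment lengths (nu1, nu2, nu3) for the LSL word with rho_min = 1,
    assuming the standard closed-form solution for the Dubins set."""
    theta = math.atan2(math.cos(psi_g) - math.cos(psi_i),
                       d + math.sin(psi_i) - math.sin(psi_g))
    nu1 = (-psi_i + theta) % (2.0 * math.pi)          # first left arc
    nu2 = math.sqrt(2.0 + d**2 - 2.0 * math.cos(psi_i - psi_g)
                    + 2.0 * d * (math.sin(psi_i) - math.sin(psi_g)))  # straight leg
    nu3 = (psi_g - theta) % (2.0 * math.pi)           # final left arc
    return nu1, nu2, nu3

# Straight-ahead case: equal zero headings reduce L = nu1 + nu2 + nu3 to d.
print(sum(dubins_lsl(0.0, 0.0, 5.0)))  # 5.0
```

Evaluating all six words of D this way and taking the minimum total length yields the optimal Dubins path q*(s) for a given pair of configurations.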


I would like to thank my advisor Dr. Rick Lind for his guidance, criticisms, and

motivation throughout my graduate school experience. I would like to express gratitude

to my committee members for their patience and insight during my doctoral studies. I

would also like to extend thanks to my lab fellow researchers in the Flight Controls Lab

including Mujahid Abdulrahim, Eric Branch, Ryan Causey, Daniel Grant, and Joe Kehoe.

Thanks to Mujahid, Eric, and Ryan for dealing with my nearly constant disruptions of the

workplace and generally sarcastic, downbeat attitude. Thanks go to Daniel "Tex" Grant

for handling my verbal abuse so well and teaching me how to shoot a gun. Thanks to Joe

Kehoe for his thousands of research related ideas, including the three that actually worked.

7.4.2 Optimal Control Results

The NLP presented in Chapter 6 was used to convert Dubins paths to aircraft

trajectories. The objective function of the NLP minimizes the distance between the

trajectory and the given paths. Therefore, the NLP defines a control strategy for optimal

path following. Results of the optimal control problem are presented in Figure 7-12.


Figure 7-12. Resultant trajectories from the path tracking optimization. A) Results of the
optimal control strategy used to convert Dubins paths to aircraft trajectories.
B) Top-down view of the optimal path following result.

The optimal control results are generated by supplying the Dubins path as the initial

condition to the optimization. The Dubins paths specify initial conditions for the states V,

γ, ψ, x, y, h, and φ. The angle of attack, α, was calculated for each segment of the Dubins

path by calculating the trim angle of attack according to the vehicle's equations of motion

(Equation 6-21).

The optimal control results show that the aircraft is capable of tracking the

Dubins paths in the lateral direction. The longitudinal directional tracking, however,

produces poorer results. It was found that the optimization did not always converge,

generating poor results for aggressive maneuvers. Additionally, the optimizations required

computation time on the order of 20 to 60 seconds. For these reasons, it is believed that

an MA strategy would provide a better solution to incorporating dynamics into the

planning strategy.


1.1 Problem Statement

The problem addressed in this research is the mapping of unknown, urban

environments by an Unmanned Air Vehicle (UAV) equipped with a vision sensor. Map

building is a vital task for autonomous vehicle navigation in that it provides information

enabling vehicle localization, obstacle avoidance, and high-level mission planning. The

presented approach uses information gathered by an onboard camera to construct a

three-dimensional environment map while accounting for sensor noise and uncertainty in

vehicle state estimates. Additionally, motion planning strategies are developed to generate

trajectories which maximize environment coverage and avoid collisions with obstacles. The

combined map building and motion planning algorithms yield a solution for autonomous

flight through urban environments. The overall concept of this research is depicted in

Figure 1-1 where a UAV navigating at low altitude uses sensor information to construct a

three-dimensional map and, subsequently, plan a trajectory.


Figure 1-1. Map construction by an autonomous UAV. A) Missions envisioned for UAVs
involve safe navigation in cluttered urban environments. B) Navigation in
cluttered environments will require vehicles capable of autonomously building
a map from sensor data and, subsequently, planning safe trajectories.

so that the sigma points in Equation 4-17 are drawn from a probability distribution with

a mean value

x̄a,k−1 = [x̂Tk−1|k−1  0T]T (4-23)

and covariance

Pa,k−1 = [[Pk−1|k−1, Pxv,k−1|k−1], [PTxv,k−1|k−1, Q]] (4-24)

where Pxv is the cross correlation matrix between the state errors and the process noise.

The measurement model, Equation 4-11, is rewritten so that measurement noise is

restricted to be additive as was assumed in the linear Kalman filter

zk = h'(xk−1) + rk−1 (4-25)

The resulting UKF filter equations begin with the prediction step where sigma points

Xi, as given in Equation 4-17, are drawn from the distribution defined by Equations 4-23

and 4-24. These sigma points are transformed by the nonlinear process model with zero

process noise

Xi,k|k−1 = f(Xi,k−1|k−1, uk−1, 0)  ∀ i = 1, ..., 2L + 1 (4-26)

The mean and covariance of the sigma points are calculated from Equations 4-19 and

4-20 as

x̂k|k−1 = Σi wi Xi,k|k−1 (4-27)

Pk|k−1 = Σi wi [Xi,k|k−1 − x̂k|k−1][Xi,k|k−1 − x̂k|k−1]T (4-28)

Next, observations are predicted by transforming the updated sigma points with the

measurement model

Zi,k = h'(Xi,k|k−1, uk−1, 0)  ∀ i = 1, ..., 2L + 1 (4-29)
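The prediction step of Equations 4-26 through 4-28 can be sketched as follows; this is a simplified unscented transform that uses a single weight set for both mean and covariance (the full UKF uses separate covariance weights), and a linear model is used only to sanity-check the result:

```python
import numpy as np

def ukf_predict(x, P, f, lam=1.0):
    """Simplified unscented prediction: propagate 2L+1 sigma points through
    the process model f with zero process noise and recover the predicted
    mean (Equation 4-27) and covariance (Equation 4-28)."""
    L = x.size
    S = np.linalg.cholesky((L + lam) * P)                      # matrix square root
    X = np.column_stack([x, x[:, None] + S, x[:, None] - S])   # sigma points
    w = np.full(2 * L + 1, 1.0 / (2.0 * (L + lam)))
    w[0] = lam / (L + lam)
    Y = np.column_stack([f(X[:, i]) for i in range(2 * L + 1)])  # Equation 4-26
    x_pred = Y @ w
    D = Y - x_pred[:, None]
    P_pred = (w * D) @ D.T
    return x_pred, P_pred

# Sanity check: a linear model is predicted exactly (mean Ax, covariance A P A^T).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
x0, P0 = np.array([1.0, 2.0]), np.eye(2)
xp, Pp = ukf_predict(x0, P0, lambda s: A @ s)
print(np.allclose(xp, A @ x0), np.allclose(Pp, A @ P0 @ A.T))  # True True
```

Because the sigma points are propagated through the full nonlinear model, no Jacobians are needed, which is the implementation advantage the UKF holds over the EKF.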

b and the notation p(cld) represents a probability distribution for variable c given the

evidence d.

Each particle drawn from the process model distribution is assigned a weight

wik ∝ wik−1 p(zk | xik) (4-43)

where p(zk | xk) is defined by the nonlinear measurement model

zk ~ p(zk | xk) = h(xk, rk) (4-44)

and the proportionality is eliminated by normalizing the weights at each step. Therefore,

each particle is assigned a weight which is a function of its previous weight and the

probability that the sample would produce the current sensor measurement. The weights

can be seen as the importance of the particles with respect to maintaining an accurate

representation of the probability distribution.

The final step of the particle filter is the resampling of the particles with replacement

according to their weights. The resampling allows for particles with a low weight, that

no longer represent the actual probability distribution, to be removed from further

consideration. Since the particle set is sampled with replacement, particles with a large

weight are often selected multiple times. The sampling step, however, can lead to particle

deficiency if sampling occurs too frequently. Particle deficiency refers to a lack of particle

diversity in which the distribution is approximated by N particles, but few are unique.

Many sampling heuristics have been developed to determine when best to perform the

resampling step. The method used in this dissertation is a resampling procedure based on

the number of effective particles Neff [85]. The number of effective particles

Neff,k = 1 / Σi (wik)² (4-45)

estimates how effectively the current particle set approximates the true posterior

distribution. Resampling occurs when Neff falls below a threshold NT.
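The resampling rule built on Equation 4-45 can be sketched as follows (the function names are illustrative, and the weights are assumed already normalized):

```python
import random

def effective_particles(weights):
    """Equation 4-45: N_eff = 1 / sum(w_i^2) for normalized weights."""
    return 1.0 / sum(w * w for w in weights)

def maybe_resample(particles, weights, n_threshold):
    """Resample with replacement only when N_eff drops below the
    threshold N_T; otherwise keep the particle set unchanged."""
    if effective_particles(weights) >= n_threshold:
        return particles, weights
    n = len(particles)
    particles = random.choices(particles, weights=weights, k=n)  # with replacement
    return particles, [1.0 / n] * n

# Uniform weights give N_eff = N, so no resampling is triggered; a nearly
# degenerate weight set gives N_eff close to 1 and forces a resample.
print(effective_particles([0.25] * 4))  # 4.0
p, w = maybe_resample(["a", "b", "c", "d"], [0.97, 0.01, 0.01, 0.01], 2.0)
print(w)  # [0.25, 0.25, 0.25, 0.25]
```

Deferring resampling in this way preserves particle diversity while still discarding particles that no longer represent the posterior.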


6.1 Introduction

Trajectory planning refers to any process which determines the history of control

inputs, u(t), for a system that yield a time parameterized curve, q(t), called a trajectory

[26]. The trajectory for a system can be derived to avoid obstacles, regulate the system to a

desired state, optimize an objective function, or a combination of these tasks. Trajectory

planning is distinct from path planning because it accounts for system dynamics by

planning motions with physically realizable velocities and accelerations. This distinction

means that the curve q(t) is twice differentiable with respect to time. Path planning,

however, is limited to constructing a kinematically feasible curve, q(s), in the system's

configuration space.

The goal of trajectory planning addressed in this research is to generate collision-free

trajectories for an aircraft navigating in an unknown environment. The trajectories are

designed to build a complete environment map in minimal time. The employed mapping

procedure which incrementally constructs a map of planar features was discussed in

Chapter 5. This map is used as feedback for obstacle avoidance in the trajectory planning


Optimal trajectory planning for aircraft in the presence of obstacles is a difficult

problem because the optimization is non-convex [89]. This non-convexity leads to

multiple locally optimal solutions which makes most standard optimization routines

computationally intractable. Several approaches for directly solving the trajectory

optimization problem for aircraft have been proposed. Two prominent methods based on

Mixed-Integer Linear Programming (MILP) [90, 91] and the Maneuver Automaton [35-37]

are described in Section 6.2. These methods do not, in general, guarantee optimality, are

sensitive to initial conditions, and can become computationally intractable in realistic


the expansion methods are similar, this difference can produce very different results and

computational performance.

Like the EST, the RRT planning process culminates in an attempt to connect

the current tree to the goal location pgoal. If the local planner Λ cannot construct a

collision-free path to the goal from any configuration in the tree further tree expansion is


This section has described two strategies for efficient path planning in cluttered

environments. The next section describes the local planning function A which connects

the nodes in the trees of both the EST and RRT.

6.5 The Dubins Paths

The sampling-based planners described in Section 6.4 rely on a local planner A

to connect vehicle configurations. The local planners are defined by vehicle kinematic

constraints so that the constructed path is kinematically feasible. For nonholonomic

vehicle motion planning, a popular local planning strategy is the Dubins optimal path

[34, 95, 96].

The Dubins optimal path is a shortest path solution for connecting two vehicle

configurations subject to a minimum curvature constraint. The solution pertains to a

car-like vehicle operating in a three-dimensional configuration space defining location

and orientation qi = [xi, yi, ψi]T ∈ R³. The curvature constraint relates to the minimum

turning radius of the vehicle ρmin.

The seminal work by Dubins [33] proved that the optimal curvature constrained path

consists of three path segments of sequence CCC or CSC where C is a circular arc of

radius ρmin and S is a straight line segment. Therefore, by enumerating all possibilities

the Dubins set is defined by six paths D = {LSL, RSR, RSL, LSR, RLR, LRL} where

L denotes a left turn and R denotes a right turn. The optimal Dubins path q*(s) is the

shortest path q(s) E D. For a mobile robot, the optimal Dubins path is the time-optimal

path for non-accelerating motion where ρmin is unique to the given constant velocity.

efficient than a naive implementation of an EKF where all landmark and vehicle states

are concatenated into a single matrix and updated simultaneously. Current research

approaches use submaps to increase the efficiency of KF-based solutions.

5.3.2 Unscented Rao-Blackwellized Particle Filter

The Unscented Rao-Blackwellized Particle Filter (URBPF) is formulated by applying

the Unscented Transform (UT) (Section 4.2.2) to the RBPF SLAM formulation [88]. The

URBPF is an improvement over the RBPF in terms of estimation performance similar

to the comparison between the EKF and UKF discussed in Section 4.2.2. Additionally,

the URBPF is easier to implement because it does not require linearization of the state

transition and measurement models.

1. Generate N vehicle pose samples x̂ik and associated observations ẑj according to

the UKF prediction rule:

Xk−1|k−1 = [x̂k−1|k−1, x̂k−1|k−1 ± √((L + λ)Px,k−1|k−1)] (5-22)

Xk|k−1,t = f(Xk−1|k−1,t, uk),  x̂k|k−1 = Σt wt Xk|k−1,t (5-23)

Px,k|k−1 = Σt wt [Xk|k−1,t − x̂k|k−1][Xk|k−1,t − x̂k|k−1]T (5-24)

Zk,t,j = h(Xk|k−1,t, mj,k−1),  ẑj = Σt wt Zk,t,j (5-25)

2. Calculate importance weights wk and resample according to the weights to promote

particle diversity:

wk = wk−1 C (5-26)

where the constant C is the likelihood of all observations zk (the subscript j has

been dropped to show that the following is performed for all landmarks in a batch

fashion) given the estimated position of the vehicle. The likelihood is calculated as

C = (2π)−m/2 |Pzz|−1/2 exp(−½ (zk − ẑk)T Pzz−1 (zk − ẑk)) (5-27)

by pi = [ui, vi, 1]T and p'i = [u'i, v'i, 1]T. The Kronecker product of the correspondence

pair is equal to the ith row of A as follows

A(i, :) = [uiu'i, uiv'i, ui, viu'i, viv'i, vi, u'i, v'i, 1] (2-18)

The essential matrix can be recovered from A by calculating its singular value

decomposition (SVD)

A = UA ΣA VAT (2-19)

The ninth column of VA can be unstacked to recover the essential matrix. Once the

essential matrix is found, camera rotation and translation can be extracted as seen by

Equation 2-11. It should be noted that this presentation of the eight-point algorithm is

highly sensitive to noise and requires a normalizing step as presented by Hartley [48].
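A minimal sketch of this construction follows. It builds one Kronecker-product row per correspondence (Equation 2-18), takes the SVD of A (Equation 2-19), and unstacks the null vector into E; Hartley's normalization [48] is omitted for brevity, so it is only reliable for noiseless, well-conditioned data:

```python
import numpy as np

def estimate_essential(pts1, pts2):
    """Eight-point sketch: rows of A are Kronecker products of matched
    homogeneous image points; the essential matrix is the null vector
    of A, recovered from the last right singular vector."""
    A = np.array([np.kron(p2, p1) for p1, p2 in zip(pts1, pts2)])
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3)

def skew(t):
    return np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0]])

# Synthetic check: correspondences generated from a known rotation R and
# translation t satisfy p2^T E p1 = 0 with E = [t]x R, so the recovered E
# should match [t]x R up to the usual sign/scale ambiguity.
rng = np.random.default_rng(1)
R = np.array([[np.cos(0.1), -np.sin(0.1), 0.0],
              [np.sin(0.1),  np.cos(0.1), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([1.0, 0.2, 0.1])
X = rng.uniform(1.0, 4.0, (10, 3))        # scene points in camera-1 frame
p1 = X / X[:, 2:3]                        # normalized image coordinates
Y = X @ R.T + t
p2 = Y / Y[:, 2:3]
E_hat = estimate_essential(p1, p2)
E_hat /= np.linalg.norm(E_hat)
E_true = skew(t) @ R
E_true /= np.linalg.norm(E_true)
print(min(np.linalg.norm(E_hat - E_true), np.linalg.norm(E_hat + E_true)) < 1e-6)
```

With noisy measurements, normalizing the image coordinates before forming A is what keeps the linear system well conditioned.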

2.3.2 Recursive Structure from Motion

The estimates of camera egomotion produced by two-frame SFM algorithms, such as

the eight-point algorithm, are subject to systematic, correlated errors since the current

calculation is appended to past reconstructions. Several batch processing algorithms have

been presented to eliminate these additive errors [49]. These algorithms, however, are not

suited for trajectory generation because they incur a delay from the data gathering process

and cannot be implemented in real-time. For causal, real-time SFM a Kalman filter can be

employed to recursively calculate structure and motion estimates [50].

The standard recursive SFM algorithms gradually refine SFM state estimates by

incorporating new data [51]. Alternatively, batch recursive algorithms create a series

of intermediate reconstructions which are fused into a final reconstruction [52]. Both

recursive techniques show improved results over nonrecursive techniques, however, no

single algorithm works well in all situations [44].

The problems specific to SFM reconstruction for a UAV are identified by Prazenica

et al. [46]. These issues include the loss of feature tracking due to aggressive vehicle

motion, unreliable reconstructions due to small baselines, and the well-known scale-factor


7.1 Introduction

This chapter presents results and analysis of the algorithms presented throughout this

research. Section 7.2 gives results of the plane fitting algorithm introduced in Chapter 3.

The plane fitting algorithm is employed in the SLAM algorithm (Chapter 5) to produce

the results in Section 7.3. Finally, results of the trajectory planning algorithm (Chapter 6)

are presented in Section 7.4.

7.2 Plane Fitting Results

The plane fitting algorithm introduced in Chapter 3 generates a low-dimensional

environment representation of the pointwise sensor data provided by SFM (Chapter 2).

The quality of the fit is investigated with respect to feature point noise in Section 7.2.1

and the inclusion of nonplanar obstacles in Section 7.2.2. The simulation environment

employed for analyzing the plane fitting algorithm is shown in Figure 7-1.

Figure 7-1. Result of the plane fitting algorithm without image noise.

reduced representation vastly decreases the storage requirements for a given dataset.

The dimensionality reduction step in this dissertation is achieved by fitting planes to the

three-dimensional points generated by the SFM algorithm.

3.3.1 Principal Components Analysis

The plane fitting task is accomplished using Principal Components Analysis (PCA)

[71], a statistical linear transformation used for reducing the dimensionality of a dataset.

PCA finds the directions in a dataset with the most variation, as shown in Figure 3-2, in

order to capture the variability of the data with fewer parameters. These directions are

also the eigenvectors that correspond to the largest eigenvalues of the data's covariance

matrix. Therefore, PCA can be employed to transform a dataset to a new basis where the

first coordinate, called the first principal component, defines the direction of maximum

variance. All subsequent directions defined by the basis are orthogonal to all previous

directions and define the remaining direction of greatest variance.
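This idea can be sketched directly: the covariance eigenvector with the smallest eigenvalue is normal to the best-fit plane, while the first two principal components span it. The sample data below are illustrative:

```python
import numpy as np

def fit_plane_pca(points):
    """Fit a plane to 3D points with PCA (a sketch of the idea in Section 3.3.1).

    Returns the centroid and unit normal: the two dominant principal components
    span the plane, and the eigenvector with the smallest eigenvalue is normal
    to it.
    """
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    normal = eigvecs[:, 0]                       # least-variance direction
    return centroid, normal

# Noisy samples of the plane z = 0
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(-5, 5, 200), rng.uniform(-5, 5, 200),
                       0.01 * rng.standard_normal(200)])
c, n = fit_plane_pca(pts)
assert abs(abs(n[2]) - 1.0) < 1e-2               # normal is approximately +/- z
```

The plane is thus summarized by a centroid and a normal regardless of how many feature points were clustered onto it, which is exactly the storage reduction exploited by the map representation.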


1- Data points
S. 2c covariance
10 .- P-
-- 2 P


.. .. -. *

2 dd, '

-4 1--------------------------i
-10 -5 0 5 10 15

Figure 3-2. 2D Prinicipal Components Analysis for a dataset with a linear trend. The
principal components are labeled p,, for the nth principal component and the
2o covariance of the dataset is shown.

Figure 5-1. Map building in the presence of noise. Noise in the map building process
produces a spatially inconsistent map.

The SLAM estimation strategy begins with a prediction step (Figure 5-2A) in

which the vehicle position and orientation are estimated according to a motion model.

Additionally, sensor measurements are predicted using a measurement model that

estimates sensor readings based on the uncertain state of the vehicle and the current

map of the environment. Essentially, the prediction step generates a guess of what the

vehicle should observe given the current state estimates. Next, actual sensor measurements

of the environment are taken by the vehicle's onboard sensor suite as depicted in Figure

5-2B. Finally, the actual observations are compared with the predicted observations and

the map and vehicle states are updated accordingly (Figure 5-2C).

There are several open problems in the SLAM community including the reduction

of computation complexity to enable real-time implementation, correct association of

observed landmarks with mapped landmarks, and environment representation [78].

This dissertation addresses these problems by employing a dimensionally reduced map

representation as described in Chapter 3. The map representation is a set of planar

landmarks that fit three-dimensional feature points gathered by an onboard camera. The

geometric map alleviates the computational burden of the SLAM solution by reducing the


where E[.] denotes the expectation or mean value of a random variable.

The Kalman filtering procedure begins with a prediction step that updates the state

estimate and error covariance as follows

x̂_{k|k-1} = A x̂_{k-1|k-1} + B u_{k-1}    (4-5)

P_{k|k-1} = A P_{k-1|k-1} A^T + Σ_q    (4-6)

where the subscript k|k−1 denotes a value at time k given measurement information, or

evidence, up to time k−1, also known as an a priori estimate. The state and covariance

at k|k−1 define the prior probability distribution of the state estimate. The notation

klk is referred to as an a posteriori estimate, with the mean and covariance denoting the

posterior probability distribution, since such estimates include measurements taken up to

the current time step.

The optimal Kalman gain, which weights the measurement residual, for MMSE

estimation is defined as

K_k = P_{k|k-1} H^T (H P_{k|k-1} H^T + Σ_r)^{-1}    (4-7)

The final step of the Kalman filter is to update the predictions given in Equations 4-5

and 4-6 after a measurement zk is taken. The measurement update equations are

x̂_{k|k} = x̂_{k|k-1} + K_k (z_k − H x̂_{k|k-1})    (4-8)

P_{k|k} = (I − K_k H) P_{k|k-1}    (4-9)

The resulting estimates x̂_{k|k} and P_{k|k} are used as previous time estimates in

Equations 4-5 and 4-6 for the next prediction as the recursive process continues.
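A minimal sketch of one predict/update cycle, exercised on a hypothetical constant-velocity example, is:

```python
import numpy as np

def kalman_step(x, P, u, z, A, B, H, Sq, Sr):
    """One predict/update cycle of the Kalman filter (Equations 4-5 to 4-9)."""
    x_pred = A @ x + B @ u                                    # Eq. 4-5 (a priori)
    P_pred = A @ P @ A.T + Sq                                 # Eq. 4-6
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + Sr)   # Eq. 4-7 (gain)
    x_new = x_pred + K @ (z - H @ x_pred)                     # Eq. 4-8 (a posteriori)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred                 # Eq. 4-9
    return x_new, P_new

# Hypothetical example: 1D constant-velocity motion, position measured directly
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.zeros((2, 1))
H = np.array([[1.0, 0.0]])
Sq, Sr = 1e-4 * np.eye(2), np.array([[0.01]])
x, P = np.zeros(2), np.eye(2)
for k in range(50):
    truth = dt * (k + 1)                      # true position under unit velocity
    x, P = kalman_step(x, P, np.zeros(1), np.array([truth]), A, B, H, Sq, Sr)
assert abs(x[0] - truth) < 0.05               # estimate tracks the true position
```

Even though only position is measured, the velocity state becomes observable through the model and converges as the recursion repeats, which is the behavior the nonlinear extensions below inherit.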

The Kalman filter is designed for linear systems with linear measurement models. The

SLAM problem, however, deals with nonlinear systems in the form of vehicle dynamic

models and sensors with nonlinear measurement models. Therefore, the nonlinear

extensions of the Kalman filter are employed for the estimation task. The nonlinear


Figure 7-4. Result of the plane fitting algorithm when nonplanar obstacles are included.

The filters were compared by performing Monte Carlo (MC) simulations in which the

vehicle state measurement noise and sensor noise were randomly generated within the

confines of the covariance matrices Σ_q and Σ_r, respectively. This process tests the average

performance of the filters over many estimation processes.
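The noise generation underlying such a study amounts to drawing samples whose statistics respect a given covariance; a minimal sketch, with a hypothetical covariance matrix standing in for Σ_q, is:

```python
import numpy as np

# Draw zero-mean noise samples consistent with a covariance matrix, as done
# when perturbing the vehicle state and sensor readings in each MC run.
# Sq here is a hypothetical 2x2 process covariance, not a value from the text.
rng = np.random.default_rng(2)
Sq = np.array([[0.04, 0.01],
               [0.01, 0.09]])
L = np.linalg.cholesky(Sq)                        # Sq = L L^T
samples = (L @ rng.standard_normal((2, 20000))).T
assert np.allclose(np.cov(samples.T), Sq, atol=0.01)   # sample covariance matches
```

Repeating the estimation run with fresh draws and averaging the resulting errors gives the mean filter performance rather than the outcome of a single noise realization.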

7.3.1 Simulation Environment

The simulation environment, depicted in Figure 7-5A, consists of geometrically

regular obstacles indicative of an urban environment. The aircraft navigates along a

pre-computed trajectory generated by a high-fidelity nonlinear aircraft model. SFM,

as discussed in Chapter 2, is simulated by distributing three-dimensional feature points

on the building facades and ground. The feature points provide pointwise structure

information to the plane fitting algorithm. The plane fitting algorithm, introduced in

Chapter 3, generates observations as the vehicle navigates as shown in Figure 7-5B.

The vehicle state prediction occurs at 30 Hz. Though an IMU could provide a

faster state-estimation rate, it is assumed that the IMU is fused with image processing

dense depth information available from such sensors is not equivalent to the sparse depth

information afforded a vehicle with a vision sensor. The use of vision in SLAM typically

involves the identification and differentiation of landmarks, but relies on other sensors

to measure the structure of the scene [12, 13]. In an example of purely vision-based

SLAM, Davison uses vision to detect locally planar features which are incorporated as

landmarks into the SLAM framework [14-16]. The implementation emphasizes accurately

estimating camera ego-motion and is not concerned with generating an environment

model. Consequently, the constructed map is a sparse approximation of the environment.

The approach in this dissertation shifts the focus from accurate ego-motion estimation

to building a complete map of the environment for the purpose of efficiently planning

collision-free vehicle motions.

Though the SLAM problem has been a highly researched topic in the ground vehicle

community for the past two decades, very little research has been performed for aircraft.

A few ground vehicle implementations have considered undulating terrain which requires

the assumption of nonplanar motion [17]. Kim and Sukkarieh implemented SLAM on

a UAV equipped with a vision sensor that locates artificial landmarks of known size

[18, 19]. A bearings-only SLAM approach for a Micro Air Vehicle (MAV) equipped with

a camera and an Inertial Measurement Unit (IMU) operating in a forest environment was

introduced by Langelaan [20-22]. Current approaches to airborne SLAM focus primarily

on augmenting inertial navigation systems (INS) to reduce drift inherent in the calculated

state estimates. These approaches do not address the difficulties associated with building a

complete environment map as is addressed in this dissertation.

Solutions to the SLAM problem require repeated observations of environmental

landmarks in order to build a stochastic obstacle map. Therefore, information regarding

each landmark must be stored and observations of a landmark must be correlated with

past observations. The correlation task is known as data association and incorrect

associations often lead to catastrophic failures of the SLAM estimation process. For these

the feature points at two distinct vantage points. Noise is added to the two-dimensional

feature point locations in both images and noisy three-dimensional points are calculated

by triangulation (Equation 2-14). Results of the plane fitting algorithm in the presence of

noise are shown in Figure 7-3.


Figure 7-3. Results of the plane fitting algorithm in the presence of image noise. A) Plane
fitting result with σ = 0.5 pixel image noise and a camera displacement of 25
ft. B) Plane fitting result with σ = 1.0 pixel image noise and a camera
displacement of 25 ft. C) Plane fitting result with σ = 0.5 pixel image noise
and a camera displacement of 15 ft. D) Plane fitting result with σ = 1.0 pixel
image noise and a camera displacement of 15 ft.

The plane fitting results, shown in Figure 7-3, were generated with Gaussian noise of

standard deviation σ = 0.5 and σ = 1.0 pixels applied to the images. Additionally, the

6.7 Optimal Path Tracking

The presented path planning strategy computes short detailed paths within the

sensor field of view. The local collision detection algorithm (Section 6.5.2) ensures that

the planned paths traverse the environment inside of a safe corridor. The optimal path

tracking strategy seeks to find a time optimal trajectory that follows the planned path

within a distance r < R subject to the vehicle dynamics. The constraint r is less than the

radius of the safety corridor because additional uncertainties incurred in the map building

process must be accounted for.

The dynamic vehicle model used for optimization is the navigation-level aircraft model

V̇ = T cos(α) − C_D V² − sin γ

γ̇ = ((T sin(α) + C_L V²) cos φ − cos γ) / V

ψ̇ = (T sin(α) + C_L V²) sin φ / (V cos γ)        (6-21)

ḣ = V sin γ

ẋ = V cos γ cos ψ

ẏ = V cos γ sin ψ

where V is total vehicle velocity, α is angle of attack, γ is flight path angle, φ is

bank angle, ψ is heading, T is thrust, C_D is the drag coefficient, and C_L is the lift

coefficient. The thrust and lift and drag coefficients are approximated by polynomial

fits parameterized by V and α

T = A_0 + A_1 V + A_2 V²

C_D = B_0 + B_1 α + B_2 α²        (6-22)

C_L = C_0 + C_1 α + C_2 (α − α_1)²

where the constant values are given by Bryson [39]. The model was selected because

aircraft lift and drag coefficients are incorporated into the formulation which provides a

realistic aircraft response. Additionally, lift and drag curves from UAV wind tunnel tests

could be included into the trajectory tracking optimization. The model was augmented so

Figure 3-4. Centroid splitting for iterative k-means. A) The initial result of k-means which
does not fit the dataset. B) Centroid splitting results in two new centroids
that more aptly characterize the dataset. C) The final k-means result that
properly clusters the data.

and M is the matrix

        [ f'_1 × R f_1        0         ...        0         f'_1 × T ]
        [       0        f'_2 × R f_2   ...        0         f'_2 × T ]
    M = [       :                        .                        :   ]    (2-16)
        [       0              0         ...   f'_n × R f_n  f'_n × T ]

The solution of Equation 2-14 provides the depth of the tracked feature points.

Therefore, this section has outlined the solution to the SFM problem from two-dimensional

image data to estimated camera motion and three-dimensional Euclidean scene structure.
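Under the assumption of known rotation R and translation T, the depth recovery of Equations 2-14 and 2-16 can be sketched numerically as follows; the synthetic geometry is illustrative:

```python
import numpy as np

def triangulate_depths(f1, f2, R, T):
    """Recover feature depths from M @ lambda = 0 (Equations 2-14 and 2-16).

    f1, f2 : (n, 3) homogeneous image points in the two views
    Returns per-point depths lambda_i in the first view, scaled so that the
    translation scale gamma equals 1.
    """
    n = f1.shape[0]
    M = np.zeros((3 * n, n + 1))
    for i in range(n):
        M[3*i:3*i+3, i] = np.cross(f2[i], R @ f1[i])   # diagonal block f'_i x R f_i
        M[3*i:3*i+3, n] = np.cross(f2[i], T)           # last column f'_i x T
    _, _, Vt = np.linalg.svd(M)                        # null vector of M
    lam = Vt[-1]
    return lam[:n] / lam[n]                            # normalize so gamma = 1

# Synthetic check: project known 3D points into two views, recover their depths
rng = np.random.default_rng(3)
R, T = np.eye(3), np.array([0.5, 0.0, 0.0])            # pure sideways translation
P1 = np.column_stack([rng.uniform(-1, 1, 6), rng.uniform(-1, 1, 6),
                      rng.uniform(4, 8, 6)])           # points in front of camera 1
f1 = P1 / P1[:, 2:3]                                   # normalized image coordinates
P2 = P1 @ R.T + T
f2 = P2 / P2[:, 2:3]
depths = triangulate_depths(f1, f2, R, T)
assert np.allclose(depths, P1[:, 2], rtol=1e-6)
```

Each feature point contributes three rows coupling its own depth to the common translation scale, so the null space of M yields all depths simultaneously up to that single scale factor.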

2.3 Calculating Structure from Motion

This section presents two methods for calculating SFM. The first method is the

eight-point algorithm which requires only two camera images and a minimum of eight

tracked feature points. The second method is a recursive algorithm which employs a

Kalman filter in order to incorporate vehicle dynamics into the motion estimation process.

2.3.1 Eight-Point Algorithm

The eight-point algorithm introduced by Longuet-Higgins [47] calculates the rotation

and translation between two images given a minimum set of eight tracked feature points.

Larger numbers of tracked feature points can be used in the calculation by solving a linear

least-squares minimization problem. The inclusion of additional tracked points has been

shown to improve the results of the algorithm.

The eight-point algorithm exploits the epipolar constraint by solving a linear system

of equations in the form

Ae = 0 (2-17)

where e ∈ R^{9×1} is a stacked version of the essential matrix E and A ∈ R^{n×9} is constructed

from the two-dimensional correspondence pairs. Each row of A is constructed by

calculating the Kronecker product of the ith feature point correspondence. The ith

feature point correspondence is two feature points correlated between two images denoted

P_{k|k-1} = F_k P_{k-1|k-1} F_k^T + Σ_q

where x̂_{k|k-1} denotes a state estimate at time k given measurements up to time k−1,

P_{k|k-1} is the covariance matrix of the state without a measurement update, and F_k is the
linearized form of the state transition function g(·).

Next, the optimal Kalman gain K_k is calculated according to

S_k = H_k P_{k|k-1} H_k^T + Σ_r    (2-23)

K_k = P_{k|k-1} H_k^T S_k^{-1}    (2-24)

where Hk is the linearized form of the measurement model h(.).

The final step in the filtering process is to update the state and covariance of the

system by incorporating the sensor measurement as follows

x̂_{k|k} = x̂_{k|k-1} + K_k (y_k − h(x̂_{k|k-1}))    (2-25)

Γ_k = I − K_k H_k    (2-26)

P_{k|k} = Γ_k P_{k|k-1} Γ_k^T + K_k Σ_r K_k^T    (2-27)

The updated state estimate x̂_{k|k} gives both structure and motion for the current time step.


2.4 Optical Flow State Estimation

Several research efforts have pursued optical flow state estimation for UAVs [55-57].

Optical flow is similar to feature tracking with the distinction that displacement is

calculated at a fixed location as opposed to tracking a specific image feature through an

image stream. Additionally, optical flow refers to feature point velocity as opposed to

displacement, but for a given fixed frame rate the difference in calculation is trivial.

For optical flow state estimation, the focal plane velocity of the feature point,

approximated by the optical flow, can be obtained by differentiating Equation 2-1.

The result is an equation for the focal plane velocity of a feature point in terms of its


to triangulate the three-dimensional position of the tracked feature points as shown in

Section 2.2.4.

2.2.1 Pinhole Camera Model

This chapter derives image processing algorithms which assume a pinhole camera

model. The pinhole camera is an idealized model of camera function in which all light

converges on an infinitesimally small hole that represents a camera lens with zero aperture.

In the frontal pinhole imaging model, as shown in Figure 2-1, the two-dimensional

projection f = [μ, ν]^T of a three-dimensional point p = [η_1, η_2, η_3]^T is the intersection

of the ray emanating from the camera optical center ending at point p and the image

plane. The image plane is located at a distance f, called the focal length. The camera

coordinate frame C is located at the camera optical center with coordinates o_1 and o_2

oriented parallel to the image plane in the negative vertical and horizontal directions,

respectively. The coordinate o_3 is oriented perpendicular to the image plane and denotes

the camera axis.


Figure 2-1. Pinhole camera model and perspective projection of a feature point onto the
image plane.

For the pinhole camera model, perspective projection of a three-dimensional point can

be written in terms of the camera focal length

[μ, ν]^T = −(f/η_3) [η_1, η_2]^T    (2-1)

Figure 4-1. Recursive state estimation. The recursive process involves three steps:
prediction, observation, and update.

system described by the linear stochastic difference equation

x_k = A x_{k-1} + B u_{k-1} + q_{k-1}    (4-1)

where A ∈ R^{n×n} is the state matrix, B ∈ R^{n×l} is the input matrix, u ∈ R^l is the input

vector, and q_{k-1} ∈ R^n is a vector of random variables representing additive process noise.

The noise variable q_{k-1} is a zero-mean Gaussian variable with covariance Σ_q.

The state estimation process incorporates measurements z ∈ R^m of the system that

are modeled as

z_k = H x_k + r_k    (4-2)

where H ∈ R^{m×n} is an output matrix mapping states to observations and r ∈ R^m is a

zero-mean Gaussian random variable vector with covariance Σ_r representing measurement noise.


For a given state estimate x̂, the error e and error covariance P are defined as

e = x − x̂    (4-3)

P = E[e e^T]    (4-4)

components and a centroid will define each planar surface in the environment regardless of

how many feature points comprise the surface.

Figure 3-3. 3D Principal Components Analysis for a planar dataset. The principal
components are labeled p_n for the nth principal component and the 2σ
covariance of the dataset is shown.

3.3.2 Data Clustering

The aforementioned PCA algorithm cannot be naively implemented to generate

planar landmarks for a given sensing operation. The three-dimensional data points

returned by the computer vision algorithm must be clustered into candidate planar regions

before PCA can return the plane parameters. A difficulty arises because no knowledge

regarding the number of visible planes is available. Therefore, the number of planar

clusters must be estimated for each sensing operation.
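One way such an estimate can be produced is by splitting centroids until every cluster passes a planarity test. The sketch below uses the smallest PCA eigenvalue, the out-of-plane variance, as that test, and splits along the widest direction of the offending cluster; both choices are assumptions standing in for the dissertation's exact criteria:

```python
import numpy as np

def kmeans(points, centroids, iters=25):
    """Plain k-means: assign each point to its nearest centroid, recompute means."""
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centroids = np.stack([points[labels == j].mean(axis=0) if np.any(labels == j)
                              else centroids[j] for j in range(len(centroids))])
    return centroids, labels

def cluster_planes(points, tol=0.25, max_k=8):
    """Iterative k-means with centroid splitting (a sketch of Section 3.3.2)."""
    centroids = points.mean(axis=0)[None, :]
    while True:
        centroids, labels = kmeans(points, centroids)
        split = None
        for j in range(len(centroids)):
            eigvals, eigvecs = np.linalg.eigh(np.cov(points[labels == j].T))
            if eigvals[0] > tol:              # cluster is not planar: mark for split
                split = (j, eigvals[-1], eigvecs[:, -1])
        if split is None or len(centroids) >= max_k:
            return centroids, labels
        j, var, v = split
        offset = np.sqrt(var) * v             # split along the widest direction
        centroids = np.vstack([np.delete(centroids, j, axis=0),
                               centroids[j] + offset, centroids[j] - offset])

# Two horizontal planes at z = 0 and z = 10 are recovered as two clusters
rng = np.random.default_rng(4)
xy = rng.uniform(-5, 5, (400, 2))
z = np.repeat([0.0, 10.0], 200) + 0.05 * rng.standard_normal(400)
pts = np.column_stack([xy, z])
centroids, labels = cluster_planes(pts)
assert len(centroids) == 2
```

Starting from a single centroid, the cluster count grows only as far as the data demand, which is the behavior needed when the number of visible planes is unknown.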

Planar clustering is accomplished with an iterative algorithm based on k-means.

K-means is an unsupervised learning algorithm that defines a set of clusters for a dataset

by defining cluster centers, or centroids, that minimize the distance between all data

points and their associated cluster centroid [72]. Effectively, the k-means algorithm

its environment and updating the vehicle path accordingly. The floor cleaning task,

however, is simplistic when compared to envisioned unmanned vehicle missions such as

autonomous navigation of a car through traffic. The successful completion of complex

missions requires technological advances that increase the level of autonomy currently

displayed by unmanned vehicles.

A specific area of unmanned vehicles that has gained significant attention is

UAVs. UAVs are currently used in a variety of reconnaissance, surveillance, and combat

applications. Examples of UAVs, depicted in Figure 1-3, include the MQ-1 Predator that

has been used for reconnaissance missions and is capable of firing Hellfire missiles, the

RQ-5 Hunter designed for reconnaissance missions, and the Dragon Eye designated for

reconnaissance, surveillance, and target acquisition [4]. The current UAV technology has a

low level of autonomy, requiring considerable human interaction to perform basic missions.

Highly trained operators are necessary to remotely operate each individual unmanned



Figure 1-3. Examples of unmanned air vehicles. A) The General Atomics Aeronautical
Systems Inc. MQ-1 Predator. B) The Northrop Grumman RQ-5 Hunter. C)
The AeroVironment Dragon Eye.

Future missions envisioned for unmanned aircraft require a greater level of autonomy

than is currently available in existing systems. Common autonomous behaviors exhibited

by UAVs include waypoint navigation [5] and simple obstacle avoidance [6]. Such

behaviors are well-suited for relatively benign missions that occur at high altitudes and

require little reaction to the surrounding environment as shown in Figure 1-4A. Advances

that the control inputs, α and φ, have second order and first order dynamics, respectively,

as given by

α̈ = ω_α² (u_α − α) − 2 ζ_α ω_α α̇
                                            (6-23)
φ̇ = (u_φ − φ) / τ_φ

where ω_α is the natural frequency for the α response, ζ_α is the damping ratio

corresponding to α, τ_φ is the time constant for the φ response, and the new control inputs

to the system are u_α and u_φ.
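A minimal numerical sketch of the point-mass model in Equation 6-21 follows; the coefficient values are placeholders chosen to produce a trimmed, straight-and-level condition rather than Bryson's constants:

```python
import numpy as np

def aircraft_dynamics(x, T, alpha, phi, CD, CL):
    """State derivative for the point-mass model of Equation 6-21.

    x = [V, gamma, psi, h, px, py] in normalized units; the coefficient
    values used below are illustrative, not the dissertation's.
    """
    V, gamma, psi, h, px, py = x
    Vdot = T * np.cos(alpha) - CD * V**2 - np.sin(gamma)
    gdot = ((T * np.sin(alpha) + CL * V**2) * np.cos(phi) - np.cos(gamma)) / V
    pdot = (T * np.sin(alpha) + CL * V**2) * np.sin(phi) / (V * np.cos(gamma))
    return np.array([Vdot, gdot, pdot, V * np.sin(gamma),
                     V * np.cos(gamma) * np.cos(psi),
                     V * np.cos(gamma) * np.sin(psi)])

# Euler-integrate a trimmed condition: lift balances weight, thrust balances drag
x = np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.0])
dt = 0.01
for _ in range(500):
    x = x + dt * aircraft_dynamics(x, T=0.05, alpha=0.0, phi=0.0, CD=0.05, CL=1.0)
assert abs(x[1]) < 1e-9 and abs(x[3]) < 1e-9    # level flight is maintained
```

Banking the vehicle (phi nonzero) turns the heading state, which is how the tracking optimization steers the trajectory along the planned path.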

The optimization problem is formulated as the following nonlinear program (NLP)

(u*, t_f*) = argmin_u  J(u) = t_f + Σ_{k=1}^{N} Ψ(x_k, q(s))

       s.t.  x_{k+1} = x_k + f(x_k, u_k) dt
             x_N = q_f                                    (6-24)
             Ψ(x_k, q(s)) ≤ r

where Ψ is a function that calculates the distance between the discrete trajectory points x_k

and the path q(s). The objective function J minimizes final time t_f and the sum of the

distances between the path and the trajectory by varying the control input u = [u_α, u_φ]^T.

The first constraint is an Euler integration step between two successive trajectory points

ensuring the trajectory is dynamically feasible. The second constraint forces the endpoints

of the trajectory and the path to be equal. The final constraint ensures each calculated

value of the error function Ψ is less than the safety radius r.

The NLP was solved with the nonlinear optimization routine SNOPT as a part of the

Matlab TOMLAB toolbox [102].
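The structure of such a direct transcription can be sketched for a simple kinematic stand-in model; no NLP solver is invoked here, and the resulting control vector u could be handed to SNOPT or any other solver:

```python
import numpy as np

# Direct transcription of the tracking NLP in Equation 6-24 for a kinematic
# unicycle stand-in (the full augmented aircraft model is not reproduced).
dt, N, V = 0.1, 20, 1.0

def q(s):
    return np.array([s, 0.0])                  # reference path q(s): the x-axis

def step(x, u_k):                              # Euler step: x_{k+1} = x_k + f(x_k, u_k) dt
    return x + dt * np.array([V * np.cos(u_k), V * np.sin(u_k)])

def objective(u):
    """J(u) = t_f + sum of deviations Psi(x_k, q(s)) from the path."""
    xs = [np.zeros(2)]
    for k in range(N):
        xs.append(step(xs[-1], u[k]))
    dev = sum(np.linalg.norm(x - q(x[0])) for x in xs)   # Psi: distance to path
    return N * dt + dev, np.stack(xs)

# A control history that heads straight along the path incurs zero deviation
J, xs = objective(np.zeros(N))
assert np.isclose(J, N * dt)                   # J reduces to the final time
assert np.allclose(xs[-1], q(N * dt * V))      # endpoint constraint x_N = q_f
```

In the full problem the same pattern holds with the six-state aircraft model in place of the unicycle and the Ψ ≤ r bound imposed at every trajectory point.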

6.8 Trajectory Planning Algorithm

An overview of the trajectory planning approach is presented in Algorithm 6. The

presented formulation is executed until the change in sensor variance σ²_sen falls below a

threshold tol_sen. Therefore, the planning process stops when additional vehicle motions do

not provide additional environment coverage. Supplementary metrics such as a maximum

final time can easily be employed in the same framework.

are investigated for creating a truly autonomous UAV: vision-based map building and

computationally-efficient trajectory planning.

The first capability is a vision-based map building task in which the vehicle gathers

information concerning its surroundings with an onboard camera and constructs a spatial

representation of the environment. For the past two decades, robotic mapping has been

an actively researched topic because it is regarded as the first step toward absolute

autonomy [7]. Robotic mapping allows the vehicle to localize itself with respect to the

constructed map for navigating in GPS-denied environments. This capability is critical for

military operations where GPS can be jammed or cluttered environments preclude the use

of GPS due to occlusion.

Additionally, robotic mapping is vitally important to safe, time-efficient navigation

through unknown environments. Finding the shortest path to a goal location using purely

reactive control methods, where the robot selects control actions based only on the current

sensor measurements, is a trial-and-error process in unknown environments with no

provably optimal solution. By generating an obstacle map, a robot can retain knowledge

of its surroundings and discriminate between obstructed and unobstructed paths. As

knowledge of the environment grows, the task of traversing to a goal location can be

accomplished in a time-efficient manner with the planning of complete, obstacle-free paths.

The second capability developed in this dissertation is a computationally efficient

trajectory planning routine which maximizes environment coverage to minimize the

time to construct a complete environment map. In implementation, the method must

be computationally efficient since the planning process occurs incrementally as new

information is gathered. The rapidity of the planning process allows for path updates as

knowledge of the environment increases which is critical for safe navigation in unknown

environments. Additionally, vehicle dynamics are incorporated into the planning process

in order to generate trajectories. This task is vital for aircraft which may not be capable

of tracking every kinematically feasible path. Most ground vehicles are able to stop their

I would like to thank my advisor Dr. Rick Lind for his guidance, criticisms, and motivation throughout my graduate school experience. I would like to express gratitude to my committee members for their patience and insight during my doctoral studies. I would also like to extend thanks to my fellow researchers in the Flight Controls Lab including Mujahid Abdulrahim, Eric Branch, Ryan Causey, Daniel Grant, and Joe Kehoe. Thanks to Mujahid, Eric, and Ryan for dealing with my nearly constant disruptions of the workplace and generally sarcastic, downbeat attitude. Thanks go to Daniel "Tex" Grant for handling my verbal abuse so well and teaching me how to shoot a gun. Thanks to Joe Kehoe for his thousands of research related ideas, including the three that actually worked.


ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT

CHAPTER

1 INTRODUCTION
   1.1 Problem Statement
   1.2 Motivation
   1.3 Historical Perspective
   1.4 Contributions

2 SCENE RECONSTRUCTION FROM MONOCULAR VISION
   2.1 Introduction
   2.2 Structure from Motion Overview
      2.2.1 Pinhole Camera Model
      2.2.2 Feature-Point Tracking
      2.2.3 Epipolar Geometry
      2.2.4 Euclidean Reconstruction
   2.3 Calculating Structure from Motion
      2.3.1 Eight-Point Algorithm
      2.3.2 Recursive Structure from Motion
   2.4 Optical Flow State Estimation
   2.5 Image Processing for Environment Mapping

3 ENVIRONMENT REPRESENTATION
   3.1 Introduction
   3.2 Environment Models
      3.2.1 Topological Maps
      3.2.2 Occupancy Maps
      3.2.3 Geometric Maps
   3.3 Dimensionality Reduction
      3.3.1 Principal Components Analysis
      3.3.2 Data Clustering
      3.3.3 Plane Fitting Algorithm


4
   4.1 Introduction
   4.2 The Kalman Filter
      4.2.1 The Extended Kalman Filter
      4.2.2 The Unscented Transform
      4.2.3 The Unscented Kalman Filter
      4.2.4 The Square-Root Unscented Kalman Filter
   4.3 The Particle Filter
   4.4 The Rao-Blackwellized Particle Filter

5 SIMULTANEOUS LOCALIZATION AND MAP BUILDING
   5.1 Introduction
   5.2 Airborne SLAM Models
      5.2.1 Vehicle Model
      5.2.2 Measurement Model
   5.3 Filtering for SLAM
      5.3.1 Rao-Blackwellized Particle Filter
      5.3.2 Unscented Rao-Blackwellized Particle Filter
      5.3.3 Square-Root Unscented Rao-Blackwellized Particle Filter
   5.4 Data Association
   5.5 Map Management

6 TRAJECTORY PLANNING
   6.1 Introduction
   6.2 Direct Trajectory Planning Methods
      6.2.1 Mixed-Integer Linear Programming
      6.2.2 Maneuver Automaton
   6.3 Decoupled Trajectory Planning
   6.4 Sampling-Based Path Planning
      6.4.1 Expansive-Spaces Tree Planner
      6.4.2 Rapidly-Exploring Random Tree Planner
   6.5 The Dubins Paths
      6.5.1 Extension to Three Dimensions
      6.5.2 Collision Detection
   6.6 Environment Coverage Criteria
   6.7 Optimal Path Tracking
   6.8 Trajectory Planning Algorithm

7 DEMONSTRATIONS AND RESULTS
   7.1 Introduction


   7.2 Plane Fitting Results
      7.2.1 Effects of Feature Point Noise
      7.2.2 Nonplanar Obstacles
   7.3 SLAM Results
      7.3.1 Simulation Environment
      7.3.2 Localization Result
      7.3.3 State Estimation Error
      7.3.4 Filter Consistency
      7.3.5 Filter Efficiency
      7.3.6 Example SLAM Result
   7.4 Trajectory Planning Results
      7.4.1 Environment Coverage
      7.4.2 Optimal Control Results

8 CONCLUSIONS
   8.1 Conclusion
   8.2 Future Directions

REFERENCES

BIOGRAPHICAL SKETCH


7-1 Efficiency comparison of the SLAM filters


1-1 Map construction by an autonomous UAV
1-2 Examples of unmanned vehicles
1-3 Examples of unmanned air vehicles
1-4 Unmanned air vehicle mission profiles
2-1 Pinhole camera model and perspective projection of a feature point onto the image plane
2-2 Feature-point tracking result from an image sequence
2-3 Epipolar geometry
3-1 Environment representations
3-2 2D Principal Components Analysis for a dataset with a linear trend
3-3 3D Principal Components Analysis for a planar dataset
3-4 Centroid splitting for iterative k-means
3-5 Dimensionality reduction of feature point data
4-1 Recursive state estimation
4-2 The Unscented Transform
5-1 Map building in the presence of noise
5-2 The SLAM estimation strategy
5-3 Fixed-wing aircraft model definitions
6-1 An overview of the trajectory planning process
6-2 An example Maneuver Automaton
6-3 Probabilistic Roadmap planning in a two-dimensional environment
6-4 Expansive-Spaces Tree planning
6-5 Rapidly-Exploring Tree planning
6-6 The coordinate system for calculating the paths in the Dubins set D
6-7 The Dubins set of paths for two configurations
6-8 Examples of 3D Dubins paths


6-10 Interior to boundary test
6-11 Collision detection for the local path
6-12 Motion planning for coverage strategies
6-13 Environmental coverage metric
7-1 Result of the plane fitting algorithm without image noise
7-2 Angle of incidence constraint
7-3 Results of the plane fitting algorithm in the presence of image noise
7-4 Result of the plane fitting algorithm when nonplanar obstacles are included
7-5 The vision-based flight simulation environment for testing the SLAM algorithm
7-6 Example localization result for the airborne SLAM simulation
7-7 Average estimation error for the three SLAM filters and the nominal trajectory for 50 Monte Carlo runs
7-8 Filter consistency results
7-9 An example result for the complete airborne SLAM solution using the URBPF
7-10 Example paths for environment coverage
7-11 Comparison of environment coverage results
7-12 Resultant trajectories from the path tracking optimization






Figure 1-1, where a UAV navigating at low altitude uses sensor information to construct a three-dimensional map and, subsequently, plan a trajectory.

Figure 1-1. Map construction by an autonomous UAV. A) Missions envisioned for UAVs involve safe navigation in cluttered urban environments. B) Navigation in cluttered environments will require vehicles capable of autonomously building a map from sensor data and, subsequently, planning safe trajectories.


Figure 1-2, include the Packbot EOD which assists in Explosive Ordnance Disposal (EOD) [1], the Mars Exploration Rovers (MER) Spirit and Opportunity used for planetary exploration [2], and the Roomba vacuum robot [3].

Figure 1-2. Examples of unmanned vehicles. A) The JPL Mars Exploration Rover Spirit. B) The iRobot Packbot EOD. C) The iRobot Roomba.

The unmanned vehicles depicted in Figure 1-2 exemplify the current level of autonomy reliably achieved by unmanned systems. The Packbot EOD requires constant guidance from a human operator and does not perform any truly autonomous tasks. The MER autonomously monitors its health in order to maintain safe operation and tracks vehicle attitude to aid in safe navigation over uneven terrain. Preloaded and communicated instructions, however, are executed by the MER after significant human analysis and interaction. The Roomba displays a higher level of autonomy by sensing


Figure 1-3, include the MQ-1 Predator that has been used for reconnaissance missions and is capable of firing Hellfire missiles, the RQ-5 Hunter designed for reconnaissance missions, and the Dragon Eye designated for reconnaissance, surveillance, and target acquisition [4]. The current UAV technology has a low level of autonomy requiring considerable human interaction to perform basic missions. Highly trained operators are necessary to remotely operate each individual unmanned aircraft.

Figure 1-3. Examples of unmanned air vehicles. A) The General Atomics Aeronautical Systems Inc. MQ-1 Predator. B) The Northrop Grumman RQ-5 Hunter. C) The AeroVironment Dragon Eye.

Future missions envisioned for unmanned aircraft require a greater level of autonomy than is currently available in existing systems. Common autonomous behaviors exhibited by UAVs include waypoint navigation [5] and simple obstacle avoidance [6]. Such behaviors are well-suited for relatively benign missions that occur at high altitudes and require little reaction to the surrounding environment, as shown in Figure 1-4A. Advances


Figure 1-4B, that occur outside of an operator's line-of-sight. As vehicle missions require autonomous flight through increasingly elaborate, cluttered environments, a higher degree of environmental awareness is required. Therefore, two critical requirements for autonomous unmanned vehicles are the ability to construct a spatial representation of the sensed environment and, subsequently, plan safe trajectories through the vehicle's surroundings.

Figure 1-4. Unmanned air vehicle mission profiles. A) Current missions occur at high altitude, therefore not requiring rapid path planning and environmental awareness. B) Future missions will be low-altitude assignments necessitating quick reaction to environmental obstacles.

This dissertation concentrates on vision-based environmental sensing. Computer vision has been identified as an attractive environmental sensor for small UAVs due to the large amount of information that can be gathered from a single light-weight, low-power camera. The sensor selection is a result of the decrease in vehicle size, which severely constrains the available payload both in terms of size and weight along with power consumption. Therefore, autonomous behaviors for UAVs must be designed within the framework of vision-based environmental sensing. In particular, two critical capabilities


[7]. Robotic mapping allows the vehicle to localize itself with respect to the constructed map for navigating in GPS-denied environments. This capability is critical for military operations where GPS can be jammed or cluttered environments preclude the use of GPS due to occlusion. Additionally, robotic mapping is vitally important to safe, time-efficient navigation through unknown environments. Finding the shortest path to a goal location using purely reactive control methods, where the robot selects control actions based only on the current sensor measurements, is a trial-and-error process in unknown environments with no provably optimal solution. By generating an obstacle map, a robot can retain knowledge of its surroundings and discriminate between obstructed and unobstructed paths. As knowledge of the environment grows, the task of traversing to a goal location can be accomplished in a time-efficient manner with the planning of complete, obstacle-free paths.

The second capability developed in this dissertation is a computationally efficient trajectory planning routine which maximizes environment coverage to minimize the time to construct a complete environment map. In implementation, the method must be computationally efficient since the planning process occurs incrementally as new information is gathered. The rapidity of the planning process allows for path updates as knowledge of the environment increases, which is critical for safe navigation in unknown environments. Additionally, vehicle dynamics are incorporated into the planning process in order to generate trajectories. This task is vital for aircraft, which may not be capable of tracking every kinematically feasible path. Most ground vehicles are able to stop their


[8, 9], which described a methodology for generating stochastic obstacle maps from a series of uncertain spatial relationships. In regards to unmanned vehicles, this work describes the SLAM problem: the construction of an environment map from noisy sensor measurements taken from uncertain vehicle positions while concurrently localizing the vehicle with respect to the learned map. The solution to the SLAM problem presented by Smith, et al. has since been extended to produce SLAM algorithms for numerous sensors, vehicles, and environment types. This dissertation focuses on SLAM for vision-equipped aircraft navigating in highly-structured, urban environments.

Vision-based SLAM refers to instances when vision is the only sensor available for measuring the environment. Historically, solutions to the SLAM problem have focused on vehicles equipped with sensors such as laser rangefinders and sonar [10, 11]. The


[12, 13]. In an example of purely vision-based SLAM, Davison uses vision to detect locally planar features which are incorporated as landmarks into the SLAM framework [14-16]. The implementation emphasizes accurately estimating camera ego-motion and is not concerned with generating an environment model. Consequently, the constructed map is a sparse approximation of the environment. The approach in this dissertation shifts the focus from accurate ego-motion estimation to building a complete map of the environment for the purpose of efficiently planning collision-free vehicle motions.

Though the SLAM problem has been a highly researched topic in the ground vehicle community for the past two decades, very little research has been performed for aircraft. A few ground vehicle implementations have considered undulating terrain which requires the assumption of nonplanar motion [17]. Kim and Sukkarieh implemented SLAM on a UAV equipped with a vision sensor that locates artificial landmarks of known size [18, 19]. A bearings-only SLAM approach for a Micro Air Vehicle (MAV) equipped with a camera and an Inertial Measurement Unit (IMU) operating in a forest environment was introduced by Langelaan [20-22]. Current approaches to airborne SLAM focus primarily on augmenting inertial navigation systems (INS) to reduce drift inherent in the calculated state estimates. These approaches do not address the difficulties associated with building a complete environment map as is addressed in this dissertation.

Solutions to the SLAM problem require repeated observations of environmental landmarks in order to build a stochastic obstacle map. Therefore, information regarding each landmark must be stored and observations of a landmark must be correlated with past observations. The correlation task is known as data association, and incorrect associations often lead to catastrophic failures of the SLAM estimation process. For these


[10]. The practice of using point landmarks often yields environment models containing millions of features [23]. Environment models of such large size are infeasible due to the data storage requirements and computational power required to incorporate large numbers of landmarks into the SLAM calculation. Additionally, the data association problem becomes intractable because the relative measurements to landmarks cannot be correlated as a result of the large number of possible associations combined with uncertainty in vehicle location. Several SLAM formulations have constructed geometric environment maps for highly structured environments by fitting geometric primitives to the gathered pointwise data [24, 25]. The benefits of geometric primitives include the reduction of data required to store an environment map and landmarks that are more easily distinguished because they contain structural information. This dissertation employs a similar dimensionality reduction step in order to construct a complete environment map consisting of planar features.

In order to autonomously and rapidly construct an environment map, this dissertation presents a trajectory planning algorithm. Ideally, the trajectory planning problem would be posed as an optimal control problem in order to find a globally optimal solution. An optimal solution, however, is computationally intractable because the problem is non-convex and occurs in a high-dimensional space. For this reason, several approximations to the full optimal control solution have been proposed, which include Mixed Integer Linear Programming (MILP) and Maneuver Automatons (MAs). The MILP solution poses the full nonlinear optimization as a linear optimization [90, 91]. Therefore, the solution is only capable of handling a linearized dynamic aircraft model. The linearization restriction severely limits the efficacy of the MILP solution due to the reduction of available maneuvers. A linear model cannot properly characterize the aggressive maneuvers necessary to navigate in a cluttered environment. Additionally,


[35-38]. The dynamics of the vehicle are discretized into a set of trim trajectories, or steady-state motions, and maneuvers that connect the trim states. The MA planning process is decomposed into a combinatorial problem, which searches for the best maneuver sequence, and an optimization that finds the optimal time durations for the trim trajectories. Unlike MILP, the discretized dynamics are based on a nonlinear model and can incorporate aggressive maneuvers. The MA solution, however, can become computationally intractable when solving for large trajectories requiring a long maneuver sequence. This issue is exacerbated if the discretization is not restricted to a small set of vehicle motions or if obstacles are considered. Therefore, the MA is combined with sampling-based planning methods to produce a solution in which optimality cannot be guaranteed.

This dissertation employs a decoupled trajectory-planning strategy [40, 41] in which a feasible collision-free path is planned and optimal control is employed to track the path in minimal time. The path is subject to kinematic constraints and incorporates metrics for selecting paths that provide maximum coverage with respect to other paths generated during the planning process. The decoupled approach allows for the use of the complete nonlinear aircraft model; however, optimality in terms of coverage is not guaranteed since the coverage criteria are only included in the path planning task.

The path to be tracked is generated with a sampling-based planning strategy [26]. Sampling-based algorithms were introduced by a methodology called Probabilistic Roadmaps (PRMs) [27]. PRMs construct a graph that captures the connectivity of the configuration space of a robot by randomly sampling configurations and connecting them to the existing map with kinematically feasible paths. Once a PRM is constructed


[28-30] and the Rapidly-Exploring Random Trees (RRTs) [31, 32] planners are employed. The EST and RRT planning strategies are more efficient since they are designed to find feasible paths between two specific configurations, rather than provide a general planning solution like the PRM. This dissertation employs a variant of the RRT to efficiently plan collision-free paths through the environment.

An abundance of research has been performed in the areas of SLAM and trajectory planning. This dissertation extends previous research efforts in both fields to provide a methodology for autonomous map building in highly-structured environments.




[42]. In addition, the low cost, small payload requirements, and passive nature of vision have made it the primary sensor for many small Unmanned Air Vehicles (UAVs) [4]. This study assumes that computer vision is the only available onboard sensor capable of measuring the surrounding environment.

Mapping an unknown environment with a moving, monocular camera requires that three-dimensional information be inferred from a sequence of two-dimensional images. This process of three-dimensional reconstruction from imagery is an actively researched area in the image processing community commonly referred to as Structure from Motion (SFM) [43, 44]. This chapter presents an overview of SFM in Section 2.2. Specific solutions to the SFM problem are then discussed in Sections 2.3.1 and 2.3.2. Section 2.4 describes a vehicle state estimation task similar to SFM which calculates vehicle velocity states. Subsequently, Chapters 3 and 5 describe how the vision-based state estimates and environmental structure information are employed in the robotic mapping mission.

Section 2.2.2, which calculates correspondences between points of interest in successive images. Section 2.2.3 demonstrates that by exploiting epipolar geometry between correspondence pairs the relative rotation and translation between the two camera views can be estimated. The calculated motion of the camera can then be used


Section 2.2.4. As shown in Figure 2-1, the two-dimensional projection f = [mu, nu]^T of a three-dimensional point eta = [eta_1, eta_2, eta_3]^T is the intersection of the ray emanating from the camera optical center, ending at point eta, and the image plane. The image plane is located at a distance f, called the focal length. The camera coordinate frame C is located at the camera optical center with coordinates c1 and c2 oriented parallel to the image plane in the negative vertical and horizontal directions, respectively. The coordinate c3 is oriented perpendicular to the image plane and denotes the camera axis.

Figure 2-1. Pinhole camera model and perspective projection of a feature point onto the image plane.

For the pinhole camera model, perspective projection of a three-dimensional point can be written in terms of the camera focal length

[mu, nu]^T = (f / eta_3) [eta_1, eta_2]^T    (2-1)
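The perspective projection is straightforward to sketch in code. The following minimal illustration (Python/NumPy; the function and variable names are illustrative, not from the dissertation) divides a camera-frame point's in-plane coordinates by its depth and scales by the focal length, and shows why depth cannot be recovered from a single image:

```python
import numpy as np

def project(eta, f=1.0):
    """Pinhole perspective projection of a camera-frame 3D point
    eta = [eta1, eta2, eta3] (depth eta3 > 0) onto the image plane."""
    eta = np.asarray(eta, dtype=float)
    return f / eta[2] * eta[:2]     # image coordinates scale as f / depth

p = project([1.0, 2.0, 4.0], f=2.0)         # -> [0.5, 1.0]
p_far = project([2.0, 4.0, 8.0], f=2.0)     # same ray, twice the depth
```

Any point along the same optical ray yields identical image coordinates, which is the depth ambiguity that the multi-view methods of the following sections resolve.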


Equation 2-1 shows that only the depth of the three-dimensional feature point is required to resolve the location of the point from the image projection.

Figure 2-2. Feature-point tracking result from an image sequence. A) Frame 1. B) Frame 10.

The Lucas-Kanade feature-point tracker [45] is a standard algorithm for small baseline camera motion. The algorithm solves for the displacement of all feature points in an image, where the displacement of a feature point is measured in the horizontal and vertical image coordinates denoted by mu and nu, respectively. For a series of two images, the displacement of a feature point, d = [d_mu, d_nu]^T, can be described as the translation of a group of image pixels as follows


Equation 2-2 can be formulated as a least-squares minimization of the change in image intensity values between feature point windows in two consecutive images as given by

E(d) = \sum_{W} \left[ I_2(x + d) - I_1(x) \right]^2    (2-3)

A first-order Taylor expansion of the displaced intensity can be substituted into Equation 2-3 to produce

E(d) \approx \sum_{W} \left[ \nabla I(x)^T d + I_2(x) - I_1(x) \right]^2    (2-5)

Minimizing Equation 2-5 with respect to feature point displacement, the solution to the linear least-squares estimate of feature point displacement can be written as follows
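As an illustration, the closed-form least-squares estimate over a single feature window can be sketched as follows. This is the textbook Lucas-Kanade solution with finite-difference image gradients, written with illustrative names; it is a sketch of the classical estimator, not the dissertation's implementation:

```python
import numpy as np

def lk_displacement(I1, I2):
    """Closed-form Lucas-Kanade displacement for one feature window:
    solve G d = e, where G sums the gradient outer products over the
    window and e weights the temporal intensity change by the gradient."""
    gy, gx = np.gradient(I1.astype(float))     # vertical, horizontal gradients
    gt = I2.astype(float) - I1.astype(float)   # temporal intensity change
    G = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    e = -np.array([np.sum(gx * gt), np.sum(gy * gt)])
    return np.linalg.solve(G, e)               # [horizontal, vertical] shift

# Smooth synthetic pattern translated half a pixel horizontally.
yy, xx = np.mgrid[0:32, 0:32].astype(float)
I1 = np.sin(0.3 * xx) + np.cos(0.25 * yy)
I2 = np.sin(0.3 * (xx - 0.5)) + np.cos(0.25 * yy)
d = lk_displacement(I1, I2)    # d[0] near 0.5, d[1] near 0.0
```

The linearization is only valid for small displacements, which is why the tracker assumes small baseline camera motion.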


[46].

Figure 2-3, which is a geometric constraint relating camera motion to the image projections of a tracked feature point. Epipolar geometry dictates that there is a plane defined by a three-dimensional feature point and the two camera coordinate frames C1 and C2 from which the point is visible. The epipolar plane intersects both image planes at the epipolar lines l1 and l2, which contain the image projections f1 and f2 of the feature point and the epipoles e1 and e2.


Figure 2-3. Epipolar geometry. The epipolar constraint describes the geometric relationship between camera motion and the projection of a feature point onto the image plane.

The epipolar constraint relates the image projections, f1 and f2, of the three-dimensional point to the relative position, T, and orientation, R, of the two camera locations. Since these three vectors are coplanar, their scalar triple product is zero as follows

f_2^T \hat{T} R f_1 = 0    (2-10)

where \hat{T} denotes the skew-symmetric matrix form of T, and the product

E = \hat{T} R    (2-11)

is known as the essential matrix, which can be estimated from tracked feature points using Equation 2-10. A method for estimating the essential matrix is described in Section 2.3.1. Once the essential matrix is estimated, it can be decomposed into the relative rotation and translation of the camera. The relative translation, however, can only be estimated to a scale-factor since Equation 2-10 is homogeneous with respect to the essential matrix (Equation 2-11) and is


Equation 2-12 can be rewritten in terms of a single unknown depth as follows (Equation 2-13). Equation 2-13 can be solved for j correspondence pairs simultaneously by expressing the problem in the form of Equation 2-14.


Equation 2-14 provides the depth of the tracked feature points. Therefore, this section has outlined the solution to the SFM problem from two-dimensional image data to estimated camera motion and three-dimensional Euclidean scene structure.

The eight-point algorithm [47] calculates the rotation and translation between two images given a minimum set of eight tracked feature points. Larger numbers of tracked feature points can be used in the calculation by solving a linear least-squares minimization problem. The inclusion of additional tracked points has been shown to improve the results of the algorithm. The eight-point algorithm exploits the epipolar constraint by solving a linear system of equations in the form
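A minimal sketch of the linear eight-point estimate is shown below. It stacks one epipolar constraint per correspondence, takes the SVD null vector as the essential matrix, and verifies the result on noise-free synthetic data; the names and the test geometry are illustrative, and a normalization step would be needed for noisy image data:

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def eight_point(f1, f2):
    """Linear eight-point estimate of the essential matrix from N >= 8
    homogeneous projections f1, f2 (N x 3): stack one epipolar constraint
    f2^T E f1 = 0 per correspondence and take the SVD null vector."""
    A = np.stack([np.kron(b, a) for a, b in zip(f1, f2)])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)        # row-major null vector -> 3x3 matrix
    return E / np.linalg.norm(E)    # E is defined only up to scale

# Synthetic two-view geometry: small rotation about the optical axis plus
# a translation; all points lie in front of both cameras.
rng = np.random.default_rng(1)
ang = 0.1
R = np.array([[np.cos(ang), -np.sin(ang), 0.0],
              [np.sin(ang),  np.cos(ang), 0.0],
              [0.0, 0.0, 1.0]])
T = np.array([1.0, 0.2, 0.1])
pts = rng.uniform([-1, -1, 4], [1, 1, 8], (12, 3))   # 3D points, frame 1
p2 = pts @ R.T + T                                   # same points, frame 2
f1 = pts / pts[:, 2:3]                               # homogeneous projections
f2 = p2 / p2[:, 2:3]
E = eight_point(f1, f2)
E_true = skew(T) @ R
E_true = E_true / np.linalg.norm(E_true)
```

With exact correspondences the recovered matrix matches skew(T) R up to sign and scale; with image noise, the Hartley normalization mentioned in the text becomes essential.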


Equation 2-11. It should be noted that this presentation of the eight-point algorithm is highly sensitive to noise and requires a normalizing step as presented by Hartley [48].

[49]. These algorithms, however, are not suited for trajectory generation because they incur a delay from the data gathering process and cannot be implemented in real-time. For causal, real-time SFM, a Kalman filter can be employed to recursively calculate structure and motion estimates [50]. The standard recursive SFM algorithms gradually refine SFM state estimates by incorporating new data [51]. Alternatively, batch recursive algorithms create a series of intermediate reconstructions which are fused into a final reconstruction [52]. Both recursive techniques show improved results over nonrecursive techniques; however, no single algorithm works well in all situations [44].

The problems specific to SFM reconstruction for a UAV are identified by Prazenica et al. [46]. These issues include the loss of feature tracking due to aggressive vehicle motion, unreliable reconstructions due to small baselines, and the well-known scale-factor


[53, 54]. The Kalman filtering approach defines both a state transition model g(.) and a measurement model h(.), written as Equations 2-19 and 2-20 (Section 2.2.3). The filtering approach solves the SFM problem by estimating the state of the model given by Equation 2-20. The estimation process is computed with an Extended Kalman Filter (EKF). The EKF first predicts the state and covariance of the system according to

\hat{x}_{k|k-1} = f(\hat{x}_{k-1|k-1}, u_{k-1}, 0)    (2-21)


\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k \left( y_k - h(\hat{x}_{k|k-1}, 0) \right)    (2-25)

[55-57]. Optical flow is similar to feature tracking with the distinction that displacement is calculated at a fixed location as opposed to tracking a specific image feature through an image stream. Additionally, optical flow refers to feature point velocity as opposed to displacement, but for a given fixed frame rate the difference in calculation is trivial. For optical flow state estimation, the focal plane velocity of the feature point, approximated by the optical flow, can be obtained by differentiating Equation 2-1. The result is an equation for the focal plane velocity of a feature point in terms of its


[\dot{\mu}, \dot{\nu}]^T = \frac{f}{\eta_3^2} \begin{bmatrix} \eta_3 & 0 & -\eta_1 \\ 0 & \eta_3 & -\eta_2 \end{bmatrix} [\dot{\eta}_1, \dot{\eta}_2, \dot{\eta}_3]^T    (2-28)

The optical flow of the feature points is a known quantity determined in the feature tracking algorithm that is an integral part of the SFM calculation. The equations of motion of an aircraft can be coupled with Equation 2-28 in order to express optical flow in terms of aircraft states, as given by Equation 2-29. Solutions to Equation 2-29 have been proposed using optimization routines. The resulting solutions have been shown to produce accurate estimates of aircraft linear and angular velocities, though the total velocity of the aircraft must be known to account for the scale ambiguity that also affects SFM.

Chapter 3 to generate a geometric representation of the environment. Furthermore, both SFM and the optical flow approach provide vehicle


In Chapter 5 this information will be fused with onboard state estimates and the external environment measurements in a Simultaneous Localization and Mapping (SLAM) algorithm to reduce drift in vehicle state estimates and concurrently decrease map errors.


[26] and is a poor map in terms of data storage, landmark definition, and enabling high-level mission planning. Motion planning algorithms typically used for robotic navigation presuppose a known obstacle map [58]. The obstacle map enables the identification of safe routes through a vehicle's surroundings by limiting the set of admissible positions a vehicle can achieve. A pointwise representation of obstacle geometry is ambiguous in terms of defining safe regions within the environment. A more concise representation of the environment, such as geometric objects, in which complete obstacles can be accurately described is better suited to the process of motion planning.

Mapping large-scale environments requires that many sensor measurements be taken to accurately define scene structure. Storing individual sensor measurements during a mapping operation can quickly lead to a computationally intractable problem [59]. Lower-dimensional representations of sensor data can greatly reduce the number of parameters needed to define an obstacle map. For example, a single plane can reasonably represent thousands of three-dimensional points using four parameters: three to define a normal direction and one to define the plane's distance from the origin along that normal direction. For highly structured environments, using geometric primitives such as planes to model the sensor data can greatly reduce the amount of data required


[10, 60, 61]. This case is known as the Simultaneous Localization and Mapping (SLAM) problem, as discussed in detail in Chapter 5. The SLAM framework requires that distinguishable and re-observable landmarks be present in the environment. Landmarks allow the vehicle to ascertain its relative location to the environment through sensing operations. The only discernible characteristic of a 3D feature point is its location in the environment, which makes 3D points a poor choice for landmarks. A geometric landmark, however, can provide additional information in terms of orientation and size. Additionally, feature points that are visible from a given vantage point are not necessarily visible from another location due to occlusion and lighting variations. Geometric maps define landmarks that are robustly observable with respect to camera pose.

Many missions envisioned for autonomous vehicles require human interaction in terms of high-level mission planning. Surveilling targets in an environment such as a specific building or roadway depends upon the ability to properly designate the known objective. Similarly, reconnaissance of an area of interest in the environment requires that the region be clearly specified. Such high-level decisions will be made by a human operator who must designate the targets or regions of interest in some manner. In this capacity, a spatial model is advantageous as it provides an intuitive interface for target selection as opposed to a pointwise model, which is an ambiguous representation that requires further interpretation of the data.

The need to enable motion planning, construct compact map representations, define distinguishable landmarks, and assist high-level mission planning motivates the generation


Figure 3-1. Environment representations. A) Topological maps depict spatial connectivity by storing locations of interest and connecting paths. B) Occupancy maps record the probability that a discrete location contains an obstacle. C) Geometric maps define obstacles as geometric primitives.

[62-64], as shown in Figure 3-1A. Typically, the nodes consist of places of interest, such as the intersection of corridors, and arcs provide connectivity information about the nodes to suggest navigation paths between nodes. Topological models are not necessarily spatial models as the nodes can represent various states of the vehicle. Such maps, however, rely on spatial models for their construction since the arcs need to define maneuvers that avoid obstacles. This fact blurs the distinction between topological and geometric maps since topological maps rely on geometric information. [65, 66].


[25, 67]. Three-dimensional maps generated from dense depth information typically use triangle meshes to represent the environment [60, 68]. Examples exist for representing structured, three-dimensional environments by a less general set of geometric primitives such as distinct planar surfaces [24, 69, 70].

This dissertation employs a geometric environment map because its properties are beneficial for both the SLAM (Chapter 5) and trajectory planning (Chapter 6) tasks. The geometric map provides a compact representation that reduces the computational burden for both processes. Additionally, a geometric description of the landmarks makes them more observable than point landmarks or visually defined landmarks. Lastly, the geometric landmarks are amenable to collision checking algorithms required for planning safe trajectories and are an intuitive representation for aiding in higher-level mission planning concerns. The geometric map is constructed from the pointwise observations of the environment provided by SFM. This task is accomplished via a dimensionality reduction process.


[71], a statistical linear transformation used for reducing the dimensionality of a data set. PCA finds the directions in a data set with the most variation, as shown in Figure 3-2, in order to capture the variability of the data with fewer parameters. These directions are also the eigenvectors that correspond to the largest eigenvalues of the data's covariance matrix. Therefore, PCA can be employed to transform a data set to a new basis where the first coordinate, called the first principal component, defines the direction of maximum variance. All subsequent directions defined by the basis are orthogonal to all previous directions and define the remaining direction of greatest variance.

Figure 3-2. 2D Principal Components Analysis for a data set with a linear trend. The principal components are labeled pn for the nth principal component and the 2-sigma covariance of the data set is shown.
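The principal components can be computed with a singular value decomposition of the zero-mean data matrix, which is equivalent to the eigendecomposition of the covariance described above. A small sketch for the plane-fitting use case (illustrative names; assumes a roughly planar point cluster):

```python
import numpy as np

def pca_plane(points):
    """PCA of an N x 3 point cluster via SVD of the zero-mean data matrix.
    The third principal component (smallest singular value) is the
    candidate plane normal; a small third singular value indicates the
    cluster is nearly planar."""
    X = points - points.mean(axis=0)               # zero-mean data matrix
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[2], s                                # normal, singular values

# Noisy samples of the horizontal plane z = 0.1.
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, (200, 3))
pts[:, 2] = 0.1 + 0.01 * rng.standard_normal(200)
normal, s = pca_plane(pts)
```

For this cluster the recovered normal aligns with the z axis (up to sign), and the third singular value is far smaller than the first two, which is the planarity signal used later in the clustering procedure.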


\hat{X} = U \Sigma V^T    (3-1)

where U and V define matrices of orthonormal vectors and \Sigma defines a diagonal matrix of singular values assumed to be ordered from largest to smallest. The columns of V define the principal components of X, with the column corresponding to the largest singular value designating the first principal component. The result is typically ordered so that the first principal component corresponds to the first entry of \Sigma, with subsequent components listed in descending order. The data set can be projected onto the basis formed by the principal components according to Equation 3-2, with the terms as defined in Equation 3-1. Additionally, the singular values are the square roots of the eigenvalues of \hat{X}^T\hat{X} and \hat{X}\hat{X}^T. Since the data set has zero mean, the covariance matrix of \hat{X} is \hat{X}^T\hat{X}, and the eigendecomposition of the covariance gives eigenvectors identical to V and produces eigenvalues equal to the square of the singular values. Therefore, SVD and eigendecomposition can be used interchangeably for PCA.

For three-dimensional information being mapped to a planar representation, shown in Figure 3-3, the first two principal components define two orthogonal in-plane directions while the third component defines the plane's normal direction. Therefore, three principal


Figure 3-3. 3D Principal Components Analysis for a planar data set. The principal components are labeled pn for the nth principal component and the 2-sigma covariance of the data set is shown.

[72]. Effectively, the k-means algorithm


The minimization of Equation 3-3 is carried out according to Algorithm 1, which iterates until all centroid locations converge to a locally optimal solution. The algorithm begins by guessing the number of centroids k and randomly initializing the centroid locations {c1, c2, ..., ck}. For each iteration, the data points are associated to the nearest centroid according to Euclidean distance. The centroid locations are then updated to the mean of the newly associated data points. Convergence occurs when the maximum centroid displacement is less than a given tolerance tol_k.

[73]. Similarly,


Algorithm 2 iteratively searches for planar clusters by starting with a single cluster (k = 1) and increasing the number of clusters to best fit the data set. Each iteration begins with the k-means algorithm, which produces a set of clustered feature points. Each cluster is checked for planarity with PCA by inspecting the singular value associated with the third principal component p3. The singular values produced by PCA denote the amount of the data's variance captured by the associated principal component. Therefore, the magnitude of the third singular value, sigma_3, denotes how planar the data cluster is, with zero magnitude describing a perfectly planar data set. Planarity is determined by imposing a threshold t on sigma_3. A threshold value of 0.05 <= t <= 0.1 produces good results for realistically noisy data sets.

Clusters found to be planar are removed from further consideration in the clustering procedure. All feature points, including points currently attributed to different clusters, are projected onto the planar basis according to Equation 3-2. The feature points are tested for inclusion into the current plane definition according to connectivity and distance-along-the-normal-direction tests. The connectivity test searches for all points within a specified distance from any currently included point, beginning with the point closest to the cluster's centroid. The normal direction test removes points that are a specified distance from an in-plane position. The remaining points are classified as a single planar object and removed from the data set. The convex hull of the points H, the plane's normal direction, and the perpendicular distance from the sensor position to the plane are saved as an observation z. All observations are used in the SLAM algorithm described in Chapter 5.

The centroids of non-planar clusters are split, as shown in Figure 3-4, in order to choose new centroids that properly capture the variability of the data set. The centroids


Figure 3-4. Centroid splitting for iterative k-means. A) The initial result of k-means, which does not fit the data set. B) Centroid splitting results in two new centroids that more aptly characterize the data set. C) The final k-means result that properly clusters the data.


The distance in Equation 3-4 is equal to the splitting distance employed in the G-means algorithm [73]. After the centroids have been split or removed, k-means is rerun on the remaining data set. This process is performed repeatedly until the number of remaining data points falls below a threshold tol_f. This tolerance is typically set as 10% of the total number of feature points.

An example result is shown in Figure 3-5. The algorithm transforms the pointwise data provided by SFM into a set of geometric primitives. Concatenating these observations would create an incorrect geometric map due to drift in vehicle state estimates. Chapter 5 will present an algorithm that fuses the vehicle state estimates and environment observations in order to create a spatially consistent map.
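The nearest-centroid association and mean-update loop at the core of the clustering procedure can be sketched as follows. This is a plain Lloyd-style k-means with the same structure as Algorithm 1 (random initialization, Euclidean association, centroid mean update, displacement tolerance); the plane tests and splitting logic of Algorithm 2 are omitted and all names are illustrative:

```python
import numpy as np

def kmeans(X, k, tol=1e-4, max_iter=100, seed=0):
    """Lloyd-style k-means: associate each point with the nearest centroid,
    move centroids to cluster means, and stop once the largest centroid
    displacement falls below the tolerance."""
    rng = np.random.default_rng(seed)
    c = X[rng.choice(len(X), size=k, replace=False)]   # random initial centroids
    labels = np.zeros(len(X), dtype=int)
    for _ in range(max_iter):
        # Associate points to the nearest centroid (Euclidean distance).
        d = np.linalg.norm(X[:, None, :] - c[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update each centroid to the mean of its associated points.
        new_c = np.stack([X[labels == i].mean(axis=0) if np.any(labels == i)
                          else c[i] for i in range(k)])
        if np.max(np.linalg.norm(new_c - c, axis=1)) < tol:
            c = new_c
            break
        c = new_c
    return c, labels

# Two well-separated blobs should be recovered as two clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.2, (50, 2)), rng.normal(5.0, 0.2, (50, 2))])
centroids, labels = kmeans(X, 2)
```

The result is only locally optimal, which is why the full procedure re-seeds via centroid splitting rather than trusting a single run.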


Figure 3-5. Dimensionality reduction of feature point data. A) Pointwise data from the SFM algorithm. B) The dimensionally-reduced data set consists of geometric primitives in the form of planar features.


Chapter 3 described a method for extracting planar features from a highly structured environment. These features will be used in Chapter 5 to incrementally construct an environment map with a filtering technique known as Simultaneous Localization and Mapping (SLAM). The SLAM task is a state estimation approach in which the state of the system is a combination of the vehicle state, defined in terms of position and orientation, and the state of the map, designated by the position and orientation of environmental landmarks. State estimation is commonly addressed as a filtering problem in which estimates are calculated recursively by incrementally incorporating sensor information [79].

This chapter describes several filters that follow a similar predictor-corrector structure depicted in Figure 4-1. The filtering process begins with the prediction of a noisy estimate of the system state according to a state transition model. Measurements of the system are predicted as if the true system state is equal to the state prediction. The measurement prediction is compared to an actual, noisy system measurement and the state estimate is updated according to a weighted measurement residual. The weight of the measurement residual represents the confidence of the measurement with respect to the model-based state estimate. Two filters that form the foundation of the majority of SLAM solutions are the Kalman filter [80] and particle filter [82]. This chapter describes the general form of these filters and several variants that will be used in the SLAM filtering approaches described in Chapter 5.


Figure 4-1. Recursive state estimation. The recursive process involves three steps: prediction, observation, and update.

The Kalman filter addresses the estimation of the state of a system described by the linear stochastic difference equation
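For reference, the predict/correct cycle of the linear filter can be sketched as a single function. The equations follow the standard Kalman filter forms (state and covariance prediction, gain computation, residual update); variable names are illustrative:

```python
import numpy as np

def kf_step(x, P, u, z, A, B, H, Q, R):
    """One predict/correct cycle of the linear Kalman filter."""
    # Prediction: propagate the estimate and covariance through the model.
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    # Correction: weight the measurement residual by the Kalman gain.
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Scalar constant-state example: repeated noisy measurements of a value
# near 1.0 pull the estimate toward 1.0 and shrink its covariance.
A = np.eye(1); B = np.zeros((1, 1)); H = np.eye(1)
Q = 0.01 * np.eye(1); R = np.eye(1)
x, P = np.zeros(1), 10.0 * np.eye(1)
for z in [1.1, 0.9, 1.05, 0.95]:
    x, P = kf_step(x, P, np.zeros(1), np.array([z]), A, B, H, Q, R)
```

The same predict/correct structure carries over to the nonlinear variants below; only the way the models and covariances are propagated changes.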


The prediction step projects the state estimate and covariance forward according to

\hat{x}_{k|k-1} = A \hat{x}_{k-1|k-1} + B u_{k-1}    (4-5)

P_{k|k-1} = A P_{k-1|k-1} A^T + Q    (4-6)

The state estimate and covariance are corrected from the predictions of Equations 4-5 and 4-6 after a measurement z_k is taken. The measurement update equations are

K_k = P_{k|k-1} H^T \left( H P_{k|k-1} H^T + R \right)^{-1}    (4-7)

\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k \left( z_k - H \hat{x}_{k|k-1} \right)    (4-8)

P_{k|k} = (I - K_k H) P_{k|k-1}    (4-9)

The updated estimate and covariance are returned to Equations 4-5 and 4-6 for the next prediction as the recursive process continues. The Kalman filter is designed for linear systems with linear measurement models. The SLAM problem, however, deals with nonlinear systems in the form of vehicle dynamic models and sensors with nonlinear measurement models. Therefore, the nonlinear extensions of the Kalman filter are employed for the estimation task. The nonlinear


extensions employed are the Extended Kalman Filter (EKF) [81], the Unscented Kalman Filter (UKF) [83], and the Square-Root Unscented Kalman Filter (SR-UKF) [84].

The EKF linearizes the process and measurement models about the current estimate,

A_k = \frac{\partial f}{\partial x} \Big|_{\hat{x}_{k-1|k-1}, u_{k-1}}    (4-12)

H_k = \frac{\partial h}{\partial x} \Big|_{\hat{x}_{k|k-1}}    (4-13)

Additionally, the models are linearized with respect to the noise values

Q_k = \frac{\partial f}{\partial q} \Big|_{\hat{x}_{k-1|k-1}, u_{k-1}}    (4-14)

R_k = \frac{\partial h}{\partial r} \Big|_{\hat{x}_{k|k-1}}    (4-15)

which means the EKF filter equations, analogous to Equations 4-5 through 4-9, can be written as

\hat{x}_{k|k-1} = f(\hat{x}_{k-1|k-1}, u_{k-1}, 0)
P_{k|k-1} = A_k P_{k-1|k-1} A_k^T + Q_k \Sigma_q Q_k^T
K_k = P_{k|k-1} H_k^T \left( H_k P_{k|k-1} H_k^T + R_k \Sigma_r R_k^T \right)^{-1}
\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k \left( z_k - h(\hat{x}_{k|k-1}, 0) \right)
P_{k|k} = (I - K_k H_k) P_{k|k-1}    (4-16)

where \Sigma_q and \Sigma_r denote the process and measurement noise covariances, which defines the complete EKF. It should be noted that the state and measurement predictions use the full nonlinear equations while the linearized models are required to


Figure 4-2. The sigma points are chosen according to a deterministic algorithm so that they accurately approximate the true mean and covariance of the untransformed distribution.

Figure 4-2. The Unscented Transform. The UT propagates a set of sigma points through a nonlinear function to calculate the posterior probability distribution.
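The sigma-point construction used by the UT (Equation 4-17) can be sketched as follows, using the scaled unscented transform of Wan and van der Merwe with a Cholesky factor as the matrix square root. Recovering the prior mean and covariance from the weighted points is a standard sanity check; all names are illustrative:

```python
import numpy as np

def sigma_points(x, P, alpha=1e-1, beta=2.0, kappa=0.0):
    """Sigma points and weights of the scaled Unscented Transform.
    Columns of the matrix square root of (L + lam) * P place the points
    symmetrically about the mean."""
    L = len(x)
    lam = alpha ** 2 * (L + kappa) - L
    S = np.linalg.cholesky((L + lam) * P)            # matrix square root
    X = np.column_stack([x] + [x + S[:, i] for i in range(L)]
                            + [x - S[:, i] for i in range(L)])
    Ws = np.full(2 * L + 1, 1.0 / (2 * (L + lam)))   # mean weights
    Wc = Ws.copy()                                   # covariance weights
    Ws[0] = lam / (L + lam)
    Wc[0] = lam / (L + lam) + (1.0 - alpha ** 2 + beta)
    return X, Ws, Wc

x = np.array([1.0, -2.0])
P = np.array([[2.0, 0.3], [0.3, 1.0]])
X, Ws, Wc = sigma_points(x, P)
mean_rec = X @ Ws                                    # recovers the prior mean
dev = X - x[:, None]
cov_rec = dev @ np.diag(Wc) @ dev.T                  # recovers the prior covariance
```

Propagating these 2L + 1 points through the nonlinear model and re-forming the weighted mean and covariance gives the UKF prediction without any Jacobians.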


The sigma points and their weights are computed as
\[
\begin{aligned}
\mathcal{X}_0 &= \hat{x}, & W_0^{(s)} &= \frac{\lambda}{L+\lambda}, & W_0^{(c)} &= \frac{\lambda}{L+\lambda} + (1 - \alpha^2 + \beta),\\
\mathcal{X}_i &= \hat{x} + \left(\sqrt{(L+\lambda)P}\right)_i, & W_i^{(s)} &= W_i^{(c)} = \frac{1}{2(L+\lambda)}, & i &= 1,\dots,L,\\
\mathcal{X}_{i+L} &= \hat{x} - \left(\sqrt{(L+\lambda)P}\right)_i, & \lambda &= \alpha^2(L+\kappa) - L,
\end{aligned}\tag{4-17}
\]
where the subscript of the square root denotes the $i$th column of the matrix square root and $\alpha$, $\beta$, and $\kappa$ are tuning parameters for controlling the placement and weighting of the sigma points. The sigma points are transformed by the nonlinear function to produce the posterior statistics, filling the role played by the linearized models in the EKF equations (4-16) and thereby creating the UKF. This derivation is accomplished with an augmented state vector $x' \in \mathbb{R}^{n+q}$, where $q$ is the dimension of the noise vector $q$, so that process noise is carried through the same transformation. The process model, Equation 4-10, is rewritten to accept the augmented state vector as input.
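In the scalar case (L = 1), the sigma-point rule of Equation 4-17 reduces to three points whose weighted statistics reproduce the prior mean and covariance exactly. The sketch below verifies that property (the tuning values are illustrative):

```python
import math

def sigma_points_1d(mean, var, alpha=0.1, beta=2.0, kappa=0.0):
    # Scalar (L = 1) sigma points and weights per Equation 4-17;
    # alpha, beta, kappa place and weight the points.
    L = 1
    lam = alpha ** 2 * (L + kappa) - L
    spread = math.sqrt((L + lam) * var)
    points = [mean, mean + spread, mean - spread]
    W_s = [lam / (L + lam), 0.5 / (L + lam), 0.5 / (L + lam)]
    W_c = [lam / (L + lam) + (1 - alpha ** 2 + beta)] + W_s[1:]
    return points, W_s, W_c

# The weighted sigma points reproduce the prior statistics exactly.
pts, W_s, W_c = sigma_points_1d(2.0, 4.0)
m = sum(w * p for w, p in zip(W_s, pts))
v = sum(w * (p - m) ** 2 for w, p in zip(W_c, pts))
```

Propagating these points through a nonlinear function and re-forming the weighted statistics is exactly the Unscented Transform of Figure 4-2.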


The sigma points of Equation 4-17 are drawn from a probability distribution with mean value
\[ \hat{x}'_{k-1|k-1} = \begin{bmatrix} \hat{x}_{k-1|k-1} \\ 0_{q\times 1} \end{bmatrix} \tag{4-23} \]
and covariance $P'_{k-1|k-1}$, which augments the state covariance with the process noise covariance (4-24). The measurement model, Equation 4-11, is rewritten so that measurement noise is restricted to be additive, as was assumed in the linear Kalman filter. The sigma points, computed with Equation 4-17, are drawn from the distribution defined by Equations 4-23 and 4-24. These sigma points are transformed by the nonlinear process model with zero noise, and the predicted state is recovered as the weighted sum, analogous to Equations 4-19 and 4-20,
\[ \hat{x}_{k|k-1} = \sum_{i=0}^{2L} W_i^{(s)}\,\mathcal{X}'_{i,k|k-1} \tag{4-27} \]


The predicted measurement is computed from the transformed sigma points as
\[ \hat{z}_k = \sum_{i=0}^{2L} W_i^{(s)}\,\mathcal{Z}'_{i,k} \tag{4-30} \]
with corresponding covariance $P_{zz,k}$. The state and covariance updates are then
\[ \hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k\left(z_k - \hat{z}_k\right), \qquad P_{k|k} = P_{k|k-1} - K_k P_{zz,k} K_k^T \tag{4-33} \]
Therefore, the state estimate of the system has been computed without requiring linearization of the process and measurement models. The UKF has demonstrated superior estimation accuracy relative to the EKF [83]. The UKF, however, is a computationally expensive algorithm in comparison to the EKF. This expense arises because the sigma point calculation, Equation 4-17, requires 2L+1 evaluations of the nonlinear process model and matrix square roots of the state covariance. The Square-Root Unscented Kalman Filter (SR-UKF) is an efficient variant of the UKF in which the matrix square root required to generate the sigma points is eliminated by propagating the square root of the covariance matrix in lieu of the full covariance [84]. Therefore, the sigma point calculation replacing Equation 4-17 draws the points directly from the propagated Cholesky factor of the covariance.


Equations 4-35 and 4-36 replace the UKF state covariance update, Equation 4-28. The measurement covariance update equations, replacing the analogous UKF Equation 4-31, follow the same formulation as the state prediction methodology.


The particle filter represents the state distribution with samples rather than a parametric form [82]. This formulation allows the estimation process to handle non-Gaussian and multi-modal probability density functions. Like the Kalman filtering approaches, the particle filter operates in a recursive fashion. The state is predicted according to a process model and, subsequently, updated by comparing sensor measurements to measurement predictions calculated with a measurement model. However, while the Kalman filter propagates a normal distribution through the nonlinear models, the particle filter propagates a set of weighted samples called particles. Particle filters employ a set of N particles, each pairing a state hypothesis with an importance weight. The prediction step begins by drawing state samples from a probability density defined as
\[ x_k \sim p(x_k \mid x_{k-1}, u_k) = f(x_{k-1}, u_k, q_k) \tag{4-42} \]
which is the same nonlinear process model definition used in the EKF (Equation 4-10). The notation $a \sim b$ denotes that the random variable $a$ has the probability distribution $b$.


Weight degeneracy is commonly addressed by resampling [85]. The number of effective particles,
\[ N_{\mathrm{eff}} = \frac{1}{\sum_{i=1}^{N}\left(w_k^i\right)^2}, \]
provides a measure of particle degeneracy; when $N_{\mathrm{eff}}$ falls below a chosen threshold, the particle set is resampled to promote diversity.
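A minimal sketch of the degeneracy test and a low-variance resampler follows (the fixed offset u0 stands in for the single random draw used in practice):

```python
def effective_particles(weights):
    # N_eff = 1 / sum(w_i^2) for normalized weights: N for a uniform
    # set, 1 when a single particle carries all the weight.
    total = sum(weights)
    return 1.0 / sum((w / total) ** 2 for w in weights)

def systematic_resample(particles, weights, u0=0.37):
    # Low-variance resampling: N evenly spaced pointers, offset by a
    # single draw u0 in [0, 1) (fixed here for determinism), walk the
    # cumulative weight distribution.
    n = len(particles)
    step = sum(weights) / n
    u = u0 * step
    out, c, i = [], weights[0], 0
    for _ in range(n):
        while u > c:
            i += 1
            c += weights[i]
        out.append(particles[i])
        u += step
    return out
```

A uniform weight set passes through unchanged, while a degenerate set collapses onto the dominant particle.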


4.4 The Rao-Blackwellized Particle Filter

Particle filtering can be extremely inefficient in high-dimensional spaces [86]. This inefficiency is particularly significant for the SLAM problem, where the state space includes the vehicle states and the map landmark states. Therefore, particle filtering has been applied to the SLAM problem with the Rao-Blackwellized Particle Filter (RBPF). Rao-Blackwellization is a process in which the number of states that need to be sampled is reduced by partitioning the joint state. For a joint distribution $p(x_1, x_2)$, the state is partitioned using the product rule
\[ p(x_1, x_2) = p(x_2 \mid x_1)\,p(x_1), \]
so that only $x_1$ must be sampled while $p(x_2 \mid x_1)$ is computed analytically.


These tools are combined in Chapter 5, where the RBPF is joined with the aforementioned EKF, UKF, and SR-UKF to produce three filters for solving the SLAM problem.


Time-dependent missions, such as target acquisition and tracking, require a map to produce time-efficient solutions. Additionally, the majority of motion planning strategies presuppose a known environment map [26]. Therefore, map construction becomes a vitally important task for efficient motion planning. This research is concerned with navigation in GPS-denied environments, meaning knowledge of the vehicle's inertial location is not available. Therefore, navigation must rely on relative vehicle position estimates, which are subject to drift due to model uncertainties and external disturbances. Additionally, sensor measurements of the external environment are susceptible to noise. The combination of errors incurred from uncertain vehicle location and noisy sensor measurements produces spatially inconsistent maps, as shown in Figure 5-1. The issues incurred by placing a vehicle at an unknown location in an unknown environment and requiring it to incrementally build a spatially consistent map, while concurrently computing its location relative to the map, are known as the Simultaneous Localization and Mapping (SLAM) problem [75]. Research into the SLAM problem has produced a variety of solutions based on probabilistic recursive algorithms, with implementations on a variety of platforms including ground vehicles, aircraft, and underwater vehicles [19, 76, 77]. All successful SLAM solutions follow a similar overall estimation strategy, as depicted in Figure 5-2.


Figure 5-1. Map building in the presence of noise. Noise in the map building process produces a spatially inconsistent map.

The SLAM estimation strategy begins with a prediction step (Figure 5-2A) in which the vehicle position and orientation are estimated according to a motion model. Additionally, sensor measurements are predicted using a measurement model that estimates sensor readings based on the uncertain state of the vehicle and the current map of the environment. Essentially, the prediction step generates a guess of what the vehicle should observe given the current state estimates. Next, actual sensor measurements of the environment are taken by the vehicle's onboard sensor suite, as depicted in Figure 5-2B. Finally, the actual observations are compared with the predicted observations, and the map and vehicle states are updated accordingly (Figure 5-2C). There are several open problems in the SLAM community, including the reduction of computational complexity to enable real-time implementation, correct association of observed landmarks with mapped landmarks, and environment representation [78]. This dissertation addresses these problems by employing a dimensionally reduced map representation as described in Chapter 3. The map representation is a set of planar landmarks that fit three-dimensional feature points gathered by an onboard camera. The geometric map alleviates the computational burden of the SLAM solution by reducing the dimensionality of the environment representation.


Figure 5-2. The SLAM estimation strategy. A) In the prediction step the pose of the vehicle and sensor measurements are predicted according to motion and measurement models, respectively. The error ellipses denote uncertainty in vehicle state and map estimates. B) The observation step involves acquiring relative measurements from onboard sensors to environmental landmarks. C) In the update step the vehicle state and map estimates are updated by comparing the predicted sensor measurements with the actual sensor measurements.


The filters developed for this task build on the estimation tools of Chapter 4. Section 5.2 introduces the vehicle and measurement models developed for the presented SLAM formulation. Section 5.3 describes three specific filters that were investigated in relation to the airborne SLAM problem. Lastly, Sections 5.4 and 5.5 describe the data association process and map management procedures employed to incrementally incorporate sensed geometric primitives into a global map.

As discussed in Chapter 4, state estimation requires the definition of process and measurement models that describe the system of interest. The general process and measurement models for the SLAM problem are defined probabilistically as the distributions $p(x_k \mid x_{k-1}, u_k)$ and $p(z_k \mid x_k, m)$, where $m$ denotes the map.


The aircraft model definitions are shown in Figure 5-3. Additionally, it is assumed that vehicle linear velocity $[u, v, w]$ and angular velocity $[p, q, r]$ information is provided via an IMU or vision-based state estimation algorithm (Sections 2.3.2 and 2.4). Therefore, the input to the vehicle model is selected to be $u = [u, v, w, p, q, r]^T$.

Figure 5-3. Fixed-wing aircraft model definitions. The aircraft linear and angular rates are defined with respect to a body-fixed basis B. The inertial position and orientation of the vehicle is measured with respect to a North East Down (NED) inertial frame E.

The vehicle model $f(\cdot)$ accepts noisy estimates of vehicle velocities as inputs. Position and orientation state information is generated according to the aircraft kinematic equations
\[
\begin{aligned}
\dot{x} &= u\cos\theta\cos\psi + v(\sin\phi\sin\theta\cos\psi - \cos\phi\sin\psi) + w(\cos\phi\sin\theta\cos\psi + \sin\phi\sin\psi)\\
\dot{y} &= u\cos\theta\sin\psi + v(\sin\phi\sin\theta\sin\psi + \cos\phi\cos\psi) + w(\cos\phi\sin\theta\sin\psi - \sin\phi\cos\psi)\\
\dot{z} &= -u\sin\theta + v\sin\phi\cos\theta + w\cos\phi\cos\theta\\
\dot{\phi} &= p + \tan\theta\,(q\sin\phi + r\cos\phi)\\
\dot{\theta} &= q\cos\phi - r\sin\phi\\
\dot{\psi} &= (q\sin\phi + r\cos\phi)/\cos\theta
\end{aligned}\tag{5-3}
\]
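Equation 5-3 can be integrated directly. The following sketch performs one explicit-Euler step of the kinematics (the integrator and input values are illustrative, not the dissertation's simulation code):

```python
import math

def kinematic_rates(state, inputs):
    # Equation 5-3: map body-frame linear velocity [u, v, w] and angular
    # velocity [p, q, r] to NED position and Euler-angle rates.
    x, y, z, phi, theta, psi = state
    u, v, w, p, q, r = inputs
    cphi, sphi = math.cos(phi), math.sin(phi)
    cth, sth = math.cos(theta), math.sin(theta)
    cpsi, spsi = math.cos(psi), math.sin(psi)
    return [
        u * cth * cpsi + v * (sphi * sth * cpsi - cphi * spsi)
            + w * (cphi * sth * cpsi + sphi * spsi),
        u * cth * spsi + v * (sphi * sth * spsi + cphi * cpsi)
            + w * (cphi * sth * spsi - sphi * cpsi),
        -u * sth + v * sphi * cth + w * cphi * cth,
        p + math.tan(theta) * (q * sphi + r * cphi),
        q * cphi - r * sphi,
        (q * sphi + r * cphi) / cth,
    ]

def euler_step(state, inputs, dt):
    # One explicit-Euler integration step (illustrative integrator).
    return [s + d * dt for s, d in zip(state, kinematic_rates(state, inputs))]

# Straight-and-level flight at u = 10: only the north position changes.
state = euler_step([0.0] * 6, [10.0, 0.0, 0.0, 0.0, 0.0, 0.0], 0.1)
```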




The airborne SLAM problem has been investigated with the Extended Kalman Filter (EKF) [18] and the Unscented Kalman Filter (UKF) [20]. The EKF approach has been shown to be somewhat successful in implementation for a very limited number of landmarks. Additionally, the UKF approach has proven to be superior to the EKF in terms of estimation accuracy, but suffers from increased computational complexity. This dissertation investigates three additional SLAM filters based on the Rao-Blackwellized Particle Filter (RBPF). The goal is to find a solution to airborne SLAM that is more computationally efficient than both the EKF and UKF approaches and handles the nonlinearities as well as the UKF. The complete formulation of these filters, including the equations necessary for implementation, is described in this section.

The SLAM estimation task is framed as a probabilistic process with the goal of calculating the joint posterior distribution of the vehicle and landmark states [23, 87]. The RBPF is a sampling-based approach which factors the joint SLAM posterior distribution according to the observation that landmark positions can be estimated independently if the vehicle path is known. The resulting Rao-Blackwellized form contains a sampled term representing the probability of the vehicle path and analytic terms representing the landmark estimates conditioned on each sampled path.


1. Generate N vehicle pose samples $\hat{x}_k^i$ and associated observations $\hat{z}_{k,j}^i$.

2. Calculate importance weights $w_k^i$ and resample according to the weights to promote particle diversity:


\[ w_k^i \propto \exp\!\left(-\tfrac{1}{2}\left(z_k - \hat{z}_k^i\right)^T \left(P_{w}^{i}\right)^{-1}\left(z_k - \hat{z}_k^i\right)\right) \tag{5-13} \]
where $P_w^i$ is the innovation covariance associated with particle $i$.

3. Calculate the proposal distribution and draw N samples $x_k$ defining the updated vehicle state, where the distribution is Gaussian.

4. Update definitions of observed landmarks according to the EKF measurement update.


The Unscented Rao-Blackwellized Particle Filter (URBPF) applies the Unscented Transform (Section 4.2.2) to the RBPF SLAM formulation [88]. The URBPF is an improvement over the RBPF in terms of estimation performance, similar to the comparison between the EKF and UKF discussed in Section 4.2.2. Additionally, the URBPF is easier to implement because it does not require linearization of the state transition and measurement models.

1. Generate N vehicle pose samples $\hat{x}_k^i$ and associated observations $\hat{z}_{k,j}^i$ according to the UKF prediction rule.

2. Calculate importance weights $w_k^i$ and resample according to the weights to promote particle diversity:
\[ w_k^i \propto \exp\!\left(-\tfrac{1}{2}\left(z_k - \hat{z}_k^i\right)^T \left(P_{w}^{i}\right)^{-1}\left(z_k - \hat{z}_k^i\right)\right) \tag{5-27} \]


3. Calculate the proposal distribution and draw N samples to define the new vehicle state $x_k$, where the distribution is Gaussian.

4. Calculate sigma points for the observed landmarks and transform them according to the observation model.

5. Update definitions of observed landmarks according to the UKF update rule.


The third filter is based on the Square-Root Unscented Kalman Filter (Section 4.2.4). The SR-UKF is an efficient variant of the UKF in which the matrix square root required to generate the sigma points is eliminated by propagating the square root of the covariance matrix in lieu of the full covariance. Similarly, the Square-Root Unscented Rao-Blackwellized Particle Filter (SR-URBPF) is formulated by replacing the UKF-style calculations in the URBPF with the SR-UKF approach.

1. Generate N vehicle pose samples $\hat{x}_k^i$ and associated observations $\hat{z}_{k,j}^i$ according to the SR-UKF prediction rule.

2. Calculate importance weights $w_k^i$ and resample to promote particle diversity:
\[ w_k^i \propto \exp\!\left(-\tfrac{1}{2}\left(z_k - \hat{z}_k^i\right)^T \left(P_{w}^{i}\right)^{-1}\left(z_k - \hat{z}_k^i\right)\right) \tag{5-48} \]


3. Calculate the proposal distribution and draw N samples to define the new vehicle state $x_k$, where the distribution is Gaussian.

4. Calculate sigma points for the observed landmarks and transform them according to the observation model.

5. Update definitions of observed landmarks according to the SR-UKF measurement update rule.


Data association, the task of matching sensed landmarks to mapped landmarks, is a well-studied problem in the SLAM literature [7, 78]. The observed landmark information provided by the dimensionality reduction step, however, is sparse and distinguishable. Furthermore, texture variations on building facades can lead to multiple observations of the same landmark during a single sensing operation. Therefore, a nearest neighbor approach is used to allow for multiple associations to the same target. It should be noted that individual gating techniques such as the nearest neighbor method fail in environments that are not sparsely populated or structured. This failure is a result of the individual gating method's investigation of a single landmark and sensor measurement pair at a time.


The association test of Equation 5-64 is evaluated for all possible pairs of true and predicted measurements. All associated pairs are used to update the SLAM filter.
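A sketch of individual-gating nearest-neighbor association with scalar innovations follows. The gate value and the exact test statistic are assumptions here; a chi-square threshold on the normalized innovation is a common choice, and the dissertation's Equation 5-64 defines its own test.

```python
def nearest_neighbor_associate(measurements, predictions, variances, gate=9.0):
    # Individual gating: each measurement is paired with the predicted
    # landmark minimizing the normalized innovation (z - z_hat)^2 / S,
    # accepted only inside the gate. Multiple measurements may map to
    # the same landmark, matching the facade-texture case above.
    pairs = []
    for mi, z in enumerate(measurements):
        best, best_d2 = None, gate
        for li, (z_hat, S) in enumerate(zip(predictions, variances)):
            d2 = (z - z_hat) ** 2 / S
            if d2 < best_d2:
                best, best_d2 = li, d2
        if best is not None:
            pairs.append((mi, best))
    return pairs

# Two measurements gate onto the two landmarks; the outlier (20.0) fails
# every gate and is left unassociated.
pairs = nearest_neighbor_associate([1.05, 4.9, 20.0], [1.0, 5.0], [0.1, 0.1])
```

Because each measurement is tested against one landmark at a time, this is exactly the individual-gating behavior the text warns about in cluttered environments.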


Results of the mapping algorithms are presented in Chapter 7.


Trajectory planning generates a time-parameterized curve through the configuration space of a system [26]. The trajectory for a system can be derived to avoid obstacles, regulate the system to a desired state, optimize an objective function, or a combination of these tasks. Trajectory planning is distinct from path planning because it accounts for system dynamics by planning motions with physically realizable velocities and accelerations. This distinction means that the curve q(t) is twice differentiable with respect to time. Path planning, however, is limited to constructing a kinematically feasible curve, q(s), in the system's configuration space.

The goal of trajectory planning addressed in this research is to generate collision-free trajectories for an aircraft navigating in an unknown environment. The trajectories are designed to build a complete environment map in minimal time. The employed mapping procedure, which incrementally constructs a map of planar features, was discussed in Chapter 5. This map is used as feedback for obstacle avoidance in the trajectory planning process.

Optimal trajectory planning for aircraft in the presence of obstacles is a difficult problem because the optimization is non-convex [89]. This non-convexity leads to multiple locally optimal solutions, which makes most standard optimization routines computationally intractable. Several approaches for directly solving the trajectory optimization problem for aircraft have been proposed. Two prominent methods, based on Mixed-Integer Linear Programming (MILP) [90, 91] and the Maneuver Automaton [35-37], are described in Section 6.2. These methods do not, in general, guarantee optimality, are sensitive to initial conditions, and can become computationally intractable in realistic situations.


The planning process is decomposed as shown in Figure 6-1. First, a coarse path is constructed to a selected goal location. The portion of the global path that occurs within the current sensor field of view is refined to provide a detailed local path. As a final step, the detailed local path is converted to a trajectory with optimal control.

Figure 6-1. An overview of the trajectory planning process. A coarse global path is planned to a prescribed goal location defined to explore the environment. A detailed local path is planned to provide local sensor coverage. Lastly, the local path is converted to a trajectory using optimal control theory.

The employed path planning methodology is based on well-established sampling-based strategies [27, 32] described in Section 6.4. Section 6.5 describes a curvature-constrained kinematic model used to generate kinematically feasible paths. The environment coverage criteria, which select the best path for environment coverage, are introduced in Section 6.6. The optimal control formulation employed for converting the kinematically feasible path to a dynamically feasible trajectory is described in Section 6.7. The final section, Section 6.8, describes the full algorithm for trajectory planning for optimal map building.


Several families of motion planning methods have been proposed [92], including optimal control [89] and sampling-based methods [93]. The latter two have been directly applied to the aircraft motion planning problem and are described below. The MILP formulation is typically posed in two dimensions [91], but can be extended to three dimensions [90]. The goal of the trajectory planning problem is to compute a control history $\{u_{k+i} \in \mathbb{R}^2 : i = 0, 1, \dots, L-1\}$ that defines a trajectory $\{x_{k+i} \in \mathbb{R}^2 : i = 1, \dots, L\}$. The optimization seeks to find a minimum-time trajectory that terminates at a specified goal location $x_g$. This minimum-time, terminally-constrained problem can be formulated as a mixed-integer linear program.


The velocity and control inputs are bounded,
\[ \|\dot{x}\| \le \dot{x}_{\max}, \qquad \|u_k\| \le u_{\max} \tag{6-4} \]
Lastly, collision avoidance is included in the optimization as a constraint stating that trajectory points must not reside within an obstacle. The constraint is written for rectangular obstacles.


An example Maneuver Automaton (MA) is shown in Figure 6-2. The MA is a directed graph, as seen in Figure 6-2, consisting of a set of nodes and connecting arcs. The nodes represent the trim trajectories that define equilibrium states for the vehicle. For example, the three-trim-trajectory model shown could represent an aircraft operating at constant altitude, with one trim trajectory representing steady level flight and the two other trim trajectories representing left and right turns. The maneuvers, represented by the connecting arcs, define transitions between trim states.


Figure 6-2. An example Maneuver Automaton. A Maneuver Automaton defined by a maneuver library of three trim trajectories and nine maneuvers. M0 defines the initial state transition by which the aircraft enters the directed graph.

A motion plan is a concatenation of trim primitives and maneuvers,
\[ \pi(w,\tau) = \Phi_{w_1}(\tau_1)\,M_{w_1}\,\Phi_{w_2}(\tau_2)\,M_{w_2}\cdots M_{w_N}\,\Phi_{w_{N+1}}(\tau_{N+1}), \quad \tau_i \ge 0,\ \forall i \in \{1,\dots,N+1\} \tag{6-6} \]
where $w$ is a vector determining the maneuver sequence and $\tau$ is a vector of coasting times that define how long the trim primitives are executed. The trim primitives are uniquely defined by the maneuver sequence, and the constraint on $\tau$ ensures that all coasting times are positive. The trim trajectories and maneuvers can be multiplied in this fashion because they represent transformations of the vehicle state written in a homogeneous representation. Therefore, general rigid-body motion denoted by a rotation R and translation T, such as a maneuver, is written in the form
\[ g = \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \]


The motion planning problem is posed as the optimization
\[
\begin{aligned}
(w^*,\tau^*) = \underset{(w,\tau)}{\arg\min}\ J(w,\tau) &= \sum_{i=1}^{N}\left(M_{w_i} + T_{w_i}\tau_{w_i}\right) + T_{w_{N+1}}\tau_{w_{N+1}}\\
\text{s.t.:}\quad g_w\prod_{i=1}^{N+1}\exp(\xi_i\tau_i) &= g_0^{-1}g_f\\
w &\in L(MA)\\
\tau_i &\ge 0,\ \forall i \in \{1,\dots,N+1\}
\end{aligned}\tag{6-9}
\]
where the cost function $J$ is defined by a summation of all maneuver costs $M_{w_i}$ and primitive costs $T_{w_i}$, which are scaled linearly by their duration of execution $\tau_{w_i}$. The first constraint ensures the system is driven to a goal state, and the second constraint assures that the solution maneuver exists in the set of all admissible maneuver sequences $L(MA)$ defined by the MA. The posed optimization becomes computationally intractable as the length of the maneuver sequence and the number of available motion primitives increase. Therefore, the problem is decomposed into a search for the optimal maneuver sequence $w^*$ and an optimization to determine the optimal coasting times. This formulation, however, does not include constraints for obstacle avoidance. For obstacle avoidance, the MA motion planning strategy is coupled with sampling-based methods [94] (Section 6.4).


Decoupled approaches to trajectory planning have also been proposed [40, 41]. This dissertation employs a decoupled trajectory planning strategy in order to incorporate a more complicated cost function into the planning process than can be employed efficiently in a direct method. Additionally, the full vehicle dynamics are used. The main drawback of this method is that the coverage cost is not included in the optimization. However, since the environment is unknown, the planning strategy must be implemented in a receding-horizon (RH) fashion, so optimality can never be guaranteed.

The Probabilistic Roadmap (PRM) algorithm [27] constructs a roadmap in the workspace consisting of nodes and connecting arcs, as shown in Figure 6-3. The nodes are vehicle configurations sampled from the vehicle's free configuration space Q_free. The connecting arcs are collision-free paths generated by a local planner that connects two sampled configurations and satisfies the kinematic constraints of the vehicle. Therefore, the local planner contains a collision detection step alerting the planner if a collision-free path cannot be constructed. The roadmap is intended to capture the connectivity of the vehicle's configuration space in order to efficiently plan feasible paths without an exhaustive search. The PRM process involves a learning phase, in which the roadmap is constructed, and a query phase, in which multiple path planning problems are solved with the constructed roadmap. Therefore, the roadmap only needs to be constructed once.


The learning phase is shown in Figure 6-3A. Once the roadmap has been constructed, path planning queries, in the form of an initial configuration q_init and goal configuration q_goal, are answered. As shown in Figure 6-3B, q_init and q_goal are connected to their nearest neighbors in the roadmap, q′ and q′′, respectively. Lastly, a graph search is performed to find the shortest path connecting the initial and goal locations.

Figure 6-3. Probabilistic Roadmap planning in a two-dimensional environment. A) A constructed roadmap where the circles are nodes designating vehicle configurations and the straight lines are arcs connecting the nodes with kinematically feasible paths. B) The query phase where the shortest path is found through the roadmap given an initial and goal configuration.

The PRM was designed as a computationally efficient strategy for capturing the connectivity of Q_free in order to answer multiple path planning queries in regard to the same environment. Many planning strategies, however, require the answer to a single query and do not require exploration of the entire Q_free. Two examples of sampling-based single-query planners are the Expansive-Spaces Tree (EST) planner and the Rapidly-Exploring Random Tree (RRT) planner. The difference between the two approaches is the manner in which the roadmap, or tree, is constructed.


Tree construction, depicted in Figure 6-4, continues until a termination criterion, such as the maximum number of nodes in the tree or nodal proximity to the goal, is satisfied. The most important aspect of the EST is the ability to direct the tree growth according to a prescribed metric. Guided sampling of Q_free is achieved by assigning a probability density function to the nodes of the tree. The probability density function biases the selection of tree nodes from which to extend the tree. This bias enables tree growth toward specified regions, such as areas that are sparsely populated by nodes or the goal location.


Figure 6-4. Expansive-Spaces Tree planning. The EST involves constructing a tree T by extending the tree from a randomly sampled configuration q in the tree. A random sample q_rand local to q is connected to the tree by a local planner. Both a successful connection q′_rand and an unsuccessful connection q′′_rand are shown.

The last step of the EST planning process is to connect the tree to the goal location p_goal. This connection is accomplished by checking for connections with the local planner. If no connection is made, further expansion of the tree is required. Another variation involves simultaneously growing a tree T_goal rooted at p_goal and proceeding with a connection process between q ∈ T_init and q ∈ T_goal to merge the trees.

The RRT planner takes a different approach. Instead of extending the tree from randomly selected points in the tree T, as in the EST approach, the RRT extends the tree toward randomly selected points q_rand in the free configuration space Q_free. Tree construction begins by selecting q_rand and finding the nearest configuration q_near in T to the sample. The tree is extended from q_near a distance ε toward q_rand. The extension is performed by a local operator terminating at q_new. If the constructed path is collision-free, it is added to T.


Tree construction, depicted in Figure 6-5, continues until a termination criterion, such as the maximum number of nodes in the tree or nodal proximity to the goal, is satisfied.

Figure 6-5. Rapidly-Exploring Random Tree planning. The RRT involves constructing a tree T by extending the tree toward a randomly sampled configuration q_rand in Q_free. The tree is extended a distance ε toward q_rand, terminating at q_new.

Guided sampling can be incorporated into the RRT in a similar fashion to the EST. To guide sampling, q_rand is sampled according to a probability distribution over Q_free. Therefore, the main difference between the EST and RRT planners is the mechanism for guiding the sampling. The EST guides the sampling by selecting attractive nodes of the tree to extend, while the RRT expands by selecting regions in the environment to move toward.
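The RRT construction loop described above can be sketched in a two-dimensional workspace (obstacle-free for brevity — the collision check on each new segment is omitted, and the workspace bounds, step size, and seed are illustrative assumptions):

```python
import math, random

def rrt(start, goal, n_iter=1000, step=0.5, goal_tol=1.0, seed=3):
    # Minimal RRT: extend the nearest tree node a distance `step` toward
    # a uniform random sample q_rand in a 10 x 10 workspace. A collision
    # check would be inserted before accepting q_new in a mapped world.
    rng = random.Random(seed)
    tree = {start: None}                       # node -> parent
    for _ in range(n_iter):
        q_rand = (rng.uniform(0, 10), rng.uniform(0, 10))
        q_near = min(tree, key=lambda q: math.dist(q, q_rand))
        d = math.dist(q_near, q_rand)
        if d == 0.0:
            continue
        q_new = (q_near[0] + step * (q_rand[0] - q_near[0]) / d,
                 q_near[1] + step * (q_rand[1] - q_near[1]) / d)
        tree[q_new] = q_near
        if math.dist(q_new, goal) < goal_tol:  # close enough: extract path
            path = [q_new]
            while tree[path[-1]] is not None:
                path.append(tree[path[-1]])
            return path[::-1]
    return None

path = rrt((0.0, 0.0), (5.0, 5.0))
```

The Voronoi bias of the nearest-node rule is what pulls the tree into unexplored regions, the behavior contrasted with the EST above.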


The sampling-based planners of Section 6.4 rely on a local planner to connect vehicle configurations. The local planners are defined by vehicle kinematic constraints so that the constructed path is kinematically feasible. For nonholonomic vehicle motion planning, a popular local planning strategy is the Dubins optimal path [34, 95, 96]. The Dubins optimal path is a shortest-path solution for connecting two vehicle configurations subject to a minimum curvature constraint. The solution pertains to a car-like vehicle operating in a three-dimensional configuration space defining location and orientation, $q_i = [x_i, y_i, \psi_i]^T \in \mathbb{R}^3$. The curvature constraint relates to the minimum turning radius of the vehicle, $\rho_{\min}$. The seminal work by Dubins [33] proved that the optimal curvature-constrained path consists of three path segments of sequence CCC or CSC, where C is a circular arc of radius $\rho_{\min}$ and S is a straight line segment. Therefore, by enumerating all possibilities, the Dubins set is defined by six paths, $D = \{LSL, RSR, RSL, LSR, RLR, LRL\}$, where L denotes a left turn and R denotes a right turn. The optimal Dubins path $q^*(s)$ is the shortest path $q(s) \in D$. For a mobile robot, the optimal Dubins path is the time-optimal path for non-accelerating motion, where $\rho_{\min}$ is unique to the given constant velocity.


The path computation begins by transforming the configurations into the coordinate system shown in Figure 6-6. The transformed configurations have the form $q_{init} = [0, 0, \psi_i]^T$ and $q_{goal} = [d, 0, \psi_g]^T$, where $q_{init}$ is located at the origin and $q_{goal}$ is located on the x-axis.

Figure 6-6. The coordinate system for calculating the paths in the Dubins set D. The initial configuration $q_{init}$ is translated to the origin and rotated so that $q_{goal}$ is located on the x-axis.

The Dubins paths are the concatenation of three canonical transformations scaled so that $\rho_{\min} = 1$ and the vehicle moves with unit speed. This formulation allows the general solution to be scaled by the vehicle's $\rho_{\min}$ because the maximum heading rate is equal to $V/\rho_{\min}$. The transformations are written in terms of the distance of motion $v$ along the path segments for an arbitrary configuration $q = [x, y, \psi]^T$ as follows.


1. Dubins path LSL:
\[ v_1 = -\psi_i + \tan^{-1}\!\frac{\cos\psi_g - \cos\psi_i}{d + \sin\psi_i - \sin\psi_g},\quad
v_2 = \sqrt{2 + d^2 - 2\cos(\psi_i - \psi_g) + 2d(\sin\psi_i - \sin\psi_g)},\quad
v_3 = \psi_g - \psi_i - v_1 \tag{6-12} \]

2. Dubins path RSR:
\[ v_1 = \psi_i - \tan^{-1}\!\frac{\cos\psi_i - \cos\psi_g}{d - \sin\psi_i + \sin\psi_g},\quad
v_2 = \sqrt{2 + d^2 - 2\cos(\psi_i - \psi_g) + 2d(\sin\psi_g - \sin\psi_i)},\quad
v_3 = \psi_i - \psi_g - v_1 \tag{6-13} \]

3. Dubins path LSR:
\[ v_2 = \sqrt{-2 + d^2 + 2\cos(\psi_i - \psi_g) + 2d(\sin\psi_i + \sin\psi_g)},\quad
v_1 = -\psi_i + \tan^{-1}\!\frac{-\cos\psi_i - \cos\psi_g}{d + \sin\psi_i + \sin\psi_g} - \tan^{-1}\!\frac{-2}{v_2},\quad
v_3 = -\psi_g + \tan^{-1}\!\frac{-\cos\psi_i - \cos\psi_g}{d + \sin\psi_i + \sin\psi_g} - \tan^{-1}\!\frac{-2}{v_2} \tag{6-14} \]

4. Dubins path RSL:
\[ v_2 = \sqrt{d^2 - 2 + 2\cos(\psi_i - \psi_g) - 2d(\sin\psi_i + \sin\psi_g)},\quad
v_1 = \psi_i - \tan^{-1}\!\frac{\cos\psi_i + \cos\psi_g}{d - \sin\psi_i - \sin\psi_g} + \tan^{-1}\!\frac{2}{v_2},\quad
v_3 = \psi_g - \tan^{-1}\!\frac{\cos\psi_i + \cos\psi_g}{d - \sin\psi_i - \sin\psi_g} + \tan^{-1}\!\frac{2}{v_2} \tag{6-15} \]

5. Dubins path RLR:
\[ v_2 = \cos^{-1}\!\left[\tfrac{1}{8}\left(6 - d^2 + 2\cos(\psi_i - \psi_g) + 2d(\sin\psi_i - \sin\psi_g)\right)\right],\quad
v_1 = \psi_i - \tan^{-1}\!\frac{\cos\psi_i - \cos\psi_g}{d - \sin\psi_i + \sin\psi_g} + \frac{v_2}{2},\quad
v_3 = \psi_i - \psi_g - v_1 + v_2 \tag{6-16} \]

6. Dubins path LRL:
\[ v_2 = \cos^{-1}\!\left[\tfrac{1}{8}\left(6 - d^2 + 2\cos(\psi_i - \psi_g) + 2d(\sin\psi_g - \sin\psi_i)\right)\right],\quad
v_1 = -\psi_i + \tan^{-1}\!\frac{\cos\psi_g - \cos\psi_i}{d + \sin\psi_i - \sin\psi_g} + \frac{v_2}{2},\quad
v_3 = -\psi_i + \psi_g - v_1 + v_2 \tag{6-17} \]

Each angular segment length is taken modulo $2\pi$, and a path is infeasible when the argument of a square root or inverse cosine falls outside its domain.
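As one worked instance, the LSL word can be evaluated in the normalized frame (unit minimum turning radius, start at the origin, goal on the x-axis at distance d). The sketch follows the common Shkel-Lumelsky parameterization, which is algebraically equivalent to the LSL expressions above:

```python
import math

def dubins_lsl(psi_i, psi_g, d):
    # LSL word: left arc (t), straight segment (p), left arc (q), in the
    # normalized frame with rho_min = 1.
    a = math.atan2(math.cos(psi_g) - math.cos(psi_i),
                   d + math.sin(psi_i) - math.sin(psi_g))
    p_sq = (2 + d * d - 2 * math.cos(psi_i - psi_g)
            + 2 * d * (math.sin(psi_i) - math.sin(psi_g)))
    if p_sq < 0:
        return None                 # LSL infeasible for this query
    t = (-psi_i + a) % (2 * math.pi)
    p = math.sqrt(p_sq)
    q = (psi_g - a) % (2 * math.pi)
    return t + p + q                # total normalized path length

# Aligned headings with the goal 4 units ahead: LSL degenerates to a
# straight line of length 4.
length = dubins_lsl(0.0, 0.0, 4.0)
```

Scaling the returned length by the vehicle's ρ_min recovers the physical path length, per the scaling argument above.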


An example Dubins set is shown in Figure 6-7.

Figure 6-7. The Dubins set of paths for two configurations. Notice that the LRL path does not give a feasible solution for this set. The time-optimal path is denoted as q*(s).

Extensions of the Dubins solution to three dimensions have been proposed [97]. A method similar to the MA (Section 6.2.2) plans an initial path which places the vehicle at a position and orientation that forms a plane with the final position and orientation [98]. A 2D Dubins path is then planned between the two in-plane configurations. Because the dynamics of the vehicle will be incorporated after the path planning task, the 3D Dubins solution employed in this dissertation assumes that changes in flight path angle are kinematically feasible.


Figure 6-8. Examples of 3D Dubins paths. The paths originate from the same start configuration and terminate at randomly generated goal configurations. A) Perspective view. B) Top-down view.

The global planner constructs paths with the sampling-based strategies of Section 6.4. The result is a path composed of piecewise linear segments. The sensed environment is represented by a set of bounded planes. Therefore, the collision detection algorithm must search for collisions along the entire path by detecting intersections between linear path segments and planes, as depicted in Figure 6-9.


Figure 6-9. Collision detection for the global path. The collision detection algorithm searches for any intersections between the piecewise linear paths of a tree planner and the planar environment obstacles.

The first step of the collision detection algorithm is to search for intersections between each piecewise linear segment of the path and all infinite planes in the current map. The intersection point $p_{int}$ is calculated from the line-plane intersection given by Equation 6-18.
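The segment-plane test can be sketched directly from the geometry: solve for the parameter t at which the segment crosses the infinite plane and accept only crossings with 0 ≤ t ≤ 1. This is a minimal sketch of the same intersection the text computes with Equation 6-18:

```python
def segment_plane_intersection(p0, p1, n, plane_pt):
    # Parameterize the segment as p0 + t * (p1 - p0); the crossing of
    # the infinite plane {x : n . (x - plane_pt) = 0} occurs at
    # t = n . (plane_pt - p0) / n . (p1 - p0), valid for 0 <= t <= 1.
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    d = [b - a for a, b in zip(p0, p1)]
    denom = dot(n, d)
    if abs(denom) < 1e-12:
        return None                 # segment parallel to the plane
    t = dot(n, [w - a for a, w in zip(p0, plane_pt)]) / denom
    if not 0.0 <= t <= 1.0:
        return None                 # crossing lies outside the segment
    return [a + t * di for a, di in zip(p0, d)]

# A vertical segment crossing the plane z = 0 intersects at the origin.
p_int = segment_plane_intersection((0, 0, -1), (0, 0, 1), (0, 0, 1), (0, 0, 0))
```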


Figure 6-10. Interior-to-boundary test. A point that intersects with the infinite plane is within the boundary defined by the convex hull H if the sum of the interior angles is equal to 2π.

The second step determines whether an intersection point lies within the plane boundary. Chapter 3 defined each planar boundary by a convex hull $H = \{h_1, h_2, \dots, h_n\} \in \mathbb{R}^{3\times n}$ of $n$ boundary points. Therefore, the test for determining if a collision has occurred becomes determining if the intersection point $p_{int}$ is interior to the three-dimensional boundary $H$. This determination is accomplished by inspecting the two-dimensional projections onto the planar landmark of $p_{int}$ and $H$, denoted by $\bar{p}_{int} \in \mathbb{R}^2$ and $\bar{H} \in \mathbb{R}^{2\times n}$. As shown in Figure 6-10, $\bar{p}_{int}$ can be connected with every point in $\bar{H}$. If the sum of the angles between the connecting vectors is equal to $2\pi$, then the point is interior to the boundary. This constraint can be written for $n$ boundary points as
\[ \sum_{i=1}^{n}\cos^{-1}\frac{\left(\bar{h}_i - \bar{p}_{int}\right)\cdot\left(\bar{h}_{i+1} - \bar{p}_{int}\right)}{\left\|\bar{h}_i - \bar{p}_{int}\right\|\,\left\|\bar{h}_{i+1} - \bar{p}_{int}\right\|} = 2\pi \tag{6-19} \]
where $\bar{h}_{n+1} = \bar{h}_1$.
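The interior test of Equation 6-19 can be sketched in two dimensions: sum the angles subtended at the candidate point by consecutive hull vertices and compare against 2π:

```python
import math

def interior_to_boundary(p, hull):
    # Equation 6-19: the projected point is interior to the convex hull
    # iff the angles between vectors to consecutive hull points sum
    # to 2*pi (within a small tolerance).
    total = 0.0
    n = len(hull)
    for i in range(n):
        ax, ay = hull[i][0] - p[0], hull[i][1] - p[1]
        bx, by = hull[(i + 1) % n][0] - p[0], hull[(i + 1) % n][1] - p[1]
        na, nb = math.hypot(ax, ay), math.hypot(bx, by)
        if na == 0.0 or nb == 0.0:
            return True             # point coincides with a hull vertex
        c = max(-1.0, min(1.0, (ax * bx + ay * by) / (na * nb)))
        total += math.acos(c)
    return abs(total - 2 * math.pi) < 1e-6

# Interior point of a unit square: four right angles sum to 2*pi.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
```

For an exterior point, the subtended angles sum to strictly less than 2π, so the test correctly rejects it.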


The local path incorporates a safety region sized for the path-tracking tolerance of the optimal control formulation in Section 6.7. Therefore, the local collision-checking algorithm searches for intersections between the piecewise linear path segments, represented by finite-length cylinders, and obstacles represented by bounded planes (see Figure 6-11). This search is accomplished by first finding the intersection point $p_{int}$ between the central axis of the cylinder and the obstacle using Equation 6-18. The intersection between the cylinder and plane is an ellipse with center $p_{int}$. As shown in Figure 6-11, the major axis of the ellipse is determined by the safety radius R and the inclination of the segment relative to the plane; the boundary test of Equation 6-19 is then applied to the intersection ellipse.

Sections 6.4 and 6.5.2 described a method for constructing collision-free paths in a cluttered environment. This section defines a method for selecting paths with the goal of constructing a complete environment map in minimum time.


Figure 6-11. Collision detection for the local path. The local collision detection algorithm incorporates a safety region which encompasses the path segments radially. A) The path segment, represented by a cylinder with radius R, intersects an obstacle described as a plane with normal vector n. B) A two-dimensional view rotated orthogonal to the plane containing the major axis of the intersecting ellipse.

For this situation, optimality cannot be proven because initially the environment is entirely unknown. Therefore, an antagonistic example can be constructed for any planner to provide a non-optimal result. The robotic coverage problem refers to optimal coverage of an environment with known obstacles [99]. Solutions to the robotic coverage problem generally involve decomposing the free space of the environment into connected regions. Each region of the environment is sensed with a back-and-forth lawnmower, or boustrophedon, pattern as shown in Figure 6-12A. The boustrophedon path is optimal for a two-dimensional holonomic vehicle under the assumption that passing through a small region of the environment is equivalent to sensing the region. The space decomposition process reduces the planning task to finding the optimal route with which to connect the individual paths. When the environment is unknown and the planning task must be directed by gathered sensor information, the problem is called sensor-based coverage or sensor-based exploration. Sensor-based coverage tasks typically operate under several strict assumptions, including a two-dimensional environment, an omnidirectional range sensor, and a


Figure 6-12. Motion planning for coverage strategies. Traditional motion planning strategies for environmental coverage are not well suited for vehicles with dynamic constraints and directional sensors. A) Robotic coverage tasks assume the environment is known and decompose the free space into manageable search spaces. B) Sensor-based coverage strategies often assume omnidirectional sensors.

robot capable of nonholonomic motion [100, 101] (see Figure 6-12B). Therefore, current sensor-based coverage algorithms do not address the problems inherent in mapping a three-dimensional environment with a dynamically constrained vehicle equipped with a directional sensor.

The coverage methodology used in this dissertation affects both the global and local planning strategies. The strategy is based on tracking all locations to which the vehicle has traveled and the attitudes from which individual locations have been sensed. The goal of the planner is to maximize the variance of these individual quantities. Therefore, the discrete vehicle locations and the vectors defining the location and orientation of sensed regions are tracked over time, as shown in Figure 6-13. The vectors defining sensed regions are constructed from the attitude of the vehicle during the sensing operation and the center of the field of view of the sensor. The total variance of the vehicle position is calculated as the trace of the covariance of the vehicle positions $q_{pos}$,
\[ \sigma^2_{pos} = \mathrm{tr}\left(\Sigma_{pos}\right) \]


Figure 6-13. Environmental coverage metric. Environment coverage is quantified by the variance of the position of the vehicle and sensing locations over the entire path. Sensing locations are defined by the vector originating from the end of the sensor field of view oriented back towards the sensor.

where $\Sigma_{pos}$ is the sample covariance of all vehicle position vectors. The same equation can compute the variance of the sensor pose vectors, $\sigma^2_{sen}$, given the covariance of the sensor pose vectors $\Sigma_{sen}$. The variance of the vehicle position $\sigma^2_{pos}$ is used to determine the global goal location by selecting the point in the environment that, when added to the current vehicle location history, maximizes $\sigma^2_{pos}$. This calculation is computed on a coarse grid of the vehicle environment for the purpose of computational efficiency. The global path planner constructs a path to this location with a greedy search and a coarse discretization of the path. The local path is determined by maximizing the variance of the sensor pose vectors $\sigma^2_{sen}$ over all candidate paths. As opposed to the quick, greedy search used for global path planning, the local sampling-based strategy generates a tree that more completely explores the environment. When a specified number of candidate paths are found, the path that maximizes $\sigma^2_{sen}$ is selected.
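The coverage metric can be sketched as the trace of the sample covariance of the visited positions, with goal selection by exhaustive maximization over candidate points (a simplified stand-in for the coarse-grid search described above):

```python
def total_position_variance(positions):
    # Trace of the sample covariance of the tracked positions: the
    # scalar coverage measure the planner maximizes.
    n = len(positions)
    dims = len(positions[0])
    means = [sum(p[k] for p in positions) / n for k in range(dims)]
    return sum(sum((p[k] - means[k]) ** 2 for p in positions) / (n - 1)
               for k in range(dims))

def best_goal(history, candidates):
    # Greedy goal selection: the candidate that, appended to the
    # vehicle location history, maximizes the total variance.
    return max(candidates,
               key=lambda c: total_position_variance(history + [c]))

# A far-away candidate increases the position variance more than a point
# inside the already-visited region, so it is chosen as the goal.
goal = best_goal([(0.0, 0.0), (1.0, 0.0)], [(0.5, 0.0), (10.0, 0.0)])
```

The same function applied to sensor pose vectors gives the local-path criterion σ²_sen described above.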

The safety radius incorporated into the local collision check ensures that the planned paths traverse the environment inside of a safe corridor. The optimal path tracking strategy seeks to find a time-optimal trajectory that follows the planned path within a distance r.

The commanded responses are modeled with low-order dynamics,
\[ \ddot{(\cdot)} = \omega_{n,(\cdot)}^2\left(u_{(\cdot)} - (\cdot)\right) - 2\zeta\,\omega_{n,(\cdot)}\,\dot{(\cdot)}, \qquad \tau\,\dot{(\cdot)} = u_{(\cdot)} - (\cdot) \tag{6-23} \]
where $\omega_{n,(\cdot)}$ is the natural frequency for the response, $\zeta$ is the damping ratio corresponding to the second-order response, $\tau$ is the time constant for the first-order response, and the new control inputs to the system are the commanded values $u_{(\cdot)}$. The optimization problem is formulated as the following nonlinear program (NLP),
\[ u^* = \underset{u}{\arg\min}\ J(u) = t_f^2 + \sum_{k=1}^{N}\delta\!\left(x_k, q(s)\right) \quad \text{s.t.:}\ x_{k+1} = x_k + f(x_k, u_k)\,dt,\quad x_N = q_f,\quad \delta_k < r \tag{6-24} \]
where $\delta(x_k, q(s))$ penalizes deviation of the trajectory from the planned path and the constraint $\delta_k < r$ keeps the trajectory within the safe corridor.



The trajectory is computed by solving the NLP of Equation 6-24. The flexibility of the planning framework allows for the optimal control formulation to be substituted with a MA implementation as described in Section 6.2.2. Both methodologies would incorporate dynamics into the final solution. The final planned trajectory is implemented, and the planning process is repeated when the vehicle arrives at $q_L$. This chapter has presented a framework that plans aircraft trajectories for complete sensor coverage of an unknown, uncertain environment. Results of the planning strategy are presented in Chapter 7.


Section 7.2 gives results of the plane fitting algorithm introduced in Chapter 3. The plane fitting algorithm is employed in the SLAM algorithm (Chapter 5) to produce the results in Section 7.3. Finally, results of the trajectory planning algorithm (Chapter 6) are presented in Section 7.4.

The plane fitting algorithm of Chapter 3 generates a low-dimensional environment representation of the pointwise sensor data provided by SFM (Chapter 2). The quality of the fit is investigated with respect to feature point noise in Section 7.2.1 and the inclusion of nonplanar obstacles in Section 7.2.2. The simulation environment employed for analyzing the plane fitting algorithm is shown in Figure 7-1.

Figure 7-1. Result of the plane fitting algorithm without image noise.


The environment shown in Figure 7-1 consists of polyhedral obstacles viewed by a camera at a given vantage point denoted by a vision frustum. The obstacles represent an urban environment, and the frustum indicates the field of view for a camera attached to a UAV navigating the environment. Feature points are added to the obstacle facades and checked for visibility according to a pinhole camera model. The visibility constraints account for environmental occlusion, sensor field of view, and angle of incidence. The angle of incidence constraint for a feature point is depicted in Figure 7-2, where feature points with an angle of incidence $\vartheta$ are deemed visible if $15^\circ \le \vartheta \le 75^\circ$. Lastly, all visible feature points are input into the plane fitting algorithm to generate the planar fit of the pointwise data.

Figure 7-2. Angle of incidence constraint. The angle of incidence, $\vartheta$, for feature point f is the angle between the feature point's position, $\vec{f}$, measured in the camera coordinate frame, and the normal direction, n, of the plane on which the feature point is situated.


Noise is applied to the feature point image coordinates prior to triangulation (Equation 2-14). Results of the plane fitting algorithm in the presence of noise are shown in Figure 7-3.

Figure 7-3. Results of the plane fitting algorithm in the presence of image noise. A) Plane fitting result with σ = 0.5 pixel image noise and a camera displacement of 25 ft. B) Plane fitting result with σ = 1.0 pixel image noise and a camera displacement of 25 ft. C) Plane fitting result with σ = 0.5 pixel image noise and a camera displacement of 15 ft. D) Plane fitting result with σ = 1.0 pixel image noise and a camera displacement of 15 ft.

PAGE 108

7-3B and 7-3D .Thesetworesultsshowthatincreasingthebaselinebetweencameravantagepointsproducesmoreaccuratetriangulationresultsleadingtoamorecompleteplanarrepresentation. 7-4 presentsplanettingresultswhennon-structuredobstaclesareincludedintheformofseveraltree-likeobjects.Theadditionalfeaturepointsgeneratespuriousplanarfeatures.SuchfeaturesareinaccuraterepresentationsofthephysicalworldandprovidepoorlandmarksfortheSLAMalgorithm.Specically,thespuriousplanesarenotnecessarilyobservablefrommultiplevantagepoints.Inordertoreducethenumberofspuriousplanarobstaclesinthemap,theSLAMalgorithmonlyincorporatesobstaclesthathavebeenre-observedaminimumnumberoftimes.Aminimumvalueof3re-observationswasfoundtoprovidesuccessfulmapresultsinthepresenceofspuriousplanarfeatures. 5 areperformedforthethreelterspresentedinthisdissertation:theRBPF,URBPF,andSR-URBPF. 108
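The re-observation gate described above can be outlined as a minimal sketch. The threshold of 3 re-observations follows the text; the class, method names, and landmark identifiers are hypothetical:

```python
from collections import defaultdict

MIN_REOBSERVATIONS = 3  # value the text reports as sufficient to suppress spurious planes

class LandmarkGate:
    """Admit a candidate planar landmark into the map only after it has
    been observed a minimum number of times (illustrative sketch)."""

    def __init__(self, min_obs=MIN_REOBSERVATIONS):
        self.min_obs = min_obs
        self.counts = defaultdict(int)      # observation count per candidate
        self.map_landmarks = set()          # landmarks admitted to the map

    def observe(self, landmark_id):
        """Record one observation; return True once the landmark is in the map."""
        self.counts[landmark_id] += 1
        if self.counts[landmark_id] >= self.min_obs:
            self.map_landmarks.add(landmark_id)
        return landmark_id in self.map_landmarks
```

A spurious plane seen from only one or two vantage points never reaches the threshold and is never admitted.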


Figure 7-4. Result of the plane fitting algorithm when nonplanar obstacles are included.

The filters were compared by performing Monte Carlo (MC) simulations in which the vehicle state measurement noise and sensor noise were randomly generated within the confines of the covariance matrices q and r, respectively. This process tests the average performance of the filters over many estimation processes. The simulation environment, shown in Figure 7-5A, consists of geometrically regular obstacles indicative of an urban environment. The aircraft navigates along a pre-computed trajectory generated by a high-fidelity nonlinear aircraft model. SFM, as discussed in Chapter 2, is simulated by distributing three-dimensional feature points on the building facades and ground. The feature points provide pointwise structure information to the plane fitting algorithm. The plane fitting algorithm, introduced in Chapter 3, generates observations as the vehicle navigates, as shown in Figure 7-5B. The vehicle state prediction occurs at 30 Hz. Though an IMU could provide a faster state-estimation rate, it is assumed that the IMU is fused with image processing
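The Monte Carlo procedure can be outlined as below. The covariance values, function names, and dimensions are placeholders and not the dissertation's settings; only the structure (zero-mean noise drawn from q and r, errors averaged over many runs) follows the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder diagonal covariances for the state-measurement noise (q)
# and the vision-sensor noise (r); the actual values are not these.
q = np.diag([0.5, 0.5, 0.5]) ** 2
r = np.diag([1.0, 1.0]) ** 2

def run_monte_carlo(n_runs, sim_one_run):
    """Average an error metric over many randomized estimation runs.

    sim_one_run(w, v) runs one simulation with state noise w and sensor
    noise v, returning an error metric for that run.
    """
    errors = []
    for _ in range(n_runs):
        w = rng.multivariate_normal(np.zeros(q.shape[0]), q)  # state noise sample
        v = rng.multivariate_normal(np.zeros(r.shape[0]), r)  # sensor noise sample
        errors.append(sim_one_run(w, v))
    return np.mean(errors, axis=0)
```

Each run draws fresh noise, so the averaged error reflects expected filter performance rather than a single realization.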


Figure 7-5. The vision-based flight simulation environment for testing the SLAM algorithm. A) The urban simulation environment is composed of geometrically regular obstacles representing buildings. A precomputed trajectory generated from a nonlinear aircraft model is shown circling the environment. B) As the aircraft navigates along the precomputed trajectory, sensor information is gathered. The gathered sensor information is restricted to a sensor cone defined by a maximum depth of sensing and horizontal and vertical fields of view. Three-dimensional feature points are distributed on the building faces and planes are fit to the pointwise data.


An example localization result is shown in Figure 7-6. The result, obtained with the URBPF approach, simulates a known environment by initializing the landmarks to their actual locations. This is equivalent to using the SLAM framework strictly for localizing the vehicle in the presence of noisy state measurements and noisy sensor measurements. The noise values, defined in Equations 5-3 and 5-4, were given the following standard deviations. The result (Figure 7-6) shows the filter is capable of producing very good localization results in the presence of considerable noise. The estimated trajectory given by the SLAM algorithm is defined by the particle with the largest weight, and the measured trajectory is calculated from the noisy state measurements without updates from the vision sensor information. The estimated trajectory quickly converges to the ground truth trajectory and is considerably more accurate than the model-based prediction over the entire trajectory. This shows the versatility of the SLAM algorithm in that it can be employed, if the structure of the environment is known, to provide accurate estimates of vehicle location despite very noisy sensor measurements.

Figure 7-6. Example localization result for the airborne SLAM simulation. A) The complete localization result with constructed map and estimated trajectory. B) The constructed map after a single loop through the environment. C) The localization result without the constructed map. D) A top-down view of the actual vehicle trajectory, the trajectory predicted from measured vehicle states only, and the trajectory estimated by the SLAM algorithm.
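Reporting the estimate from the highest-weight particle, as the text describes, reduces to a one-line selection. The function name and array layout are illustrative assumptions:

```python
import numpy as np

def estimated_state(particles, weights):
    """Return the state of the highest-weight particle, which the text
    uses as the SLAM estimate (illustrative sketch)."""
    return particles[int(np.argmax(weights))]
```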


Figure 7-7 shows the average position and orientation errors for the three SLAM filters along with the average error of the measured trajectory. The measured trajectory is the trajectory calculated purely from the aircraft kinematic model and velocity measurements, without updates from the environmental sensing. These graphs indicate the accuracy of the position and attitude estimation. The results show that the SLAM algorithms provide a significant decrease in estimation errors for aircraft position, roll angle, and pitch angle. The heading angle estimates, however, begin to diverge halfway through the simulation. The apparent sharp decreases in error for the measured trajectory can be attributed to the erratic behavior of the measured trajectory. The presented error plots show the absolute value of the errors; therefore, when the error changes from positive to negative it crosses the zero error axis and appears as a decrease in error. This sharp decrease should not be mistaken for acceptable estimation performance.


Figure 7-7. Average estimation error for the three SLAM filters and the nominal trajectory for 50 Monte Carlo runs. A) Estimation error in the North direction. B) Estimation error in the East direction. C) Estimation error in altitude. D) Estimation error in roll angle. E) Estimation error in pitch angle. F) Estimation error in heading angle.


Filter consistency is assessed using the normalized estimation error squared (NEES) [79], calculated according to […]. Particle filters of this type are known to eventually become inconsistent [103, 104]. The consistency of the filters, however, is an acceptable metric for assessing the robustness of the filter because the length of time that a filter remains consistent relates to how well the filter performs over long time periods. The results presented in Figure 7-8 show that the URBPF provides the best results in terms of filter consistency. This is likely due to the fact that the URBPF does not require linearization as does the RBPF. Also, it is believed
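The NEES at one time step has a standard form: the squared estimation error weighted by the inverse of the filter's reported covariance. This sketch follows that textbook definition rather than the dissertation's exact expression:

```python
import numpy as np

def nees(x_true, x_est, P):
    """Normalized estimation error squared at one time step: e' P^{-1} e,
    where e is the estimation error and P is the filter's error covariance.
    For a consistent filter, the NEES averaged over Monte Carlo runs stays
    near the state dimension."""
    e = np.asarray(x_true, float) - np.asarray(x_est, float)
    return float(e @ np.linalg.solve(P, e))
```

Averaging this quantity over the 50 Monte Carlo runs gives the curves of the kind plotted in Figure 7-8.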


Figure 7-8. Filter consistency results. A) Average NEES for all three SLAM filters over 50 Monte Carlo runs. B) Average NEES for the URBPF filter for 50 Monte Carlo runs with 100 and 1000 particles.

The computational efficiency of the filters is compared in Table 7-1. Since the filters are implemented with non-optimized MATLAB code, the efficiency results have been presented as computation time normalized to the RBPF result for N particles. The comparison includes results for N, 2N, and 4N particles to compare the effects of increased sampling on filter efficiency. The efficiency results show that the RBPF filter is considerably faster than the other two filters and that the filters scale linearly with the number of particles. The inefficiency of the unscented filters is due to the sigma point propagation step, in which 2L+1 sigma points must be transformed according to the system dynamics and sensor model. The URBPF, however, is a more consistent filter, meaning that it is more robust with respect to the length of the estimation time. Therefore, fewer particles could be used in the URBPF to produce results comparable to the RBPF over the same time period. This decrease in
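The 2L+1 sigma points that dominate the unscented filters' cost are generated by the standard unscented transform. This is a generic sketch of that construction (here with a single spread parameter kappa), not the dissertation's particular parameterization:

```python
import numpy as np

def sigma_points(x, P, kappa=0.0):
    """Generate the 2L+1 sigma points of the unscented transform for a
    state x of dimension L with covariance P. Each point must later be
    propagated through the nonlinear dynamics and sensor models, which
    is the dominant cost of the unscented filters."""
    L = len(x)
    S = np.linalg.cholesky((L + kappa) * P)  # matrix square root of scaled P
    pts = [np.asarray(x, float)]             # central sigma point
    for i in range(L):
        pts.append(x + S[:, i])              # positive perturbation along column i
        pts.append(x - S[:, i])              # negative perturbation along column i
    return np.array(pts)                     # shape (2L + 1, L)
```

Doubling the state dimension roughly doubles the number of nonlinear propagations per particle, which matches the observed slowdown of the URBPF and SR-URBPF relative to the RBPF.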


Table 7-1. Efficiency comparison of the SLAM filters.

    Number of particles    RBPF    URBPF    SR-URBPF
    N                      1.00    2.06     1.60
    2N                     2.07    3.97     2.85
    4N                     3.98    7.63     5.64

The complete airborne SLAM result is shown in Figure 7-9, where the estimated trajectory given by the SLAM algorithm is defined by the particle with the largest weight at each time step. The error covariances for this solution are given in Equation 7-2.

The trajectory planning results employ the planning strategy of Chapter 6. The trajectories were planned within the simulation environment presented in Section 7.3.1. The constructed maps used in the planning process are exact, representing a SLAM solution with no error.


Figure 7-9. An example result for the complete airborne SLAM solution using the URBPF. A) The complete SLAM result with constructed map and estimated trajectory. B) The constructed map after a single loop through the environment. C) The SLAM result without the constructed map. D) A top-down view of the actual vehicle trajectory, the trajectory predicted from measured vehicle states only, and the trajectory estimated by the SLAM algorithm.

The global planning strategy assumes that once a region has been reached it has been sensed. This, however, is not correct for a directed sensor such as an onboard camera. In this case, direct paths through the environment will not provide the needed information to build a complete map. The local planning strategy incorporates the sensor physics by tracking vehicle poses and selecting paths which maximize the variance in sensing vantages. Therefore, paths are selected according to the amount of new information that will be provided by traversing the path. The local planning process occurs within the vision cone defined by
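The vantage-variance idea above can be sketched with a toy scoring rule. Representing a vantage by a single viewing-angle value, and the function names themselves, are simplifying assumptions made for illustration; the dissertation's planner operates on full vehicle poses:

```python
import numpy as np

def vantage_variance_score(candidate_angles, visited_angles):
    """Hypothetical path score: variance of sensing vantages (viewing
    angles) if the candidate path were traversed in addition to the
    vantages already used."""
    all_angles = np.concatenate([visited_angles, candidate_angles])
    return float(np.var(all_angles))

def select_path(candidates, visited_angles):
    """Pick the candidate path that maximizes vantage variance, i.e. the
    one expected to contribute the most new sensing information."""
    return max(candidates, key=lambda c: vantage_variance_score(c, visited_angles))
```

A candidate that revisits an already-used vantage barely changes the variance and loses to one that views the scene from a new direction.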


Figure 7-10. Example paths for environment coverage. A) Example result for the global planning strategy. B) Example result for the complete planning strategy. C) Top-down view of the global planning strategy result. D) Top-down view of the complete planning strategy result.

In order to quantify the benefit of the overall planning strategy, Figure 7-10 presents trajectory results for cases when the planning process is restricted to the global planner and when the complete planning strategy is employed. The first case, where just the


The two strategies are compared in Figure 7-11 in terms of total obstacle surface area viewed. The result shows that adding the local planning strategy increases environment coverage despite reducing the overall distance traversed to the first global goal location.

Figure 7-11. Comparison of environment coverage results. The amount of obstacle surface area viewed during the traversal of the paths given in Figure 7-10.


The nonlinear program (NLP) formulated in Chapter 6 was used to convert Dubins paths to aircraft trajectories. The objective function of the NLP minimizes the distance between the trajectory and the given paths. Therefore, the NLP defines a control strategy for optimal path following. Results of the optimal control problem are presented in Figure 7-12.

Figure 7-12. Resultant trajectories from the path tracking optimization. A) Results of the optimal control strategy used to convert Dubins paths to aircraft trajectories. B) Top-down view of the optimal path following result.

The optimal control results are generated by supplying the Dubins path as the initial condition to the optimization. The Dubins paths specify initial conditions for the states V, x, y, h, and the orientation angles. The angle of attack, α, was calculated for each segment of the Dubins path by calculating the trim angle of attack according to the vehicle's equations of motion (Equation 6-21). The optimal control results show that the aircraft is capable of tracking the Dubins paths in the lateral direction. The longitudinal tracking, however, produces poorer results. It was found that the optimization did not always converge, generating poor results for aggressive maneuvers. Additionally, the optimizations required computation time on the order of 20 to 60 seconds. For these reasons, it is believed that a maneuver automaton (MA) strategy would provide a better solution for incorporating dynamics into the planning strategy.
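A trim angle of attack for a level path segment can be sketched with the textbook linear-lift approximation. This is not the dissertation's Equation 6-21; the lift model, its coefficients, and the function name are assumptions made for illustration:

```python
def trim_alpha(W, rho, V, S, CL0, CLalpha):
    """Trim angle of attack (rad) for steady, level flight under a linear
    lift model CL = CL0 + CLalpha * alpha (a textbook approximation).
    At trim, lift equals weight: W = 0.5 * rho * V**2 * S * CL.

    W       : aircraft weight
    rho     : air density
    V       : airspeed
    S       : wing reference area
    CL0     : lift coefficient at zero angle of attack
    CLalpha : lift-curve slope (per rad)
    """
    CL = 2.0 * W / (rho * V**2 * S)      # lift coefficient required at trim
    return (CL - CL0) / CLalpha          # solve the linear lift model for alpha
```

For a banked Dubins arc the required lift (and hence α) grows with load factor, which is one reason aggressive maneuvers stress the optimization.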

REFERENCES

[1] Rudakevych, P. and Ciholas, M., "PackBot EOD Firing System," Proceedings of the SPIE International Society for Optical Engineering, Vol. 5804, 2005, pp. 758-771.
[2] Cox, N., "The Mars Exploration Rovers: Hitting the Road on Mars," 16th International Federation of Automatic Control (IFAC) World Congress, Prague, Czech Republic, July 3-12, 2005.
[3] Forlizzi, J., and DiSalvo, C., "Service Robots in the Domestic Environment: A Study of the Roomba Vacuum in the Home," Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, Salt Lake City, Utah, 2006, pp. 258-265.
[4] Office of the Secretary of Defense, United States, Unmanned Aircraft Systems Roadmap 2005-2030, August 2005.
[5] Kehoe, J., Causey, R., Abdulrahim, M., and Lind, R., "Waypoint Navigation for a Micro Air Vehicle using Vision-Based Attitude Estimation," Proceedings of the 2005 AIAA Guidance, Navigation, and Control Conference, San Francisco, CA, August 2005.
[6] Frew, E., Xiao, X., Spry, S., McGee, T., Kim, Z., Tisdale, J., Sengupta, R., and Hedrick, J., "Flight Demonstrations of Self-directed Collaborative Navigation of Small Unmanned Aircraft," Invited paper, AIAA 3rd Unmanned Unlimited Technical Conference, Workshop, & Exhibit, Chicago, IL, September 2004.
[7] Thrun, S., "Robotic Mapping: A Survey," in G. Lakemeyer and B. Nebel, editors, Exploring Artificial Intelligence in the New Millennium, Morgan Kaufmann, 2002.
[8] Smith, R., Self, M., and Cheeseman, P., "Estimating Uncertain Spatial Relationships in Robotics," in I. J. Cox and G. T. Wilfong, editors, Autonomous Robot Vehicles, pages 167-193, Springer-Verlag, 1990.
[9] Smith, R., and Cheeseman, P., "On the Representation and Estimation of Spatial Uncertainty," International Journal of Robotics Research, Vol. 5, Issue 4, 1987, pp. 56-68.
[10] Dissanayake, G., Newman, P., Clark, S., Durrant-Whyte, H., and Csorba, M., "A Solution to the Simultaneous Localization and Map Building (SLAM) Problem," IEEE Transactions on Robotics and Automation, Vol. 17, No. 3, 2001.
[11] Tardós, J. D., Neira, J., Newman, P. M., and Leonard, J. J., "Robust Mapping and Localization in Indoor Environments using Sonar Data," The International Journal of Robotics Research, 2002.
[12] Neira, J., Ribeiro, I., and Tardós, J. D., "Mobile Robot Localization and Map Building using Monocular Vision," International Symposium on Intelligent Robotics Systems, Stockholm, Sweden, 1997.


[13] Hayet, J. B., Lerasle, F., and Devy, M., "A Visual Landmark Framework for Indoor Mobile Robot Navigation," IEEE International Conference on Robotics and Automation, Washington, DC, May 2002.
[14] Davison, A., "Real-Time Simultaneous Localization and Mapping with a Single Camera," IEEE International Conference on Computer Vision, Nice, France, 2003.
[15] Molton, N., Davison, A., and Reid, I., "Locally Planar Patch Features for Real-Time Structure from Motion," British Machine Vision Conference, Kingston University, London, 2004.
[16] Davison, A., Cid, Y., and Kita, N., "Real-Time 3D SLAM with Wide-Angle Vision," IFAC/EURON Symposium on Intelligent Autonomous Vehicles, Instituto Superior Tecnico, Lisboa, Portugal, 2004.
[17] Davison, A., and Kita, N., "3D Simultaneous Localization and Map-Building Using Active Vision for a Robot Moving on Undulating Terrain," IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, Hawaii, 2001.
[18] Kim, J. H. and Sukkarieh, S., "Airborne Simultaneous Localisation and Map Building," IEEE International Conference on Robotics and Automation, Taipei, Taiwan, 2003.
[19] Kim, J. and Sukkarieh, S., "Real-Time Implementation of Airborne Inertial-SLAM," Robotics and Autonomous Systems, Issue 55, pp. 62-71, 2007.
[20] Langelaan, J. and Rock, S., "Passive GPS-Free Navigation for Small UAVs," IEEE Aerospace Conference, Big Sky, Montana, 2005.
[21] Langelaan, J. and Rock, S., "Navigation of Small UAVs Operating in Forests," AIAA Guidance, Navigation, and Control Conference, Providence, RI, 2004.
[22] Langelaan, J. and Rock, S., "Towards Autonomous UAV Flight in Forests," AIAA Guidance, Navigation, and Control Conference, San Francisco, CA, 2005.
[23] Montemerlo, M., Thrun, S., Koller, D., and Wegbreit, B., "FastSLAM: A Factored Solution to the Simultaneous Localization and Mapping Problem," Proceedings of the AAAI National Conference on Artificial Intelligence, 2002.
[24] Weingarten, J., and Siegwart, R., "EKF-based 3D SLAM for Structured Environment Reconstruction," in Proceedings of IROS, Edmonton, Canada, August 2-6, 2005.
[25] Brunskill, E., and Roy, N., "SLAM using Incremental Probabilistic PCA and Dimensionality Reduction," IEEE International Conference on Robotics and Automation, April 2005.


[26] Choset, H., Lynch, K., Hutchinson, S., Kantor, G., Burgard, W., Kavraki, L., and Thrun, S., "Principles of Robot Motion: Theory, Algorithms, and Implementations," MIT Press, Cambridge, MA, 2005.
[27] Kavraki, L. E., Svestka, P., Latombe, J. C., and Overmars, M. H., "Probabilistic Roadmaps for Path Planning in High-Dimensional Configuration Spaces," IEEE Transactions on Robotics and Automation, Vol. 12, No. 4, 1996, pp. 566-580.
[28] Hsu, D., "Randomized Single-Query Motion Planning in Expansive Spaces," Ph.D. thesis, Department of Computer Science, Stanford University, 2000.
[29] Hsu, D., Latombe, J. C., and Motwani, R., "Path Planning in Expansive Configuration Spaces," International Journal of Computational Geometry and Applications, Vol. 9, No. 4-5, 1998, pp. 495-512.
[30] Hsu, D., Kindel, R., Latombe, J. C., and Rock, S., "Randomized Kinodynamic Motion Planning with Moving Obstacles," International Journal of Robotics Research, Vol. 21, No. 3, 2002, pp. 233-255.
[31] Kuffner, J. J. and LaValle, S. M., "RRT-Connect: An Efficient Approach to Single-Query Path Planning," in IEEE International Conference on Robotics and Automation, 2000.
[32] LaValle, S. M. and Kuffner, J. J., "Randomized Kinodynamic Planning," International Journal of Robotics Research, Vol. 20, No. 5, 2001, pp. 378-400.
[33] Dubins, L. E., "On Curves of Minimal Length with a Constraint on Average Curvature, and with Prescribed Initial and Terminal Positions and Tangents," American Journal of Mathematics, Vol. 79, No. 3, July 1957, pp. 497-516.
[34] Shkel, A. and Lumelsky, V., "Classification of the Dubins Set," Robotics and Autonomous Systems, Vol. 34, 2001, pp. 179-202.
[35] Frazzoli, E., "Robust Hybrid Control of Autonomous Vehicle Motion Planning," Ph.D. thesis, MIT, June 2001.
[36] Frazzoli, E., Dahleh, M., and Feron, E., "Robust Hybrid Control for Autonomous Vehicle Motion Planning," Technical Report, LIDS-P-2468, 1999.
[37] Frazzoli, E., Dahleh, M., and Feron, E., "Maneuver-based Motion Planning for Nonlinear Systems With Symmetries," IEEE Transactions on Robotics, Vol. 21, No. 6, December 2005.
[38] Schouwenaars, T., Mettler, B., Feron, E., and How, J., "Robust Motion Planning Using a Maneuver Automaton with Built-in Uncertainties," Proceedings of the IEEE American Control Conference, June 2003, pp. 2211-2216.
[39] Bryson, A. E., "Dynamic Optimization," Addison Wesley Longman, Inc., 1999.


[40] Pfeiffer, F. and Johanni, R., "A Concept for Manipulator Trajectory Planning," IEEE Journal of Robotics and Automation, Vol. 3, No. 2, 1987, pp. 115-123.
[41] Slotine, J.-J. and Yang, S., "Improving the Efficiency of Time-Optimal Path-Following Algorithms," IEEE Transactions on Robotics and Automation, Vol. 5, No. 1, 1989, pp. 118-124.
[42] DeSouza, G. and Kak, A., "Vision for Mobile Robot Navigation: A Survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 2, February 2002.
[43] Ma, Y., Soatto, S., Kosecka, J., and Sastry, S., "An Invitation to 3-D Vision: From Images to Geometric Models," Springer-Verlag New York, Inc., 2004.
[44] Oliensis, J., "A Critique of Structure-from-Motion Algorithms," Computer Vision and Image Understanding, 80:172-214, 2000.
[45] Tomasi, C. and Kanade, T., "Shape and Motion from Image Streams Under Orthography," International Journal of Computer Vision, Vol. 9, No. 2, 1992, pp. 137-154.
[46] Prazenica, R., Watkins, A., Kurdila, A., Ke, Q., and Kanade, T., "Vision-Based Kalman Filtering for Aircraft State Estimation and Structure from Motion," AIAA Guidance, Navigation, and Control Conference and Exhibit, San Francisco, California, August 2005.
[47] Longuet-Higgins, H. C., "A Computer Algorithm for Reconstructing a Scene from Two Projections," Nature, Vol. 293, Sept. 1981, pp. 133-135.
[48] Hartley, R., "In Defense of the Eight-Point Algorithm," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, No. 6, June 1997.
[49] Broida, T. and Chellappa, R., "Estimation of Object Motion Parameters from Noisy Images," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 8, No. 1, Jan. 1986, pp. 90-99.
[50] Matthies, L., Szeliski, R., and Kanade, T., "Kalman Filter-based Algorithms for Estimating Depth from Image Sequences," International Journal of Computer Vision, Vol. 3, 1989, pp. 209-236.
[51] Chiuso, A., Favaro, P., Jin, H., and Soatto, S., "Structure from Motion Causally Integrated Over Time," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 4, April 2002.
[52] Thomas, J. and Oliensis, J., "Dealing with Noise in Multiframe Structure from Motion," Computer Vision and Image Understanding, Vol. 76, No. 2, November 1999, pp. 109-124.


[53] Webb, T., Prazenica, R., Kurdila, A., and Lind, R., "Vision-Based State Estimation for Autonomous Micro Air Vehicles," Proceedings of the AIAA Guidance, Navigation, and Control Conference, AIAA-2004-5359, Providence, RI, August 2004.
[54] Webb, T., Prazenica, R., Kurdila, A., and Lind, R., "Vision-Based State Estimation for Autonomous Uninhabited Aerial Vehicles," Proceedings of the AIAA Guidance, Navigation, and Control Conference, AIAA-2005-5869, San Francisco, CA, August 2005.
[55] Kehoe, J., Causey, R., Arvai, A., and Lind, R., "Partial Aircraft State Estimation from Optical Flow Using Non-Model-Based Optimization," Proceedings of the 2006 IEEE American Control Conference, Minneapolis, MN, June 2006.
[56] Kehoe, J., Watkins, A., Causey, R., and Lind, R., "State Estimation using Optical Flow from Parallax-Weighted Feature Tracking," Proceedings of the 2006 AIAA Guidance, Navigation, and Control Conference, Keystone, CO, August 2006.
[57] Iyer, R. V., He, Z., and Chandler, P. R., "On the Computation of the Ego-Motion and Distance to Obstacles for a Micro Air Vehicle," Proceedings of the 2006 IEEE American Control Conference, Minneapolis, MN, June 2006.
[58] Latombe, J. C., "Robot Motion Planning," Kluwer Academic Publishers, 1991.
[59] Mahon, I., and Williams, S., "Three-Dimensional Robotic Mapping," Proceedings of the 2003 Australasian Conference on Robotics and Automation, Brisbane, Queensland, 2003.
[60] Thrun, S., Burgard, W., and Fox, D., "A Real-Time Algorithm for Mobile Robot Mapping With Applications to Multi-Robot and 3D Mapping," in IEEE International Conference on Robotics and Automation, 2000.
[61] Davison, A., and Murray, D., "Simultaneous Localization and Map-Building Using Active Vision," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 7, 2002, pp. 865-880.
[62] Choset, H. and Nagatani, K., "Topological Simultaneous Localization and Mapping (SLAM): Toward Exact Localization Without Explicit Localization," IEEE Transactions on Robotics and Automation, Vol. 17, No. 2, April 2001.
[63] Kuipers, B. and Byun, Y., "A Robot Exploration and Mapping Strategy Based on a Semantic Hierarchy of Spatial Representations," Journal of Robotics and Autonomous Systems, 8:47-63, 1991.
[64] Thrun, S., Gutmann, J.-S., Fox, D., Burgard, W., and Kuipers, B., "Integrating Topological and Metric Maps for Mobile Robot Navigation: A Statistical Approach," in Proceedings of the National Conference on Artificial Intelligence (AAAI), 1998.
[65] Elfes, A., "Sonar-Based Real-World Mapping and Navigation," IEEE Journal of Robotics and Automation, Vol. RA-3, No. 3, June 1987.


[66] Elfes, A., "Using Occupancy Grids for Mobile Robot Perception and Navigation," Computer, Vol. 22, Issue 6, 1989, pp. 46-57.
[67] Castellanos, J., Tardos, J., and Schmidt, G., "Building a Global Map of the Environment of a Mobile Robot: The Importance of Correlations," IEEE International Conference on Robotics and Automation, Albuquerque, New Mexico, April 1997.
[68] Surmann, H., Nuchter, A., and Hertzberg, J., "An Autonomous Mobile Robot with a 3D Laser Range Finder for 3D Exploration and Digitalization of Indoor Environments," Robotics and Autonomous Systems, Vol. 45, 2003, pp. 181-198.
[69] Liu, Y., Emery, R., Chakrabarti, D., Burgard, W., and Thrun, S., "Using EM to Learn 3D Models with Mobile Robots," in Proceedings of the International Conference on Machine Learning, 2001.
[70] Martin, C., and Thrun, S., "Online Acquisition of Compact Volumetric Maps with Mobile Robots," in IEEE International Conference on Robotics and Automation, Washington, DC, 2002.
[71] Jolliffe, I., "Principal Component Analysis," Springer, ISBN 0387954422, 2002.
[72] MacQueen, J. B., "Some Methods for Classification and Analysis of Multivariate Observations," Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, 1967, pp. 281-297.
[73] Hamerly, G., and Elkan, C., "Learning the k in k-means," Neural Information Processing Systems, 15, 2004.
[74] Stentz, A., "Map-Based Strategies for Robot Navigation in Unknown Environments," Proceedings of AAAI Spring Symposium on Planning with Incomplete Information for Robot Problems, 1996.
[75] Bailey, T. and Durrant-Whyte, H., "Simultaneous Localization and Mapping (SLAM): Part I - The Essential Algorithms," Robotics and Automation Magazine, June 2006.
[76] Guivant, J., and Nebot, E., "Optimization of the Simultaneous Localization and Map-Building Algorithm for Real-Time Implementation," IEEE Transactions on Robotics and Automation, Vol. 17, No. 3, June 2001.
[77] Williams, S., Newman, P., Dissanayake, G., and Durrant-Whyte, H. F., "Autonomous Underwater Simultaneous Localization and Map Building," Proceedings of the IEEE International Conference on Robotics and Automation, San Francisco, CA, April 2000.
[78] Bailey, T. and Durrant-Whyte, H., "Simultaneous Localization and Mapping (SLAM): Part II - State of the Art," Robotics and Automation Magazine, September 2006.


[79] Bar-Shalom, Y., Li, X. R., and Kirubarajan, T., Estimation with Applications to Tracking and Navigation, John Wiley and Sons, 2001.
[80] Kalman, R. E., "A New Approach to Linear Filtering and Prediction Problems," Transactions of the ASME, Journal of Basic Engineering, 1960, pp. 34-45.
[81] "Algorithms for Multiple Target Tracking," American Scientist, Vol. 80, No. 2, 1992, pp. 128-141.
[82] Arulampalam, M. S., Maskell, S., Gordon, N., and Clapp, T., "A Tutorial on Particle Filters for Online Nonlinear/Non-Gaussian Bayesian Tracking," Vol. 50, No. 2, 2002, pp. 174-188.
[83] Julier, S. and Uhlmann, J., "A New Extension of the Kalman Filter to Nonlinear Systems," SPIE AeroSense Symposium, April 21-24, 1997.
[84] van der Merwe, R. and Wan, E., "The Square-Root Unscented Kalman Filter for State and Parameter Estimation," International Conference on Acoustics, Speech, and Signal Processing, Salt Lake City, Utah, May 2001.
[85] Liu, J. and Chen, R., "Sequential Monte Carlo Methods for Dynamical Systems," Journal of the American Statistical Association, Vol. 93, 1998, pp. 1032-1044.
[86] Murphy, K., "Bayesian Map Learning in Dynamic Environments," in Advances in Neural Information Processing Systems, Vol. 12, 2000, pp. 1015-1021.
[87] Montemerlo, M., Thrun, S., Koller, D., and Wegbreit, B., "FastSLAM 2.0: An Improved Particle Filtering Algorithm for Simultaneous Localization and Mapping that Provably Converges," International Joint Conference on Artificial Intelligence, Acapulco, Mexico, 2003.
[88] Li, M., Hong, B., Cai, Z., and Luo, R., "Novel Rao-Blackwellized Particle Filter for Mobile Robot SLAM Using Monocular Vision," Journal of Intelligent Technology, Vol. 1, No. 1, 2006.
[89] Richards, A. and How, J., "Aircraft Trajectory Planning With Collision Avoidance Using Mixed Integer Linear Programming," Proceedings of the American Control Conference, Anchorage, AK, May 2002.
[90] Kuwata, Y. and How, J., "Three Dimensional Receding Horizon Control for UAVs," AIAA Guidance, Navigation, and Control Conference and Exhibit, Providence, Rhode Island, August 2004.
[91] Bellingham, J., Richards, A., and How, J., "Receding Horizon Control of Autonomous Aerial Vehicles," Proceedings of the American Control Conference, Anchorage, AK, May 2002.


[92] Henshaw, C., "A Unification of Artificial Potential Function Guidance and Optimal Trajectory Planning," Advances in the Astronautical Sciences, Vol. 121, 2005, pp. 219-233.
[93] Barraquand, J. and Latombe, J. C., "Nonholonomic Multibody Mobile Robots: Controllability and Motion Planning in the Presence of Obstacles," Algorithmica, Vol. 10, 1993, pp. 121-155.
[94] Frazzoli, E., Dahleh, M., and Feron, E., "Real-Time Motion Planning for Agile Autonomous Vehicles," AIAA Guidance, Navigation, and Control Conference and Exhibit, Denver, CO, August 2000.
[95] Tang, Z. and Ozguner, U., "Motion Planning for Multitarget Surveillance with Mobile Sensor Agents," IEEE Transactions on Robotics, Vol. 21, No. 5, October 2005.
[96] Laumond, J.-P., Jacobs, P., Taix, M., and Murray, R. M., "A Motion Planner for Nonholonomic Mobile Robots," IEEE Transactions on Robotics and Automation, Vol. 10, No. 5, October 1994.
[97] Hwangbo, M., Kuffner, J., and Kanade, T., "Efficient Two-phase 3D Motion Planning for Small Fixed-wing UAVs," IEEE International Conference on Robotics and Automation, Roma, Italy, April 2007.
[98] Shanmugavel, M., Tsourdos, A., Zbikowski, R., and White, B. A., "3D Dubins Sets Based Coordinated Path Planning for Swarm UAVs," AIAA Guidance, Navigation, and Control Conference and Exhibit, Keystone, Colorado, August 2006.
[99] Choset, H., "Coverage for Robotics - A Survey of Recent Results," Annals of Mathematics and Artificial Intelligence, Vol. 31, No. 1-4, 2001, pp. 113-126.
[100] Choset, H., Walker, S., Eiamsa-Ard, K., and Burdick, J., "Sensor-Based Exploration: Incremental Construction of the Hierarchical Generalized Voronoi Graph," The International Journal of Robotics Research, Vol. 19, No. 2, February 2000, pp. 126-148.
[101] Taylor, C. J. and Kriegman, D. J., "Vision-Based Motion Planning and Exploration Algorithms for Mobile Robots," IEEE Transactions on Robotics and Automation, Vol. 14, No. 3, June 1998.
[102] Holmstrom, K., "TOMLAB - An Environment for Solving Optimization Problems in MATLAB," Proceedings for the Nordic MATLAB Conference '97, Stockholm, Sweden, 1997.
[103] Bailey, T., Nieto, J., and Nebot, E., "Consistency of the FastSLAM Algorithm," IEEE International Conference on Robotics and Automation, 2006.
[104] Bailey, T., Nieto, J., and Nebot, E., "Consistency of the EKF-SLAM Algorithm," IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006.

BIOGRAPHICAL SKETCH

Adam Scott Watkins was born in Orlando, Florida on November 11, 1980. After graduating from Lake Mary High School, Adam attended the University of Florida, receiving a Bachelor of Science in Mechanical Engineering in 2003. Adam continued his education at the University of Florida, receiving a Master of Science in Mechanical Engineering in 2005. Upon obtaining his MS, Adam pursued a doctoral degree under the advisement of Dr. Richard Lind. Adam's research involves the control of unmanned systems, and he is prepared to graduate with his Ph.D. in the summer of 2007. Upon graduation, Adam will begin work at the Johns Hopkins University Applied Physics Lab, continuing his research in the field of unmanned systems.