
Multi-Vehicle Cooperative Control for Vision-Based Environment Mapping

Permanent Link: http://ufdc.ufl.edu/UFE0019741/00001

Material Information

Title: Multi-Vehicle Cooperative Control for Vision-Based Environment Mapping
Physical Description: 1 online resource (48 p.)
Language: english
Creator: Branch, Eric Alan
Publisher: University of Florida
Place of Publication: Gainesville, Fla.
Publication Date: 2007

Subjects

Subjects / Keywords: cooperative, mapping, mav, planning, vision
Mechanical and Aerospace Engineering -- Dissertations, Academic -- UF
Genre: Aerospace Engineering thesis, M.S.
bibliography   ( marcgt )
theses   ( marcgt )
government publication (state, provincial, territorial, dependent)   ( marcgt )
born-digital   ( sobekcm )
Electronic Thesis or Dissertation

Notes

Abstract: Flight within urban environments is currently achievable using small and agile aircraft; however, path planning for such vehicles is difficult due to the unknown location of obstacles. This thesis introduces a multi-aircraft formulation to the mapping approach. The problem addressed is trajectory generation for the mapping of an unknown, urban environment by a distributed team of vision-based Micro Air Vehicles (MAVs). Most mission scenarios require a vehicle to know its position relative to its surroundings in order to assure safe navigation and successful mission planning. A vision sensor can be used to infer three-dimensional information from a two-dimensional image for use in generating an environment map. Structure from Motion (SFM) is employed to solve the problem of inferring three-dimensional information from two-dimensional image data. No global information is available in the unmapped regions, so the map must be constructed incrementally and safe trajectories must be planned on-line. A feature point data set can be reduced to a set of geometric primitives if the environment can be well estimated by a geometric model. In the case of an urban environment, the scene can be well represented by planar primitives. The trajectories need to be planned to gather the information required to construct an optimal map. An efficient mapping can be accomplished by maximizing environment coverage and vehicle distribution. The distribution and coverage criteria are governed by an augmented cost function which takes into account the distance between mapped areas and all other aircraft. Maximizing this cost function automatically distributes aircraft throughout a region and disperses them around regions that remain to be mapped. Also, the mapping approach uses maneuvering capabilities to determine the paths for each aircraft. This determination results from utilizing branches from tree structures that represent achievable maneuvers. Such an approach inherently alters the path between aircraft that may have different turn radii or climb rates.
General Note: In the series University of Florida Digital Collections.
General Note: Includes vita.
Bibliography: Includes bibliographical references.
Source of Description: Description based on online resource; title from PDF title page.
Source of Description: This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Statement of Responsibility: by Eric Alan Branch.
Thesis: Thesis (M.S.)--University of Florida, 2007.
Local: Adviser: Lind, Richard C.

Record Information

Source Institution: UFRGP
Rights Management: Applicable rights reserved.
Classification: lcc - LD1780 2007
System ID: UFE0019741:00001



This item has the following downloads:


Full Text





MULTI-VEHICLE COOPERATIVE CONTROL FOR VISION-BASED ENVIRONMENT
MAPPING

By

ERIC ALAN BRANCH

A THESIS PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
MASTER OF SCIENCE

UNIVERSITY OF FLORIDA

2007

































© 2007 Eric Alan Branch































To my family and friends,

whose patience and support made this possible









ACKNOWLEDGMENTS

This work was supported in part by the Air Force Research Laboratory and the Air Force Office of Scientific Research under grants 12-03-1-0381 and 1-1-0170.

I want to acknowledge and thank AFRL-MN and Eglin Air Force Base for the funding of this project. I would also like to sincerely thank my advisor, Dr. Rick Lind, for his guidance and support throughout my time at the University of Florida. Special thanks also to my supervisory committee members, Dr. Warren Dixon and Dr. Carl Crane, for their time and consideration. This work would not be possible without the members of the Flight Controls Lab: Adam Watkins, Joe Kehoe, Ryan Causey, Daniel Grant and Mujahid Abdulrahim, who have always been willing to share their knowledge and experience. Finally, I thank my parents, Mark and li' Branch, who always encourage and support me in all my endeavors.











TABLE OF CONTENTS



ACKNOWLEDGMENTS . . 4

LIST OF FIGURES . . 7

ABSTRACT . . 8

CHAPTER

1 INTRODUCTION . . 10

1.1 Problem Statement . . 10
1.2 Motivation. . . 10
1.3 Multi-robot Systems . . 12
1.4 System Architecture. . . 13

2 COMPUTER VISION . . 15

2.1 Camera . . 15
2.2 Feature Point Detection . . 16
2.3 Feature Point Tracking . . 17
2.4 Two-view Image Geometry . . 18
2.4.1 Epipolar Constraint . . 19
2.4.2 Structure from Motion . . 20

3 DIMENSIONALITY REDUCTION . . 22

3.1 Introduction . . 22
3.2 Environment Representation . . 24
3.2.1 Spatial Decomposition/Occupancy Grid . . 24
3.2.2 Topological Representation . . 25
3.2.3 Geometric Maps. . . 25
3.3 Data Clustering . . 26
3.3.1 K-means . . 26
3.3.2 PCA . . 27
3.3.3 Plane Fitting . . 28
3.4 Incremental Map Building . . 29

4 MAPPING CRITERIA . . 30

4.1 Introduction . . 30
4.2 Coverage . . 30
4.2.1 Random Movement Method . . 30
4.2.2 Frontier-Based Exploration . . 31
4.2.3 Seek-Open Space Algorithm . . 31
4.2.4 Mapping Criteria . . 32












5 TRAJECTORY PLANNING . . 34

5.1 Introduction . . 34
5.2 Rapidly Exploring Random Tree . . 34
5.3 Sampling . . 36
5.4 Collision Avoidance. . . 37

6 SIMULATION RESULTS . . 39

7 CONCLUSION . . 43


7.1 Summary . . 43
7.2 Future Direction. . . 44

REFERENCES . . 45

BIOGRAPHICAL SKETCH . . 48











LIST OF FIGURES

Figure page

1-1 Micro Air Vehicles . . 11

1-2 Urban Environment . . 13

1-3 Vehicle Architecture. . . 14

1-4 Team Architecture. . . 14

2-1 Mapping from Environment to Image Plane . . 15

2-2 The Geometry of the Epipolar Constraint . . 19

3-1 Environment Representations . . 24

3-2 Principal Component Analysis . . 28

3-3 Incremental Plane Merging . . 29

5-1 Rapidly-Exploring Random Tree Expansion . . 36

5-2 Rapidly-Exploring Random Tree w/ Kinematic Constraints . . 37

5-3 Collision Check . . 38

6-1 Simulated Urban Environment . . 40

6-2 Simulation at T=0s . . 40

6-3 Simulation at T=80s. . . 41

6-4 Simulation at T=150s . . 42

6-5 Simulation at T=150s Top View . . 42









Abstract of Thesis Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Master of Science

MULTI-VEHICLE COOPERATIVE CONTROL FOR VISION-BASED ENVIRONMENT
MAPPING

By

Eric Alan Branch

August 2007

Chair: Richard C. Lind, Jr.
Major: Aerospace Engineering

Flight within urban environments is currently achievable using small and agile aircraft; however, path planning for such vehicles is difficult due to the unknown location of obstacles. This thesis introduces a multi-aircraft formulation to the mapping approach. The problem addressed is trajectory generation for the mapping of an unknown, urban environment by a distributed team of vision-based Micro Air Vehicles (MAVs). Most mission scenarios require a vehicle to know its position relative to its surroundings in order to assure safe navigation and successful mission planning. A vision sensor can be used to infer three-dimensional information from a two-dimensional image for use in generating an environment map. Structure from Motion (SFM) is employed to solve the problem of inferring three-dimensional information from two-dimensional image data. No global information is available in the unmapped regions, so the map must be constructed incrementally and safe trajectories must be planned on-line. A feature point data set can be reduced to a set of geometric primitives if the environment can be well estimated by a geometric model. In the case of an urban environment, the scene can be well represented by planar primitives. The trajectories need to be planned to gather the information required to construct an optimal map. An efficient mapping can be accomplished by maximizing environment coverage and vehicle distribution. The distribution and coverage criteria are governed by an augmented cost function which takes into account the distance between mapped areas and all other aircraft. Maximizing this cost function automatically distributes aircraft throughout a region and disperses them around regions that remain to be mapped. Also, the mapping approach uses maneuvering capabilities to determine the paths for each aircraft. This determination results from utilizing branches from tree structures that represent achievable maneuvers. Such an approach inherently alters the path between aircraft that may have different turn radii or climb rates.









CHAPTER 1
INTRODUCTION

1.1 Problem Statement

This thesis introduces a multi-aircraft formulation to the mapping approach. The problem addressed is trajectory generation for the mapping of an unknown, urban environment by a distributed team of vision-based Micro Air Vehicles (MAVs). Most mission scenarios require a vehicle to know its position relative to its surroundings in order to assure safe navigation and successful mission planning. A vision sensor can be used to infer three-dimensional information from a two-dimensional image for use in generating an environment map. No global information is available in the unmapped regions, so the map must be constructed incrementally and safe trajectories must be planned on-line. The trajectories need to be planned to gather the information required to construct an optimal map. An efficient mapping can be accomplished by maximizing environment coverage and vehicle distribution. The distribution and coverage criteria are governed by an augmented cost function which takes into account the distance between mapped areas and all other aircraft. Maximizing this cost function automatically distributes aircraft throughout a region and disperses them around regions that remain to be mapped. Also, the mapping approach uses maneuvering capabilities to determine the paths for each aircraft. This determination results from utilizing branches from tree structures that represent achievable maneuvers. Such an approach inherently alters the path between aircraft that may have different turn radii or climb rates.

1.2 Motivation

The MAV has been developed to be a low-cost platform capable of completing low-altitude missions for both military and civilian applications. The long list of missions envisioned for MAVs includes military reconnaissance, surveillance and tracking of a target, mapping an area affected by the dispersal of a chemical/biological agent, and visual monitoring of traffic. A common element of these missions is that the vehicle must fly at low altitudes, and even inside rooms, and operate in regions that are either beyond an operator's line-of-sight or deemed too hazardous for direct human involvement. Therefore, the ability










to exhibit a high level of autonomy and situational awareness becomes a critical attribute for

ensuring the usefulness of MAVs [33]. An accurate mapping of unknown environments is a

critical task for ensuring accurate situational awareness. The number of hazards at these flight

conditions is disproportionately large in comparison to flight at high altitudes. Information must

be gathered that indicates the location of objects in these areas along with their size and shape in

relation to the vehicles. It is reasonable to assume pre-flight knowledge of major objects within

cities or rooms; however, it is doubtful that every tree and sign or angle of every door is known

in advance. As such, in-flight mapping is a realistic expectation to obtain information about

the environment and permit flight operations. Figure 1-1 shows several examples of current

generation MAVs.









Figure 1-1. Micro Air Vehicles

An approach was developed to optimize mapping using a single aircraft. This approach

developed a metric that noted the distance between the aircraft and features that had already

been mapped. Path planning thus resulted by maximizing that distance. The motivation for this

thesis is to develop technologies that enable a distributed team of MAVs to autonomously map

an unknown environment. The capability of cooperative map generation allows the vehicles

to accurately and efficient map the environment and share locally generated incremental maps

with the vehicle team which increases vehicle safety and enables efficient traj ectory generation

beyond any individual vehicles line-of-sight. An accurate environment map is also a critical

component for future high level mission planning.










The mapping system proposed in this paper iteratively repeats the three following activities (a brief sketch of this loop follows the list):

1. locally building a partial map that represents the portion of the environment currently sensed

2. updating the global map according to the newly acquired partial map

3. determining and moving toward the next observation position according to the exploration strategy
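
As an illustration only, the Python skeleton below sketches this loop for one vehicle. Every callable it receives is a placeholder for a subsystem described in later chapters; none of these names are defined in this thesis.

def mapping_loop(sense_features, fit_planes, merge_map, select_goal, plan_path,
                 follow, get_team_positions, done):
    """One vehicle's iteration of the three activities listed above.

    Every argument is a callable supplied by the subsystems described in
    Chapters 2-5; the names are illustrative placeholders only.
    """
    while not done():
        # 1. Locally build a partial map of the currently sensed scene.
        local_planes = fit_planes(sense_features())
        # 2. Update the global map with the newly acquired partial map.
        global_map = merge_map(local_planes)
        # 3. Plan and move toward the next observation position.
        goal = select_goal(global_map, get_team_positions())
        follow(plan_path(goal, global_map))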

1.3 Multi-robot Systems

Multiple autonomous robot teams are expected to have many advantages over single vehicle systems. Cooperating robots have the potential to accomplish tasks more efficiently and with greater speed than a single vehicle. Beyond gains in efficiency, multiple vehicle systems have the potential to demonstrate improved and more accurate localization if they exchange information about their positions and sensed environment in a cooperative manner. Distributed team systems also have the added advantage of increasing redundancy and sensor variety. Using several low-cost vehicles introduces redundancy and a higher level of risk reduction than having only one, possibly more expensive, vehicle. Multi-robot systems have the following properties:

greater flexibility
improved system performance
fault tolerance
a lower economic cost
distributed sensing and action
a larger range of task domains

In the field of mobile robotics, research on multiple robot systems is gaining popularity because the multiple robot approach provides several advantages over a single robot, such as scalability, reliability, and speed. On the other hand, it increases the complexity of the system over that of a single vehicle system.

Unlike most cooperative multi-vehicle behavior, exploration using multiple vehicles is characterized by techniques that avoid tightly coordinated behavior. Most cooperative multiple vehicle behavior relates to the close coordination of vehicles in a formation or a "follow-the-leader" approach. However, exploration and mapping with multiple vehicles is characterized by a team distribution strategy which efficiently covers an environment and avoids techniques which rely on tightly coordinated behavior.














Figure 1-2. Urban Environment


1.4 System Architecture

This thesis is concerned with trajectory generation for vision-based mapping of an urban

environment by a distributed MAV team. The overall system architecture is made up of four sep-

arate problems: three-dimensional reconstruction from two-dimensional image data, environment

representation and incremental map building, exploration and mapping criteria, and trajectory

generation.

Chapter 2 describes the camera model and the approach for mapping a three-dimensional

object from a two-dimensional image, feature point detection, and feature point tracking, as well as the implicit relationship between the camera and environment using two-view geometry and the epipolar constraint.

Chapter 3 describes methods for environment representation and dimensionality reduction.

A sensor based data set, in this case feature points, can be reduced to a set of geometric prim-

itives, if the environment can be well estimated by a geometric model. In the case of an urban

environment the scene can be well represented by planar primitives.

Chapter 4 describes the mapping criteria and team distribution strategy used to ensure

coverage of the environment. The search and exploration strategy uses a multivariate approach to











determine goal locations which have a higher probability of containing environment features of

interest.

Chapter 5 describes the trajectory planning technique using a sample-based motion planning algorithm, which is able to be executed as an on-line algorithm to produce a conditional trajectory, which can replan as environment information increases.

Chapters 6 and 7 present simulation results and conclusions on the algorithms presented, and describe future directions for research and development.

An overview of how the separate components are interconnected is graphically depicted

below in Figures 1-3 and 1-4.




Figure 1-3. Vehicle Architecture


Figure 1-4. Team Architecture









CHAPTER 2
COMPUTER VISION

2.1 Camera

A camera provides a mapping between the three-dimensional world and a two-dimensional image. For this discussion we will consider a basic pinhole camera model, consisting of the central projection of points in space onto an image plane. In this case the center of projection will be the origin of a Euclidean coordinate system, with the image plane defined as a plane normal to the camera's central axis and located a focal length, f, away from the center of projection. The pinhole camera geometry is described in Fig. 2-1.















Figure 2-1. Mapping from Environment to Image Plane


Under the pinhole camera model a point in space with coordinates η = [η_x, η_y, η_z]^T is mapped to the point on the image plane where a line joining the point η to the center of projection meets the image plane. By similar triangles, the point (η_x, η_y, η_z)^T is mapped to the point (u, v)^T. Equation 2-1 describes the central projection mapping from world to image coordinates [4].

(η_x, η_y, η_z)^T → (f η_x/η_z, f η_y/η_z)^T    (2-1)
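
As a minimal illustration, the Python sketch below simply applies the central projection of Equation 2-1, assuming a point already expressed in the camera frame with η_z > 0 and a given focal length f; it is not part of the thesis implementation.

def project_pinhole(eta, f=1.0):
    """Central projection of a 3D point onto the image plane (Eq. 2-1).

    eta : (eta_x, eta_y, eta_z) expressed in the camera frame, eta_z > 0
    f   : focal length
    Returns the image-plane coordinates (u, v).
    """
    eta_x, eta_y, eta_z = eta
    if eta_z <= 0:
        raise ValueError("point must lie in front of the camera")
    return f * eta_x / eta_z, f * eta_y / eta_z


# Example: a point 10 units ahead, 2 right, 1 up maps to (0.2, 0.1) for f = 1.
print(project_pinhole((2.0, 1.0, 10.0)))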

For a pinhole camera with the camera origin placed at the lens the relative positions are

given by the following model.












G(x) = Σ_{W(x)} ∇I ∇I^T    (2-5)

At each pixel in the window W the matrix G is computed. If the smallest singular value of G is found to be greater than a defined threshold, τ, then the pixel (u, v) is classified as a corner feature [3] [12] [10].
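
A small NumPy sketch of this test is shown below, assuming grayscale gradients computed by finite differences; the window half-size and threshold tau are arbitrary tuning values, not those used in this work. It builds the matrix G of Equation 2-5 over a window around each candidate pixel and keeps pixels whose smallest singular value exceeds the threshold.

import numpy as np

def detect_corners(image, win=7, tau=1e4):
    """Classify pixels as corner features via the smallest singular value of G (Eq. 2-5)."""
    Iy, Ix = np.gradient(image.astype(float))
    corners = []
    rows, cols = image.shape
    for r in range(win, rows - win):
        for c in range(win, cols - win):
            gx = Ix[r - win:r + win + 1, c - win:c + win + 1].ravel()
            gy = Iy[r - win:r + win + 1, c - win:c + win + 1].ravel()
            # G = sum over the window of grad(I) grad(I)^T
            G = np.array([[np.dot(gx, gx), np.dot(gx, gy)],
                          [np.dot(gx, gy), np.dot(gy, gy)]])
            if np.linalg.svd(G, compute_uv=False)[-1] > tau:
                corners.append((r, c))
    return corners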

2.3 Feature Point Tracking

In order to match individual feature points from image to image a feature tracking process

must be performed. The feature correspondence process relies on image intensity matching over

a defined pixel region. The Lucas-Kanade feature point tracker is a standard algorithm for small

baseline camera motion. The algorithm tracks feature points through the correspondence of

pixel groupings, a neighborhood of pixels is tracked between two images. Therefore, a feature

point can be associated with a feature point on a subsequent image by a translation in pixel

coordinates, where the displacement is defined as h = [dx, dy]^T. The solution for h is found

through the minimization of intensity error obtained by matching pixel windows of similar

intensities [21] [32].

Therefore, if one assumes a simple translational model, the displacement is given by Equation 2-6.

h = argmin_h Σ_{W(x)} ||I₁(x) - I₂(x + h)||²    (2-6)
Applying the Taylor series expansion to this expression about the point of interest, x, while

retaining only the first term in the series results in Equation 2-7.


(∂I/∂u)(du/dt) + (∂I/∂v)(dv/dt) + ∂I/∂t = 0    (2-7)
Rewriting Equation 2-7 in matrix form results in Equation 2-8.



∇I^T u + I_t = 0    (2-8)









where u = [du/dt, dv/dt]^T.

We can solve for the velocities if additional constraints are enforced on the problem,

which entails restraining regions to a local window that moves at constant velocity. Under this assumption one can minimize the error function given in Equation 2-9.



E₁(u) = Σ_{W(x)} [∇I^T(x, t) u(x) + I_t(x, t)]²    (2-9)
The minimum of this function is obtained by setting ∇E₁ = 0, resulting in

Equation 2-10.


Σ_{W(x)} ∇I ∇I^T u + Σ_{W(x)} I_t ∇I = 0    (2-10)


or, rewritten in matrix form results in the following


u = -G^{-1} b    (2-11)

where



G = Σ_{W(x)} ∇I ∇I^T    (2-12)




b = Σ_{W(x)} I_t ∇I    (2-13)
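
A hedged NumPy sketch of this solution for a single window follows; it assumes the spatial gradients and temporal differences over the window W(x) have already been computed, and it solves u = -G^{-1} b with the G and b of Equations 2-12 and 2-13.

import numpy as np

def lucas_kanade_velocity(Ix, Iy, It):
    """Solve u = -G^{-1} b (Eq. 2-11) for one pixel window.

    Ix, Iy : spatial image gradients over the window W(x), flattened arrays
    It     : temporal intensity differences over the same window
    Returns the pixel velocity (du/dt, dv/dt).
    """
    Ix, Iy, It = (np.asarray(a, dtype=float).ravel() for a in (Ix, Iy, It))
    # G = sum of grad(I) grad(I)^T over the window (Eq. 2-12)
    G = np.array([[np.dot(Ix, Ix), np.dot(Ix, Iy)],
                  [np.dot(Ix, Iy), np.dot(Iy, Iy)]])
    # b = sum of I_t grad(I) over the window (Eq. 2-13)
    b = np.array([np.dot(It, Ix), np.dot(It, Iy)])
    # Least-squares inverse guards against a poorly conditioned G.
    return -np.linalg.lstsq(G, b, rcond=None)[0]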



2.4 Two-view Image Geometry

The two-view image geometry relates the measured image coordinates to the 3D scene.

The camera configuration could be either two images taken over time of the same scene, as in

the monocular case, or two cameras simultaneously capturing two images of the same scene,

as in the stereo vision case. The geometry of the two-view configuration will generate the epipolar

constraint, from which the 3D scene reconstruction will be formulated [22].









2.4.1 Epipolar Constraint

The relationship between camera and environment is formulated as the epipolar constraint

or coplanarity constraint. This constraint requires position vectors, which describe a feature point

relative to the camera at two instants in time, to be coplanar with the translation vector and the

origins of the camera frames. This geometric relationship is illustrated in Figure 2-2, where η₁

and η₂ denote the position vectors of the feature point, P, in the camera reference frames. Also,

the values of x₁ and x₂ represent the position vectors projected onto the focal plane, while T

indicates the translation vector of the origin of the camera frames.
























Figure 2-2. The Geometry of the Epipolar Constraint


A geometric relationship between the vectors in Figure 2-2 is expressed by introducing R

as a rotation matrix. This rotation matrix is the transform of the camera position between image

frames. Assuming a pin-hole camera, for which a point is collinear with its projection into the focal plane,

the geometric relationship is described in Equation 2-14.











x₂ · (T × Rx₁) = 0    (2-14)


The expression in Equation 2-14 reflects that the scalar triple product of three coplanar

vectors is zero, which forms a plane in space. These relationships can be expanded using linear

algebra to generate a standard form of the epipolar geometry as in Equation 2-15. This new

form indicates a relationship between the rotation and translation, written as the essential matrix

denoted as E, to the intrinsic parameters of the camera and associated feature points.



x₂^T E x₁ = 0    (2-15)
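
For illustration, a short NumPy sketch is given below. Under the common assumption that the essential matrix can be written as E = [T]_x R (the skew-symmetric matrix of T times R, consistent with Equation 2-14), it evaluates how well a candidate feature correspondence satisfies Equation 2-15; this is an illustrative check, not the thesis implementation.

import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]_x such that [t]_x v = t x v."""
    tx, ty, tz = t
    return np.array([[0.0, -tz, ty],
                     [tz, 0.0, -tx],
                     [-ty, tx, 0.0]])

def epipolar_residual(x1, x2, R, T):
    """Residual of the epipolar constraint x2^T E x1 (Eq. 2-15), with E = [T]_x R.

    x1, x2 : homogeneous image coordinates [u, v, 1] in frames 1 and 2
    R, T   : rotation and translation between the camera frames
    A value near zero indicates the correspondence is consistent with (R, T).
    """
    E = skew(T) @ np.asarray(R)
    return float(np.asarray(x2) @ E @ np.asarray(x1))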

2.4.2 Structure from Motion

Structure from motion (SFM) is a technique to estimate the location of environmental

features in 3D space [26]. This technique utilizes the epipolar geometry in Figure 2-2 and

assumes that the rotation, R, and translation, T, between frames is known. Given that, the

coordinates of η₁ and η₂ can be computed by the fundamental relationship given in Equation 2-16 [21] [20] [27].

η₂ = Rη₁ + T    (2-16)

The location of environmental features is obtained by first noting the relationships be-

tween feature points and image coordinates given in Equation 2-2 and Equation 2-3. These

relationships allow some components of η_x and η_y to be written in terms of u and v, which are

known from the images. Thus, the only unknowns are the depth components, η_{1,z} and η_{2,z}, for

each image. The resulting system can be cast as Equation 2-17 and solved using a least-squares

approach.
[ -u₂   (R₁₁u₁ + R₁₂v₁ + R₁₃) ] [ η_{2,z} ]   [ T_x ]
[ -v₂   (R₂₁u₁ + R₂₂v₁ + R₂₃) ] [ η_{1,z} ] = [ T_y ]    (2-17)









This equation can be written in a compact form as shown in Equation 2-18 using z = [η_{2,z}, η_{1,z}]^T as the desired vector of depths.

Az = T (2-18)


The least-squares solution to Equation 2-18 obtains the depth estimates of a feature point

relative to both camera frames. This information along with the image plane coordinates can be

used to compute (η_{1,x}, η_{1,y}) and (η_{2,x}, η_{2,y}) by substituting these values back into Equations 2-2

and 2-3. The resulting components of η₁ can then be converted to the coordinate frame of the

second image, where they should exactly match η₂. These values will never match perfectly due

to noise and unknown camera parameters so, in practice, an averaging process is often used to

estimate the feature coordinates [1] [13] [34].
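
A minimal NumPy sketch of this depth recovery is given below; it follows the two-row system of Equation 2-17 and the least-squares solution of Equation 2-18, with image coordinates assumed normalized by the focal length. The sign convention of the reconstructed Equation 2-17 is only approximated here, so this should be read as an illustration rather than the author's implementation.

import numpy as np

def sfm_depths(x1, x2, R, T):
    """Estimate feature depths from two views (Eqs. 2-17 and 2-18).

    x1, x2 : normalized image coordinates (u, v) of the feature in frames 1 and 2
    R, T   : known rotation matrix and translation vector between the frames
    Returns z = [eta_2z, eta_1z], the depths relative to each camera frame.
    """
    u1, v1 = x1
    u2, v2 = x2
    R = np.asarray(R, dtype=float)
    T = np.asarray(T, dtype=float)
    # Two equations from eta_2 = R eta_1 + T, arranged as A z = rhs.
    A = np.array([[-u2, R[0, 0] * u1 + R[0, 1] * v1 + R[0, 2]],
                  [-v2, R[1, 0] * u1 + R[1, 1] * v1 + R[1, 2]]])
    z, *_ = np.linalg.lstsq(A, -T[:2], rcond=None)
    return z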









CHAPTER 3
DIMENSIONALITY REDUCTION

3.1 Introduction

For autonomous MAVs the ability to create maps of the environment autonomously is the first requirement for the planning and completion of many tasks. It cannot be assumed that a priori environment information will be available for navigation and mission planning; in fact it is rarely the case. Not only do city maps or related types of data fail to be consistently reliable, but numerous aspects of an environment are not likely to appear on a map, such as new billboards and semi-stationary objects (i.e., stationary vehicles). Maps made for people often depend upon the interpretive skills of the person using the map and on his or her ability to infer additional information. Maps commonly represent structural elements with abstract symbols and semantic labels. A MAV must be able to relate its current location directly to its own sensory information regarding its environment. This suggests that an appropriate map for an autonomous robot should relate to the type of sensor data the robot is likely to observe. In general, these considerations suggest that the ability to perform some degree of autonomous map construction, update, and validation is the primary requirement for successful mission planning [29] [5].

Maps can take many forms and fall between two representational extremes, both of which are relevant to autonomous vehicles. The first are metric maps, which are based on an absolute reference frame and location estimates of where objects are in space. The second are topological maps, or relational maps, that only represent connectivity information between important objects or landmarks. Each of these representation extremes allows for various abstractions of an environment [23] [30].

The amount of information contained in a representation and the assumptions made about the environment lead to a layering of representations of map data with increasingly strong assumptions and a narrowing of suitable mission tasks. Below is a list of map structures in order of implied model strength:










1. Sensorial. Raw sensor data (i.e., raw feature point data)

2. Spatial Decomposition/Occupancy Grid. Discretely sampled environment, defining probability of occupancy

3. Geometric. 2- or 3-dimensional objects inferred from sensor data

4. Topological. The large-scale relational links that connect objects and locations across the environment as a whole

5. Semantic. Functional labels associated with the landmarks of the map


Geometric maps are generally the most intuitive at first glance and give a more general representation of the environment. Because the obtained map is more general in nature it is well suited for a wide variety of mission tasks, while a topological representation is more closely related to an individual mission objective and thus less easily adapted to diverse mission tasks. Sensorial maps represent minimally altered sensor data; however, the large storage size and the difficulty of inferring information make them undesirable for some high-level mission planning.

Not only are existing maps of most environments inaccurate, but almost every environment occupied by humans undergoes change. As a result, robot systems must be able to accommodate change in their environment. If they are to maintain maps, they must be able to update them. The general approach of map building is to incrementally integrate new data into the map. When each new frame is obtained, it is aligned to a cumulative global map.

The exploration of an unknown environment, the construction of a geometric map from that exploration, and other related problems have been studied extensively in the field of computational geometry. Exploration relates to a variety of related problems. These include searching for a goal, searching for a route with specific properties, covering free space, and learning about the configuration space (i.e., mapping). A broad class of problems deals with search in unknown or partially known environments. Such problems are interesting not only in their own right but also because they are closely related to environmental exploration in general, where the goal being sought is additional geometric knowledge.










3.2 Environment Representation

3.2.1 Spatial Decomposition/Occupancy Grid

One of the most straightforward and widely used environment representations for robotic

mapping are grid structures. Grid structures or occupancy grids are spatial decompositions of

the environment, where the two- or higher-dimensional environment is discretely sampled. The

simplest method used to sample the environment is to sample the space using a uniform grid,

with each sample taken at points in the grid expressing the probability of occupancy at that

sample point. At each sampled point the space is therefore defined as empty, full, or partially

full. Because this method of environment representation only describes the space sampled and

not the individual objects contained within that space, no information is available to identify or

discriminate between objects. Other disadvantages are that the grid resolution is limited by how

coarse or fine the environment is discretized and the storage requirement is the same whether the

environment is empty or occupied. The main advantage of a uniform grid representation is its

extreme generality, requiring no assumptions be made regarding the type of obj ects contained

within the space. The 2-dimensional occupancy map can be extended to represent 3-dimensional

spaces through use of a volumetric model. The volumetric model defines the environment by

defining a 3-dimensional grid which divides the workspace into voxels (cubes) with edges and

volume [24]. Furthermore, more efficient sampling can be obtained by using an adaptive cell size

to more efficiently model empty and occupied space, by using areas of high resolution for areas

of cluttered features and low resolution in open space [8] [15] [28].
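
As a concrete illustration of the volumetric model described above, the sketch below builds a uniform 3-dimensional occupancy count from a set of sensed points. The uniform voxel size and the simple point-count update are assumptions made for illustration; this is not the representation ultimately favored in this thesis, which uses planar geometric maps.

import numpy as np

def build_voxel_grid(points, origin, voxel_size, shape):
    """Accumulate sensed 3D points into a uniform voxel occupancy grid.

    points     : (N, 3) array of sensed point locations
    origin     : (3,) world coordinates of the grid corner
    voxel_size : edge length of each cubic voxel
    shape      : (nx, ny, nz) number of voxels along each axis
    Returns an integer array counting the points that fall in each voxel.
    """
    grid = np.zeros(shape, dtype=int)
    idx = np.floor((np.asarray(points) - origin) / voxel_size).astype(int)
    # Keep only points that fall inside the grid bounds.
    inside = np.all((idx >= 0) & (idx < np.array(shape)), axis=1)
    for i, j, k in idx[inside]:
        grid[i, j, k] += 1
    return grid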










Figure 3-1. Environment Representations A) Occupancy Grid B) Topological Map C) Geometric
Map










3.2.2 Topological Representation

A topological map is a graph-like representation, which uses nodes and edges to describe distinct features of an environment. Nodes represent the key places in the map and edges represent the transitions to navigate between key places [29]. In contrast to metric maps, topological maps naturally capture connectivity information while discarding what may be irrelevant or imprecise details. The key to a topological representation is some explicit notion of connectivity between places or objects. The vehicle has no real understanding of the geometric relationship between locations in the environment; locations are only linked by their topological representation. As a result, topological maps have a very direct connection to mission tasks and problem statements. A typical subway map, or the typical navigation instructions used by humans over substantial regions of space, are examples of topological maps.

3.2.3 Geometric Maps

Geometric maps are those made up of parametric geometric primitives; for example, lines, polygons, and/or polyhedra. Such maps have the advantage of being highly space-efficient because an arbitrarily large region of space can be represented by a model with only a few parameters. In addition, geometric maps can store geometric data with almost arbitrarily high resolution without becoming liable to the storage limitations incurred by representations based on spatial sampling.

The primary shortcoming of geometric model-based representations relates to the fact that they can be difficult to infer reliably from sensor data. Three fundamental modeling problems are regularly encountered: the representation may change drastically depending on the modeling scheme; many different environments may map to the same representation, e.g., two similar buildings will have no distinguishing characteristics; and it may be difficult to represent certain aspects of the environment within the modeling system.

Geometric maps however can be an efficient description of the environment, if one assumes that the sensor provides suitable data, and that the environment to be described is well suited to the geometric modeling primitives to be used. In this case the environment, an urban city, is characterized by large planar surfaces, which lends itself to a planar modeling primitive.

The world, W, will be defined as a 3-dimensional space in which W ⊂ R³. The described world will consist of two classes of bodies: obstacles and vehicles. Obstacles are considered the portions of the world that are permanently occupied, for example, as in the walls of a building. Vehicles are bodies that are modeled geometrically and controllable via a motion planning step. In this case vehicles will be thought of as a point in space with no geometric size. This assumption is due to the fact that the characteristic size of a MAV is very small in relation to the defined world. Therefore collision avoidance between vehicles and obstacles is assured solely by checking the vehicle path for conflict.

The following sections present a method for systematically constructing representations for obstacles and vehicles using a collection of primitives [2] [22]. Obstacle regions will be denoted as O, the set of all points in W that lie in one or more obstacles; O ⊆ W. Three-dimensional obstacles will be represented using a boundary representation consisting of convex polygons initialized in 3-dimensional space. An n-sided polygon can be described using two kinds of features, vertices and edges. Every vertex corresponds to a corner of the polygon and every edge corresponds to a line segment between a pair of vertices. The polygon can be specified by a sequence of points (x₁, y₁), (x₂, y₂), ..., (x_n, y_n) in the plane defined by P.

3.3 Data Clustering

3.3.1 K-means

K-means is an unsupervised learning algorithm which solves the clustering problem. The K-means algorithm allows for a way to split a given data set of N points into K subsets so as to minimize the sum-of-squares criterion [11],

J = Σ_{k=1}^{K} Σ_{n∈S_k} ||x_n - μ_k||²

where x_n is a vector representing the nth data point and μ_k is the geometric centroid of the data subset S_k.








The algorithm works by initializing K centroids into the space represented by the data set being clustered. Once the centroids are initialized, each data point is assigned to the cluster with the closest centroid. Once all data points are assigned, the centroids are recalculated for the currently defined data sets. The above procedure is repeated until the centroids no longer move or the change in centroid position falls below some user-defined threshold.

The K-means algorithm suffers from several shortcomings. One is that the algorithm is sensitive to the initial randomly selected centroid locations. Also, the number of centroids must be selected a priori. The algorithm does not guarantee a globally optimal solution of J over the centroid assignments.
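
A compact NumPy sketch of this procedure is shown below: random initialization, assignment to the nearest centroid, centroid recomputation, and a convergence test on centroid motion. The tolerance and iteration limit are arbitrary illustrative choices.

import numpy as np

def kmeans(X, K, tol=1e-6, max_iter=100, rng=None):
    """Basic K-means clustering minimizing the sum-of-squares criterion J.

    X : (N, d) data array;  K : number of clusters
    Returns (centroids, labels).
    """
    rng = np.random.default_rng(rng)
    centroids = X[rng.choice(len(X), size=K, replace=False)]
    for _ in range(max_iter):
        # Assign each point to the closest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        # Recompute centroids of the currently defined subsets.
        new_centroids = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                                  else centroids[k] for k in range(K)])
        if np.linalg.norm(new_centroids - centroids) < tol:
            break
        centroids = new_centroids
    return centroids, labels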

3.3.2 PCA

Principal Component Analysis (PCA) is a way to identify patterns in a data set and express the data set in such a way as to highlight the similarities and differences in the data. PCA can be used to simplify a dataset by reducing the dimensionality while retaining those characteristics of the dataset that contribute most to the variation. PCA is an orthogonal linear transformation that transforms the data to a new coordinate system such that the first principal component accounts for as much of the variability in the data as possible, with each succeeding component accounting for as much as possible of the remaining variation. The main objectives of PCA are to discover or to reduce the dimensionality of a dataset and to identify new meaningful underlying variables. In PCA such directions are merely the eigenvectors of the covariance matrix having the largest eigenvalues; this projection optimizes a sum-squared-error criterion [7] [31] [35].

First, the d-dimensional mean vector μ and d×d covariance matrix Σ are computed for the full data set. Next, the eigenvectors and eigenvalues are computed and sorted according to decreasing eigenvalue, and the k eigenvectors having the largest eigenvalues are chosen. Often there will be just a few large eigenvalues, and this implies that k is the inherent dimensionality of the subspace governing the data while the remaining dimensions generally contain noise. Next we form a d×k matrix A whose columns consist of the k eigenvectors. The representation of the














Figure 3-2. Principal Component Analysis


data by principal components consists of projecting the data onto the k-dimensional subspace according to x' = A^T(x - μ).
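
A short NumPy sketch of this construction follows: the mean and covariance are computed, the eigen-decomposition is sorted by decreasing eigenvalue, and the data are projected as x' = A^T(x - μ) onto the k leading components.

import numpy as np

def pca_project(X, k):
    """Project data onto its k leading principal components.

    X : (N, d) data array;  k : target dimensionality (k <= d)
    Returns (X_proj, A, mu) where A holds the k chosen eigenvectors as columns.
    """
    mu = X.mean(axis=0)
    cov = np.cov(X - mu, rowvar=False)
    # eigh returns eigenvalues in ascending order for symmetric matrices.
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]            # sort by decreasing eigenvalue
    A = eigvecs[:, order[:k]]                    # d x k matrix of eigenvectors
    return (X - mu) @ A, A, mu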

3.3.3 Plane Fitting

The plane fitting procedure works by clustering the data set using the k-means algorithm

and checking the planarity of the newly defined clusters using the PCA algorithm in an iterative

approach. First the number of clusters, k, is found by selectively splitting and removing centroids

based on the planarity of the data set they represent. Once a data subset can be accurately

modeled by a plane, then that cluster is removed from the data set. If no clusters are found to fit

the planar model then the centroid is split and the k-means clustering algorithm is iterated over

the newly defined centroids. The planarity check is based on the fact that for a data set where

all data points lie with in a plane the principal components computed by PCA will give two

eigenvectors which lie in the plane and a third that is the normal vector of the plane. Therefore,

the decision to remove a data set associated with a centroid is determined by projecting the data

set onto the calculated eigenvectors (principal components). The data which after projection fall

within a threshold distance from the origin along the normal of the defined plane are determined

to approximately lie on the plane.
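
The sketch below illustrates the planarity test portion of this procedure under simple assumptions: the cluster's principal directions are obtained from its covariance as in the previous section, and a cluster is accepted as planar when nearly all of its points lie within a chosen distance of the fitted plane along its normal. The threshold and acceptance fraction are arbitrary illustrative values, not those used in the thesis.

import numpy as np

def is_planar_cluster(points, dist_threshold=0.5, accept_fraction=0.95):
    """Check whether a cluster of 3D points is well modeled by a plane.

    points : (N, 3) array of feature point locations in one cluster
    The eigenvector of the cluster covariance with the smallest eigenvalue is
    taken as the plane normal; points are accepted if their distance to the
    plane along that normal is below dist_threshold.
    """
    centroid = points.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(points - centroid, rowvar=False))
    normal = eigvecs[:, 0]                       # smallest-eigenvalue direction
    dist = np.abs((points - centroid) @ normal)  # out-of-plane distance
    return np.mean(dist < dist_threshold) >= accept_fraction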










3.4 Incremental Map Building

The general approach of map building is to incrementally integrate new data into the map.

When each new frame is obtained, it is aligned to a cumulative global map.

In the case of multiple vehicles, each vehicle's local incremental map must be integrated

into the global map for use in future path planning by not only itself but all other vehicles in the

distributed team. Each vehicle continuously performs the scene reconstruction and plane fitting

process over the locally sensed feature point data. By completing the dimensionality reduction

process on each local data set the incremental global map building process is reduced to a plane

merging task. By performing the dimensionality reduction on the incremental local data set the

storage requirement for each vehicle is reduced and the data set transferred for inclusion in the

global map is reduced. The plane fitting procedure is therefore completed on a smaller data set

allowing for more computationally efficient data clustering. The global map building procedure

therefore becomes a merging process, where the individual local low dimensional maps are

incorporated into the central global map.

Although communication restrictions are not explicitly considered in the problem statement,

by sharing a reduced order data set with the centralized global map the required communication

bandwidth would be reduced to a more realistic level from the requirement necessary to share the

unaltered sensor data.
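
A hedged sketch of one possible merging rule is shown below; it treats two planar patches as the same surface when their normals are nearly parallel and their centroids are nearly coplanar. The angle and distance tolerances are illustrative assumptions, and the actual merging procedure used in this thesis may differ.

import numpy as np

def should_merge(plane_a, plane_b, angle_tol_deg=10.0, offset_tol=1.0):
    """Decide whether two planar patches likely describe the same surface.

    Each plane is a dict with a unit 'normal' (3,) and a 'centroid' (3,).
    Returns True when the normals are nearly parallel and the centroid of one
    patch lies close to the plane of the other.
    """
    na, nb = np.asarray(plane_a["normal"]), np.asarray(plane_b["normal"])
    cos_angle = abs(float(na @ nb))              # ignore normal sign
    if cos_angle < np.cos(np.radians(angle_tol_deg)):
        return False
    offset = abs(float(na @ (np.asarray(plane_b["centroid"]) -
                             np.asarray(plane_a["centroid"]))))
    return offset < offset_tol

def merge_into_global_map(global_planes, local_planes):
    """Incorporate locally fitted planes into the global plane list."""
    for lp in local_planes:
        if not any(should_merge(gp, lp) for gp in global_planes):
            global_planes.append(lp)
    return global_planes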









Figure 3-3. Incremental Plane Merging A) Local Planes B) Merged Planes









CHAPTER 4
MAPPING CRITERIA

4.1 Introduction

The primary problem in any exploration task is that, given what you know about the environment, where should you move to gain as much new information as possible or ensure a high probability of learning new information in unexplored regions. The choice of movement algorithms or mapping criteria will affect the success and efficiency of a vehicle team attempting to disperse themselves in an unknown environment. This section will develop a mapping criterion which simultaneously distributes a MAV team and ensures exploration of unmapped regions of the environment. Many methods are available to implement movement algorithms and define goal locations for autonomous exploration. Random exploration strategies are easily implemented; however, they do not make use of current map knowledge for planning. Another method used often in indoor cyclic environments is frontier-based exploration. Frontier-based exploration works on the assumption that new information can be gained by exploring the boundary between open space and unexplored territory. Another method is the Seek-Open Space method, which attempts to simultaneously distribute and cover an environment with mobile robots. The Seek-Open Space method is formulated for mobile robots in a 2-dimensional space; however, the basic idea can be used and extended to a 3-dimensional environment with a MAV.

The search and exploration strategy proposed here is based on selecting goal locations which offer the highest probability of locating new obstacles. The strategy is based on multivariate exploration and decision methods, which classify a goal location that is both distant from known obstacles and other team MAVs.

The coverage criterion has been defined as the maximization of the total obstacle surface area covered by the MAV vision sensor.

4.2 Coverage

4.2.1 Random Movement Method

Random movement strategies are the most basic exploration algorithms. The random movement algorithm works by commanding the search vehicle to move forward with small










random control inputs at random time intervals. The overall vehicle path exhibits a random

'wandering' movement with a continuously curved path. To ensure forward movement and avoid

repeated circling, the turn command and the time interval are constrained to promote exploration.

Another random movement strategy works by planning straight line paths between the

vehicle position and a randomly generated location, contained inside the search area. This

method leads to complete coverage, however the search effort leads to excessive coverage of the

central area at the expense of the boundary area.

Random exploration strategies are easily implemented and computationally inexpensive.

However they do not make use of current map knowledge for planning and do not inherently

provide a method for coordinated team distribution.

4.2.2 Frontier-Based Exploration

Frontier-based exploration works on the assumption that new information can be gained by

exploring the boundary between open space and unexplored territory. Frontier-based exploration

has been shown to work well for 2-dimensional mobile robot mapping, where the primary

goal is to map the boundaries of an environment, for example the outer walls of a room. The

frontier-based approach incrementally constructs a global occupancy map of the environment.

The map is analyzed to locate the 'frontiers' between the free and unknown space. Exploration

proceeds in the direction of the closest 'frontier'. The frontier method is proven to work well for

indoor cyclic environment where a boundary wall can be continuously explored, providing new

environment information. The frontier approach does not explicitly provide for team distribution.

Frontier based exploration also is formulated around an occupancy grid representation which will

have the same limitations detailed in Chapter 3.

4.2.3 Seek-Open Space Algorithm

The seek-open space movement algorithm causes a robot vehicle to move toward open areas

in the map which are distant from known obstacles. The algorithm is executed by first calculating

the average obstacle vector for all obstacles in sensor range. The average obstacle vector is then

computed for all obstacles sensed. The obstacle vector is defined such that the magnitude of the














The Mahalanobis distances of each vehicle position are also included in order to drive the goal locations away from the current vehicle positions.

The spatial characteristics of the expected data set are modeled by the covariance matrix. This approach allows for a flexible coverage criterion which ensures exploration and team distribution regardless of whether or not prior information is known about the environment. The more information known about the environment, the higher the probability that a goal location, determined by the exploration criterion, will contain new obstacle information. If no prior knowledge of the environment is utilized, the Mahalanobis distance reverts to a standard Euclidean distance measurement.

The goal locations are calculated by each individual vehicle, with the assumption that each vehicle location is available for inclusion in the calculation of x_c. Goal locations for each vehicle are calculated independently of each other. Therefore each vehicle's goal location is based on the current global map information and the current vehicle locations. By using this method for goal calculation a vehicle search position can be intentionally fixed or calculated using a different performance criterion, but still influence the goal locations of the remaining vehicles.
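
The paragraphs above describe goal selection as favoring locations far, in a Mahalanobis sense, from mapped obstacles and from the other vehicles; the exact augmented cost function is not fully visible in this extraction. The sketch below is therefore only one reading of that idea, assuming uniformly sampled candidate goals, a supplied covariance modeling the expected spatial distribution of the data, and a simple additive combination of the obstacle and vehicle distances.

import numpy as np

def mahalanobis(x, y, cov_inv):
    d = np.asarray(x) - np.asarray(y)
    return float(np.sqrt(d @ cov_inv @ d))

def select_goal(candidates, mapped_points, vehicle_positions, cov=None):
    """Pick the candidate goal farthest (in Mahalanobis distance) from mapped
    obstacles and from the other team vehicles.

    candidates        : (M, 3) candidate goal locations
    mapped_points     : (N, 3) points already contained in the global map
    vehicle_positions : (V, 3) current positions of all team vehicles
    cov               : 3x3 covariance modeling the expected data spread;
                        identity (Euclidean distance) if no prior is available
    """
    cov_inv = np.linalg.inv(cov) if cov is not None else np.eye(3)
    best, best_score = None, -np.inf
    for g in candidates:
        d_obs = min((mahalanobis(g, p, cov_inv) for p in mapped_points), default=np.inf)
        d_veh = min((mahalanobis(g, v, cov_inv) for v in vehicle_positions), default=np.inf)
        score = d_obs + d_veh                    # simple additive combination
        if score > best_score:
            best, best_score = g, score
    return best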









CHAPTER 5
TRAJECTORY PLANNING

5.1 Introduction

One major disadvantage with many path planners is the assumption that the environment is known in advance. Methods that explicitly deal with the need to replan and can reevaluate the path as it is being executed are known as on-line algorithms, and the trajectory they produce is sometimes referred to as a conditional plan. A true on-line algorithm should be able to produce a preliminary trajectory even in the complete absence of any map. These algorithms may not guarantee an optimal trajectory; however, they do guarantee a planned trajectory and connectivity [19].

Sample-based motion planning is performed as a search over the defined configuration space, generating samples to learn about the space being searched. A sample-based algorithm relies on random sampling of states to explore the configuration space, but can also be based on a deterministic sampling scheme. Samples taken from the configuration space are stored in a data structure which can be used to approximate the space. The data structure is composed of nodes and edges; nodes represent sampled states in the configuration space and edges represent a valid path connecting two states. The data structure is stored as an expanding tree. Sample-based approaches have desirable properties such as guaranteed connectivity, are computationally efficient, and are easily made to suit the dynamic constraints of the system. Sample-based motion planning methods are also probabilistically complete, meaning that if a solution exists, the probability of finding it converges to one as the computation time approaches infinity. If no path is found through the configuration space, it could mean that no valid path exists or the sampling was insufficient to adequately explore the space [8] [18].

Rapidly-Exploring Random Tree (RRT) algorithms are a class of random motion planning algorithms that can be used for systems involving kinodynamic constraints [14] [16] [17].

5.2 Rapidly Exploring Random Tree

The Rapidly-Exploring Random Tree (RRT) algorithm provides a randomized method for

the construction of roadmaps. The goal is to rapidly and efficiently explore the state space or










configuration space. The RRT algorithm was developed by LaValle and Kuffner [16] [17] to handle problems that involve dynamics and differential constraints. An advantage of RRTs is that they can be directly applied to kinodynamic planning, which stems from the fact that RRTs do not require any connections to be made between pairs of states, as in probabilistic roadmaps. Through rapid exploration of the space, the RRT can achieve efficient and feasible solutions between two points, and as with all sample-based planners it has been shown to be probabilistically complete.

When RRTs are used for MAV path planning, the tree nodes are potential MAV waypoints and the branches are paths to the waypoints. A tree initially consists of the UAV's current state as a single node. A random MAV state is generated and the tree is extended toward the random point, creating a new branch and node, as outlined in Algorithm 1. When a path is found or a maximum number of iterations is reached, the RRT terminates.


Algorithm 1. Rapidly-Exploring Random Trees

1. Initialize a tree containing one node: the starting location of the UAV
2. while a path has not been found do
3.     x_rand = a random UAV state
4.     x_near = the state in the tree that is closest to x_rand
5.     if the tree can be connected from x_near to x_rand without collision then
6.         extend the tree from x_near to x_rand
7.     end if
8. end while


Path planning is viewed as the search in a space, X, for a continuous path from an initial state x_init to a goal state x_goal. It is assumed that the space X is a bounded region which contains a fixed obstacle region, X_obs, and free space, X_free. The RRT will be constructed so that all of its vertices are within X_free and each edge of the RRT will correspond to a path that lies entirely in X_free. Collision detection will be performed by an incremental method, and will be described in a later section.

For a given initial state, x_init, an RRT, T, with K vertices is constructed as follows. The first node of T is initialized to be equal to x_init for a single forward-planning RRT. Then















Figure 5-1. Rapidly-Exploring Random Tree Expansion A) Sample Random Node B) Extend
Tree

for each iteration of the RRT generating algorithm a new random state, x_rand, is selected from the configuration space X. The node nearest x_rand in terms of a distance metric ρ is found and used as the point for expansion. An incremental control input u is selected which minimizes the distance from x_near to x_rand. The new node and edge must be located in free space and checked

for obstacle collision. The tree, T, is updated with the new node and control input.
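
A compact Python sketch of Algorithm 1 for a point vehicle is given below, assuming a Euclidean distance metric, a fixed extension step, and a user-supplied collision test; the kinematic constraints applied to the extension step in this work are omitted for brevity, and the small goal bias is an arbitrary illustrative choice.

import random
import math

def rrt(x_init, x_goal, sample_free, collision_free, step=5.0,
        goal_tol=5.0, max_iter=5000):
    """Basic RRT over a 3D state space (Algorithm 1, simplified).

    sample_free    : function returning a random state in X
    collision_free : function (x_a, x_b) -> True if the segment avoids X_obs
    Returns the list of tree nodes and a parent map for path recovery.
    """
    nodes = [tuple(x_init)]
    parent = {tuple(x_init): None}
    for _ in range(max_iter):
        x_rand = x_goal if random.random() < 0.05 else sample_free()  # mild goal bias
        x_near = min(nodes, key=lambda n: math.dist(n, x_rand))
        d = math.dist(x_near, x_rand)
        if d == 0.0:
            continue
        # Extend a fixed step from x_near toward x_rand.
        x_new = tuple(n + step * (r - n) / d for n, r in zip(x_near, x_rand)) \
            if d > step else tuple(x_rand)
        if collision_free(x_near, x_new):
            nodes.append(x_new)
            parent[x_new] = x_near
            if math.dist(x_new, x_goal) < goal_tol:
                break
    return nodes, parent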

5.3 Sampling

Traditionally randomized motion planning algorithms use a uniform sample distribution

to generate x_rand, which tends to grow the tree in such a way that the entire space is covered. In

robot motion planning applications, the presence of obstacles often restricts the use of "greedy"

solutions which involve the robot proceeding directly to the goal. The use of the uniform

distribution increases the algorithm's robustness when solving problems in which the space is not

convex, meaning the space contains local minima which may result in the planning algorithm

not being able to find a valid path. However, when the space is convex and the vehicle dynamics

are fast enough to ensure adequate maneuverability, using a uniform distribution is unnecessary

and computationally expensive, because the "obvious" solution is not explored immediately.

Another option would be to use a distribution which is biased in such a way as to generate,

with some probability, samples which fall close to the goal location. The possible drawback of a

biased search scheme or any "greedy" search strategy is that it can get stuck in local minima and

fail to find a valid path.
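
One simple way to realize such a biased distribution is sketched below: with a small probability the goal itself is returned as the sample, otherwise a uniform sample over the bounded space X is drawn. The bias probability is an arbitrary illustrative value.

import random

def biased_sample(bounds, goal, goal_bias=0.1):
    """Sample a state from X, returning the goal with probability goal_bias.

    bounds : list of (low, high) pairs, one per state dimension
    goal   : goal state used for the biased draws
    """
    if random.random() < goal_bias:
        return tuple(goal)
    return tuple(random.uniform(lo, hi) for lo, hi in bounds)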





















Figure 5-2. Rapidly-Exploring Random Tree w/ Kinematic Constraints A) Two-Dimensional
RRT B) Three-Dimensional RRT


5.4 Collision Avoidance

Once it has been decided where the samples will be placed, the next problem is to determine whether a sampled configuration is in collision; collision detection is thus a critical component of sampling-based planning. In most motion planning applications, the majority of computation time is spent on collision checking, both for validating samples and for validating the edges that connect them, so the choice of collision detector is extremely important. The problem of obstacle avoidance for autonomous

aircraft is complicated by the dynamic constraints of the vehicle. A large body of work exists

that has focused on the problem of obstacle avoidance for mobile ground robots. In contrast to

ground vehicles and rotorcraft, which can always stop, aircraft must maintain a minimum airspeed in order to generate sufficient lift to remain aloft. Unlike helicopters, which can turn in place, fixed-wing aircraft have lateral dynamics that limit the turning capability of the system.

The obstacle avoidance algorithm developed in this work assumes the UAV is stabilized by

a low-level autopilot and that the standard kinematic model describes the navigation behavior of

the vehicle.

The three-dimensional obstacles are composed of two-dimensional convex polygons, as

defined in Chapter 3. Each convex polygon, as shown in Figure 5-3, represents an obstacle face.

A polygon is in collision with a potential incremental path segment, defined between nodes x_i and x_j, if there exists a point r that lies on the line segment connecting the two nodes and is contained















Figure 5-3. Collision Check A) Line-Plane Intersection B) Point Inside Polygon

inside the polygon. The point r is contained inside the polygon if the sum of the angles subtended at r by each pair of adjacent polygon vertices, φ_i, is equal to 2π.
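The two-part test illustrated in Figure 5-3 can be sketched in code as a segment-plane intersection followed by the angle-sum containment check. The sketch below assumes the polygon vertices are supplied in order around the face boundary and uses illustrative numerical tolerances; it is not the exact routine implemented in this work.

import numpy as np

def segment_plane_intersection(x_i, x_j, verts, tol=1e-9):
    # Return the point where segment x_i -> x_j crosses the polygon's plane, or None.
    x_i, x_j = np.asarray(x_i, float), np.asarray(x_j, float)
    v0, v1, v2 = (np.asarray(verts[k], float) for k in range(3))
    n = np.cross(v1 - v0, v2 - v0)        # plane normal from three face vertices
    d = x_j - x_i
    denom = np.dot(n, d)
    if abs(denom) < tol:                  # segment is parallel to the plane
        return None
    t = np.dot(n, v0 - x_i) / denom
    if t < 0.0 or t > 1.0:                # crossing lies outside the segment
        return None
    return x_i + t * d

def point_in_polygon(r, verts, tol=1e-6):
    # Angle-sum test: r lies inside the planar polygon if the angles sum to 2*pi.
    r = np.asarray(r, float)
    total = 0.0
    m = len(verts)
    for k in range(m):
        a = np.asarray(verts[k], float) - r
        b = np.asarray(verts[(k + 1) % m], float) - r
        na, nb = np.linalg.norm(a), np.linalg.norm(b)
        if na < tol or nb < tol:
            return True                   # r coincides with a vertex
        cosang = np.clip(np.dot(a, b) / (na * nb), -1.0, 1.0)
        total += np.arccos(cosang)
    return abs(total - 2.0 * np.pi) < 1e-3

def edge_in_collision(x_i, x_j, verts):
    r = segment_plane_intersection(x_i, x_j, verts)
    return r is not None and point_in_polygon(r, verts)

# Example: a vertical 100 ft square building face blocking a straight-line edge.
face = [(50, 0, 0), (50, 100, 0), (50, 100, 100), (50, 0, 100)]
print(edge_in_collision((0, 50, 50), (100, 50, 50), face))    # True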









CHAPTER 6
SIMULATION RESULTS

A simulation is used to demonstrate the team distribution and mapping strategy. The

simulation consists of a two MAV team governed by individual kinematic models. Kinematic

models were used instead of a nonlinear dynamic model to reduce complexity and computational

cost. The purpose of the proposed simulation is to test the coverage and team distribution

algorithms; therefore, the assumption is made that the kinematic model sufficiently describes the

flight characteristics which would be expected from a more accurate dynamic model combined

with a suitable control system.
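As a rough picture of what such a kinematic model looks like, the sketch below propagates each vehicle as a point moving at constant airspeed under commanded heading rate and climb rate. The particular state variables, the 80 ft/s airspeed, and the turn and climb limits are assumptions chosen for illustration; they are not the parameter values used in the thesis simulation.

# Simple MAV kinematic model sketch (illustrative).
import math
from dataclasses import dataclass

@dataclass
class MavState:
    x: float      # ft, north
    y: float      # ft, east
    z: float      # ft, altitude
    psi: float    # rad, heading

def step(state, airspeed, psi_dot_cmd, climb_cmd, dt):
    # Propagate the kinematic model one time step of length dt seconds.
    psi_dot = max(-0.5, min(0.5, psi_dot_cmd))     # assumed turn-rate limit, rad/s
    climb = max(-20.0, min(20.0, climb_cmd))       # assumed climb-rate limit, ft/s
    return MavState(
        x=state.x + airspeed * math.cos(state.psi) * dt,
        y=state.y + airspeed * math.sin(state.psi) * dt,
        z=state.z + climb * dt,
        psi=state.psi + psi_dot * dt,
    )

# Two-vehicle team initialized with a small lateral offset at equal altitude.
team = [MavState(0.0, 0.0, 500.0, 0.0), MavState(0.0, 100.0, 500.0, 0.0)]
team = [step(s, airspeed=80.0, psi_dot_cmd=0.1, climb_cmd=0.0, dt=0.1) for s in team]
print(team)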

There is a camera mounted on the nose of each MAV, aligned parallel with the longitudinal

axis. The camera is updated at a sensor rate of 0.01 Hz. This simulation was run using a feature point mesh placed over the faces of the buildings at an even interval. At each sensor time step the feature points in the camera frustum are located using scene reconstruction and processed with the dimensionality reduction and plane fitting described in Chapter 3. Optimal implementations of the scene reconstruction analysis are not required here; therefore, the actual computation time is ignored and the sensor and scene reconstruction processes are updated at rates found in the published literature. For this example, the scene reconstruction algorithm runs at 0.01 Hz.

For this simulation, it is assumed that perfect feature point extraction and tracking are available. The simulation also assumes perfect state estimation, perfect terrain mapping, and

perfect path planning. Clearly these assumptions are unrealistic but will suffice to demonstrate

the effectiveness of the mapping criteria.

The MAV team will fly through an environment designed to demonstrate the performance of the mapping criteria in an urban setting. The environment is shown in Figure 6-1. The mission objective is to ensure the vehicles are distributed in the environment and that they map the unknown environment efficiently.

Path planning is accomplished using the RRT algorithm, as described in Chapter 5. The

RRT provides a kinodynamically feasible, collision-free trajectory for the currently known

























Figure 6-1. Simulated Urban Environment

environment map. The planned path connects the vehicle's current pose to the currently defined goal location, determined from the mapping criteria. The planned path serves as a 'conditional trajectory' and only ensures a feasible path for the current environment map. The conditional trajectory guarantees neither obstacle avoidance in the unknown portion of the environment nor an optimal trajectory to the goal location. Because the vehicle is operating with a changing state of environment knowledge, the RRT algorithm is used as an 'on-line' path planner which replans the trajectory at a given rate. The RRT algorithm provides a new trajectory at a rate of 0.25 Hz.
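One way to picture how these two rates interact is a fixed-step scheduler that triggers the mapping update and the replanning step at their own periods. The loop below is a minimal sketch under that assumption; the 100 s and 4 s periods simply restate the 0.01 Hz and 0.25 Hz rates quoted above, and the empty function bodies stand in for the scene reconstruction, map merging, and RRT planning steps.

# Fixed-rate scheduling sketch (illustrative): the map update runs at 0.01 Hz
# and the RRT replanner at 0.25 Hz within one simulation loop.
def update_map(t):
    pass  # placeholder: scene reconstruction, plane fitting, global map merge

def replan_trajectory(t):
    pass  # placeholder: RRT replanning against the current environment map

def run_simulation(duration_s=150.0, dt=0.1, map_period=100.0, plan_period=4.0):
    t = 0.0
    next_map, next_plan = 0.0, 0.0
    while t < duration_s:
        if t >= next_map:
            update_map(t)
            next_map += map_period          # 1 / 0.01 Hz = 100 s
        if t >= next_plan:
            replan_trajectory(t)
            next_plan += plan_period        # 1 / 0.25 Hz = 4 s
        # vehicle kinematics would be propagated here every dt seconds
        t += dt

run_simulation()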












Figure 6-2. Simulation at T=0s


The simulation is initialized with a completely unknown environment, with the search area

bounded by a [3000 ft., 3000 ft., 1000 ft.] environment volume. The two-vehicle team is initialized










slightly laterally separated at equal altitudes with identical initial attitudes. The initial goal location, as determined from the mapping criteria, is based completely on vehicle location, since there are no initially known obstacles. Figure 6-2 shows the initial environment, vehicle locations, and conditional trajectories. The initial goal locations quickly result in diverging trajectories, distributing the vehicles to opposing environment boundaries.

When a team vehicle approaches within a defined proximity of the exploration goal location, the mapping criteria recalculates a new exploration goal based on the current map and team knowledge. The new goal location moves the vehicle away from its current position, the positions of the other team vehicles, and the already-mapped regions.
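A rough sketch of this goal-selection step is shown below. It scores randomly drawn candidate goals by their summed distances to the known landmarks and to the other vehicles and keeps the best one. The plain Euclidean distance and the random candidate set are simplifications made for illustration; the criterion developed in Chapter 4 uses a Mahalanobis distance that accounts for the spatial statistics of the mapped data.

# Exploration-goal selection sketch (illustrative). Candidate goals are scored
# by their summed distances to known landmarks and to the other team vehicles.
import math
import random

def euclidean(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def select_goal(landmarks, vehicle_positions, bounds, n_candidates=500):
    best_goal, best_score = None, -1.0
    for _ in range(n_candidates):
        cand = tuple(random.uniform(lo, hi) for (lo, hi) in bounds)
        score = sum(euclidean(cand, p) for p in landmarks + vehicle_positions)
        if score > best_score:
            best_goal, best_score = cand, score
    return best_goal

# Example: push the goal away from one mapped building face and both vehicles.
bounds = ((0, 3000), (0, 3000), (0, 1000))
landmarks = [(1500, 1500, 100)]
vehicles = [(200, 200, 500), (200, 300, 500)]
print(select_goal(landmarks, vehicles, bounds))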












Figure 6-3. Simulation at T=80s


Figure 6-3 shows the replanned goal location and the new conditional trajectory.

After continuing the simulation for 150 seconds, 65% obstacle coverage was achieved. Obstacle coverage is defined as the percentage of the total obstacle surface area that has been mapped.

The simulation was also run with randomly selected goal locations, so that current map knowledge and team distribution were not included in the mapping criteria. The simulation was again run for 150 seconds, and 60% of the obstacles were mapped. Using the mapping criteria, the team vehicles remained more widely distributed. The two simulations, random and proposed method, are only intended to compare the overall representative behavior of each method as it affects team distribution and coverage. No guarantees are made that the percentage of coverage would remain























Figure 6-4. Simulation at T=150s


Figure 6-5. Simulation at T=150s Top View


the same for subsequent simulation runs, due to the random nature of the trajectory generation and therefore of the path defined toward each new goal location.









CHAPTER 7
CONCLUSION

7.1 Summary

Micro Air Vehicles (MAVs) are a reasonable platform for vision-based autonomous environment mapping. Cameras are relatively small and lightweight, making them especially well suited for vehicles with small payloads, and they provide a rich stream of data describing the environment, suitable for map building. MAV teams have the potential to exhibit many advantages over single-vehicle systems; however, team distribution and task assignments must be properly governed to realize the added benefits.

Use of geometric maps to represent the sensed environment allows for significant detail to be mapped. The geometric representation provides a suitable map for a wide range of mission tasks, provided an adequate geometric model is used. The model must be able to accurately capture the environment with enough detail to support the mission requirements. In this case an urban environment can be well modeled by flat planes, providing a suitable map for mission planning, collision avoidance, and obstacle recognition.

The mapping criteria developed in this thesis proved to be a valid exploration method, allowing for the intelligent selection of goal locations to govern vehicle movement and team distribution. The mapping criteria exhibited advantages over other uncoordinated, random vehicle exploration strategies, and was found to be easily scaled to any number of team vehicles without increasing system complexity.

The use of RRT path planners for trajectory generation provided an efficient method for generating conditional vehicle trajectories which can be quickly replanned when new map knowledge becomes available. RRT algorithms also allowed the kinodynamic constraints of each vehicle to be tailored for heterogeneous vehicle teams, adding another layer of flexibility.

The simulations are limited by the assumptions of perfect feature point detection, feature tracking, and scene reconstruction, but they suffice to demonstrate the functionality and performance of the proposed mapping criteria.










7.2 Future Direction

Recently many research efforts have considered the problem of simultaneous localization and mapping (SLAM). The SLAM process uses a single mobile robot to simultaneously generate a global environment map and localize the vehicle within the environment. A somewhat different approach is that of cooperative localization and mapping (CLAM). Basically, CLAM involves two or more robots cooperating to build a map of the environment. The cooperative strategy is aimed not only at increasing the speed with which the map is constructed but also at increasing the accuracy of the resultant maps and vehicle location estimates.










REFERENCES


[1] Broida, T. and Chellappa, R., "Estimation of Object Motion Parameters from Noisy Images," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 8, No. 1, pp. 90-99, Jan. 1986.

[2] Brunskill, E., and Roy, N., "SLAM Using Incremental Probabilistic PCA and Dimensionality Reduction," IEEE International Conference on Robotics and Automation, April 2005.

[3] Canny, J.F., "A Computational Approach to Edge Detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 8, No. 6, November 1986, pp. 679-698.

[4] Castro, G.J., Nieto, J., Gallego, L.M., Pastor, L., and Cabello, E., "An Effective Camera Calibration Method," 1998, pp. 171-174.

[5] Choset, H., Lynch, K., Hutchinson, S., Kantor, G., Burgard, W., Kavraki, L., and Thrun, S., "Principles of Robot Motion: Theory, Algorithms, and Implementations," MIT Press, Cambridge, MA, 2005.

[6] Choset, H. and Nagatani, K., "Topological Simultaneous Localization and Mapping (SLAM): Toward Exact Localization Without Explicit Localization," IEEE Transactions on Robotics and Automation, Vol. 17, No. 2, April 2001.

[7] Duda, R.O., Hart, P.E., and Stork, D.G., "Pattern Classification," John Wiley and Sons, Inc., 2001.

[8] Elfes, A., "Sonar-Based Real-World Mapping and Navigation," IEEE Journal of Robotics and Automation, Vol. RA-3, No. 3, June 1987.

[9] Elfes, A., "Using Occupancy Grids for Mobile Robot Perception and Navigation," Computer, Vol. 22, Issue 6, pp. 46-57, 1989.

[10] Forsyth, D.A. and Ponce, J., "Computer Vision: A Modern Approach," Prentice-Hall, Upper Saddle River, NJ, 2003.

[11] Hamerly, G., and Elkan, C., "Learning the k in k-means," Neural Information Processing Systems, 15, 2003.

[12] Harris, C. and Stephens, M., "A Combined Corner and Edge Detector," Proceedings of the Alvey Vision Conference, 1988, pp. 147-151.

[13] Hartley, R., "In Defense of the Eight-Point Algorithm," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, No. 6, June 1997.

[14] Kuffner, J., and LaValle, S., "RRT-Connect: An Efficient Approach to Single-Query Path Planning," Proceedings of the IEEE International Conference on Robotics and Automation, 2000, pp. 995-1001.

[15] Kuipers, B. and Byun, Y., "A Robot Exploration and Mapping Strategy Based on a Semantic Hierarchy of Spatial Representations," Journal of Robotics and Autonomous Systems, 8:47-63, 1991.

[16] LaValle, S.M., and Kuffner, J.J., "Randomized Kinodynamic Planning," International Journal of Robotics Research, Vol. 20, No. 5, May 2001, pp. 378-400.

[17] LaValle, S.M., "Planning Algorithms," Cambridge University Press. Also available at http://msl.cs.uiuc.edu/planning/.

[18] Latombe, J.C., "Robot Motion Planning," Boston, MA: Kluwer Academic, 1991.

[19] Laumond, J.P., "Robot Motion Planning and Control," Online book: http://www.laas.fr/jpl/book.html.

[20] Longuet-Higgins, H.C., "A Computer Algorithm for Reconstructing a Scene from Two Projections," Nature, Vol. 293, pp. 133-135, Sept. 1981.

[21] Lucas, B. and Kanade, T., "An Iterative Image Registration Technique with an Application to Stereo Vision," Proceedings of the DARPA Image Understanding Workshop, 1981, pp. 121-130.

[22] Ma, Y., Soatto, S., Kosecka, J., and Sastry, S., "An Invitation to 3-D Vision: From Images to Geometric Models," Springer-Verlag New York, Inc., 2004.

[23] ... and Williams, S., "Three-Dimensional Robotic Mapping," Proceedings of the Australasian Conference on Robotics and Automation, Brisbane, 2001.

[24] Martin, C., and Thrun, S., "Online Acquisition of Compact Volumetric Maps with Mobile Robots," IEEE International Conference on Robotics and Automation, Washington, DC, 2002.

[25] MacQueen, J.B., "Some Methods for Classification and Analysis of Multivariate Observations," Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, pp. 281-297, 1967.

[26] Oliensis, J., "A Critique of Structure-from-Motion Algorithms," Computer Vision and Image Understanding, Vol. 80, pp. 172-214, 2000.

[27] Prazenica, R., Watkins, A., Kurdila, A., Ke, Q., and Kanade, T., "Vision-Based Kalman Filtering for Aircraft State Estimation and Structure from Motion," AIAA Guidance, Navigation, and Control Conference and Exhibit, San Francisco, California, August 2005.

[28] Surmann, H., Nuchter, A., and Hertzberg, J., "An Autonomous Mobile Robot with a 3D Laser Range Finder for 3D Exploration and Digitalization of Indoor Environments," Robotics and Autonomous Systems, Vol. 45, pp. 181-198, 2003.










[29] Thrun, S., Gutmann, J.-S., Fox, D., Burgard, W., and Kuipers, B., "Integrating Topological and Metric Maps for Mobile Robot Navigation: A Statistical Approach," Proceedings of the National Conference on Artificial Intelligence (AAAI), 1998.

[30] Thrun, S., Burgard, W., and Fox, D., "A Real-Time Algorithm for Mobile Robot Mapping With Applications to Multi-Robot and 3D Mapping," IEEE International Conference on Robotics and Automation, 2000.

[31] Tipping, M. and Bishop, C., "Probabilistic Principal Component Analysis," Journal of the Royal Statistical Society, Series B, 61, Part 3, pp. 611-622, 1999.

[32] Tomasi, C. and Kanade, T., "Shape and Motion from Image Streams Under Orthography," International Journal of Computer Vision, Vol. 9, No. 2, pp. 137-154, 1992.

[33] Office of the Secretary of Defense, United States, "Unmanned Aircraft Systems Roadmap 2005-2030," August 2005.

[34] Thomas, J. and Oliensis, J., "Dealing with Noise in Multiframe Structure from Motion," Computer Vision and Image Understanding, Vol. 76, No. 2, November 1999, pp. 109-124.

[35] Weng, J., Zhang, Y., and Hwang, W., "Candid Covariance-Free Incremental Principal Component Analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 25, No. 8, pp. 1034-1040, 2003.









BIOGRAPHICAL SKETCH

Eric Branch was born and raised in Hollywood, Florida. He attended the University of West Florida in Pensacola, Florida. During his time in Pensacola, he worked as a flightline attendant at Pensacola Regional Airport, where he started his flight training and eventually earned a Commercial Pilot's License. He then transferred to the University of Florida, where he received a Bachelor of Science degree in aerospace engineering. After graduation, Eric went to work for Naval Air Systems Command (NAVAIR) as a Flight Test Engineer. Eric later returned to the University of Florida to earn a Master of Science degree in aerospace engineering. His graduate research involved the development of vision-based flight control methodologies.





PAGE 1

MULTI-VEHICLECOOPERATIVECONTROLFORVISION-BASEDENVIR ONMENT MAPPING By ERICALANBRANCH ATHESISPRESENTEDTOTHEGRADUATESCHOOL OFTHEUNIVERSITYOFFLORIDAINPARTIALFULFILLMENT OFTHEREQUIREMENTSFORTHEDEGREEOF MASTEROFSCIENCE UNIVERSITYOFFLORIDA 2007 1

PAGE 2

c r 2007EricAlanBranch 2

PAGE 3

Tomyfamilyandfriends, whosepatienceandsupportmadethispossible 3

PAGE 4

ACKNOWLEDGMENTS ThisworkwassupportedjointlybytheAirForceResearchLab oratoryandtheAirForce OfceofScienticResearchunderF49620-03-1-0381andF49 620-03-1-0170. IwanttoacknowledgeandthankAFRL-MNandEglinAirForcefo rthefundingofthis project.Iwouldalsoliketosincerelythankmyadvisor,Dr. RickLind,forhisguidanceand supportthroughoutmytimeattheUniversityofFlorida.Spe cialthanksalsotomysupervisory committeemembers,Dr.WarrenDixonandDr.CarlCrane,fort heirtimeandconsideration. ThisworkwouldnotbepossiblewithoutthemembersoftheFli ghtControlsLab;Adam Watkins,JoeKehoe,RyanCausey,DanielGrantandMujahidAb dulrahim,whohavealways beenwillingtosharetheirknowledgeandexpertise.Mypare nts,MarkandPeggyBranch,who alwaysencourageandsupportmeinallmypursuits. 4

PAGE 5

TABLEOFCONTENTS page ACKNOWLEDGMENTS ....................................4 LISTOFFIGURES .......................................7 ABSTRACT ...........................................8 CHAPTER 1INTRODUCTION ....................................10 1.1ProblemStatement .................................10 1.2Motivation ......................................10 1.3Multi-robotSystems ................................12 1.4SystemArchitecture .................................13 2COMPUTERVISION ..................................15 2.1Camera .......................................15 2.2FeaturePointDetection ...............................16 2.3FeaturePointTracking ...............................17 2.4Two-viewImageGeometry .............................18 2.4.1EpipolarConstraint .............................19 2.4.2StructurefromMotion ............................20 3DIMENSIONALITYREDUCTION ...........................22 3.1Introduction .....................................22 3.2EnvironmentRepresentation ............................24 3.2.1SpatialDecomposition/OccupancyGrid ..................24 3.2.2TopologicalRepresentation .........................25 3.2.3GeometricMaps ...............................25 3.3DataClustering ...................................26 3.3.1K-means ...................................26 3.3.2PCA .....................................27 3.3.3PlaneFitting .................................28 3.4IncrementalMapBuilding .............................29 4MAPPINGCRITERIA ..................................30 4.1Introduction .....................................30 4.2Coverage ......................................30 4.2.1RandomMovementMethod .........................30 4.2.2Frontier-BasedExploration .........................31 4.2.3Seek-OpenSpaceAlgorithm ........................31 4.2.4MappingCriteria ..............................32 5

PAGE 6

5TRAJECTORYPLANNING ...............................34 5.1Introduction .....................................34 5.2RapidlyExploringRandomTree ..........................34 5.3Sampling ......................................36 5.4CollisionAvoidance .................................37 6SIMULATIONRESULTS ................................39 7CONCLUSION ......................................43 7.1Summary ......................................43 7.2FutureDirection ...................................44 REFERENCES .........................................45 BIOGRAPHICALSKETCH ..................................48 6

PAGE 7

LISTOFFIGURES Figure page 1-1MicroAirVehicles ....................................11 1-2UrbanEnvironment ....................................13 1-3VehicleArchitecture ....................................14 1-4TeamArchitecture .....................................14 2-1MappingfromEnvironmenttoImagePlane .......................15 2-2TheGeometryoftheEpipolarConstraint .........................19 3-1EnvironmentRepresentations ...............................24 3-2PrincipleComponentAnalysis ..............................28 3-3IncrementalPlaneMerging ................................29 5-1Rapidly-ExploringRandomTreeExpansion .......................36 5-2Rapidly-ExploringRandomTreew/KinematicConstraint s ...............37 5-3CollisionCheck ......................................38 6-1SimulatedUrbanEnvironment ..............................40 6-2SimulationatT=0s ....................................40 6-3SimulationatT=80s ....................................41 6-4SimulationatT=150s ...................................42 6-5SimulationatT=150sTopView .............................42 7

PAGE 8

AbstractofThesisPresentedtotheGraduateSchool oftheUniversityofFloridainPartialFulllmentofthe RequirementsfortheDegreeofMasterofScience MULTI-VEHICLECOOPERATIVECONTROLFORVISION-BASEDENVIR ONMENT MAPPING By EricAlanBranch August2007 Chair:RichardC.Lind,Jr.Major:AerospaceEngineering Flightwithinurbanenvironmentsiscurrentlyachievableu singsmallandagileaircraft; however,thepathplanningforsuchvehiclesisdifcultdue totheunknownlocationofobstacles. Thisthesisintroducesamulti-aircraftformulationtothe mappingapproach.Theproblem addressedisthetrajectorygenerationforthemappingofan unknown,urbanenvironmentbya distributedteamofvision-basedMicroAirVehicles(MAVs) .Mostmissionscenariosrequire avehicletoknowitspositionrelativeitssurroundingsino rdertoassuresafenavigationand successfulmissionplanning.Avisionsensorcanbeusedtoi nferthree-dimensionalinformation fromatwo-dimensionalimageforuseingeneratinganenviro nmentmap.StructurefromMotion (SFM)isemployedtosolvetheproblemofinferringthree-di mensionalinformationfromtwodimensionalimagedata.Noglobalinformationisavailable intheunmappedregionssothemap mustbeconstructedincrementallyandsafetrajectoriesmu stbeplannedon-line.Afeaturepoint datasetcanbereducedtoasetofgeometricprimitives,ifth eenvironmentcanbewellestimated byageometricmodel.Inthecaseofanurbanenvironmentthes cenecanbewellrepresentedby planarprimitives.Thetrajectoriesneedtobeplannedtoga therinformationrequiredtoconstruct anoptimalmap.Anefcientmappingcanbeaccomplishedbyma ximizingenvironment coverageandvehicledistribution.Thedistributionandco veragecriteriaaregovernedbyan augmentedcostfunctionwhichtakesintoaccountthedistan cebetweenmappedareasandall otheraircraft.Maximizingthiscostfunctionautomatical lydistributesaircraftthroughouta regionanddispersesthemaroundregionsthatremaintobema pped.Also,themappingapproach 8

PAGE 9

usesmaneuveringcapabilitiestodeterminethepathsforea chaircraft.Thisdeterminationresults byutilizingbranchesfromtreestructuresthatrepresenta chievablemaneuvers.Suchanapproach inherentlyaltersthepathbetweenaircraftthatmayhavedi fferentturnradiusorclimbrates. 9

PAGE 10

CHAPTER1 INTRODUCTION 1.1ProblemStatement Thisthesisintroducesamulti-aircraftformulationtothe mappingapproach.Theproblem addressedisthetrajectorygenerationforthemappingofan unknown,urbanenvironmentbya distributedteamofvision-basedMicroAirVehicles(MAVs) .Mostmissionscenariosrequire avehicletoknowitspositionrelativeitssurroundingsino rdertoassuresafenavigationand successfulmissionplanning.Avisionsensorcanbeusedtoi nferthree-dimensionalinformation fromatwo-dimensionalimageforuseingeneratinganenviro nmentmap.Noglobalinformation isavailableintheunmappedregionssothemapmustbeconstr uctedincrementallyandsafe trajectoriesmustbeplannedonline.Thetrajectoriesneed tobeplannedtogatherinformation requiredtoconstructanoptimalmap.Anefcientmappingca nbeaccomplishedbymaximizing environmentcoverageandvehicledistribution.Thedistri butionandcoveragecriteriaare governedbyanaugmentedcostfunctionwhichtakesintoacco untthedistancebetweenmapped areasandallotheraircraft.Maximizingthiscostfunction automaticallydistributesaircraft throughoutaregionanddispersesthemaroundregionsthatr emaintobemapped.Also,the mappingapproachusesmaneuveringcapabilitiestodetermi nethepathsforeachaircraft. Thisdeterminationresultsbyutilizingbranchesfromtree structuresthatrepresentachievable maneuvers.Suchanapproachinherentlyaltersthepathbetw eenaircraftthatmayhavedifferent turnradiusorclimbrates. 1.2Motivation TheMAVhasbeendevelopedtobealow-costplatformcapableo fcompletingcomplex, low-altitudemissionsforbothmilitaryandcivilianappli cations.Thelonglistofmissions envisionedforMAVsincludemilitaryreconnaissance,surv eillanceandtrackingofaspecied target,mappinganareaaffectedbythedispersalofachemic al/biologicalagent,andvisual monitoringoftrafc.Acommonaspecttothesemissionsisth atthevehiclemustyatlow altitudesandeveninsiderooms;andfunctioninregionstha tareeitherbeyondanoperators line-of-sightordeemedtoohazardousfordirecthumaninvo lvement.Therefore,theability 10

PAGE 11

toexhibitahighlevelofautonomy,andsituationalawarene ssbecomesacriticalattributefor ensuringtheusefulnessofMAVs[33].Anaccuratemappingof unknownenvironmentsisa criticaltaskforensuringaccuratesituationalawareness .Thenumberofhazardsattheseight conditionsisdisproportionatelylargeincomparisontoi ghtathighaltitudes.Informationmust begatheredthatindicatesthelocationofobjectsinthesea reasalongwiththeirsizeandshapein relationtothevehicles.Itisreasonabletoassumepre-ig htknowledgeofmajorobjectswithin citiesorrooms;however,itisdoubtfulthateverytreeands ignorangleofeverydoorisknown inadvance.Assuch,in-ightmappingisarealisticexpecta tiontoobtaininformationabout theenvironmentandpermitightoperations.Figure 1-1 showsseveralexamplesofcurrent generationMAVs. Figure1-1.MicroAirVehicles Anapproachwasdevelopedtooptimizemappingusingasingle aircraft.Thisapproach developedametricthatnotedthedistancebetweentheaircr aftandfeaturesthathadalready beenmapped.Pathplanningthusresultedbymaximizingthat distance.Themotivationforthis thesisistodeveloptechnologiesthatenableadistributed teamofMAVstoautonomouslymap anunknownenvironment.Thecapabilityofcooperativemapg enerationallowsthevehicles toaccuratelyandefcientmaptheenvironmentandshareloc allygeneratedincrementalmaps withthevehicleteamwhichincreasesvehiclesafetyandena blesefcienttrajectorygeneration beyondanyindividualvehiclesline-of-sight.Anaccurate environmentmapisalsoacritical componentforfuturehighlevelmissionplanning. 11

PAGE 12

Themappingsystemproposedinthispaperiterativelyperfo rmsthethreefollowing activities: 1.locallybuildingapartialmapthatrepresentstheportio noftheenvironmentcurrentsensed 2.updatingtheglobalmapaccordingtothenewlyacquiredpa rtialmap 3.determiningandplanningtowardthenextobservationpos ition,accordingtothe explorationstrategy 1.3Multi-robotSystems Multipleautonomousrobotteamsaresuggestedtohavemanya dvantagesoversingle vehiclesystems.Cooperatingrobotshavethepotentialtoa ccomplishtasksmoreefcientlyand withgreaterspeedthanasinglevehicle.Beyondgainsinef ciencymultiplevehiclesystems havethepotentialtodemonstratefasterandmoreaccuratel ocalizationsiftheyexchange informationabouttheirpositionandsensedenvironmentin acooperativemanner.Distributed teamsystemsalsohavetheaddedadvantageofincreasingred undancyandsensorvariety.Using severallowcostvehiclesintroducesredundancyandahighe rlevelofriskreductionthanhaving onlyone,possiblymoreexpensive,vehicle.Multi-robotsy stemshavethefollowingproperties: greaterefciency improvedsystemperformance faulttolerance lowereconomiccost distributedsensingandaction alargerrangeoftaskdomains Intheeldofmobilerobotics,researchonmultiplerobotsy stemsisgainingpopularity becausethemultiplerobotapproachprovidesdistinctadva ntagesoverasinglerobotsuchas scalability,reliabilityandspeed.Ontheotherhanditinc reasesthecomplexityofthesystemover thatofasinglevehiclesystem. Unlikemostcooperativemult-vehiclebehaviorexploratio nusingmultiplevehiclesis characterizedbytechniquesthatavoidtightlycoordinate dbehavior.Mostcooperativemultiple vehiclebehaviorrelatestotheclosecoordinationofvehic lesinaformationora“followthe leader”strategy.Howeverexplorationusingmultiplevehi clesischaracterizedbyateam 12

PAGE 13

distributionstrategywhichefcientlycoversanenvironm entandavoidstechniqueswhichrelyon tightlycoordinatedbehavior. Figure1-2.UrbanEnvironment 1.4SystemArchitecture Thisthesisisconcernedwithtrajectorygenerationforvis ion-basedmappingofanurban environmentbyadistributedMAVteam.Theoverallsystemar chitectureismadeupoffourseparateproblems:three-dimensionalreconstructionfromtw o-dimensionalimagedata,environment representationandincrementalmapbuilding,exploration andmappingcriteria,andtrajectory generation. Chapter2describesthecameramodelandtheapproachformap pingathree-dimensional objectfromatwo-dimensionalimage,featurepointdetecti on,andfeaturepointtracking.As wellasdescribetheimplicitrelationshipbetweenthecame raandenvironmentusingtwoview geometryandtheepipolarconstraint. Chapter3describesmethodsforenvironmentrepresentatio nanddimensionalityreduction. Asensorbaseddataset,inthiscasefeaturepoints,canbere ducedtoasetofgeometricprimitives,iftheenvironmentcanbewellestimatedbyageometr icmodel.Inthecaseofanurban environmentthescenecanbewellrepresentedbyplanarprim itives. Chapter4describesthemappingcriteriaandteamdistribut ionstrategyusedtoensure coverageoftheenvironment.Thesearchandexplorationstr ategyusesamultivariateapproachto 13

PAGE 14

determinegoallocationswhichhaveahigherprobabilityof containingenvironmentfeaturesof interest. Chapter5describesthetrajectoryplanningtechniqueusin gasamplebasedmotionplanningalgorithm,whichisabletobeexecutedasanon-linealg orithmtoproduceaconditional trajectory,whichcanreplanasenvironmentinformationin creases. Chapter6and7presentssimulationresultsandconclusions onthealgorithmspresented. Alsodescribesfuturedirectionforresearchanddevelopme nt. Anoverviewofhowtheseparatecomponentsareinterconnect edisgraphicallydepicted belowinFigure 1-3 and 1-4 Figure1-3.VehicleArchitecture Figure1-4.TeamArchitecture 14

PAGE 15

CHAPTER2 COMPUTERVISION 2.1Camera Acameraprovidesamappingbetweenthethree-dimensionalw orldandatwo-dimensional image.Forthisdiscussionwewillconsiderabasicpinholec ameramodel,consistingofthe centralprojectionofpointsinspaceontoanimageplane.In thiscasethecenterofprojectionwill betheoriginofaEuclideancoordinatesystem,withtheimag eplanedenedasaplanenormalto thecamera'scentralaxisandlocatedafocallength, f ,awayfromthecenterofprojection.The pinholecamerageometryisdescribedinFig. 2-1 Camera AxisImage Plane h x P h y h z Projectionof P ˆ i 1 ˆ i 3 ˆ i 2 ~ h f ˆ i 3 I n Figure2-1.MappingfromEnvironmenttoImagePlane Underthepinholecameramodelapointinspacewithcoordina tes h =[ h x ; h y ; h z ] T ismappedtothepointontheimageplanewherealinejoiningt hepoint h tothecenterof projectionmeetstheimageplane.Bysimilartrianglesthep oint ( h x ; h y ; h z ) T ismappedto thepoint ( ; n ) T .Equation 2–1 describesthecentralprojectionmappingfromworldtoimag e coordinates[4]. ( h x ; h y ; h z ) T f h x h z ; f h y h z T (2–1) Forapinholecamerawiththecameraoriginplacedatthelens therelativepositionsare givenbythefollowingmodel. 15

PAGE 16

= f h x h z (2–2) n = f h y h z (2–3) Thecentralprojectioncanfurtherbeexpressedinhomogene ouscoordinatesasshownbelow h z 266664 n 1 377775 = 266664 f 000 0 f 00 0010 377775 266666664 h x h y h z 1 377777775 (2–4) 2.2FeaturePointDetection Therststepinthescenereconstructionprocessisthedete ctionoffeatureswithinthe viewableenvironment.Thedetectionoffeaturepointsallo wsimagepixels,features,tobe denedasgeometricentities,suchaspointsandlines.Thef eaturepointselectionisbasedon changesinimagegradientfrompixeltopixelovertheentire image,thereforeselectedfeature pointsareusuallyareaswithuniquecharacteristics.Feat uressuchascorners,edges,andlinesare characterizedbylargechangesincolorintensityandthere foreallowforgoodfeatureselection, capableofbeingeasilytrackedimagetoimage[32].Forscen ereconstructiontheassumption isthattheselectedfeaturepointscorrespondtoobstacles intheenvironmentwhichmustbe detectedforcollisionavoidanceandmappingtasks.Object sofinterestsuchasbuildings, bridges,andcitysignshavemanysharpedgesandabruptcolo rchangesallowingforgood featuredetection. Acommonlyusedfeaturepointdetectionmethodisthegradie nt-basedcornerdetection algorithm.Thealgorithmrequiresthattheimagegradient I becalculatedovertheimageframe. Therefore,givenanimagegradient I ( ; n ) acornerfeaturecanbedetectedatagivenpixel ( ; n ) bycomputingthesummationoftheouterproductofthegradie nts, G ( ; n ) ,overasearchwindow ofthesize W ( ; n ) 16

PAGE 17

G ( x )= x 2 W ( x ) I ( x ) I T ( x )= 264 I 2 I I n I I n I 2 n 375 (2–5) ateachpixelinthewindow W thematrix G iscomputed.Ifthesmallestsingularvalueof G isfoundtobegreaterthanadenedthreshold, t ,thenthepixel ( ; n ) isclassiedasacorner feature[3][12][10]. 2.3FeaturePointTracking Inordertomatchindividualfeaturepointsfromimagetoima geafeaturetrackingprocess mustbeperformed.Thefeaturecorrespondenceprocessreli esonimageintensitymatchingover adenedpixelregion.TheLucas-Kanadefeaturepointtrack erisastandardalgorithmforsmall baselinecameramotion.Thealgorithmtracksfeaturepoint sthroughthecorrespondenceof pixelgroupings,aneighborhoodofpixelsistrackedbetwee ntwoimages.Therefore,afeature pointcanbeassociatedwithafeaturepointonasubsequenti magebyatranslationinpixel coordinateswherethedisplacementisdenedas h =[ dx ; dy ] T .Thesolutionfor h isfound throughtheminimizationofintensityerrorobtainedbymat chingpixelwindowsofsimilar intensities[21][32]. Therefore,ifoneassumesasimpletranslational ˆ h = argmin h x 2 W ( x ) k I 1 ( x I 2 ( x + d x ) k 2 (2–6) ApplyingtheTaylorseriesexpansiontothisexpressionabo utthepointofinterest, x ,while retainingonlythersttermintheseriesresultsinEquatio n 2–7 I d dt + I n d n dt + I t = 0(2–7) RewritingEquation 2–7 inmatrixformresultsinEquation 2–8 D I T u + I t = 0(2–8) 17

PAGE 18

where u =[ d dt ; d n dt ] Wecansolveforthevelocitiesifadditionalconstraintsar eenforcedontheproblem, whichentailsrestrainingregionstoalocalwindowthatmov eatconstantvelocity.Uponthese assumptiononecanminimizetheerrorfunctiongiveninEqua tion 2–9 E 1 ( u )= W ( x ) [ I T ( x ; t ) u ( x )+ I t ( x ; t )] 2 (2–9) Theminimumofthisfunctionisobtainedbysetting E 1 = 0andtherefore,resultingin Equation 2–10 264 I 2 I I n I I n I 2 n 375 u + 264 I I t I n T t 375 = 0(2–10) or,rewritteninmatrixformresultsinthefollowing u = G 1 b (2–11) where G = 264 I 2 I I n I I n I 2 n 375 (2–12) b = 264 I I t I n T t 375 (2–13) 2.4Two-viewImageGeometry Thetwo-viewimagegeometryrelatesthemeasuredimagecoor dinatestothe3Dscene. Thecameracongurationcouldbeeithertwoimagestakenove rtimeofthesamescene,asin themonocularcase,ortwocamerassimultaneouslycapturin gtwoimagesofthesamescene, asinthestereovisioncase.Thegeometryofthetwo-viewcon gurationwillgenerateepipolar constraintfromwhichthe3Dscenereconstructionwillbefo rmulated[22]. 18

PAGE 19

2.4.1EpipolarConstraint Therelationshipbetweencameraandenvironmentisformula tedastheepipolarconstraint orcoplanarityconstraint.Thisconstraintrequiresposit ionvectors,whichdescribeafeaturepoint relativetothecameraattwoinstantsintime,tobecoplanar withthetranslationvectorandthe originsofthecameraframes.Thisgeometricrelationshipi sillustratedinFigure 2-2 where h 1 and h 2 denotethepositionvectorsofthefeaturepoint, P ,inthecamerareferenceframes.Also, thevaluesof x 1 and x 2 representthepositionvectorsprojectedontothefocalpla newhile T indicatesthetranslationvectoroftheoriginofthecamera frames. ( R ; T ) P p 1 e 2 I 1 l 1 e 1 p 2 I 2 l 2 Figure2-2.TheGeometryoftheEpipolarConstraint AgeometricrelationshipbetweenthevectorsinFigure 2-2 isexpressedbyintroducing R asarotationmatrix.Thisrotationmatrixisthetransformo fthecamerapositionbetweenimage frames.Assumingapin-holecamerawhichiscollinearwithi tsprojectionintothefocalplane, thegeometricrelationshipisdescribedinEquation 2–14 19

PAGE 20

x 2 ( T R x 1 )= 0(2–14) TheexpressioninEquation 2–14 reectthatthescalartripleproductofthreecoplanar vectorsiszero,whichformsaplaneinspace.Theserelation shipscanbeexpandedusinglinear algebratogenerateastandardformoftheepipolargeometry asinEquation 2–15 .Thisnew formindicatesarelationshipbetweentherotationandtran slation,writtenastheessentialmatrix denotedas E ,totheintrinsicparametersofthecameraandassociatedfe aturepoints. [ x 2 ] E [ x 1 ]= 0(2–15) 2.4.2StructurefromMotion Structurefrommotion(SFM)isatechniquetoestimatethelo cationofenvironmental featuresin3Dspace[26].Thistechniqueutilizestheepipo largeometryinFigure 2-2 and assumesthattherotation, R ,andtranslation, T ,betweenframesisknown.Giventhat,the coordinatesof h 1 and h 2 canbecomputed,bythefundamentalrelationshipgiveninEq uation 2–16 [21][20][27]. h 2 = R h 1 + T (2–16) Thelocationofenvironmentalfeaturesisobtainedbyrstn otingtherelationshipsbetweenfeaturepointsandimagecoordinatesgiveninEquatio n 2–2 andEquation 2–3 .These relationshipsallowsomecomponentsof h x and h y tobewrittenintermsof and n whichare knownfromtheimages.Thus,theonlyunknownsarethedepthc omponents, h 1 ; z and h 2 ; z ,for eachimage.TheresultingsystemcanbecastasEquation 2–17 andsolvedusingaleast-squares approach. 266664 2 f ( R 11 1 f + R 12 n 1 f + R 13 ) n 2 f ( R 21 1 f + R 22 n 1 f + R 23 ) 1 ( R 31 1 f + R 32 n 1 f + R 33 ) 377775 264 h 2 ; z h 1 ; z 375 = 266664 T x T y T z 377775 (2–17) 20

PAGE 21

ThisequationcanbewritteninacompactformasshowninEqua tion 2–18 using z = [ h 2 ; z ; h 1 ; z ] asthedesiredvectorofdepths. A z = T (2–18) Theleast-squaressolutiontoEquation 2–18 obtainsthedepthestimatesofafeaturepoint relativetobothcameraframes.Thisinformationalongwith theimageplanecoordinatescanbe usedtocompute ( h 1 ; x ; h 1 ; y ) and ( h 2 ; x ; h 2 ; y ) bysubstitutingthesevaluesbackintoEquations 2–2 and 2–3 .Theresultingcomponentsof h 1 canthenbeconvertedtothecoordinateframeofthe secondimageanditshouldexactlymatch h 2 .Thesevalueswillnevermatchperfectlydue tonoiseandunknowncameraparametersso,inpractice,anav erageprocessisoftenusedto estimatethefeaturecoordinates[1][13][34]. 21

PAGE 22

CHAPTER3 DIMENSIONALITYREDUCTION 3.1Introduction ForautonomousMAVstheabilitytocreatemapsoftheenviron mentautonomouslyisthe rstrequirementfortheplanningandcompletionofmanytas ks.Itcannotbeassumedapriori environmentinformationwillbeavailablefornavigationa ndmissionplanning,infactitisrarely thecase.Notonlydocityblueprintsorrelatedtypesofmaps failtobeconsistentlyreliablebut, numerousaspectsofanenvironmentarenotlikelytoappearo namapsuchassigns,billboards andsemi-stationaryobjects(i.e.,stationaryvehicles). Mapsmadeforpeopleoftendependupon theinterpretiveskillsofthepersonusingthemapandonhis orherabilitytoinferfunctional information.Mapsusuallyrepresentstructuralelementsw ithabstractsymbolsandsemantic labels.AMAVmustbeabletorelateitscurrentlocationdire ctlytoitsownsensoryinformation regardingitsenvironment.Thissuggeststhatanappropria temapforanautonomousrobotshould relatetothetypesofsensordatatherobotislikelytoobser ve.Ingeneral,thesefactorsimplythat theabilitytoperformsomedegreeofautonomousmapconstru ction,update,andvalidationisthe primaryrequirementforsuccessfulmissionplanning[29][ 5]. Mapscantakemanyformsandfallbetweentwospecicreprese ntationalextremes,which areparticularlyrelevanttoautonomousvehicles.Therst aremetricmaps,whicharebasedon anabsolutereferenceframeandlocationestimatesofwhere objectsareinspace.Thesecond aretopologicalmaps,orrelationalmaps,thatonlyexplici tlyrepresentconnectivityinformation betweenimportantobjectsorlandmarks.Eachoftheserepre sentationextremesallowforvarious specicdescriptionsofanenvironment[23][30]. Theamountofinformationcontainedinarepresentationand theassumptionsmadeabout theenvironmentleadstolayeringofrepresentationsofmap datawithincreasinglystrong assumptionsandanarrowingofsuitablemissiontasks.Belo wisalistofmapstructuresinorder impliedmodelassumptions. 22

PAGE 23

1.Sensorial.Rawsensordata(i.e.,rawfeaturepointdata)2.SpatialDecomposition/OccupancyGrid.Discretelysamp ledenvironment,dening probabilityofoccupancy 3.Geometric.2-or3-Dimensionalobjectsinferredfromsen sordata 4.Topological.Thelarge-scalerationallinksthatconnec tobjectsandlocationsacrossthe environmentasawhole 5.Semantic.Functionlabelsassociatedwiththelandmarks ofthemap Geometricmapsaregenerallythemostintuitiveatrstglan ceandgiveamoregeneral representationoftheenvironment.Becausetheobtainedma pismoregeneralinnatureitiswell suitedforawidevarietyofmissiontasks.Whileatopologic almappingismorecloselyrelated toanindividualmissionrequirement,andthuslesseasilya daptedtodiversemissiontasks. Sensorialmappingsrepresentminimallyalteredsensordat a,howeverthelargestoragesizeand difcultyinferringinformationmakesthemundesirablefo rsomehighlevelmissionplanning. Notonlyareexistingmapsofmostenvironmentsinaccurate, butalmosteveryenvironment occupiedbyhumansundergoeschange.Asaresultrobotsyste msmustbeabletoaccommodate changeintheirenvironment.Iftheyaretomaintainmaps,th eymustbeabletoupdatethem.The generalapproachofmapbuildingistoincrementallyintegr atenewdataintothemap.Wheneach newframeisobtained,itisalignedtoacumulativeglobalma p. Theexplorationofanunknownenvironment,theconstructio nofageometricmapfrom thatexploration,andotherrelatedproblemshavebeenstud iedextensivelyintheeldof computationalgeometry.Explorationrelatestoavarietyo frelatedcapabilities.Theseinclude searchingforaspecicobjectiveorgoalposition,searchi ngforaroutewithspecicproperties, coveringfreespace,andlearningabouttheoccupancyspace (i.e.,mapping).Abroadclassof problemsdealwithsearchinunknownorpartiallyknownenvi ronments.Suchproblemsare interestingnotonlyintheirownrightbutalsobecausethey arecloselyrelatedtoenvironmental explorationingeneralwherethegoalbeingsoughtisadditi onalgeometricknowledge. 23

PAGE 24

3.2EnvironmentRepresentation 3.2.1SpatialDecomposition/OccupancyGrid Oneofthemoststraightforwardandwidelyusedenvironment representationsforrobotic mappingaregridstructures.Gridstructuresoroccupancyg ridsarespatialdecompositionsof theenvironment,wherethetwo-orhigher-dimensionalenvi ronmentisdiscretelysampled.The simplestmethodusedtosampletheenvironmentistosamplet hespaceusingauniformgrid, witheachsampletakenatpointsinthegridexpressingthepr obabilityofoccupancyatthat samplepoint.Ateachsampledpointthespaceisthereforede nedasempty,full,orpartially full.Becausethismethodofenvironmentrepresentationon lydescribesthespacesampledand nottheindividualobjectcontainedwithinthatspace,noin formationisavailabletoidentifyor discriminatebetweenobjects.Otherdisadvantagesaretha tthegridresolutionislimitedbyhow courseornetheenvironmentisdiscretizedandstoragereq uirementisthesamewhetherthe environmentisemptyoroccupied.Themainadvantageofauni formgridrepresentationisits extremegenerality,requiringnoassumptionsbemaderegar dingthetypeofobjectscontained withinthespace.The2-dimensionaloccupancymapcanbeext endedtorepresent3-dimensional spacesthroughuseofavolumetricmodel.Thevolumetricmod eldenestheenvironmentby deninga3-dimensionalgridwhichdividestheworkspacein tovoxels(cubes)withedgesand volume[24].Furthermore,moreefcientsamplingcanbeobt ainedbyusinganadaptivecellsize tomoreefcientlymodelemptyandoccupiedspace,byusinga reasofhighresolutionforareas ofclutteredfeaturesandlowresolutioninopenspace[8][1 5][28]. A B C Figure3-1.EnvironmentRepresentationsA)OccupancyGrid B)TopologicalMapC)Geometric Map 24

PAGE 25

3.2.2TopologicalRepresentation Atopologicalmapisagraphlikerepresentation,whichuses nodesandedgestodescribe distinctfeaturesofanenvironment.Nodesrepresenttheke y-placesinthemapandedgesrepresentthetransitionstonavigatebetweenkey-places[29].I ncontrasttometricmaps,topological mapsnaturallycapturequalitativeinformationwhiledeem phasizingwhatmaybeirrelevant orconfusingdetails.Thekeytoatopologicalrelationship issomeexplicitrepresentationof connectivitybetweenregionsorobjects.Thevehiclehasno realunderstandingofthegeometric relationshipbetweenlocationsintheenvironment;locati onsareonlylinkedbytheirtopological representation.Asaresult,topologicalmapshaveaveryex plicitconnectiontomissiontasksand problemstatement.Atypicalsubwaymap,orthetypicalnavi gationinstructionsusedbyhumans oversubstantialregionsofspaceareexamplesoftopologic alrepresentations[6]. 3.2.3GeometricMaps Geometricmapsarethosemadeupofdiscretegeometricprimi tives;forexamplelines, polygonsand/orpolyhedral.Suchmapshavetheadvantageof beinghighlyspace-efcient becauseanarbitrarilylargeregionofspacecanberepresen tedbyamodelwithonlyafew parameters.Inaddition,geometricmapscanstoreoccupanc ydatawithalmostarbitrarilyhigh resolutionwithoutbecomingliabletothestoragelimitati onsincurredbytechniquesbasedon spatialsampling. Theprimaryshortcomingofgeometricmodel-basedrepresen tationsrelatestothefactthat theycanbedifculttoinferreliablyfromsensordata.Thre efundamentalmodelingproblems areregularlyencountered:therepresentationmaychanged rasticallydependingonthemodeling assumptions;manydifferentenvironmentsmaymaptothesam erepresentation,buildingwill havenodiscerningcharacteristics;anditmaybedifcultt orepresentprominentfeaturesofthe environmentwithinthemodelingsystem. Geometricmapshowevercanbeanefcientdescriptionofthe environment,ifoneassumes thatthesensorprovidessuitabledata,andthattheenviron menttobedescribediswellsuited 25

PAGE 26

tothegeometricmodelingprimitivestobeused.Inthiscase theenvironment,anurbancity,is characterizedbylargeplanarfeatureswhichlendsitselft oaplanarmodelingprimitive. Theworld, W ,willbedenedasa3-dimensionalspaceinwhich W 2 R 3 .Thedescribed worldwillconsistoftwoobjects;obstaclesandvehicles.O bstaclesareconsideredtheportions oftheworldthatarepermanentlyoccupied,forexample,asi nwallsofabuilding.Vehiclesare bodiesthataremodeledgeometricallyandcontrollablevia amotionplanningstep.Inthiscase vehicleswillbethoughtoffasapointinspacewithnogeomet ricsize.Thisassumptionisdo tothefactthatthecharacteristicsizeofaMAVisverysmall inrelationtothedenedworld. Thereforecollisionavoidancebetweenvehiclesandobstac lesisassuredsolelybycheckingthe vehiclepathforconict. Thefollowingsectionspresentamethodforsystematically constructingrepresentationsfor obstaclesandvehiclesusingacollectionofprimitives[2] [22].Obstacleregionswillbedenoted as O ,asetofallpointsin W thatlieinoneormoreobstacles; O 2 W .Three-dimensional obstacleswillberepresentedusingaboundaryrepresentat ionconsistingofconvexpolygons initializedin3-dimensionalspace.An n -sidedpolygoncanbedescribedusingtwokindsof features,verticesandedges.Everyvertexcorrespondstoa cornerofthepolygon.andevery edgecorrespondstoalinesegmentbetweenapairofvertices .Thepolygoncanbespeciedbya sequence, ( x 1 ; y 1 ) ; ( x 2 ; y 2 ) ;:::; ( x n ; y n ) of n pointsintheplanedenedby P 3.3DataClustering 3.3.1K-means K-meansisanunsupervisedlearningalgorithmwhichsolves theclusteringproblem.The K-meansalgorithmallowsforawaytosplitagivendatasetof N pointsinto K subsetssoasto minimizethesum-of-squarescriteria[11]. where x n isavectorrepresentingthe n th datapointand isthegeometriccentroidofthe datasubset. J = K j = 1 N i = 1 rrr x ( j ) i j rrr 2 (3–1) 26

PAGE 27

Thealgorithmworksbyinitializing K centroidsintothespacerepresentedbythedataset beingclustered.Oncethecentroidsareinitiallyseteachd atapointisassignedtothesubgroup withtheclosestcentroid.Oncealldatapointsareassigned thecentroidsarerecalculatedforthe currentlydeneddatasets.Theaboveprocedureisrepeated untilthecentroidsnolongmoveor thechangeincentroidpositionfallsbelowsomeuserdened threshold. TheK-meansalgorithmsuffersfromseveralsortcomings.On eisthealgorithmissensitive totheinitialrandomlyselectedcentroidlocations.Alsot henumberofcentroidsmustbeselected apriori.Thealgorithmdoesnotguaranteeagloballyoptima lsolutionof J overthecentroid assignments.3.3.2PCA PrincipalComponentAnalysis(PCA)isawaytoidentifypatt ernsinadatasetandexpress thedatasetinsuchawayastohighlightthesimilaritiesand differencesinthedata.PCAcanbe usedtosimplifyadatasetbyreducingthedimensionalitywh ileretainingthosecharacteristicsof thedatasetthatcontributemosttothevariation.PCAisano rthogonallineartransformationthat transformsthedatatoanewcoordinatesystemsuchthatthe rstprinciplecomponentaccounts forasmuchofthevariabilityinthedataaspossible,withea chsucceedingcomponentaccounting foramajorityoftheremainingvariation.Theprimaryobjec tivesofPCAaretodiscoverorto reducethedimensionalityofadatasetandtoidentifynewme aningfulunderlyingvariables.In PCAsuchdirectionsaremerelytheeigenvectorsofthecovar iancematrixhavingthelargest eigenvalue;thisprojectionoptimizesasum-squared-erro rcriterion[7][31][35]. First,the d -dimensionalmeanvector and d x d covariancematrix S arecomputedfor thefulldataset.Next,theeigenvectorsandeigenvaluesar ecomputed,andsortedaccordingto decreasingeigenvalue.Choosingthe k eigenvectorshavingthelargesteigenvalues.Oftenthere willbejustafewlargeeigenvalues,andthisimpliesthat k istheinherentdimensionalityofthe subspacegoverningthedatawhiletheremainingdimensions generallycontainnoise.Nextwe forma d x k matrix A whosecolumnsconsistofthe k eigenvectors.Therepresentationofthe 27

PAGE 28

Figure3-2.PrincipleComponentAnalysisdatabyprincipalcomponentsconsistsofprojectingthedat aontothe k -dimensionalsubspace accordingto3.3.3PlaneFitting Theplanettingprocedureworksbyclusteringthedatasetu singthek-meansalgorithm andcheckingtheplanarityofthenewlydenedclustersusin gthePCAalgorithminaniterative approach.Firstthenumberofclusters, k ,isfoundbyselectivelysplittingandremovingcentroids basedontheplanarityofthedatasettheyrepresent.Oncead atasubsetcanbeaccurately modeledbyaplane,thenthatclusterisremovedfromthedata set.Ifnoclustersarefoundtot theplanarmodelthenthecentroidissplitandthek-meanscl usteringalgorithmisiteratedover thenewlydenedcentroids.Theplanaritycheckisbasedont hefactthatforadatasetwhere alldatapointsliewithinaplanetheprincipalcomponentsc omputedbyPCAwillgivetwo eigenvectorswhichlieintheplaneandathirdthatisthenor malvectoroftheplane.Therefore, thedecisiontoremoveadatasetassociatedwithacentroidi sdeterminedbyprojectingthedata setontothecalculatedeigenvectors(principalcomponent s).Thedatawhichafterprojectionfall withinathresholddistancefromtheoriginalongthenormal ofthedenedplanearedetermined toapproximatelylieontheplane. 28

PAGE 29

3.4IncrementalMapBuilding Thegeneralapproachofmapbuildingistoincrementallyint egratenewdataintothemap. Wheneachnewframeisobtained,itisalignedtoacumulative globalmap. Inthecaseofmultiplevehicles,eachvehicle'slocalincre mentalmapmustbeintegrated intotheglobalmapforuseinfuturepathplanningbynotonly itselfbutallothervehiclesinthe distributedteam.Eachvehiclecontinuouslyperformsthes cenereconstructionandplanetting processoverthelocallysensedfeaturepointdata.Bycompl etingthedimensionalityreduction processoneachlocaldatasettheincrementalglobalmapbui ldingprocessisreducedtoaplane mergingtask.Byperformingthedimensionalityreductiono ntheincrementallocaldatasetthe storagerequirementforeachvehicleisreducedandthedata settransferredforinclusioninthe globalmapisreduced.Theplanettingprocedureistherefo recompletedonasmallerdataset allowingformorecomputationallyefcientdataclusterin g.Theglobalmapbuildingprocedure thereforebecomesamergingprocess,wheretheindividuall ocallowdimensionalmapsare incorporatedintothecentralglobalmap. Althoughcommunicationrestrictionsarenotexplicitlyco nsideredintheproblemstatement, bysharingareducedorderdatasetwiththecentralizedglob almaptherequiredcommunication bandwidthwouldbereducedtoamorerealisticlevelfromthe requirementnecessarytosharethe unalteredsensordata. 0 2000 0 2000 0 1000 x (ft) y (ft)altitude (ft) A 0 2000 0 2000 0 1000 x (ft) y (ft)altitude (ft) B Figure3-3.IncrementalPlaneMergingA)LocalPlanesB)Mer gedPlanes 29

PAGE 30

CHAPTER4 MAPPINGCRITERIA 4.1Introduction Theprimaryprobleminanyexplorationtaskisthatgivenwha tyouknowaboutthe environmentwhereshouldyoumovetogainasmuchnewinforma tionaspossibleorensurea highprobabilityoflearningnewinformationinunexplored regions.Thechoiceofmovement algorithmsormappingcriteriawillaffectthesuccessande fciencyofavehicleteamattempting todispersethemselvesinanunknownenvironment.Thissect ionwilldevelopamappingcriteria whichsimultaneouslydistributesaMAVteamandensureexpl orationofunexploredregionsof theenvironment.Manymethodsareavailabletoformulatemo vementalgorithmsanddenegoal locationsforautonomousexploration.Randomexploration strategiesareeasilyimplemented, howevertheydonotmakeuseofcurrentmapknowledgeforplan ning.Anothermethodused ofteninindoorcyclicenvironmentsisFrontier-basedexpl oration.Frontier-basedexploration worksontheassumptionthatnewinformationcanbegainedby exploringtheboundarybetween openspaceandunexploredterritory.Anothermethodisthe' Seek-OpenSpace'methodwhich attemptstosimultaneouslydistributeandcoveranenviron mentwithmobilerobots.The'SeekOpenSpace'methodisformulatedformobilerobotsina2-dim ensionalspace,howeverthebasic ideacanbeusedandexpandedfor3-dimensionalenvironment withaMAV. Thesearchandexplorationstrategyproposedhereisbasedo nselectinggoallocations whichofferthehighestprobabilityofndingnewobstacles .Thestrategyisbasedonmultivariate explorationanddecisionmethods,whichclassiesagoallo cationwhichisbothstatistically distantfromknownobstaclesandotherteamMAVs. Thecoverageproblemhasbeendenedasthemaximizationoft hetotalobstaclesurface areacoveredbytheMAVvisionsensor. 4.2Coverage 4.2.1RandomMovementMethod Randommovementalgorithmsarethemostbasicexplorationa lgorithms.Therandom movementalgorithmworksbycommandingthesearchvehiclet omoveforwardwithsmall 30

PAGE 31

randomcontrolinputsatrandomtimeintervals.Theoverall vehiclepathexhibitsarandom 'wondering'movementwithacontinuouslycurvedpath.Toen sureforwardmovementandavoid repeatedcircling,theturncommandandthetimeintervalar econstrainedtopromoteexploration. Anotherrandommovementstrategyworksbyplanningstraigh tlinepathsbetweenthe vehiclepositionandarandomlygeneratedlocation,contai nedinsidethesearcharea.This methodleadstocompletecoverage,howeverthesearcheffor tleadstoexcessivecoverageofthe centralareaattheexpenseoftheboundaryarea. Randomexplorationstrategiesareeasilyimplementedandc omputationallyinexpensive. Howevertheydonotmakeuseofcurrentmapknowledgeforplan ninganddonotinherently provideamethodforcoordinatedteamdistribution.4.2.2Frontier-BasedExploration Frontier-basedexplorationworksontheassumptionthatne winformationcanbegainedby exploringtheboundarybetweenopenspaceandunexploredte rritory.Frontier-basedexploration hasbeenshowntoworkwellfor2-dimensionalmobilerobotma pping,wheretheprimary goalistomaptheboundariesofanenvironment,forexamplet heouterwallsofaroom.The frontier-basedapproachincrementallyconstructsagloba loccupancymapoftheenvironment. Themapisanalyzedtolocatethe'frontiers'betweenthefre eandunknownspace.Exploration proceedsinthedirectionoftheclosest'frontier'.Thefro ntiermethodisproventoworkwellfor indoorcyclicenvironmentwhereaboundarywallcanbeconti nuouslyexplored,providingnew environmentinformation.Thefrontierapproachdoesnotex plicitlyprovideforteamdistribution. Frontierbasedexplorationalsoisformulatedaroundanocc upancygridrepresentationwhichwill havethesamelimitationsdetailedinChapter 3 4.2.3Seek-OpenSpaceAlgorithm Theseek-openspacemovementalgorithmcausesarobotvehic letomovetowardopenareas inthemapwhicharedistantfromknownobstacles.Thealgori thmisexecutedbyrstcalculating theaverageobstaclevectorforallobstaclesinsensorrang e.Theaverageobstaclevectoristhen computedforallobstaclessensed.Theobstaclevectorisde nedsuchthatthemagnitudeofthe 31

PAGE 32

vectormustbelargeforobjectsclosetothevehicleandsmal lforobjectsfaraway.Afterthe averageobstaclevectoriscomputed,thegoalofthevehicle becomestomoveintheopposite directionoftheaverageobstaclevector.Thevehicleisgiv enacontrolinputwhichturnsthe vehicletowardthedirectionofthenegativeobstaclevecto r.Therateofturnisdeterminedbythe magnitudeoftheaverageobstaclevector.TheSeek-OpenSpa cealgorithmhastheadvantage thatteamdistributioncanbeexplicitlyaccountedforbymo delingeachvehicleasanobstacle. Therefore,thevehicleteamwillexploretheenvironmentwh ileavoidingregionswithbothteam vehiclesandobstaclespresent.4.2.4MappingCriteria AswiththeSeek-OpenSpacestrategythesearchandexplorat ionstrategyproposedhere isbasedonselectingagoallocationwhichoffershighestpr obabilityofndingnewobstacles, whichareassumedtobeinareaswhicharedistantfromknowno bstacles.Thestrategyisbased onmultivariateexplorationanddecisionmethods[25]. Thepresentedapproachenablescoverageoftheenvironment byselectingindividualvehicle goallocationsthatarestatisticallydistantfrommeasure dlandmarksandothervehiclelocations. Thedistancemeasureusedtodeterminethegoallocationist heMahalanobisdistance D M [7]. TheMahalanobisdistanceisadistancemeasurementthatiss caledbythestatisticalvariationof aknowndataset;therefore,theMahalanobisdistancecanbe usedtodeterminethesimilarity ofanunknownsampletoaknowndistribution.Foramultivari ateGaussianwithmean and covariancematrix S ,theMahalanobisdistanceofamultivariatevector x isgivenby D M ( x )= q ( x ) T S 1 ( x ) (4–1) Thegoallocation x G ofthevehiclethatpromotescoverageoftheenvironmentand vehicle separationischosentobethepointthatmaximizestheMahal anobisdistancefor N known landmarksandteamvehiclesasgivenby 32


x_G = \arg\max_{\hat{x}} \sum_{i=1}^{N} \left\| D_{M_i}(\hat{x}) \right\|    (4–2)

The Mahalanobis distances of each vehicle position, D_{M_{v_i}}, are also included in order to drive the goal locations away from the current vehicle positions.

The spatial characteristics of the expected data set are modeled by the covariance matrix. This approach allows for a flexible coverage criterion which ensures exploration and team distribution regardless of whether prior information is known about the environment. The more information known about the environment, the higher the probability that a goal location determined by the exploration criteria will contain new obstacle information. If no prior knowledge of the environment is utilized, the Mahalanobis distance collapses to a standard Euclidean distance measurement.

The goal locations are calculated by each individual vehicle, with the assumption that each vehicle location is available for inclusion in the calculation of x_G. Goal locations for each vehicle are calculated independently of each other; therefore, each vehicle's goal location is based on the current global map information and the current vehicle locations. With this method of goal calculation, a vehicle's search pattern can be intentionally fixed or computed with a different performance criterion while still influencing the goal locations of the remaining vehicles.
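A minimal sketch of this goal-selection rule is given below, assuming candidate goal locations are supplied by the caller (for example, sampled over the search volume) and that each landmark and team vehicle contributes a Mahalanobis term defined by its own mean and covariance; these bookkeeping choices are illustrative rather than part of the formulation above.

import numpy as np

def mahalanobis(x, mean, cov):
    # D_M(x) = sqrt((x - mean)^T S^{-1} (x - mean)), Eq. (4-1)
    d = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    return float(np.sqrt(d @ np.linalg.solve(np.asarray(cov, dtype=float), d)))

def select_goal(candidates, sources):
    """Pick the candidate maximizing the summed Mahalanobis distance to all
    known landmarks and team-vehicle positions, Eq. (4-2).

    candidates : iterable of candidate goal locations
    sources    : list of (mean, cov) pairs, one per landmark or vehicle
    """
    best_point, best_score = None, -np.inf
    for x in candidates:
        score = sum(mahalanobis(x, mean, cov) for mean, cov in sources)
        if score > best_score:
            best_point, best_score = x, score
    return best_point

With no prior environment knowledge, each covariance can simply be set to the identity matrix, in which case the criterion collapses to the Euclidean distance measurement noted above.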


CHAPTER 5
TRAJECTORY PLANNING

5.1 Introduction
One major disadvantage of many path planners is the assumption that the environment is known in advance. Methods that explicitly deal with the need to replan and can reevaluate the path as it is being executed are known as on-line algorithms, and the trajectory they produce is sometimes referred to as a conditional plan. A true on-line algorithm should be able to generate a preliminary trajectory even in the complete absence of any map. These algorithms may not guarantee an optimal trajectory; however, they do guarantee a planned trajectory and connectivity [19].

Sample-based motion planning is performed as a search over the defined configuration space, generating samples to learn about the problem space being searched. Sample-based algorithms typically rely on random sampling of states to explore the configuration space, but can also be based on a deterministic sampling scheme. Samples representing the configuration space are stored in a data structure which can be used to approximate the space. The data structure is composed of nodes and edges; nodes represent sampled states in the configuration space and edges represent valid paths connecting two states. The data structure is stored as an expanding tree. Sample-based approaches have appealing properties: they guarantee connectivity, are computationally inexpensive, and are easily adapted to the dynamic constraints of the system. Sample-based motion planning methods are also probabilistically complete, meaning that if a solution exists, the probability of finding it converges to one as the computation time approaches infinity. If no path is found through the configuration space, either no valid path exists or the sampling process was insufficient to adequately explore the space [18].

Rapidly-Exploring Random Tree (RRT) planners are a class of randomized motion planning algorithms that can be used for systems involving kinodynamic constraints [14][16][17].

5.2 Rapidly-Exploring Random Tree
The Rapidly-Exploring Random Tree (RRT) algorithm provides a randomized method for the construction of roadmaps. The goal is to rapidly and efficiently explore the state space or


configuration space. The RRT algorithm was developed by LaValle and Kuffner specifically to handle problems that involve dynamics and differential constraints. An advantage of RRTs is that they can be applied directly to kinodynamic planning, which stems from the fact that RRTs do not require connections to be made between pairs of configurations as in probabilistic roadmaps. Through rapid exploration of the space, the RRT can achieve efficient roadmap-style solutions between two points and, as with all sample-based planners, has been shown to be probabilistically complete.

When RRTs are used for MAV path planning, the tree nodes are potential MAV waypoints and the branches are paths to those waypoints. A tree is initially composed of the MAV's position as a single node. A random MAV state is generated and the tree is extended toward the random point, creating a new branch and node, as outlined in Algorithm 1. When a path is found or a maximum number of iterations is reached, the RRT terminates.

Algorithm 1 Rapidly-Exploring Random Trees
1. Initialize a tree containing one node, the starting location of the MAV
2. while a path has not been found do
3.    X_rand = a random MAV state
4.    X_near = the state in the tree that is closest to X_rand
5.    if the tree can be connected from X_near to X_rand without collision then
6.       extend the tree from X_near to X_rand
7.    end if
8. end while

Path planning is viewed as the search in a space X for a continuous path from an initial state x_init to a goal state x_goal. It is assumed that the space X is a bounded region which contains a fixed obstacle region, X_obs, and free space, X_free. The RRT is constructed so that all of its vertices lie within X_free and each edge of the RRT corresponds to a path that lies entirely in X_free. Collision detection is performed by an incremental method, described in a later section.

For a given initial state location, x_init, an RRT, T, with K vertices is constructed as follows. The first node of T is initialized to be equal to x_init for a single forward-planning RRT. Then, for each iteration of the RRT generating algorithm, a new random state, x_rand, is selected from the configuration space X. The node nearest x_rand, in terms of a distance metric ρ, is found and used as the point of expansion. An incremental control input u is selected which minimizes the distance from x_near to x_rand. The new node and edge must lie in free space and are checked for obstacle collision. The tree, T, is then updated with the new node and control input.
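The loop of Algorithm 1 can be realized compactly; the sketch below is a simplified planar version, assuming a fixed step length, straight-line extension, and a user-supplied collision_free predicate, whereas the planner used in this work extends the tree with incremental control inputs that respect the vehicle kinematics.

import math
import random

class Node:
    def __init__(self, x, y, parent=None):
        self.x, self.y, self.parent = x, y, parent

def rrt(start, goal, collision_free, bounds, step=25.0, max_iter=2000, goal_tol=50.0):
    """Grow a tree from start toward goal inside rectangular bounds.

    collision_free(a, b) -> bool reports whether the straight segment between
    two (x, y) points avoids all currently known obstacles.
    bounds = (x_min, x_max, y_min, y_max).
    """
    tree = [Node(*start)]
    for _ in range(max_iter):
        # Sample a random state in the bounded search region.
        x_rand = (random.uniform(bounds[0], bounds[1]),
                  random.uniform(bounds[2], bounds[3]))
        # Find the nearest node already in the tree.
        near = min(tree, key=lambda n: math.hypot(n.x - x_rand[0], n.y - x_rand[1]))
        # Take an incremental step from the nearest node toward the sample.
        theta = math.atan2(x_rand[1] - near.y, x_rand[0] - near.x)
        new_pt = (near.x + step * math.cos(theta), near.y + step * math.sin(theta))
        if not collision_free((near.x, near.y), new_pt):
            continue
        new_node = Node(new_pt[0], new_pt[1], parent=near)
        tree.append(new_node)
        # Terminate when the new node is within tolerance of the goal.
        if math.hypot(new_node.x - goal[0], new_node.y - goal[1]) < goal_tol:
            return new_node
    return None

Walking the parent pointers from the returned node back to the root recovers the waypoint sequence.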


Figure 5-1. Rapidly-Exploring Random Tree Expansion. A) Sample Random Node. B) Extend Tree.

5.3 Sampling
Traditionally, randomized motion planning algorithms use a uniform sample distribution to generate x_rand, which tends to grow the tree in such a way that the entire space is covered. In robot motion planning applications, the presence of obstacles often restricts the use of "greedy" solutions in which the robot proceeds directly toward the goal. The uniform distribution increases the algorithm's robustness when solving problems in which the space is not convex, meaning the space contains local minima that may prevent the planning algorithm from finding a valid path. However, when the space is convex and the vehicle dynamics are fast enough to ensure adequate maneuverability, using a uniform distribution is unnecessary and computationally expensive, because the "obvious" solution is not explored immediately.

Another option is to use a distribution which is biased so that, with some probability, samples are generated close to the goal location. The possible drawback of a biased search scheme, or any "greedy" search strategy, is that it can become stuck in local minima and fail to find a valid path.
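One common way to implement such a bias, shown here only as an illustrative sketch, is to return the goal itself with a small probability p_goal (a hypothetical tuning parameter) and a uniform sample otherwise.

import random

def biased_sample(goal, bounds, p_goal=0.1):
    """Return the goal with probability p_goal, otherwise a uniform sample.

    bounds = (x_min, x_max, y_min, y_max); p_goal trades greedy progress
    toward the goal against the coverage behavior of uniform sampling.
    """
    if random.random() < p_goal:
        return goal
    return (random.uniform(bounds[0], bounds[1]),
            random.uniform(bounds[2], bounds[3]))

Setting p_goal to zero recovers the uniform strategy, while driving it toward one produces the greedy behavior whose failure mode in non-convex spaces is noted above.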


Figure 5-2. Rapidly-Exploring Random Tree with Kinematic Constraints. A) Two-Dimensional RRT. B) Three-Dimensional RRT.

5.4 Collision Avoidance
Once it has been decided where the samples will be placed, the next problem is to determine whether a configuration is in collision. Collision detection is therefore a critical component of sampling-based planning: in most motion planning applications, the majority of computation time is spent on collision checking, both for validating samples and for validating the edges connecting samples, so the choice of collision detector is extremely important. The problem of obstacle avoidance for autonomous aircraft is complicated by the dynamic constraints of the vehicle. A large body of work has focused on obstacle avoidance for mobile ground robots. In contrast to ground vehicles and rotorcraft, which can always stop, aircraft must maintain a minimum airspeed in order to generate sufficient lift to remain aloft. In comparison to helicopters, which can turn in place, the lateral dynamics of aircraft limit the turning capabilities of the system.

The obstacle avoidance algorithm developed in this work assumes the UAV is stabilized by a low-level autopilot and that the standard kinematic model describes the navigation behavior of the vehicle.

The three-dimensional obstacles are composed of two-dimensional convex polygons, as defined in Chapter 3. Each convex polygon, as shown in Figure 5-3, represents an obstacle face. A polygon is in collision with a potential incremental path segment, defined between nodes x_i and x_j, if there exists a point r which lies on the line connecting the two nodes and is contained


inside the polygon. The point r is contained inside the polygon if the sum of the interior angles α_i subtended at r by consecutive polygon vertices is equal to 2π.

Figure 5-3. Collision Check. A) Line-Plane Intersection. B) Point Inside Polygon.
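The two tests of Figure 5-3 can be sketched as follows; this is an illustrative implementation, not code from the thesis, and it assumes each obstacle face is supplied as an ordered list of coplanar three-dimensional vertices.

import numpy as np

def segment_plane_intersection(p0, p1, face):
    """Intersection of segment p0->p1 with the plane of the face, or None."""
    v0, v1, v2 = (np.asarray(face[i], dtype=float) for i in range(3))
    normal = np.cross(v1 - v0, v2 - v0)
    p0, p1 = np.asarray(p0, dtype=float), np.asarray(p1, dtype=float)
    denom = normal @ (p1 - p0)
    if abs(denom) < 1e-9:            # segment parallel to the plane
        return None
    t = normal @ (v0 - p0) / denom
    if t < 0.0 or t > 1.0:           # crossing lies outside the segment
        return None
    return p0 + t * (p1 - p0)

def point_inside_polygon(r, face, tol=1e-6):
    """Angle-sum test: r lies inside the convex face if the angles subtended
    at r by consecutive vertices sum to 2*pi."""
    r = np.asarray(r, dtype=float)
    total = 0.0
    n = len(face)
    for i in range(n):
        a = np.asarray(face[i], dtype=float) - r
        b = np.asarray(face[(i + 1) % n], dtype=float) - r
        na, nb = np.linalg.norm(a), np.linalg.norm(b)
        if na < tol or nb < tol:     # r coincides with a vertex of the face
            return True
        cos_angle = np.clip((a @ b) / (na * nb), -1.0, 1.0)
        total += np.arccos(cos_angle)
    return abs(total - 2.0 * np.pi) < tol

def edge_in_collision(p0, p1, faces):
    """An incremental path segment collides if it pierces any obstacle face."""
    for face in faces:
        r = segment_plane_intersection(p0, p1, face)
        if r is not None and point_inside_polygon(r, face):
            return True
    return False

Negated, a predicate like edge_in_collision could serve as the collision_free check assumed in the earlier RRT sketch.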


CHAPTER 6
SIMULATION RESULTS

A simulation is used to demonstrate the team distribution and mapping strategy. The simulation consists of a two-MAV team governed by individual kinematic models. Kinematic models were used instead of a nonlinear dynamic model to reduce complexity and computational cost. The purpose of the proposed simulation is to test the coverage and team distribution algorithms; therefore, the assumption is made that the kinematic model sufficiently describes the flight characteristics which would be expected from a more accurate dynamic model combined with a suitable control system.

A camera is mounted on the nose of each MAV, aligned parallel with the longitudinal axis. The camera is updated at a sensor rate of 0.01 Hz. This simulation was run using a feature point mesh placed at an even interval over the faces of the buildings. At each sensor time step the feature points in the camera frustum are located using scene reconstruction and processed with the dimensionality reduction and plane fitting described in Chapter 3. Optimal implementation of the scene reconstruction analysis is unnecessary; therefore, the actual computation time is ignored and the sensor and scene reconstruction processes are updated at rates found in published literature. For this example, the scene reconstruction algorithm runs at 0.01 Hz.

For this simulation, it is assumed that perfect feature point extraction and tracking are available. Perfect state estimation, perfect terrain mapping, and perfect path planning are also assumed. Clearly these assumptions are unrealistic, but they suffice to demonstrate the effectiveness of the mapping criteria.

The MAV team flies through an environment designed to demonstrate the performance of the map criteria in an urban setting. The environment is shown in Figure 6-1. The mission objective is to ensure the vehicles are distributed in the environment and efficiently map the unknown environment.

Figure 6-1. Simulated Urban Environment.

Path planning is accomplished using the RRT algorithm, as described in Chapter 5. The RRT provides a kinodynamically feasible, collision-free trajectory for the currently known


environment map. The planned path connects the vehicle from its current pose to the currently defined goal location, determined from the mapping criteria. The planned path serves as a 'conditional trajectory' and only ensures a feasible path for the current environment map. The conditional trajectory does not guarantee obstacle avoidance for the unknown portion of the environment, nor an optimal trajectory to the goal location. Because the vehicle is operating in a changing state of environment knowledge, the RRT algorithm is used as an 'on-line' path planner which replans the trajectory at a given rate. The RRT algorithm provides a new trajectory at a rate of 0.25 Hz.

Figure 6-2. Simulation at T = 0 s.

The simulation is initialized with a completely unknown environment, with the search area bounded by a [3000 ft, 3000 ft, 1000 ft] environment volume. The two-vehicle team is initialized


slightly laterally separated at equal altitudes with identical initial attitudes. The initial goal location, as determined from the mapping criteria, is based entirely on vehicle location, since there are no initially known obstacles. Figure 6-2 shows the initial environment, vehicle locations, and conditional trajectories. The initial goal locations quickly result in diverging trajectories, distributing the vehicles to opposing environment boundaries.

When a team vehicle approaches within a defined proximity of its exploration goal location, the map criteria compute a new exploration goal based on the current map and team knowledge. The new goal location moves the vehicle away from its current position, the other team vehicle positions, and the known portions of the map.

Figure 6-3. Simulation at T = 80 s.

Figure 6-3 shows the replanned goal location and new conditional trajectory.

After continuing the simulation for 150 seconds, 65% obstacle coverage was achieved. Obstacle coverage is defined as the percentage of mapped obstacle surface area.

The simulation was also run with randomly selected goal locations, so that current map knowledge and team distribution were not included in the map criteria. The simulation was again run for 150 seconds, with 60% of the obstacles mapped. Using the map criteria, the team vehicles remained more widely distributed. The two simulations, random and proposed method, are only intended to compare the overall representative behavior of each method as it affects team distribution and coverage. No guarantees are made that the percentage of coverage would remain


the same for subsequent simulation runs. This is due to the random nature of the trajectory generation, and therefore of the path defined toward each new goal location.

Figure 6-4. Simulation at T = 150 s.

Figure 6-5. Simulation at T = 150 s, Top View.
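The obstacle-coverage percentage used above can be computed directly from the planar map primitives; the sketch below assumes that each face is stored as an ordered list of coplanar three-dimensional vertices and that the map keeps separate lists of mapped and total faces, which is an assumption about bookkeeping rather than something specified in the text.

import numpy as np

def face_area(vertices):
    """Area of a planar polygon in 3-D, via a triangle fan about the first vertex."""
    v = [np.asarray(p, dtype=float) for p in vertices]
    total = np.zeros(3)
    for i in range(1, len(v) - 1):
        total += np.cross(v[i] - v[0], v[i + 1] - v[0])
    return 0.5 * float(np.linalg.norm(total))

def obstacle_coverage(mapped_faces, all_faces):
    """Percentage of total obstacle surface area contained in the mapped faces."""
    mapped = sum(face_area(f) for f in mapped_faces)
    total = sum(face_area(f) for f in all_faces)
    return 100.0 * mapped / total if total > 0 else 0.0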


CHAPTER 7
CONCLUSION

7.1 Summary
Micro Air Vehicles (MAVs) are a reasonable platform for vision-based autonomous environment mapping. Cameras are relatively small and lightweight, making them especially well suited for vehicles with small payloads, and they provide a rich stream of data describing the environment, suitable for map building. Multiple-MAV teams have the potential to exhibit many advantages over single-vehicle systems; however, team distribution and task assignments must be efficiently governed to realize the added benefits.

Use of geometric maps to represent the sensed environment allows sufficient detail to be mapped. The geometric representation provides a suitable map for a wide range of mission tasks, provided an adequate geometric model is used. The model must be able to accurately capture environment features with enough detail to satisfy the mission requirements. In this case, an urban environment can be well modeled by polygonal plane segments, providing a suitable map for mission planning, collision avoidance, and obstacle recognition.

The mapping criteria developed in this thesis proved to be a valid exploration method, allowing the intelligent selection of goal locations to govern vehicle movement and team distribution. The mapping criteria exhibited advantages over uncoordinated, random vehicle exploration strategies and were found to scale easily to any number of team vehicles without increasing system complexity.

The use of RRT path planners for trajectory generation provided a flexible method for computing conditional vehicle trajectories which can be quickly replanned when new map knowledge becomes available. RRT algorithms also allow the kinodynamic constraints of each vehicle to be tailored for heterogeneous vehicle teams, adding another layer of flexibility.

The simulations are limited by the assumptions of perfect feature point detection, feature tracking, and scene reconstruction, but they suffice to demonstrate the feasibility and performance of the proposed mapping criteria.


7.2 Future Direction
Recently, many research efforts have considered the problem of simultaneous localization and mapping (SLAM). The SLAM process uses a single mobile robot to simultaneously generate a global environment map and localize the vehicle within that environment. A somewhat different problem is that of cooperative localization and mapping (CLAM), in which two or more robots cooperate to build a map of the environment. The cooperative strategy is aimed not only at increasing the speed with which the map is constructed, but also at increasing the accuracy of the resultant maps and vehicle location estimates.


REFERENCES
[1] Broida, T. and Chellappa, R., "Estimation of Object Motion Parameters from Noisy Images," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 8, No. 1, pp. 90-99, Jan. 1986.
[2] Brunskill, E. and Roy, N., "SLAM using Incremental Probabilistic PCA and Dimensionality Reduction," IEEE International Conference on Robotics and Automation, April 2005.
[3] Canny, J. F., "A Computational Approach to Edge Detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 8, No. 6, November 1986, pp. 679-698.
[4] Castro, G. J., Nieto, J., Gallego, L. M., Pastor, L., and Cabello, E., "An Effective Camera Calibration Method," IEEE 0-7803-4484-7/98, 1998, pp. 171-174.
[5] Choset, H., Lynch, K., Hutchinson, S., Kantor, G., Burgard, W., Kavraki, L., and Thrun, S., Principles of Robot Motion: Theory, Algorithms, and Implementations, MIT Press, Cambridge, MA, 2005.
[6] Choset, H. and Nagatani, K., "Topological Simultaneous Localization and Mapping (SLAM): Toward Exact Localization Without Explicit Localization," IEEE Transactions on Robotics and Automation, Vol. 17, No. 2, April 2001.
[7] Duda, R. O., Hart, P. E., and Stork, D. G., Pattern Classification, John Wiley and Sons, Inc., 2001.
[8] Elfes, A., "Sonar-Based Real-World Mapping and Navigation," IEEE Journal of Robotics and Automation, Vol. RA-3, No. 3, June 1987.
[9] Elfes, A., "Using Occupancy Grids for Mobile Robot Perception and Navigation," Computer, Vol. 22, Issue 6, pp. 46-57, 1989.
[10] Forsyth, D. A. and Ponce, J., Computer Vision: A Modern Approach, Prentice-Hall Publishers, Upper Saddle River, NJ, 2003.
[11] Hamerly, G. and Elkan, C., "Learning the k in k-means," Neural Information Processing Systems, 15, 2004.
[12] Harris, C. and Stephens, M., "A Combined Corner and Edge Detector," Proceedings of the Alvey Vision Conference, 1988, pp. 147-151.
[13] Hartley, R., "In Defense of the Eight-Point Algorithm," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, No. 6, June 1997.
[14] Kuffner, J. and LaValle, S., "RRT-Connect: An Efficient Approach to Single-Query Path Planning," Proceedings of the IEEE International Conference on Robotics and Automation, 2000, pp. 995-1001.


[15] Kuipers, B. and Byun, Y., "A Robot Exploration and Mapping Strategy Based on a Semantic Hierarchy of Spatial Representations," Journal of Robotics and Autonomous Systems, 8:47-63, 1991.
[16] LaValle, S. M. and Kuffner, J. J., "Randomized Kinodynamic Planning," International Journal of Robotics Research, Vol. 20, No. 5, May 2001, pp. 378-400.
[17] LaValle, S. M., Planning Algorithms, Cambridge University Press, also available at: http://msl.cs.uiuc.edu/planning/, 2006.
[18] Latombe, J. C., Robot Motion Planning, Boston, MA: Kluwer Academic, 1991.
[19] Laumond, J. P., Robot Motion Planning and Control, online book: http://www.laas.fr/jpl/book.html.
[20] Longuet-Higgins, H. C., "A Computer Algorithm for Reconstructing a Scene from Two Projections," Nature, Vol. 293, pp. 133-135, Sept. 1981.
[21] Lucas, B. and Kanade, T., "An Iterative Image Registration Technique with an Application to Stereo Vision," Proceedings of the DARPA Image Understanding Workshop, 1981, pp. 121-130.
[22] Ma, Y., Soatto, S., Kosecka, J., and Sastry, S., An Invitation to 3-D Vision: From Images to Geometric Models, Springer-Verlag New York, Inc., 2004.
[23] Mahon, I. and Williams, S., "Three-Dimensional Robotic Mapping," Proceedings of the 2003 Australasian Conference on Robotics and Automation, Brisbane, Queensland, 2003.
[24] Martin, C. and Thrun, S., "Online Acquisition of Compact Volumetric Maps with Mobile Robots," IEEE International Conference on Robotics and Automation, Washington, DC, 2002.
[25] MacQueen, J. B., "Some Methods for Classification and Analysis of Multivariate Observations," Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, pp. 281-297, 1967.
[26] Oliensis, J., "A Critique of Structure-from-Motion Algorithms," Computer Vision and Image Understanding, 80:172-214, 2000.
[27] Prazenica, R., Watkins, A., Kurdila, A., Ke, Q., and Kanade, T., "Vision-Based Kalman Filtering for Aircraft State Estimation and Structure from Motion," AIAA Guidance, Navigation, and Control Conference and Exhibit, San Francisco, California, August 2005.
[28] Surmann, H., Nuchter, A., and Hertzberg, J., "An Autonomous Mobile Robot with a 3D Laser Range Finder for 3D Exploration and Digitalization of Indoor Environments," Robotics and Autonomous Systems, Vol. 45, pp. 181-198, 2003.


[29] Thrun, S., Gutmann, J.-S., Fox, D., Burgard, W., and Kuipers, B., "Integrating Topological and Metric Maps for Mobile Robot Navigation: A Statistical Approach," Proceedings of the National Conference on Artificial Intelligence (AAAI), 1998.
[30] Thrun, S., Burgard, W., and Fox, D., "A Real-Time Algorithm for Mobile Robot Mapping With Applications to Multi-Robot and 3D Mapping," IEEE International Conference on Robotics and Automation, 2000.
[31] Tipping, M. and Bishop, C., "Probabilistic Principal Component Analysis," Journal of the Royal Statistical Society, Series B, 61, Part 3, pp. 611-622.
[32] Tomasi, C. and Kanade, T., "Shape and Motion from Image Streams Under Orthography," International Journal of Computer Vision, Vol. 9, No. 2, pp. 137-154.
[33] Office of the Secretary of Defense, United States, "Unmanned Aircraft Systems Roadmap 2005-2030," August 2005.
[34] Thomas, J. and Oliensis, J., "Dealing with Noise in Multiframe Structure from Motion," Computer Vision and Image Understanding, Vol. 76, No. 2, pp. 109-124, November 1999.
[35] Weng, J., Zhang, Y., and Hwang, W., "Candid Covariance-Free Incremental Principal Component Analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003.


BIOGRAPHICAL SKETCH
Eric Branch was born and raised in Hollywood, Florida. He attended the University of West Florida in Pensacola, Florida. During his time in Pensacola, he worked as a flight line attendant at Pensacola Regional Airport, where he started his flight training and eventually earned a Commercial Pilot's License. In 2001, he transferred to the University of Florida, where he received a Bachelor of Science degree in aerospace engineering in August 2003. After graduation, Eric went to work for Naval Air Systems Command (NAVAIR) as a Flight Test Engineer. Eric later returned to the University of Florida to earn a Master of Science degree in aerospace engineering. His graduate research involved the development of vision-based flight control methodologies.